Transient Convective Heating Transport of the Micropolar Fluid Flow Between Asymmetric Channel with Activation Energy
Modelling and computations are presented to introduce the concept of activation energy in micropolar fluid flow between porous vertical channel walls with boundary conditions of the third kind. The fluid is considered to be a gray, absorbing-emitting but non-scattering medium. The resulting system of equations is solved numerically by the Crank-Nicolson implicit finite difference method. The effects of various physical parameters, such as activation energy, micropolar parameter, reaction rate, Reynolds number, Schmidt number, and heat and mass transfer Biot numbers, on the velocity, temperature and concentration fields are discussed by means of pictorial representation.
Introduction
The hydrodynamics of fluids is based on the assumption that fluid fragments have no intrinsic structure; this assumption leads to the well-known Navier-Stokes equations, which describe a wide range of hydrodynamic phenomena. Micropolar fluids model fluids comprising rigid, randomly sized particles suspended in a viscous medium, where the deformation of the fluid particles is ignored. These fluids are helpful in describing the detailed behaviour of colloidal solutions, polymeric suspensions, liquid crystals, animal blood, etc. Eringen [1] developed micropolar fluid theory, which is designed to meet the needs of systems that do not conform to the Navier-Stokes equations. The Reynolds number for the three-dimensional case was generalized by Singh and Sinha [2]. Sharma and Gupta [3] analyzed the impact of medium permeability on thermal convection of micropolar fluids; they studied the thermal instability of micropolar fluids in a porous medium. Lukaszewicz [4] presented the mathematical aspects of micropolar fluid flow theory in his book, using this framework to describe the non-Newtonian behaviour of certain fluids. Using the Optimal Homotopy Analysis method, Rashid et al. [5] discussed the effects of heat dissipation and transverse magnetic fields on micropolar rotating fluid flow between two lateral plates.
Thermal radiation affects the heat dissipation and the temperature distribution of a micropolar fluid in a channel. Radiation effects on a micropolar fluid flowing through a porous medium, with and without a magnetic field, have been considered by many authors [6][7][8][9]. On the other hand, Muhammed et al. [10] investigated the influence of activation energy in non-linear radiative stagnation point flow of a nanofluid; they found that the species concentration is enhanced for higher values of activation energy. The activation energy is defined as the minimum amount of energy required to initiate a specific chemical reaction. Chemical reactions requiring less activation energy are referred to as spontaneous reactions. The role of activation energy is also prominent in the areas of chemical engineering, food processing and the mechanics of oil-water synthesis [11][12][13].
The intent of this work is to investigate the effect of activation energy on the fully developed free convection flow, heat and mass transfer of a micropolar fluid between two porous vertical plates. The numerical results for the flow, heat and mass transfer characteristics are furnished by means of graphs and tables.
Mathematical Formulation
In this paper, we consider the laminar flow of a micropolar fluid between two vertical plates. The two plates are situated at y = 0 and y = L, where L is the breadth of the channel. The solute concentrations at the internal surfaces of the left and right plates are denoted by C1 and C0, and T1 and T0 are the uniform reference temperatures of the left and right plates, respectively. The plates are assumed to have negligible thickness and to exchange heat and mass with the surroundings through the convective heat and mass transfer coefficients h1, h2, h3 and h4. The micropolar fluid is assumed to be a gray, non-scattering medium, which is injected or sucked at the walls with velocity v0. The microrotation velocity is denoted by (0, 0, n), where n is the component of microrotation perpendicular to the x-y plane, obtained by taking the curl of the velocity vector at the microscopic level. The physical properties of the fluid are assumed to be constant, except for the variation in density that produces the buoyancy force. The governing equations for this problem are given by Eqs. (1)-(5), and the appropriate initial and boundary conditions for t ≤ 0 are given by Eq. (6), where β is the coefficient of thermal expansion, ρ is the density, cp is the specific heat capacity at constant pressure, k is the thermal conductivity of the fluid, ν is the kinematic viscosity, K is the gyroviscosity, γ represents the material constant, j is the microinertia, T0 is the reference temperature, and C and C0 are the concentrations. The molecular diffusivity is given by D. The suction or injection of the fluid is denoted by v0: v0 ≤ 0 represents injection at y = L and suction at y = 0, while the opposite holds for v0 > 0. The last term in the species conservation equation arises from the activation energy, in which T∞ is the ambient temperature, and kr, n and Ea are the reaction rate, fitted rate constant and activation energy, respectively.
The Cogley-Vincent-Gilles approximation is adopted to formulate the radiative portion of the heat transfer. The heat flux term in Eq. (4) is interpreted through Eq. (7), where Kλ3 is the absorption coefficient, λ3 is the wavelength and eλ3p is Planck's function; at time t ≤ 0, the walls are maintained at temperature T.
After using Eq. (7) in Eq. (4), the energy equation (4) is modified accordingly. The dimensionless variables introduced to non-dimensionalize the governing equations are given by Eq. (10), where λ1 is the micropolar parameter, λ2 and B are the micropolar constants, and Re is the Reynolds number.
Using Eq. (10), the modified forms of the governing equations (1)-(5) are given by Eqs. (11)-(14), and the dimensionless form of the boundary conditions (6) is given by Eqs. (15)-(17), where ξ = PrGrβgL/cp and η = PrGmβgL/cp are dimensionless groups; the non-dimensional parameters are the temperature ratio (T2 − T0)/(T1 − T0) and the concentration ratio ς = (C2 − C0)/(C1 − C0); kw denotes the temperature conductivity, Dw the molecular diffusivity, NR = 4IL²/k the thermal radiation parameter, Gr the Grashof number, and Bi1 and Bi2 the heat and mass transfer Biot numbers. Accordingly, the shear and couple stresses, the heat flow rate intensity and the mass flux on the walls are defined as follows.
Numerical Procedure
The non-linear equations (11)-(14), along with the conditions (15)-(17), are solved numerically by the Crank-Nicolson method. The domain (0 < τ < ∞) × (0 < Y < 1) is divided into a mesh of lines parallel to the τ and Y coordinates. After applying the Crank-Nicolson method to equations (11)-(14), the governing equations, together with the boundary conditions, are reduced to systems of algebraic equations, with the initial and boundary conditions discretized accordingly. The mesh sizes in the Y and time directions are denoted by ∆Y and ∆τ, respectively. To validate the present numerical results, we compared them with previously published results [14] in Table 1 and found them to be in good agreement.
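To make the discretization concrete, the sketch below applies the Crank-Nicolson scheme to a single generic diffusion equation u_τ = u_YY on 0 ≤ Y ≤ 1. The paper's actual coupled equations (11)-(14), their source terms and the Biot-number boundary conditions are not reproduced here; the grid sizes and the Dirichlet wall values are illustrative assumptions. Each time step averages the second-difference operator between the known and unknown time levels, yielding a tridiagonal system:

```python
import numpy as np
from scipy.linalg import solve_banded

# Illustrative Crank-Nicolson sketch for u_tau = u_YY on 0 <= Y <= 1.
# The paper's coupled equations (11)-(14) and Biot-number boundary
# conditions are not reproduced; fixed (Dirichlet) ends are assumed.
nY, dY, dtau = 51, 1.0 / 50, 1e-3
r = dtau / (2.0 * dY**2)           # Crank-Nicolson weight

# Banded (tridiagonal) storage of the implicit operator (I - r*D2)
ab = np.zeros((3, nY))
ab[0, 1:] = -r                     # super-diagonal
ab[1, :] = 1.0 + 2.0 * r           # main diagonal
ab[2, :-1] = -r                    # sub-diagonal
ab[1, 0] = ab[1, -1] = 1.0         # identity rows for boundary nodes
ab[0, 1] = ab[2, -2] = 0.0

u = np.zeros(nY)                   # initial condition u(Y, 0) = 0
u[0] = 1.0                         # hot left wall (assumed value)

for _ in range(1000):              # march in dimensionless time tau
    rhs = u.copy()
    rhs[1:-1] = r * u[:-2] + (1.0 - 2.0 * r) * u[1:-1] + r * u[2:]
    rhs[0], rhs[-1] = u[0], u[-1]  # keep boundary values fixed
    u = solve_banded((1, 1), ab, rhs)
```

In the actual scheme, one such tridiagonal solve is performed per time step for each coupled field (velocity, microrotation, temperature, concentration), with the non-linear terms evaluated at the known time level or iterated.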
Results and Discussion
The resultant governing equations (11)-(14), along with the boundary conditions, are solved numerically by the finite difference method. The results obtained are plotted and discussed for the flow and heat transfer characteristics against the pertinent parameters in this section. The default parameters (σ = 4, n = 0.5, B = 0.001, Pr = 0.72, λ1 = 5, λ2 = 1, 1 = 1, Bi1 = 10, Bi2 = 1, Re = 0.1, Sc = 1, NR = 1, E = 1) are fixed unless otherwise specified. The effect of the Reynolds number on the temperature and concentration profiles is shown in Fig. 3: the temperature and concentration profiles increase as resistance is added to the fluid, as expected. Also, in the case of a highly viscous fluid (Re << 1), the temperature and concentration changes between the two plates are greater than in the case of an inviscid fluid (Re > 1). The influence of the radiation parameter on the temperature and concentration profiles is displayed in Fig. 4: an increase in the radiation source decreases the temperature and increases the concentration for both cases of external heating at the plates. For higher values of the radiation parameter, the changes in temperature and concentration are more pronounced than in the absence of a radiation source. Fig. 5 is sketched to visualize the variation of concentration with the activation parameter for the cases of highly viscous and inviscid fluids. An enhancement in the activation energy results in growth of the concentration profile. Generally, in the case of a highly viscous fluid, the solute concentration is lower than in the case of an inviscid fluid; the same expected trend is observed in Fig. 5.
In order to visualize the flow characteristics with the time and space variables, we have plotted graphs against some pertinent parameters for the case of a highly viscous fluid in Fig. 6. A more drastic change is noted in the highly viscous fluid than in the inviscid fluid. Contour plots for temperature and concentration are shown in Fig. 7 and Fig. 8. From Fig. 7, it is observed that with the inclusion of a radiation source, the amount of heat exchanged from the hot left wall to the micropolar fluid is greater than in the absence of the radiation parameter. From Fig. 8, it is observed that there is a gradual improvement in the transient state, which is greater in the absence of activation energy, whereas in the presence of activation energy there is a marginal depression in the steady state of the fluid.
Conclusion
The modelling and computations have been implemented to predict the combined effect of activation energy and radiation on the flow and heat transfer trait in the low and high Reynolds number regime with the assumption of external heating on both plates of the channel.
Prediction of Oscillations in Glycolysis in Ethanol-Consuming Erythrocyte-Bioreactors
A mathematical model of energy metabolism in erythrocyte-bioreactors loaded with alcohol dehydrogenase and acetaldehyde dehydrogenase was constructed and analyzed. Such erythrocytes can convert ethanol to acetate using intracellular NAD and can therefore be used to treat alcohol intoxication. Analysis of the model revealed that the rate of ethanol consumption by the erythrocyte-bioreactors increases proportionally to the activity of incorporated ethanol-consuming enzymes until their activity reaches a specific threshold level. When the ethanol-consuming enzyme activity exceeds this threshold, the steady state in the model becomes unstable and the model switches to an oscillation mode caused by the competition between glyceraldehyde phosphate dehydrogenase and ethanol-consuming enzymes for NAD. The amplitude and period of metabolite oscillations first increase with the increase in the activity of the encapsulated enzymes. A further increase in these activities leads to a loss of the glycolysis steady state, and a permanent accumulation of glycolytic intermediates. The oscillation mode and the loss of the steady state can lead to the osmotic destruction of erythrocyte-bioreactors due to an accumulation of intracellular metabolites. Our results demonstrate that the interaction of enzymes encapsulated in erythrocyte-bioreactors with erythrocyte metabolism should be taken into account in order to achieve the optimal efficacy of these bioreactors.
Introduction
Erythrocytes with encapsulated enzymes that are not present naturally in these cells are known as erythrocyte-bioreactors (EBRs) and may be used for many biological and medical tasks [1,2]. Such EBRs can be used for the consumption or production of different substances during circulation in the blood. The efficiency of EBRs may be affected by different factors, such as the permeability of the erythrocyte cell membrane to substrates and/or products of encapsulated enzymes, the stability of encapsulated enzymes inside erythrocytes, etc. In addition, the interaction of encapsulated enzymes with erythrocyte metabolic systems may affect the function of both encapsulated enzymes and metabolic systems. Many different applications of EBRs have been described in the literature, including the removal of asparagine from blood for antitumor therapy [3,4], removal of excess adenosine to treat adenosine deaminase deficiency [5], transformation of thymidine to thymine to treat thymidine phosphorylase deficiency [6,7], removal of ammonium [8][9][10][11] and alcohol (ethanol) [12][13][14][15][16][17] from blood in cases of intoxication, etc.
The usefulness of mathematical models for the analysis and simulation of experimental data obtained with EBRs, as well as for the analysis of the interaction between encapsulated enzymes and erythrocyte metabolic systems, was demonstrated in several of our publications [10,17,18]. For instance, it has been shown previously that when encapsulated in erythrocytes, ammonium-neutralizing enzymes may significantly affect energy metabolism in cells [18]. In that work, for the first time, the effect of encapsulated enzymes on the metabolism of EBRs was studied using mathematical modeling. It was shown that the oxidation of NADPH in the glutamate dehydrogenase reaction causes an elevation of NAD levels that, in turn, activates metabolic flux through the pentose phosphate pathway. A significant activation of the pentose phosphate pathway in human erythrocytes can cause a decrease in [ADP] and, as a result, a decrease in the pyruvate kinase reaction rate, the disappearance of the steady state in the system, the permanent accumulation of glycolysis intermediates, and, finally, the osmotic destruction of EBRs [18,19].
In this study, using mathematical modeling, we investigated the effects of the interaction of glycolysis with enzymes encapsulated in EBRs prepared for the removal of alcohol from the blood.
Excessive alcohol consumption is an important problem in modern society. Therefore, the fight against alcohol intoxication is not only an important social task, but also an urgent task for modern medicine [20]. Several attempts to produce EBRs that remove ethanol from the bloodstream have been described in the literature [12][13][14][15][16][17]. The most effective and promising version of such EBRs involves those prepared with the simultaneous encapsulation of alcohol dehydrogenase (ADH) and acetaldehyde dehydrogenase (ALDH) [15][16][17]. In such EBRs, ethanol is first oxidized to acetaldehyde in the ADH reaction: ETH + NAD ⇌ ACALD + NADH, where ETH denotes ethanol and ACALD denotes acetaldehyde. Next, the acetaldehyde produced in the ADH reaction is further oxidized to acetate in the ALDH reaction, which is irreversible under intracellular conditions [21]: ACALD + NAD → ACT + NADH.
Since glycolysis and the EBR-encapsulated ADH and ALDH enzymes use NAD and NADH as common substrates, they can affect each other via these metabolites. The general scheme demonstrating the interaction of glycolysis with the ADH and ALDH reactions via NAD and NADH (NAD/NADH loop) is shown in Figure 1.
The aim of the present study was to determine the possible effects of the interaction between glycolysis and the erythrocyte-encapsulated enzymes ADH and ALDH on glycolysis itself, and on the efficiency of ethanol consumption by such erythrocytes (EBRs). In this work, we constructed and analyzed a mathematical model of EBRs, which describes the metabolic system shown in Figure 1. Analysis of the model revealed that the interaction of glycolysis with the EBR-encapsulated ADH and ALDH enzymes via the NAD/NADH loop can cause metabolic oscillations and the disappearance of the steady state in glycolysis. This, in turn, can decrease the efficiency of alcohol consumption and cause the death of EBRs.
Kinetics of Ethanol Consumption
In the absence of ethanol, or with zero activity of ADH and ALDH, the model has a single steady state, with variable values close to those obtained in other models and in intact erythrocytes [10,22,23] (Table 1).
The simulation of experimental conditions in the presence of ethanol shows that EBRs can consume ethanol, either added to the blood or to the erythrocyte suspension, at a rate proportional to the activity of ADH and ALDH in the EBRs and to the ethanol concentration (Figure 2A,B).
The NAD concentration decreases in proportion to ADH and ALDH activity and ethanol concentration (Figure 2C). If EBRs contain only ADH, at an initial ethanol concentration of 10 mM, equilibrium in the ADH reaction is achieved after the consumption of 40 µM of ethanol and the accumulation of 40 µM of acetaldehyde, and no further ethanol consumption can be achieved (Figure 2D). If we set the acetaldehyde concentration in the model equal to zero, the initial rate of ethanol consumption is slightly higher compared with the model in which the acetaldehyde concentration is calculated as an intermediate between the ADH and ALDH reactions (Figure 2D). However, the overall kinetics of ethanol consumption are almost the same in these two versions of the model (Figure 2D). Thus, our results reveal that EBRs loaded with ADH alone cannot provide efficient ethanol removal. For the efficient consumption of ethanol, EBRs must contain ALDH in addition to ADH. The model predicts very low acetaldehyde concentrations, which even decrease during ethanol consumption in EBRs containing both ADH and ALDH (Figure 2E). These low acetaldehyde levels decrease the reverse ADH reaction rate and thus provide efficient ethanol consumption in the model. The model successfully describes the results obtained earlier in in vitro experiments with EBRs containing ADH and ALDH [15] (Figure 2F). Figure 3 shows the dependence of the steady-state ethanol consumption rate in EBRs on the activity of encapsulated ADH and ALDH (Figure 3A) and on the concentration of ethanol (Figure 3B). The graphs were created under the assumption that the ethanol concentration is constant.

Figure 1. Interaction of glycolysis with encapsulated enzymes in erythrocyte-bioreactors consuming ethanol. The encapsulated enzymes are shown inside the blue frame. The following abbreviations are used: GLC, glucose; G6P, glucose-6-phosphate; F6P, fructose-6-phosphate; FDP, fructose-1,6-diphosphate; DAP, dihydroxyacetone phosphate; GAP, glyceraldehyde phosphate; 1,3-DPG, 1,3-diphosphoglycerate; 2,3-DPG, 2,3-diphosphoglycerate; 3-PG, 3-phosphoglycerate; 2-PG, 2-phosphoglycerate; PEP, phosphoenolpyruvate; PYR, pyruvate; LAC, lactate; PO4, orthophosphate; ACALD, acetaldehyde; ACT, acetate; HK, hexokinase; GPI, glucose-6-phosphate isomerase; PFK, phosphofructokinase; ALD, aldolase; TPI, triosephosphate isomerase; GAPDH, glyceraldehyde phosphate dehydrogenase; PGK, phosphoglycerate kinase; PGM, phosphoglycerate mutase; ENO, enolase; PK, pyruvate kinase; LDH, lactate dehydrogenase; ADH, alcohol dehydrogenase; ALDH, acetaldehyde dehydrogenase. ATP-ases denotes the summary rate of ATP consumption by cell ATP-ases.
Effects of ADH and ALDH Activity
Figure 3. Dependence of the steady-state ethanol consumption rate in the model on the activity of ADH and ALDH and on the ethanol concentration. (A) The ethanol consumption rate was calculated at an ethanol concentration of 10 mM, assuming that the ADH activity was equal to the ALDH activity. (B) The ethanol consumption rate was calculated assuming that the ethanol concentration was constant and the activities of ADH and ALDH were equal to 40 mM/h. The orange, black, and cyan lines indicate stable steady states, unstable steady states with oscillation mode, and unstable steady states with metabolite accumulation, respectively.

Analysis of the model shows that the steady-state rate of ethanol consumption increases with the increase in the activity of ADH and ALDH inside the EBRs, or with the increase in the ethanol concentration (Figure 3). At an ethanol concentration of 10 mM, the model has a single non-zero stable steady state in the range of ADH and ALDH activity from 0 to 101.5 mM/h (Figure 3A). The glycolysis rate remains constant, as well as most of the model variables, except for the concentrations of FDP, DAP, G3P, PYR, LAC, NAD, NADH, and acetaldehyde (Figure 4). The steady-state rates of the HK and PFK reactions are 1.12 mM/h, the steady-state rates of the GAPDH and PK reactions are 2.24 mM/h, and the steady-state rate of the PGK reaction is 1.61 mM/h. The rate of the PGK reaction is lower compared with the rates of the GAPDH and PK reactions, since part of the glycolytic flux bypasses the PGK via the 2,3-DPG shunt (Figure 1). After the increase in the ADH and ALDH activity above 101.5 mM/h, the non-zero steady state in the model becomes unstable and the system switches to an oscillation mode (Figures 4 and 5). In Figure 5A-C, one can see synchronous oscillations in groups of metabolites interconnected via rapid, near-equilibrium enzymatic reactions. The amplitude and period of oscillations increase with the increase in the ADH and ALDH activity from 101.5 to 108.5 mM/h (Figure 6). Interestingly, the oscillations in the ethanol consumption rate (V_ADH) are relatively small compared with the steady-state rate of ethanol consumption (Figure 5F). Thus, the steady-state rate can be used to characterize the rate of ethanol consumption in the oscillation mode. The increase in the ADH and ALDH activity above 108.5 mM/h causes a switch from the oscillation mode to an infinite accumulation of glycolysis intermediates, such as FDP, DAP, and GAP (Figure 7A). Finally, the steady state in the model disappears when the activity of the encapsulated enzymes is equal to 157 mM/h, which is also associated with the accumulation of glycolysis intermediates (Figure 7A).
Effects of Ethanol Concentration
Similar model behavior was observed with increasing ethanol concentration ( Figure 3B). For ADH and ALDH activity of 40 mM/h, the model has one stable steady state in the ethanol concentration range from 0 to 31 mM. The increase in the ethanol concentration to above 31 mM causes the instability of the steady state, which switches the model into the oscillation mode. With a further increase in the ethanol concentration (above 34 mM), the oscillation mode is replaced by the infinite accumulation of glycolysis intermediates (FDP, DAP, and GAP). Eventually, the model loses its stationary state at an ethanol concentration of 59 mM.
Mechanism of Oscillations
The interaction of ethanol-consuming enzymes with glycolysis in the EBRs is provided via NAD, a common substrate for GAPDH, ADH, and ALDH. The increase in the ethanol consumption rate due to the increase in the ADH and ALDH activity, or due to the increase in ethanol concentration, causes a decrease in the NAD concentration ( Figure 4E), and thus a temporal decrease in the GAPDH reaction rate (and the ATP production rate). This causes the accumulation of GAP, the second substrate of GAPDH ( Figure 4B).
The increased GAP concentration allows the rate of the GAPDH reaction (and the rate of ATP production) to be maintained at a normal physiological steady-state level, despite the decrease in [NAD]. Thus, an increase in the ethanol consumption rate in EBRs is associated with a decrease in [NAD] and an increase in [GAP] (Figures 3 and 4).
The concentrations of GAP, DAP, and FDP are in equilibrium. These concentrations increase significantly (up to 0.26, 0.55, and 2.3 mM, respectively) with the increase in the ethanol consumption rate to 1.49 mM/h ([NAD] decreases to 0.031 mM), and small fluctuations in the concentration of NAD cause significant changes in the concentrations of GAP, DAP, and FDP ( Figure 4B,E). As mentioned above, a decrease in [NAD] causes an accumulation of GAP that is associated with a decrease in the ATP production rate and, thus, with a decrease in ATP levels. An increase in [NAD] should cause an increase in GAP consumption in the GAPDH reaction and, thus, an increase in the ATP production rate and ATP levels. In this way, changes in [NAD] should cause changes in ATP levels as well as in the levels of ADP and AMP. Changes in ATP, ADP, and AMP concentrations should, in turn, affect the glycolytic flux at the levels of HK and PFK. Because of the high summary concentration of GAP, DAP, and FDP at the border of the oscillation mode, their concentrations change relatively slowly. Indeed, the normal glycolysis rate at the level of the PK reaction is about 2 mM/h. Therefore, it would take about one hour to increase the summary concentration of GAP, DAP, and FDP by 2 mM, even if the total glycolytic flux goes into these metabolites. Under these conditions, glycolysis cannot respond fast enough to changes in the system, which causes a loss of the steady-state stability and changes the system to an oscillation mode.
An analysis of the phase shifts between oscillations in the concentrations of different metabolites (Figure 8) allows us to suggest the following explanation for the oscillations. At high ATP concentrations, corresponding to low AMP concentrations in cells (Figure 8A), the PFK reaction is suppressed, since ATP is a strong inhibitor while AMP is a strong activator of this enzyme [24]. Under these conditions, the doubled rate of the PFK reaction is lower than the rate of the GAPDH reaction ( Figure 8B), which causes a decrease in [GAP], a decrease in the GAPDH reaction rate, and a decrease in ATP production in the following glycolytic reactions. As a result, the ATP level decreases while the AMP level increases, which activates the PFK reaction and, finally, the doubled PFK reaction rate begins to exceed the GAPDH reaction rate. At this moment, the GAP level begins to increase, followed by an increase in the GAPDH reaction rate. This causes an increase in the ATP production rate, an increase in ATP concentration, a decrease in AMP concentration, and a decrease in the inhibition of the PFK reaction. Then, the next cycle begins. Thus, oscillations appear due to the interaction between the upper and lower parts of glycolysis under conditions where the decreased NAD level limits the GAPDH reaction rate and the ATP production rate in the lower part of glycolysis. Ethanol-consuming ADH and ALDH enzymes decrease the NAD levels. This decreases the GAPDH reaction rate, providing conditions for the oscillations in glycolysis.
Interestingly, within the oscillation mode, the model predicts the existence of a sharp transition between relatively fast low-amplitude oscillations and relatively slow high-amplitude oscillations (Figure 6).
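The full 16-ODE erythrocyte model is not reproduced in this section, but the ATP/AMP feedback on PFK described above is the same mechanism that drives classic minimal glycolytic oscillators. The sketch below integrates Sel'kov's two-variable PFK caricature (a standard textbook model, not the authors' model: x plays the role of the product ADP, y the substrate F6P, and the parameter values are the usual illustrative ones) to show how this feedback produces a limit cycle whose amplitude and period can be read off numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sel'kov's two-variable PFK caricature (in Strogatz's form), NOT the
# authors' 16-ODE erythrocyte model: x ~ ADP (product, activator),
# y ~ F6P (substrate). a = 0.08, b = 0.6 lies in the limit-cycle regime.
a, b = 0.08, 0.6

def selkov(t, z):
    x, y = z
    dx = -x + a * y + x**2 * y     # autocatalytic product formation
    dy = b - a * y - x**2 * y      # constant substrate inflow
    return [dx, dy]

sol = solve_ivp(selkov, (0.0, 200.0), [0.5, 0.5], max_step=0.1)
mask = sol.t > 100                 # discard the initial transient
x, t = sol.y[0][mask], sol.t[mask]

# Rough amplitude and period of the sustained oscillation
peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
print("amplitude ~", x.max() - x.min())
print("period ~", np.mean(np.diff(t[peaks])))
```

With a = 0.08, the fixed point of this toy model loses stability only in an intermediate window of the substrate inflow rate b, qualitatively mirroring the window of ADH and ALDH activity (101.5-108.5 mM/h) in which the paper's model oscillates.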
Accumulation of Glycolysis Intermediates
Above the ethanol consumption rate of 1.50 mM/h (with [NAD] decreased to 0.030 mM), the oscillation mode is replaced by an infinite accumulation of glycolysis intermediates. Finally, when the ethanol consumption rate increases to 1.55 mM/h (with [NAD] decreased to 0.025 mM), the increase in the GAP concentration cannot compensate for the decrease in the GAPDH reaction rate caused by the decrease in the NAD level. The steady state in the model disappears.
Interestingly, the rate of glycolytic intermediate accumulation decreases with the increasing activity of ADH and ALDH ( Figure 7A). This happens because the accumulation of the glycolytic intermediates is associated with a significant decrease in ATP concentration ( Figure 7B). Under these conditions, the PFK reaction rate decreases because [ATP] drops so significantly that PFK is limited by ATP as a substrate.
Conclusions
In conclusion, we note the following. Along with ADH, ALDH is an essential component of EBRs, and is necessary to provide efficient removal of ethanol from the blood.
Our data show that the maximal rate of ethanol consumption by EBRs loaded with ADH and ALDH is about 1.5 mM/h (Figure 3). Attempts to increase this rate by increasing the activity of encapsulated ADH and ALDH in EBRs make no sense. Indeed, at very high ADH and ALDH activities, EBR glycolysis can move into the oscillation mode or into the endless accumulation of glycolytic intermediates. In the oscillation mode, the concentrations of metabolites can increase by more than 30 mM above their normal physiological levels (Figures 5 and 6), which can cause the osmotic damage of EBRs [25]. Of course, the endless accumulation of glycolysis intermediates (FDP, DAP, and GAP) (Figure 7) will also cause such osmotic damage. We note that the oscillation mode, as well as the permanent accumulation of glycolysis intermediates, were obtained in the model under the assumption of a constant ethanol concentration, which is not the case in most real situations. However, these modes may be realistic in other EBR applications, when the function of the encapsulated enzymes is associated with the consumption of intracellular NAD. A significant number of papers have been devoted to the theoretical and experimental study of oscillations in glycolysis [26][27][28][29][30]. However, to our knowledge, the possibility of oscillations in human erythrocyte glycolysis caused by the modulation of the GAPDH reaction rate by [NAD] has not been reported so far.
Mathematical Model
The mathematical model comprises a system of 16 ordinary differential equations and 3 algebraic equations describing the kinetics of concentrations of glycolysis intermediates, ATP, ADP, AMP, NAD, NADH, ethanol, and acetaldehyde (Table 2). The equations for the reaction rates in glycolysis and for the rates of ATP consumption, as well as the parameter values, were taken from [22] with a few modifications (Table 3). The model was supplemented with equations describing the pyruvate and lactate transport across the erythrocyte cell membrane (Table 2), in accordance with the information presented in [31]. Equations for the ADH and ALDH reaction rates and the parameter values were taken from [17,32,33]. All model equations describing the enzymatic reaction rates and the pyruvate and lactate transport across the cell membrane are presented in Table 3.
The following assumptions were made regarding the model:
1. The rate of the hexokinase reaction does not depend on glucose concentration, because the normal physiological glucose concentration in the blood is significantly larger than the value of the hexokinase Michaelis constant for glucose [35] (see the worked check after this list).
2. The concentration of orthophosphate is constant and equal to 1.0 mM [22].
3. The adenylate kinase reaction is in equilibrium because of the high activity of adenylate kinase in erythrocytes [22,36].
7. The permeability of the erythrocyte cell membrane for ethanol and acetate is high, and intracellular concentrations of these metabolites are equal to the extracellular concentrations [38,39].
8. The extracellular concentration of acetate is zero.
9. Acetaldehyde produced in the EBRs from ethanol does not leave the cells.
10. The possible effect of the transmembrane potential on the transport of metabolites was not taken into account.
11. The erythrocyte cell volume is constant.
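A worked check of assumption 1, using a typical Michaelis constant of human hexokinase for glucose (Km ≈ 0.05 mM) and a normal blood glucose concentration of about 5 mM (both illustrative literature-scale values, not taken from the paper):

$$\frac{v}{V_{\max}} = \frac{[\mathrm{GLC}]}{K_m + [\mathrm{GLC}]} = \frac{5}{0.05 + 5} \approx 0.99,$$

so the hexokinase reaction runs at essentially its maximal rate, and moderate fluctuations in glucose concentration leave it practically unchanged.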
All metabolite concentrations, enzyme activities, and enzymatic reaction rates were normalized per liter of cells (erythrocytes) and expressed in mM or mM/h.
All numeric solutions of the model system of equations were obtained in MATLAB (version 9.9.0 (R2020b)) using a variable-step, variable-order solver based on numerical differentiation formulas of orders 1 to 5 (the ode15s function) [40]. The MATLAB code of the model and the files for in silico experiments are presented in the Supplementary Materials. Steady-state values of concentrations and reaction rates were obtained as solutions of algebraic equation systems with a trust-region algorithm (the fsolve function).
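For readers without MATLAB, the same class of solver is available in SciPy: the BDF method of solve_ivp is the standard counterpart of ode15s for stiff systems. The sketch below applies it to a toy two-state NAD/NADH balance (NADH production by GAPDH plus the ethanol-consuming enzymes, NAD regeneration by LDH) rather than the full 16-equation model; all pool sizes and rate constants are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy NAD/NADH balance only -- NOT the paper's 16-ODE model. GAPDH and
# the ADH+ALDH (ethanol) path produce NADH from NAD; LDH regenerates
# NAD. All Vmax/Km values below are illustrative placeholders (mM, mM/h).
NTOT = 0.065                        # total NAD + NADH pool
V_GAPDH, V_ETH, V_LDH = 2.2, 1.0, 4.0
K_NAD, K_NADH = 0.01, 0.005

def rhs(t, y):
    nadh = y[0]
    nad = NTOT - nadh               # conserved pyridine pool
    prod = (V_GAPDH + V_ETH) * nad / (K_NAD + nad)
    cons = V_LDH * nadh / (K_NADH + nadh)
    return [prod - cons]

# method="BDF" is SciPy's analog of MATLAB's ode15s for stiff systems.
sol = solve_ivp(rhs, (0.0, 5.0), [0.0005], method="BDF", rtol=1e-8)
print("steady-state NADH ~ %.4f mM" % sol.y[0, -1])
```

The conserved pyridine pool (NAD + NADH = constant) reduces this toy system to a single ODE; in the full model, the same conservation relation is what couples the ethanol pathway to glycolysis.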
The kinetics of ethanol consumption by EBRs were simulated using the metabolite concentrations shown in Table 1 as initial values. The initial ACALD concentration was set equal to zero. The volume fractions occupied by erythrocytes and the external medium were taken into account in such simulations.
Smart monitoring system for risk management in the underground space
This paper presents a smart monitoring system for risk management in the underground space. This space has a major role in cities, but it suffers from high vulnerability to hazards such as fire, flood and accidents. The paper provides an analysis of the vulnerability of the underground space and shows how a smart monitoring system of human presence, indoor conditions, underground space access and equipment performances with advanced data analysis could help in improving the risk management in this space.

Risk in the underground space
Role of the underground space in urban area
The high level of urbanization requires an increasing use of the underground space, which has become crucial for urban development. This space is used in urban transportation for underground roads, subways and logistic facilities. It hosts urban utilities such as water, energy and communications. It is also used for the construction of buildings and facilities for residential, industrial and service purposes. In some cities, this space covers large areas and constitutes real districts.
Vulnerability of the underground space
The underground space suffers from vulnerability to urban hazards such as fire, flood and accidents. Safety constitutes the great challenge of this space. Indeed, an incident in the underground space could be catastrophic and lead to huge economic and social damages. Safety in the underground space is more critical than at the ground surface because of access restrictions. Accidents could involve structural instability, water infiltration, fire, electrical outage, air contamination, etc. An incident such as a flood or fire could very quickly lead to serious damage to underground system equipment. The International Tunneling Association published guidelines for tunneling risk management in 2004 (Eskesen et al., 2004). According to these guidelines, risk management should be included in the planning and construction of any major underground project. In China, the first guideline for risk management in tunnels and underground works was published in 2007, and it was then updated as a national code for urban rail transit in 2012. Hu and Huang (2014) indicated that risk in large underground projects in China was generally managed through engineering decisions, which resulted in shortcomings concerning the use of an integrated risk management approach.
The key point of the implementation of risk management in underground space concerns the identification of sources of potential risk, elaboration of corresponding preventive measures and preparing emergency rescue plan. Dynamic risk management should be established on an information-based construction and site monitoring (Hu and Huang, 2014).
Fire hazard
Fire is one of the most serious disasters in the underground space. Almost one-third of accidents in the urban underground space are associated with fire hazard. In 1999, China suffered 4059 fire accidents in the urban underground space, which caused around 340 deaths, while over the same period China suffered 1122 fires with about 66 deaths in high-rise buildings (Yao et al., 2012). Obviously, fire accidents are more serious in the urban underground space than in high-rise buildings. Indeed, the underground space is a narrow space with limited access, poor natural ventilation and restricted natural light. Fire in the urban underground space has specific features. It consumes oxygen, which is also essential for human beings. During a fire in the urban underground space, air refreshing is ensured by the ventilation system, which is itself vulnerable to fire. In addition, fire generates a high quantity of toxic smoke. Trapped persons suffer not only from oxygen shortage and toxic smoke, but also from reduced visibility, which causes difficulties in evacuation and emergency operations. Firefighting in the underground space is more complicated than at the surface. Research conducted after the fire at King's Cross of the London Underground in 1987 (Moodie, 1992) revealed the "trench effect": fire has a high capacity for propagation in the narrow underground space. Moreover, fire cannot be detected easily in the underground space. Firefighters must access the underground via spaces that are already occupied by users, smoke, hot air flux and evacuations. In order to prevent fire propagation in the underground space, this space must be strictly divided into fire prevention zones. Each zone should have an independent ventilation system. In case of a fire incident, communications between fire zones could be closed rapidly. According to the Chinese national norms, a fire zone in the urban underground space is limited to 500 m². This limit could be extended to 2000 m² by the installation of specific fire prevention equipment. According to regulations, fire protection facilities such as an emergency lighting system, emergency evacuation indication signs, an automatic fire and smoke alarm system and an emergency broadcast system should be configured to ensure normal use during a fire incident. Important facilities are encouraged to use innovative technology for firefighting. For example, after two serious fire accidents, the Channel Tunnel was equipped with an automatic temperature detection system for early fire detection.
Flood hazard
Flood in the underground space could very quickly lead to dramatic catastrophe, because of the limited space, the gravity-driven water flow and access restriction. A rapid increase in the water level could lead to electrical outage, which could stop the water pumps. Two strategies are combined for mitigating flood risk in the underground space: structural and non-structural. The former includes engineered solutions to reduce the impact of flooding, such as building barriers, dams, large storm water facilities, flood gates and raised entrances at underground stations. The second concerns the reduction of both flood risks and impacts through policies, laws, public awareness and education. It also includes preventing the loss of permeable surfaces, and early warning systems. In the underground space, managers can install backup power for pumps to reduce the fault risk due to electrical outage. However, flood risk continues to be a serious hazard for the underground space in some cities. The use of smart technology will help in reducing the impact of this risk.
Smart Solution
Smart technology could be used to establish an integrated solution for the risk management of the underground space. Figure 1 shows the architecture of this solution. It includes 4 layers:
- Digital model: This layer includes data concerning the underground space architecture and equipment as well as their attributes. BIM constitutes a powerful tool for the construction of this layer; it offers a friendly graphic environment and data processing capacity for collaborative design and management of complex projects. The digital model includes digital identification and geo-localization of the components of the tunnel and its equipment. Each component is equipped with RFID tags (Radio-Frequency Identification), which provide dynamic information about the component, including manuals, guides, photos and videos.
- Monitoring system: This layer includes the monitoring system used for the security of the underground space; sensors and cameras are used to track indoor conditions, tunnel access status, ventilation system and pump functioning, smoke extractors, emergency exit indications and water level.
- Data analysis: This layer includes engineering, safety and advanced information technology tools such as artificial intelligence. It allows faults to be detected early and actions to be taken to stop their extension, to protect people and material, to operate recovery actions and to carry out post-fault analysis to learn from the incident and improve the underground resiliency.
Smart monitoring of the underground space
The underground space is organized in security areas, which could be public areas or restricted-access areas (technical areas). Each security area is equipped with a monitoring system to track (i) human presence, (ii) indoor conditions, (iii) access control and (iv) equipment performances (Figure 2).
Human presence
The underground space generally includes public and restricted areas. For security management, it is of major concern to know the human presence in these areas at each instant. For public areas, cameras are used to track human presence. Image processing tools provide a good estimate of the density of human presence, which is necessary for emergency actions. For restricted-access areas, specific digital identification systems could be used for human presence control. Each authorized employee is identified through a digital ID, which provides information about the employee's identity, contact, hierarchy and authorized areas. Presence in the tunnel is tracked using an internal geo-localization system (Khoury and Kamat, 2009; Lin et al., 2013). At each instant, the system provides the identity and location of employees in the underground space.
Indoor conditions
Sensors are installed in each area of the underground space to track the indoor security parameters such as temperature, humidity, air quality, smoke concentration, gas concentration and water infiltration.
Access control
The underground space accesses are equipped with electronic devices that indicate the status of the access (open/closed) and enable access control.

Equipment performances

Devices concerned with the underground space security (access control, ventilation, smoke extraction, firefighting, water pumping, emergency lighting…) are equipped with electronic systems that regularly check the device performances and indicate to staff any anomaly in the security equipment and its consequences.
3D graphic system interface
A 3D graphic interface is used for real-time visualization of the security conditions in the underground space. For each security area, it indicates, in a 3D graphic environment, the current value and the security operating interval (min, max) of each security indicator (human presence, indoor conditions, access control, equipment performances); in case of violation of the security conditions, the graphic system indicates the location, severity, impact and actions to be taken.
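A minimal sketch of the indicator-checking logic described above, written in Python; the indicator names, operating intervals, severity rule and alert fields are illustrative assumptions, not specified in the paper:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str        # e.g. "temperature", "smoke concentration"
    area: str        # security area where the sensor is located
    low: float       # accepted operating interval (min, max)
    high: float

def check(ind: Indicator, value: float) -> dict | None:
    """Return an alert record if the reading violates the interval."""
    if ind.low <= value <= ind.high:
        return None
    # Illustrative severity rule: far outside the interval -> high
    severity = "high" if (value > 2 * ind.high or value < ind.low / 2) else "medium"
    return {"location": ind.area, "indicator": ind.name,
            "value": value, "severity": severity,
            "action": "notify staff and trigger emergency procedure"}

# Hypothetical reading from a sensor in a technical area
temp = Indicator("temperature", "zone T-3", low=5.0, high=40.0)
alert = check(temp, 78.0)
if alert:
    print(alert)     # would be pushed to the 3D graphic interface
```

Each sensor reading would pass through such a check, and any returned alert record carries exactly the fields the interface displays: location, severity, impact and the action to be taken.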
Conclusions
The underground space is largely used in urban development, in particular for transportation, water and energy supply and commercial activity. This use is subjected to serious urban hazards such as fire and flood, which could lead to dramatic catastrophes. In order to improve the risk management of this space, this paper presented an integrated solution that uses smart technology to monitor human presence, indoor conditions, underground space access and equipment performances. Thanks to advanced analysis tools and a 3D graphic interface, this system provides for each security indicator the current value as well as the accepted operating interval; in case of violation of the security conditions, the system indicates the location, severity, impact and actions to be taken.
Designating Regional Elements System in a Critical Infrastructure System in the Context of the Czech Republic
Critical infrastructure is a complex system whose disruption or failure results in significant impacts on state interests, i.e., territorial security, the economy, and the basic needs of the population. The current European Critical Infrastructure Protection Model does not allow the direct identification of critical elements at the regional level. Based on this, the paper proposes a unified system for designating critical infrastructure based on a bottom-up approach. It is a progressive approach, utilizing contemporary trends in the application of science-based knowledge to critical infrastructure. A holistic view of this issue allows us to take into account the needs and preferences of the population, the preferences of the stakeholders and the local conditions of the region under consideration. The novelty of this approach is seen, in particular, in the identification of regional critical infrastructure elements through an integral assessment of the impact of these elements' failure, not only on the dependent subsectors, but also on the population (population equivalent) in the assessed region. The final part of the paper presents a case study demonstrating the practical application of the proposed system to the road infrastructure in the Pardubice Region of the Czech Republic.
Introduction
The provision of a continuous supply of commodities and services through an infrastructure system is the basic requirement for sustaining and developing state economies (not only in the European Union) and for maintaining the required level of security and welfare in society [1]. The functionality of these infrastructures and the provision of a continual supply of products and services are constantly exposed to the impacts of natural and anthropogenic threats [2]. The extent of the unacceptable impact depends mainly on the severity of the failure, its cause (the nature of the threat) and the importance (criticality) of the affected element or sector. This impact is often expressed by economic loss, the number of people affected, the size of the affected area and other similar factors [3]. By exceeding the socially defined limit values of unacceptable impact (sectoral and cross-sectional criteria), these infrastructures are referred to as critical and together they form the so-called critical infrastructure system [4].
That is why considerable attention has increasingly been dedicated to selected vital and critical infrastructures [5], including methods of risk analysis [6], the evaluation of critical elements [7] and their protection [8], and the impact assessment of their failure [9]. Approaches to identifying critical infrastructure elements at the regional level were therefore analysed in selected countries. It was found that Switzerland has an approach based on detailed process analyses using a set of different criteria [16]. On the other hand, the Netherlands places the primary emphasis on the so-called vital products and services that make up the core of the infrastructure system 2 [29]. Secondary emphasis is then placed on products and services that support this important part of the system. Element identification in the United Kingdom is based on a traditional top-down approach. This approach relies on the National Risk Register [23], the outcomes of which are the basis for risk assessment and the identification of critical infrastructure elements at the regional level. The latest approach, used in New Zealand, allows the projection of vital infrastructure priorities into territorial systems [27] and the direct identification of their key elements.
Considering the facts above, there is currently no uniform approach in the European Union countries for identifying critical infrastructure elements at the regional level. As a result, most European Union states lack a system for identifying and subsequently protecting regionally significant elements. The results of the analysis of selected countries indicate that the analysed approaches provide a suitable basis for developing a unified approach that would be applicable not only in EU countries. At the same time, it is important to point out that existing approaches to identifying critical infrastructure elements at the national level cannot be used at the regional level, as they do not allow for the variability of national criteria and the possibility of regional element analysis. Another reason is the need to accept sectoral criteria 3, which are inappropriate for the regional level as they have been set at the national level of distinction. The last major obstacle to using these approaches for identifying regional critical infrastructure elements is that they are oriented solely towards impact assessment within the critical infrastructure system and do not take into account the impacts on the population of the region under assessment.
Based on the above, the paper aims to develop a simple and effective tool for identifying regional critical infrastructure elements and to analyse suitable techniques and tools that will form the main methodological apparatus of this proposed tool. Implementing this approach in practice would not only contribute to improving the continuity of service delivery [30], but would also strengthen the preparedness of the authorities responsible for preparing the territory for crisis situations and improve the security of the area as a whole. The identification of critical infrastructure elements is an initial step in protecting critical infrastructure. The follow-up steps are the analysis of the resilience of elements of these infrastructures [13], risk analysis [31,32], and, as the last step, the implementation of adequate measures to increase safety [33,34]. The practical application of the proposed approach is demonstrated at the end of the paper in the form of a case study.
Methodology: Techniques and Tools of Infrastructure Analysis at the Regional Level
This part of the paper focuses on the selection of techniques and tools suitable for infrastructure analysis at the regional level. Attention is particularly paid to such techniques and tools that are publicly available, user-friendly and well-known (although not too specific) and suitable for analysing the critical infrastructure system in the context of regional analysis. Firstly, the issue of identifying regional critical infrastructure is set into an appropriate framework for critical infrastructure analysis. The classification for critical infrastructure analysis by Ghorbani and Bagheri [35] appears to be a suitable framework, as it uses the classification elaborated in Rinaldi et al. [36]. This classification enables us to carry out the analysis at the necessary breadth while taking into account the specific dimensions of the critical infrastructure. This part also presents the techniques and tools that appear to be the most applicable and suitable for the design of a system of regional critical infrastructure elements.
2 In this context, vital elements are perceived as synonymous with critical infrastructure elements.
3 Sectoral criteria mean technical or operational values used to determine the critical infrastructure element in the various sectors, i.e., energy, water, food and agriculture, health, transport, communication and information systems, the financial market and currency, emergency services and public administration.
The area of critical infrastructure should be viewed as a complex adaptive system, which can be aptly described as a 'socio-technical system' [37,38]. Various aspects of the legitimacy of such system categorization can be analysed and described from different perspectives. The classification of perspectives may prove beneficial and practical in terms of infrastructure analysis. It will certainly be useful to divide the issue of critical infrastructure into five dimensions [35]: system analysis, behavior analysis, knowledge discovery, visualization and knowledge sharing. Based on this classification, a lot of valuable information about the entire system can be obtained. Moreover, this framework corresponds with the defined characteristics of infrastructure interdependencies according to Rinaldi et al. [36]. See Figure 1 below for a representation of said critical infrastructure framework.
Figure 1. Selected framework for critical infrastructure analysis (taken and modified from [35]).
The first dimension concerns system analysis. The analysis is required to better understand how the infrastructure is organized and to identify its basic components. The applied risk analysis models help determine the types of threats, risks, and vulnerabilities that compromise the proper functioning of the infrastructure. These may include infrastructure profiling [39] based on the correct understanding of infrastructure organization; the IRAM model [40] designed to identify, model, assess and control significant risks threatening the infrastructure system; hybrid system modelling [41] derived from a mathematical methodology for complex system modelling and simulation, and other useful techniques and tools (for more details see [35], see other examples specified in [42,43]). This area also includes Risk Management, which will be addressed separately.
With its focus on behavior analysis, the second dimension can provide a good deal of information based on action sequence monitoring (simulations). The outcome may then include information on the behaviour of an infrastructure within itself as well as in relation to the whole system. Furthermore, it is possible to obtain useful information on the functioning of a specific element within the given infrastructure. Examples of applicable techniques and tools include Monte Carlo Simulation [44], the Scenario Building set of tools [45] designed to create alternative scenarios, the Cascade tool [46] presenting scenarios of functioning and interacting components, and the CARVER2 method [47]. Other effective tools include TRAGIS [48], designed exclusively for transportation network simulation, various multi-agent systems [49] facilitating the general understanding of exceedingly complex systems, and numerous other techniques and tools. The final representative of this dimension is the Input-Output Model of Wassily Leontief, which will also be discussed in a separate section.
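As an illustration of the simulation idea behind tools of this kind, the following is a minimal Monte Carlo sketch estimating the failure probability of a simple serial system. The element names and failure probabilities are illustrative assumptions only, not data from any of the cited tools.

```python
import random

# Hypothetical element failure probabilities for a simple serial system
# (all values are illustrative assumptions).
failure_p = {"substation": 0.02, "pump_station": 0.05, "control_room": 0.01}

def system_fails(rng: random.Random) -> bool:
    # The system is assumed to fail if any single element fails (series logic).
    return any(rng.random() < p for p in failure_p.values())

def estimate_failure_probability(n_trials: int = 100_000, seed: int = 42) -> float:
    rng = random.Random(seed)
    failures = sum(system_fails(rng) for _ in range(n_trials))
    return failures / n_trials

print(f"Estimated system failure probability: {estimate_failure_probability():.4f}")
```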
Knowledge discovery as part of the third dimension in the aforementioned framework stems from the fact that human mechanisms may not always be capable of detecting all dependencies and behavioural patterns [35]. Even though the results of long-term observation may be equally reliable, the use of simulation tools significantly reduces the time required for such observation. Knowledge discovery may be further supplemented based on the outcomes of different hypothesis testing or analytical studies. Of particular interest are the Hypothesis Testing model [50], based on specific assumptions about the behaviour of structures under certain conditions, and the Bayesian network model [51], derived from probability theory, where probabilistic relationships are formed between individual phenomena. Additional effective methods include Exploratory Data Analysis [52], focused on the systematic identification of hidden dependencies.
Visualisation constitutes a significant component of the framework and, as such, represents the fourth dimension. While the preceding dimensions offer a range of options, they may not always be able to reveal all properties, dependencies, and behavioural patterns. Visualisation techniques are, above all, monitoring tools, ensuring that all relevant information has been revealed and no omission has occurred. Visualisation thus facilitates the identification of patterns in order to detect errors in the preceding phases. One of the advantages of visualisation techniques and tools is that they are readily applicable for revealing the visual structure of an infrastructure and organizing it [53]. Other models worthy of note include graph theory [54] and the widely used geographical information systems [55,56].
The last dimension of the presented framework is information sharing. Information sharing across all dimensions should facilitate the detection and possibly the management of unpredictable catastrophic scenarios. Information sharing is a prerequisite for the entire risk management system [6] and is also one of the bases for resilience building [57]. At the same time, it may provide new opportunities for improving the function of the entire system.
System Analysis
As part of 'system analysis', risk management is covered in general terms here [6,58], although these processes can also be applied to system analysis (infrastructure) or as a part of the critical infrastructure protection process [59,60]. The aim of all of the adopted procedures is to provide the data required to address specific risks and to select suitable solution alternatives. It is also necessary to define the key parameters of the examined system and to set the scope of applicability and the criteria [61].
General risk management methods are used to identify and assess the degree of risk involved in a process or event, and normally include three basic stages [62] of risk assessment, i.e., identification, analysis, and evaluation.
Risks related to critical infrastructure elements or processes can arise from many different causes or scenarios [63]. In order to properly analyse, assess and consider individual system risks, they must be allocated to the relevant system components. Correct identification of the source of risk, impact validation, and selection of the most appropriate strategy are among the key processes.
Risks can be assessed to varying degrees, using one or more techniques (for additional risk assessment methods see [35,64]). Risk management is a suitable cornerstone for developing a critical infrastructure identification system. The key steps of the risk management process are the basis that can be elaborated in more detail with the use of other appropriate tools.
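As a simple illustration of how the three basic stages (identification, analysis, evaluation) can be operationalised in a risk register, consider the sketch below. The risk entries, the 0-1 scales and the acceptance limit are illustrative assumptions, not prescriptions from the cited frameworks.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    # A minimal risk-register entry following the three basic stages:
    # identification (source, element), analysis (likelihood, impact),
    # and evaluation (risk level compared against acceptance criteria).
    source: str
    element: str
    likelihood: float  # 0..1, assumed scale
    impact: float      # 0..1, assumed scale

    def level(self) -> float:
        return self.likelihood * self.impact

risks = [
    Risk("flood", "bridge_D35_112", likelihood=0.2, impact=0.8),
    Risk("accident", "junction_I37_08", likelihood=0.5, impact=0.3),
]
ACCEPTANCE_LIMIT = 0.12  # assumed criterion set during context establishment

for r in sorted(risks, key=Risk.level, reverse=True):
    print(f"{r.element}/{r.source}: level={r.level():.2f}, "
          f"acceptable={r.level() <= ACCEPTANCE_LIMIT}")
```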
Behaviour Analysis
The CARVER2 method [47] is considered to be a classic example of simulation models, with typical outputs including visualisation of potential criminal activity objectives. That is why this method essentially pertains to 'behaviour analysis'. For an explanation of input data see Table 1 below.
As this method is primarily employed in the area of physical security, there is a strong link to the potential disruption of system function by criminal activity. The CARVER2 method is also readily applicable to critical infrastructure, specifically in the pre-analysis phase, to explore all the elementary parts of the system. Based on the general equilibrium theory, the input-output model [65] also belongs to the wide group of techniques of 'behavior analysis'. The model is often applied to the input-output analysis, where the outputs of one economic sector make up the inputs of other sectors, and vice versa. It takes into account the sequential linkages between economic (infrastructure) system activities and has primarily been designed for sectors of the economy which can, in a sense, form critical infrastructure sectors. Specific models that are also widely applied to critical infrastructure can be developed based on general rules, e.g., [66,67].
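For readers unfamiliar with this family of methods, the sketch below illustrates classic CARVER-style scoring (criticality, accessibility, recuperability, vulnerability, effect, recognizability), which underlies tools of this kind; the CARVER2 software's exact inputs differ (see Table 1). All element names and scores are illustrative assumptions.

```python
# Classic CARVER-style scoring sketch; scores 1 (low) to 5 (high) are
# illustrative assumptions, not outputs of the CARVER2 software.
elements = {
    "bridge_D35_112":  {"C": 5, "A": 3, "R": 4, "V": 3, "E": 4, "Rz": 2},
    "junction_I37_08": {"C": 3, "A": 4, "R": 2, "V": 2, "E": 3, "Rz": 3},
}

def carver_score(scores: dict[str, int]) -> int:
    # The simplest aggregation: the sum of the six criteria scores.
    return sum(scores.values())

ranking = sorted(elements, key=lambda e: carver_score(elements[e]), reverse=True)
for e in ranking:
    print(f"{e}: {carver_score(elements[e])}")
```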
The IIOM (Inoperability Input/Output Model) is based on the original model of Wassily Leontief. Initial sector barrier settings were incorporated into this model [35], which can allow the modelling of failures such as cascading effects [68] and multiple failures [36]. The dynamic setup of the method allows for a thorough analysis of the progress of such events. The IIOM is suitable for modelling dependencies in the following sectors [50]: energy, drinking water supply infrastructures, information and telecommunication technologies, virtual networks and information systems, the transportation sector (freeway and road networks in particular). It can also model political and regulatory dependencies. Based on the above, it is possible to use this model, for example, to assess the effects of a function failure within the entire functioning system.
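To make the underlying mechanics concrete, the following is a minimal sketch of inoperability propagation in the spirit of Leontief-type input-output models, where the equilibrium inoperability vector $q$ satisfies $q = A^{*}q + c^{*}$, i.e., $q = (I - A^{*})^{-1}c^{*}$. The sector list and all matrix values are illustrative assumptions.

```python
import numpy as np

# Hypothetical interdependency matrix A*: entry [i, j] is the fraction of
# sector i's inoperability caused by full inoperability of sector j.
# Sectors (illustrative): 0 = energy, 1 = water, 2 = ICT, 3 = transport.
A_star = np.array([
    [0.0, 0.1, 0.2, 0.1],
    [0.3, 0.0, 0.1, 0.1],
    [0.4, 0.0, 0.0, 0.1],
    [0.2, 0.1, 0.2, 0.0],
])

# Initial perturbation c*: transport loses 30% of its function.
c_star = np.array([0.0, 0.0, 0.0, 0.3])

# Equilibrium inoperability: q = A* q + c*  =>  q = (I - A*)^-1 c*
q = np.linalg.solve(np.eye(4) - A_star, c_star)
for name, val in zip(["energy", "water", "ICT", "transport"], q):
    print(f"{name}: {val:.3f}")
```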
Knowledge Discovery, Visualization and Information Sharing (Network Analysis)
Network analysis is a good example of interconnections between the above-mentioned dimensions (i.e., Knowledge Discovery, Visualization, and Information Sharing). Mutually interconnected functioning systems, relationships, and linkages within and without a system-all of these are viewed as a whole presenting network properties [69]. For the sake of simplicity, networks are viewed as a set of nodes (elements) and edges (links between nodes). The general rules applicable to networks are perceived to be equally pertinent to complex systems [70]. Network models designed for the demonstration of mutual linkages can also be used for local level infrastructures. Numerous approaches to tackling the vulnerability [71], risks and failure propagation within a network [72,73], as well as the general issue of network topology [31], have been put forward recently.
However, the current research direction [72,74] stems from the natural sciences and turns to the basic findings on networks. The set of the following three network properties seems to be key in determining infrastructure behaviour, as well as in selecting the approach to the system analysis [74]: (a) Network density, (b) Network homogeneity vs. network heterogeneity, (c) Network symmetry.
Accordingly, the analysis should focus on the aforementioned key network properties [69,74]. The individual properties are explained in more detail below and illustrated in Figure 2. An explanation of the individual determinative properties [74] follows. Network density affects network behaviour in the event of a failure in the function of one element (node). A dense network with many linkages (edges) between nodes forms large quantities of node interconnections. If one node fails, there will be numerous other edges interlinking the remaining nodes. However, in a low-density network, with few links (edges) between its nodes, the failure of a single node can lead to the failure of the entire network.
Homogeneity vs. heterogeneity is an indication of whether there is a substantial difference between the number of nodes and the number of links between them. Homogeneity shows an identical/similar number of linkages with respect to each node, i.e., the quantities are the same. Conversely, heterogeneity suggests that the network contains nodes with both high and low numbers of linkages, i.e., there are substantial differences in the network.
Network symmetry expresses the direction of linkages (edges) between nodes. In a symmetrical network, the linkages will be mostly bidirectional. Conversely, an asymmetrical network will mostly consist of unidirectional linkages.
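The three properties can be quantified directly on a graph representation of the infrastructure. The sketch below uses the networkx library on a small hypothetical road network, with density, the coefficient of variation of node degrees (as a proxy for homogeneity vs. heterogeneity), and edge reciprocity (as a proxy for symmetry); the topology is an assumption for illustration only.

```python
import statistics
import networkx as nx

# Hypothetical regional road network: nodes are intersections, edges are
# road sections (illustrative topology, not the Pardubice data).
G = nx.DiGraph()
G.add_edges_from([
    ("A", "B"), ("B", "A"),  # bidirectional section
    ("B", "C"), ("C", "B"),
    ("C", "D"),              # one-way section
    ("D", "A"), ("A", "D"),
])

# (a) Density: ratio of existing edges to the maximum possible number.
print(f"density: {nx.density(G):.3f}")

# (b) Homogeneity vs. heterogeneity: spread of node degrees; a high
# coefficient of variation indicates a heterogeneous network.
degrees = [d for _, d in G.degree()]
cv = statistics.pstdev(degrees) / statistics.mean(degrees)
print(f"degree coefficient of variation: {cv:.3f}")

# (c) Symmetry: fraction of edges that are reciprocated (bidirectional).
print(f"reciprocity: {nx.reciprocity(G):.3f}")
```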
The assessment of the impacts of natural catastrophes on the road network [75] provides a good example of this type of analysis. Using the findings presented in Barabási's book [69], an assessment was carried out by calculating the vulnerability of the entire network. Concrete proposals include the adoption of risk mitigation measures to reduce network vulnerability to the manifestation of natural disasters. The above methods can be used to assess the impact of a failure in the function of a particular element within an infrastructure with a network structure. Other methods that can be used to analyse the network structure include the critical path method [76], a mathematical technique that applies a network diagram with certain patterns.
Based on the theoretical analysis of the problem solved, the analysis of approaches in selected world countries, and the analysis of techniques and approaches suitable for infrastructure analysis, the following section presents a proposal of a progressive system. Its ambition is to develop a unified approach to identifying regional elements of critical infrastructure in European countries.
The techniques and tools presented above were analysed in three basic areas (dimensions): system analysis, behaviour analysis, and network analysis. All of the presented techniques and tools are currently used for analysis in the critical infrastructure system. For this reason, attention was particularly paid to the applicability of these techniques and tools in the identification of critical elements at the regional level. Based on the results of the analysis, it can be concluded that the risk management framework [6,58], the CARVER2 method [47], network analysis [74], the critical path method [76] and the Inoperability Input/Output Model [50] are the most suitable for element identification and analysis. The manner of applying these methods within the proposed system is described in detail in the following section of the paper.
The Proposed System of Regional Critical Infrastructure Designation
The system of designating regional critical infrastructure elements can be set up in two different ways. The first approach is based on the system of designating national and European critical infrastructure elements utilizing a set of cross-cutting and sectoral criteria [4,77]. This involves a 'top-down' approach, also known as a 'conservative' model [30]. This approach has already been published and discussed at length by the authors [78]. The second approach to designating regional critical infrastructure elements is based on the 'bottom-up' assessment of elements and allows for the optional implementation of individual criteria and preferences. Furthermore, it makes it possible to designate regional critical infrastructure entities directly. The approach is labelled 'progressive' and has yet to be published.
For the above-mentioned reasons, the proposed system for the designation of regional critical infrastructure entities and elements pursues the bottom-up approach across the whole system [79]. It constitutes an approach reflecting current trends in the application of science-based findings to the area of critical infrastructure [80,81].
The designation process of the system consists of individual steps which will be explained in more detail below. Colour coding was used for convenience and to provide a clear overview of responsibilities for individual steps, the inputs and outputs of each step, and the tools and methods employed. The applied colour codes can be explained as follows:
• Yellow highlights analytical components, where these components are intended for working groups composed of experts in the field.
• Brown-green represents the decision-making component of the process, which should be undertaken exclusively by the relevant authorities.
• Light blue marks information entering the process or individual steps, or additional input information that may be required for analysis or decision-making purposes.
• Conversely, dark blue represents output information produced by the process or individual steps. The majority of output information is simultaneously used as input information for subsequent steps in the process.
• Grey identifies tools recommended for individual steps of the process or tools that may be utilized for a particular step.
• White has been used as a complement only and has no bearing on the specification of responsibility. It does not affect the performance of individual steps as it only represents a kind of 'supergroup' or 'subgroup' of the steps described below.
For an illustration of the entire process, including the colour coding, see Figure 3.
The nature and form of the input and output information, as well as of the utilized and recommended tools and methods, are explained in the process description that follows. The proposed process is built using a risk management framework [6,58] that enables a sequence of logical steps to form a system solution for identifying regional critical infrastructure elements. Generally, the process has been divided into four phases, with each containing one or more steps of the process. The four phases of the process are equivalent to the basic risk management framework [6], which first includes "scope and context" (equivalent to "Phase 1"), then "identification" (equivalent to "Phase 2"), "analysis" (equivalent to "Phase 3"), and last but not least, the "assessment" area (corresponding to "Phase 4"). Another essential part of the process are the ongoing activities (corresponding to "Ongoing Activities") complementing the basic risk management framework in the areas of "communication and consultation" and "monitoring and revision".
The broad representation of participants in the proposed process can be described in general terms as two main groups. The first is the "working group", which should include experts from the field of critical infrastructure, system analysts and experts from the critical infrastructure sectors (e.g., transport, energy, etc.). In addition, owners or infrastructure managers located in the area under consideration should also be part of this group. The second main group is the "relevant authorities", which should include representatives of the public administration related to the territory (e.g., the Regional Authority, the Security Council and the relevant Emergency Staff or security forces). The individual steps of the process are always carried out by the "working group" or the "relevant authorities" in a manner corresponding to the colour representation in Figure 3 ("working group" in yellow, "relevant authorities" in brown-green).
Other participants in the process may include non-profit organizations and interest groups using the infrastructure under consideration. This group of process participants can take part in the ongoing process activities. The representation of society and its interests is taken into account in the proposed process through the "relevant authorities" as representatives of the local government, representing society on the basis of their election. Representatives of the local government must comply with the legal requirements regarding confidentiality. These requirements relate to the security area (classified information, the disclosure of which could be detrimental to the security of the territory concerned). As the area of critical infrastructure is related to the security of the territory, it constitutes classified information that is not disclosed to the public. The public can only make its proposals through an elected representative of the local government.
Phase 1: System Description
The initial phase focuses on general aspects of the designation process. While the first step consists of determining the context and boundaries of the system, the second step deals with the decomposition of the system thus established.
The determination of basic rules, the context, and the boundaries of the system is the first step of the whole process (see also [6]). The basic parameters of the process must be determined in a way that allows the requirements of stakeholders involved in the process to be factored in 5 and that prevents the credibility of the outcomes of the designation process from being contested. Furthermore, it is necessary to determine the parties to be involved in the performance of individual steps of the process, i.e., the analytical and decision-making steps. This is followed by the establishment of the system boundaries within which the designation process is to take place [82]. The system boundaries may not always be identical. Basic rules and responsibilities for monitoring and review should be set up similarly. As it is part of decision-making, this step should be carried out by the responsible authority.
System decomposition involves its separation into infrastructure and society 6. Infrastructure represents the means whereby the basic needs of a society are met. Criticality is thus linked to the society and relates to restrictions in the supply of services and the dependence thereon [83,84]. For a graphical representation of the system decomposition, see Figure 4 below.
Figure 4. System decomposition into infrastructure- and society-focused components.
The basic system data represent the input information for both of these steps. By contrast, the output information yielded by the first step will be the established context of the process and the boundaries of the system. The output of the second step will consist of more detailed information about the infrastructure, the society and the linkages existing between them (interrelations).
Phase 2: Element Identification
The general focus and objective of the second phase is the creation of a list of identified infrastructure elements following specific rules and limitations. These elements are proposed by stakeholders. Similarly to the highest level, these are institutions that, within the scope of their competence, propose individual elements for identification. The regional bodies concerned are those whose territorial scope corresponds to the region being addressed. These are, in particular, the following bodies: the Regional Office of the relevant region, the Regional Fire Department, the Regional Directorate of the Police of the Czech Republic, the Regional Health Emergency Service, the Regional Hygiene Station of the respective region, the Regional Veterinary Administration for the respective region, the regional office of the Czech Labor Office, and other institutions with competence in the respective region.
The preliminary analysis aims to examine all elementary system components (or elements) following specific rules. The input information consists of more detailed data on the infrastructure, the society and the linkages existing between them (interrelations). One of the tools suitable for the performance of a preliminary analysis is the CARVER2 method [47]. An additional objective of this step is to identify the basic needs of the population (or society), on the understanding that these needs may vary from case to case [85,86]. Based on the completed analysis, the output of this step should include a summary list of infrastructure elements and a list of identified needs of the population concerned.
5 These are interested parties entitled to comment on issues concerning community-wide security (see [57]) and may include various public institutions, law-enforcement agencies, called-in experts, relevant owners or operators, non-profit organizations and amateur groups.
6 There are linkages between infrastructure and society. Simply put, commodities supplied to the society by the infrastructure flow in one direction, while infrastructure requirements flow in the other direction [83]. Only the first direction has been shown here for illustrative purposes.
The acceptance of restrictions for detailed analysis is another step involving decision-making (at a political level). Specific restrictions can be adopted based on the established context and boundaries of the system. The purpose of introducing restrictions is to make further specifications to facilitate the subsequent analysis. The input information consists of a summary list of infrastructure elements and the list of identified needs of the population. The potential loss of function of some parts of the system, their degree of substitutability and their recovery potential should all be taken into account when deciding whether or not to adopt any restrictions. Subsequently, the lists of infrastructure elements and population needs should be reasonably reduced following the adopted restrictions. For example, a political decision to introduce restrictions on the amount of provided services could stem from the basic idea that society can function even without them [87,88]. Therefore, the output produced by this step should be a list of population needs that will be addressed and prioritized as required (existential needs should, however, invariably be given precedence-see [88,89]). Another output should be a list of identified infrastructure elements (i.e., elements selected for an in-depth analysis) reflecting the established restrictions.
Phase 3: Element Analysis
The third phase focuses on the analysis of infrastructure elements according to the set rules (see the preceding phases) using a predefined set of criteria. Its objective is to compile a list of analysed elements which is to be used as the basis for the subsequent decision-making process.
Criteria application consistent with the predefined set is based primarily on the system decomposition philosophy. The criteria are then divided into quantitative and qualitative criteria, as shown in Figure 5 below.
Figure 5. Division of the criteria into qualitative and quantitative components.
The quantitative criterion is directly linked to the impact of service failure on the population. Conversely, the qualitative criterion is derived from the effect of an infrastructure element failure on the entire functional unit (sector/system).
The impact assessment for element function failure involves the application of the qualitative component of the criteria. It first assesses the impact the failure of an element has on the relevant sector and then on the entire functional unit (i.e., the system as a whole). The impact of the failure of an element can be expressed qualitatively as follows:
• Consequences without a negative impact-flawless redirection of the load to another part of the system.
• Threshold loading-the system may be loaded to the near-breaking point due to the redirection.
• Overloading-the redirection may cause the remaining parts of the system to overload.
Even though the presented qualitative expression may seem rather general, it allows for an effective assessment of the impact by clearly defining the effects the element failure exerts. The input information necessary to assess the impact of a failure of a particular element should include a list of the population needs to be addressed, a list of identified infrastructure elements and information concerning the infrastructure, the society and the linkages existing between them (interrelations). For this purpose, the network analysis method [74] or the critical path method [76] should be used. The interdependencies between critical infrastructure sectors and their impact on society must also be taken into account in the context of assessing the impact of a function failure on an element [36]. For this purpose, it is appropriate to use the Inoperability Input/Output Model [50].
The expression of population equivalent 'PE' is used to define the impact a function failure has on the population. It is a quantitative expression of the size of the population that depends on the services provided by a particular element (infrastructure) and which would be adversely affected by the failure thereof. The input data are identical to the data processed in the preceding phase. Most of the information required to perform the quantification should be available by now if the above-mentioned tools are utilized. More detailed information about the infrastructure concerned will also make it possible to quantify the effects of failures in the function of infrastructure elements.
The decision on the threshold limit of PE is another step that should be carried out by the responsible authority and consists of defining an appropriate limit value for the number of people affected by a service failure. If the number of people affected by a service failure is shown to be higher than the established limit, the relevant element will be prioritized in the subsequent steps. Specific threshold limits may be set for each sector or service separately. The dynamic value of cross-cutting criteria based on the 'conservative' approach can be used as an example of input information applied to set the threshold limit (see [30]).
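A minimal sketch of how the PE quantification and the threshold decision could be combined is shown below. The element identifiers, population figures and the threshold value are purely illustrative assumptions, not data from the proposed system.

```python
# Population equivalent (PE) sketch: the PE of an element is taken here as
# the total population depending on the services it provides.
# All figures are illustrative assumptions.
served_population = {
    "bridge_D35_112": {"town_A": 12_000, "town_B": 4_500},
    "junction_I37_08": {"town_B": 4_500, "village_C": 800},
}

def population_equivalent(element: str) -> int:
    # Sum the population of all settlements whose service supply would be
    # adversely affected by the element's failure.
    return sum(served_population[element].values())

PE_THRESHOLD = 10_000  # threshold limit set by the responsible authority (assumed)

for element in served_population:
    pe = population_equivalent(element)
    print(f"{element}: PE = {pe}, prioritized = {pe > PE_THRESHOLD}")
```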
The output yielded by the third phase should be a list of analysed infrastructure elements. This list will essentially contain an enumeration of elements with allocated values based on both qualitative and quantitative criteria. A list such as this should provide a clear picture of the consequences that a failure in the function of a particular element may have for the infrastructure and the population. Moreover, applying the interdependencies between the society and the infrastructure in combination with the proposed tools will make it possible to assess the impact on the whole system (infrastructure and society).
The application of the set of quantitative and qualitative criteria can also be seen as an application of one of the principles arising from the Council Directive [4], which focuses on the utilisation of cross-cutting and sectoral criteria. The quantitative part of the criteria, expressing the population equivalent, can be regarded as a cross-cutting criterion. Conversely, the qualitative part of the criteria, which consists of assessing the impact of the failure of an infrastructure element both on a particular sector and on the functional unit as a whole, can be treated as a sectoral criterion.
Phase 4: Element Evaluation
The final phase of the process is aimed at evaluating the elements identified and analysed in the preceding phase. Subsequently, a list of these elements can be designated as regional infrastructure. Since both steps of the fourth phase follow on from each other, they can be performed collectively.
Infrastructure element classification into specific groups is the responsibility of the relevant authorities. The input data consist of the list of analysed infrastructure elements, from which the impacts of element failure can be directly derived. The classification may include several groups; numerous professional publications can be consulted for inspiration (e.g., [90]). However, some basic rules must be established to ensure that individual elements are assigned to the appropriate category of the regional infrastructure. These rules should be clearly defined by the relevant authorities. The following scale is based on element classification into four groups, which seems to be quite adequate:
• The special level is represented by elements whose non-inclusion in higher-level critical infrastructure might be considered unacceptable. This may involve, for example, the failure in the function of an element with far-reaching implications for the entire region, while at the same time impacting on critical infrastructure at higher levels. Even though the probability of infrastructure failure at higher levels is generally low, it cannot be ruled out (see the blackout in the United States and Canada: [91]). Furthermore, an element may, for example, meet the criteria for inclusion in higher-level critical infrastructure. Such elements should be re-examined and assigned to the proper level of critical infrastructure.
• Regional critical infrastructure comprises the second group of elements, which are essential (critical) to the smooth functioning of the region. These are elements of 'vital' importance (for an equivalent thereof see [18]), forming the core of the relevant regional infrastructure. The impact of a failure in the services provided by such elements may be noticeable, but without any negative implications for the higher-level critical infrastructure. Concerning critical infrastructure and its protection, such elements should be given top (level 1) priority in terms of their protection within the region. This may include elements whose failure would be assessed as likely to result in the 'overload' of the infrastructure, in whole or in part (see the qualitative expression regarding the impact assessment for element function failure above), with the impact on the number of people exceeding the set PE threshold limit, and elements providing services that are essential to the population (see [87,88]).
• Regional key infrastructure constitutes the third group of elements, which may include structures likely to compromise the functioning of the regional critical infrastructure under a specific set of circumstances. These are regional elements of 'non-vital' importance (for an equivalent see [18]), complementing the infrastructure function (see [19]). In terms of their protection, these elements will be given a lower priority (level 2) than 'regional critical infrastructure' elements. They may include elements whose failure would be assessed as likely to lead to the 'threshold loading' of the infrastructure, in whole or in part (see the qualitative expression regarding the impact assessment for element function failure above), with the impact on the number of people exceeding the set PE threshold limit, and elements providing services that are essential to the population (see [87,88]).
• Unclassified elements represent a group composed of elements whose failure would lead to 'consequences without a negative impact' on either the infrastructure or the society.
However, the elements should be classified in a way that considers all associated issues and factors in element interdependencies as well as all dependencies existing between the infrastructure and the society [92]. The proposed classification scale can be adjusted to fit the requirements of the relevant authorities. The output produced by this step should be a list of classified (rated/evaluated) elements divided into the suggested categories. The list is only a proposal. The list of elements may be expanded or narrowed down based on stakeholder preferences as part of the final step of the process.
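The classification logic outlined above can be summarised as a simple decision function. The sketch below is one possible reading of the four-group scale, not the authoritative rule set; the input labels mirror the qualitative expressions introduced earlier, and the exact decision logic remains with the relevant authorities.

```python
def classify_element(qualitative_impact: str, pe: int, pe_threshold: int,
                     meets_higher_level_criteria: bool = False) -> str:
    # One possible reading of the classification rules described above;
    # the exact decision logic is for the relevant authorities to define.
    if meets_higher_level_criteria:
        return "special level"  # re-examine at a higher CI level
    if qualitative_impact == "overloading" and pe > pe_threshold:
        return "regional critical infrastructure"  # protection priority 1
    if qualitative_impact == "threshold loading" and pe > pe_threshold:
        return "regional key infrastructure"       # protection priority 2
    return "unclassified"

print(classify_element("overloading", pe=15_000, pe_threshold=10_000))
```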
Infrastructure element designation is the final step of the presented process, wherein a final decision is made. The input data consist of the list of classified (evaluated) elements and individual stakeholder preferences. The elements proposed as part of the classification scale should be the subjects of a final discussion. The preference issue should be one of the key points on the agenda of this discussion, the outcome of which should be a consensus among the relevant authorities and stakeholders. Besides, the stakeholders may also find some elements to be essential for the functioning of the region as a whole. Based on the consensus, the list of classified elements may be further supplemented following the above-mentioned stakeholder preferences.
The issue of the prioritisation of measures for regional infrastructure element protection could be resolved by including the relevant elements in the level 1 group. This level may be designated to indicate a direct dependency (see also [18]) between the failure of the element and the failure in the availability of a particular service within the region. Conversely, the classification of structures in the level 2 group may be understood to indicate an indirect dependency (see also [18]). The inclusion of structures in the 'unclassified' group means that, under the current conditions set for the designation of 'regional critical infrastructure' and 'regional key infrastructure', they are not among the elements within the region that need to be protected primarily (or secondarily). However, the subsequent designation of elements included in this group cannot be ruled out either, depending on stakeholder preferences.
Preferences can be implemented, for example, by directly designating specific regional structures regarded by the relevant authority as highly important (or even 'critical') for maintaining the functionality of a region. Foreign experience shows (e.g., [26]) that this approach is consistently effective and is even applied to directly designating some structures or their owners (entities). Structures of high cultural importance (similar to 'Assets' specified in [26]) could also be included here. The proposed process could then be used directly to designate elements to the 'regional critical infrastructure' category or the 'regional key infrastructure' category. In terms of logic, direct inclusion of some elements in the 'special category' and the 'unclassified elements' category is not possible.
The relevant authorities should resolve to designate 'regional critical infrastructure' and 'regional key infrastructure'. The output produced by this step (and the entire process) will be a list of elements and entities included in the two aforementioned infrastructure groups.
Ongoing Activities
This part of the process is focused on ongoing activities that are carried out during the previous phases. These are activities with a high emphasis on maintaining the quality of the whole process and the quality of its outputs. The two ongoing activities below can be performed independently of each other.
Feedback: review and monitoring constitute a fully-fledged component of the process (see [6]). Individual steps of the process can be affected retrospectively by feedback [93]. Feedback can have a positive effect in that it can be applied to enhancing the process by validating individual steps and their results, or the process results. Conversely, negative feedback can counteract any irregularity (deviation) and help in its suppression. The process as a whole is bound to benefit from both types of feedback.
Monitoring would be best performed for each step of the process following set rules. At the same time, an emphasis should be placed on the continuity and effectiveness of the entire process by duly correcting any irregularities. Delays in the indicators of feedback effectiveness should be viewed similarly. Feedback as part of a dynamic process may lead to some delays in the manifestation of the required changes across the process [93]. Monitoring and review should focus on the function and completeness of the presented approaches, i.e., to ensure that all elements deemed vital to maintaining infrastructure functions have been analysed [18]. For this purpose, it is advisable to put in place some sort of metrics to help evaluate individual components of the system approach. Periodic inspections and reviews of the process are crucial to maintaining the quality of the process and, as such, should be performed at regular intervals. As part of this phase [6], the whole process should be analysed to ensure its efficiency and its potential improvement by the implementation of new approaches.
Stakeholder communication and consultation is the cornerstone of the whole process. Good communication should be maintained throughout the process [6] and should include effective consultations aimed at enhancing the quality of the process itself. Care should be taken to ensure that all parties involved in the process fully understand any decisions made by the relevant authorities, and also that the authorities understand the reasons for, and the outputs of, any analytical work performed. Moreover, clear communication also facilitates the implementation of stakeholder preferences, both in the designation phase and throughout the entire process. As stakeholders show a general tendency to view the issue primarily from their perspective, all steps of the process must be fully discussed and properly communicated. Stakeholder opinions may greatly affect the decision-making of the authorities.
Awareness of the importance of communication leading to the subsequent adoption of views held by different groups of stakeholders, as well as their direct involvement in the process, can be seen as a form of governance 7 across the process (and constitutes the application of the UNISDR [57]). In this way, communication is instrumental in building trust between partners (stakeholders) and, in general, to enhance the process and its outcomes.
The practical application of the proposed progressive approach to identifying critical infrastructure elements must also respect some of its limitations. The main limitation is the so-called territorial limitation, on the vertical level, i.e., the definition of the region. For this reason, the most appropriate solution is to regard the region as a regionally defined territorial area falling within the competence of local government. In the Czech Republic, these are mainly larger territorial units under the administration of regional authorities and smaller municipalities under municipal administration. Other important limitations include, for example, the necessary knowledge of techniques and tools for infrastructure analysis at the regional level or the need for coordinated cooperation among stakeholders.
Accepting restrictions for detailed analysis is a step whose content must be approved by stakeholders. However, the final decision depends on the responsible public institution or the responsible authority of the owner or operator of the critical infrastructure element (or their mutual consensus).
Case Study
An example of the practical application of the proposed system for identifying regional critical infrastructure elements is demonstrated by a case study that focuses on road infrastructure as a part of the technical infrastructure in the Pardubice Region (NUTS-CZ053), which represents the societal system. The territorial limitation of the assessed region is defined by the borders of the administrative district of the Pardubice Region. The map of the area under consideration is presented in Figure 6.
Figure 6. Road network infrastructure selected for detailed analysis (created on data provided by [94,95]).
7 For more information visit the International Risk Governance Council website at: https://www.irgc.org/risk-governance/what-is-risk-governance/.
In order to verify the proposed procedure in reality, a stakeholder team was set up containing representatives of:
1. Regional Authority of the Pardubice Region.
2. Regional Security Council of the Pardubice Region.
3. Regional Directorate of the Police (for the Pardubice Region).
4. Road Administration and Maintenance Director of the Pardubice Region.
5. Road transport expert/system analyst.
6. Critical infrastructure expert.
7. Association of Road Transport Operators.
For the purpose of carrying out the case study, Category 1-3 members represent the "relevant authorities", Category 4-6 members represent "working group", and Category 7 members represent other process participants (interest groups).
Phase 1: System Description
The assessed critical infrastructure sector is the road transport sector. The input data for this infrastructure consist of the type of road, a unique identifier (road section number), traffic intensity and information on structures of interest [96]. Based on the consensus of stakeholders, the assessment was limited to roads that allow interconnection of the region (i.e., motorways, 1st class roads, 2nd class roads, 3rd class roads); local roads were not assessed. Concerning the accepted limitation, 500 sections of roads and 691 structures of interest (i.e., bridges, tunnels, intersections, etc.) were subjected to further assessment. This case study is based on the results of the National Transport Census [97], uniform vector transport maps [98] and available background data for GIS [94,95].
Phase 2: Element Identification
The preliminary analysis is aimed at identifying the needs of the population and compiling a comprehensive list of infrastructure elements.
In the application carried out, the sector under consideration (road infrastructure) was assessed against the following basic needs of the population [88]: (a) biological and physical needs (relating in particular to the supply of food, pharmaceuticals, and healthcare); (b) the need for safety and security (mainly related to security services and the electricity supply).
The basis for compiling a comprehensive list of infrastructure elements is the use of the Transport Census Results [97]. In this case, the traffic census replaces the preliminary analysis, which is to identify the individual sections of the road network. The assignment of importance values to these sections was performed via the CARVER2 method [47] (a schematic of this scoring idea is sketched below). In this way, a total of 539 assessed sections and a total of 1055 buildings of interest within the Pardubice Region were identified. Due to the scope of the underlying data, these data are not part of this article (detailed information can be found in [97]).
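CARVER2 itself is a standalone tool, but the underlying scoring idea, rating each section against several criteria and ranking by the aggregate, can be illustrated with a minimal sketch. The criteria names follow the classic CARVER acronym; the equal weights, the 1-10 scale, and the two sample sections are illustrative assumptions, not values from the case study.

```python
# Minimal sketch of CARVER-style scoring for ranking road sections.
# Weights, scale, and sample sections are illustrative assumptions,
# not values from the actual CARVER2 assessment in the case study.

CRITERIA = ("criticality", "accessibility", "recuperability",
            "vulnerability", "effect", "recognizability")
WEIGHTS = {c: 1.0 for c in CRITERIA}  # equal weights, by assumption

def carver_score(scores):
    """Aggregate per-criterion scores into a single ranking value."""
    return sum(WEIGHTS[c] * scores[c] for c in CRITERIA)

sections = {  # hypothetical sections with per-criterion scores (1-10)
    "I/35 bridge": dict(criticality=9, accessibility=6, recuperability=8,
                        vulnerability=5, effect=9, recognizability=7),
    "II/360 node": dict(criticality=5, accessibility=7, recuperability=4,
                        vulnerability=6, effect=5, recognizability=4),
}

for name in sorted(sections, key=lambda s: carver_score(sections[s]), reverse=True):
    print(name, carver_score(sections[name]))  # highest aggregate first
```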
The acceptance of restrictions for the detailed analysis is conducted by the responsible public institution (i.e., the Regional Authority of the Pardubice Region) or the responsible authority of the owner or operator (i.e., the Regional Authority of the Pardubice Region and the Road and Motorway Directorate of the Czech Republic); in this case, it was the consensus of both authorities. On that basis, the following restriction was adopted: for the detailed analysis, only roads allowing connection within the Pardubice Region were considered. Local roads were excluded, as they are managed by municipalities rather than by the region. Based on this, the previous list of identified infrastructure elements was reduced to 500 assessed sections and 691 structures of interest within the Pardubice Region.
Phase 3: Element Analysis
For the purpose of the analysis, a modified critical path method [76] and elementary network analysis [74] were applied. Based on the application of the modified critical path method, it was possible to represent road intersections as nodes and road sections as connectors. The implementation of this application should be supplemented by basic knowledge of networks, specifically an assessment of the density and heterogeneity of the network. The following facts can be stated here:
• The solved infrastructure of the road network is rather sparse in terms of the density of the network of motorways and 1st class roads [74]. Some nodes usually have only one connection, and some nodes form larger irreplaceable centres (nodes with multiple connections).
• The solved infrastructure of the road network is very dense in terms of the density of the 2nd class road network [74], i.e., there are several possible connection variants between the nodes.
• In terms of the density of the 3rd class road network, the infrastructure forms a discontinuous network whose parts consist of small island (isolated) systems [69,72]. This is, however, logical, since 3rd class roads are complementary to 2nd and 1st class roads. Subsequent network analysis of 3rd class island systems will always have the same result [72,74].
Applying the critical path method [76], places of high importance can be identified in the networks of different types of roads, such as crossings of roads of a particular class or the nodes mentioned above. The list of identified elements was supplemented by their importance in the system, and one of the following options was always selected (a graph-based sketch of this distinction follows the list):
• Impact of the element's function failure on the entire system of the road network in the region concerned; there is no alternative route on the same type of road in the region.
• Impact of the element's function failure on a part of the system (i.e., a limit load of a part of the region) in the region concerned; an alternative route exists on the same type of road.
• No negative effects of a failure on the system or its part in the region being solved.
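In graph terms, the three options above reduce to asking whether an alternative route of the same road class survives the failure of a section. A minimal sketch under that reading is given below; it uses the networkx library on a hypothetical toy graph, where bridges (edges whose removal disconnects the network) correspond to sections with no alternative route. The real analysis used the modified critical path method [76] and network analysis [74] on the actual road network, which is not reproduced here.

```python
# Minimal sketch: classify road sections by whether an alternative
# route exists on the same type of road. The graph is hypothetical.
import networkx as nx

G = nx.Graph()
# Edges = road sections between intersections (nodes) of one road class.
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("C", "D"),  # a chain: each edge is irreplaceable
    ("D", "E"), ("E", "F"), ("F", "D"),  # a cycle: alternative routes exist
])

bridges = set(nx.bridges(G))  # edges whose failure disconnects the network

for u, v in G.edges():
    if (u, v) in bridges or (v, u) in bridges:
        print(f"{u}-{v}: no alternative route -> impact on the whole (sub)system")
    else:
        print(f"{u}-{v}: alternative route exists -> at most partial impact")
```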
An extract from the entire list of identified elements for interest buildings is presented in Table 2. The methodology developed by the National Cooperative Highway Research Program [99] may also be used to identify elements of critical transport infrastructure. This approach is based on assessing the consequences of critical infrastructure protection in the context of threat variability and probability.
The population equivalent EP is expressed as the number of people who will primarily not be able to use the relevant section of road. Secondarily, the consequences for other parts of the population can also be deduced, for example, the loss of supply with a consequent impact on the population. However, in this case study, attention was only given to the primary consequences. The population equivalent can be determined based on the average occupancy of cars and trucks [100] and the number of vehicles passing through the section in 24 h [97]. The recalculated number of inhabitants thus obtained is always related to a certain section of road. For example, the busiest section of the motorway in the Pardubice Region reached a population equivalent of 32,021 persons per 24 h in 2016.
The decision on the level of population equivalent limit was made by the responsible authority (i.e., the Regional Authority of the Pardubice Region), which decided to use the recalculated dynamic value of the affected population (proposed by authors in [30]). This value was set at the level of 6112 inhabitants of the Pardubice Region for the year 2016.
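The population equivalent and the limit test are simple arithmetic; a minimal sketch follows. The occupancy factors are hypothetical placeholders (the study cites [100] for the real values), while the 6,112-person limit is the value set by the Regional Authority for 2016.

```python
# Minimal sketch of the population equivalent (EP) of a road section:
# recalculated persons per 24 h from vehicle counts and average occupancy.
# Occupancy values below are illustrative placeholders, not figures from [100].

CAR_OCCUPANCY = 1.9    # hypothetical average persons per passenger car
TRUCK_OCCUPANCY = 1.2  # hypothetical average persons per truck

EP_LIMIT = 6112  # limit set by the Regional Authority for 2016

def population_equivalent(cars_24h, trucks_24h):
    """Persons primarily unable to use the section if it fails, per 24 h."""
    return cars_24h * CAR_OCCUPANCY + trucks_24h * TRUCK_OCCUPANCY

def exceeds_limit(cars_24h, trucks_24h):
    return population_equivalent(cars_24h, trucks_24h) >= EP_LIMIT

# e.g. a hypothetical busy motorway section:
print(population_equivalent(15000, 2000))  # -> 30900.0 persons / 24 h
print(exceeds_limit(15000, 2000))          # -> True
```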
Phase 4: Element Evaluation
Since only some sections of the motorway lie in the region, the failure of these sections will always have an impact on the regional motorway system, not only in the Pardubice Region but also in adjacent regions. In terms of 1st class roads, only selected sections are concerned. The impact on the entire system is related to the structures of interest on selected sections of roads I/35 and I/43. The structures of interest on the road sections that form connections from the nearest node in the region to the border of the Pardubice Region will have an impact on a part of the system. On the other hand, there is no negative impact for the structures of interest on the sections between nodes for which there are one or more alternative routes.
Similar patterns can be observed with structures of interest in the 2nd class road system, specifically with the impact of the failure of the function of a structure on a part of the system, without a negative impact on the system. No structure of interest on 2nd class roads will affect the entire system when it is disturbed. As already mentioned, 3rd class roads will in some cases function as an island system. Therefore, the damage to the interest buildings on the 3rd class road sections in the Pardubice Region may only have a partial effect or no negative impact on the 3rd class road system.
The input data for the assessment of the social part of the system are mainly formed by the number of inhabitants in the municipalities, as with an increasing number of inhabitants there are growing demands for maintaining the functionality of the service supply to the population. The expression of the population equivalent was based on the results of the National Transport Census in 2016 [97]. This document indicates that the maximum equivalent number of inhabitants is reached by some sections of 1st class roads (more than motorways). The list containing the recalculated dynamic values of the affected population served as a basis for identifying the elements of the infrastructure. To protect information on potential critical infrastructure, only the general conclusions of the proposed classification can be presented (a schematic encoding of the classification rule is sketched after the list):
• Special level comprises the elements whose failure disables the road infrastructure at levels higher than the regional units (i.e., the national level), namely motorway sections and one nationally significant building (tunnel) in the territory of the Pardubice Region. For this reason, these elements also have a significant impact on the elements of adjacent regions. The reason for this may be the fact that the motorway serves mainly for transit traffic between states and their parts. Similarly, the mentioned tunnel forms a significant obstacle on the national transit route; routes alternative to the tunnel route are unable to provide adequate throughput for the transport capacity. A graphical representation of the special level elements in the evaluated region is presented in Figure 7. The elements of the other levels are not included in the figure due to their large number.
• Regional critical infrastructure consists of elements that are non-replaceable for securing the equivalent transport capacity in the region. These are roads which, if not in use, would make regional transport impossible. Similarly, there are structures of interest whose rehabilitation would be very difficult. In particular, these can be a single tunnel and bridges typically located on the busiest routes and on busy routes of larger towns in the region. It may also be the busiest crossing of the 1st class roads.
• Regional key infrastructure, on the other hand, is a group of elements that can be crucial to maintaining the road infrastructure function. Here, in particular, the requirements for alternative routes have been taken into account. Some sections may be exposed to a limit load and may collapse the entire infrastructure or system. These can, for example, be some elements with a partial impact on the system under consideration.
• Unclassified elements form an important part of all the infrastructure elements. It cannot be stated that their non-inclusion reduces their importance within the system under consideration.
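Read together, the four levels amount to a decision rule combining the scope of the failure impact with the population equivalent limit. The sketch below is one possible encoding of that rule; it paraphrases the described classification under simplified impact labels and is not the authors' exact procedure.

```python
# One possible encoding of the classification rule implied above.
# Impact labels and the mapping are a simplified paraphrase of the
# described levels, not the authors' exact procedure.

def classify(impact, ep, ep_limit=6112):
    """impact: 'national', 'system', 'partial', or 'none'; ep: population equivalent."""
    if impact == "national":
        return "special level"                     # motorway sections, the tunnel
    if impact == "system" and ep >= ep_limit:
        return "regional critical infrastructure"  # no alternative route, high EP
    if impact == "partial" and ep >= ep_limit:
        return "regional key infrastructure"       # alternative routes under limit load
    return "unclassified element"

print(classify("national", 32021))  # -> special level
print(classify("partial", 8000))    # -> regional key infrastructure
print(classify("none", 500))        # -> unclassified element
```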
Figure 7. Graphical representation of special level elements in the evaluated region (created on data provided by [94,95]).
The element classification was conducted through holistic thinking when the importance of the element for the whole infrastructure and the system of society was considered. The preferences of stakeholders were also included in the identification process. Particularly, these were the comments of experts in the field of road transport, who specified the behaviour of the road transport system.
Ongoing Activities
Ongoing implementation of monitoring and revision activities was carried out for each phase. The reason for this was the practical prevention of errors or irregularities being introduced into the subsequent stages of the case study process. To this end, a "Monitoring Group" has been established, comprising one representative of each stakeholder in any way involved in the process (representatives from all categories 1-7). The "Monitoring Group" carried out a careful review each time the relevant phase was completed and also monitored the relevance of the input data to each step. The "Monitoring Group" then gave feedback on the conclusions of the monitoring and revision-always to those groups that were scheduled to take steps within the relevant phase.
The "Monitoring Group" also reviewed the implementation of the communication and consultations across the research team. Communication and consultation were carried out to the extent necessary to verify the feasibility of the proposed process. One of the recommendations of the "Monitoring Group" was to re-familiarize with the results of Phase 1, including "Scope and Context". The recommendation was addressed to "responsible authorities", which tended to slightly adjust the results to the needs of "responsible authorities".
Conclusions
The proposed system of regional critical infrastructure designation, as presented in this article, has stemmed from the necessity to come up with a systemic approach applicable to different countries at the regional level. Greater safety of regional elements is essential to ensuring the overall security of the critical infrastructure system. The presented progressive approach is based on the 'bottom-up' evaluation of elements and allows for the optional implementation of individual criteria and preferences. Furthermore, it makes it possible to designate regional critical infrastructure entities directly. Thanks to its characteristics, it can be applied to both public institutions and the private sector at the regional level. The progressive element designation process has been designed as a model presenting a basic philosophy, leaving room for further elaboration. The novelty of this approach can be particularly seen in identifying regional critical infrastructure elements through an integral assessment of these elements' failure impact, not only on the dependent subsectors, but also on the population (population equivalent) located in the assessed region.
The proposed system presents a possible managerial approach to the identification of critical infrastructure elements at the regional level. In this context, the system is intended primarily for crisis managers of local authorities. In the event of a crisis situation, it may also be an appropriate support tool for the work of the region's Security Council and the relevant Emergency Staff. Implementing this approach in practice would contribute not only to improving the continuity of service delivery, but also to strengthening the preparedness of the authorities responsible for preparing the area for crises and to increasing the security of the area as a whole.
From the perspective of the public sector, the primary reason for applying the presented approach to regional critical infrastructure designation is its ability to ensure the required level of regional security. Concerning the private sector, it addresses the issue of maintaining the continuity of activities that secondarily contribute to profit maximization. Finally, it can be concluded that a common understanding of the significance of the issue concerned is the social responsibility of both the public and private sectors, and is in their interests, for maintaining the function of individual regions.
"Engineering",
"Environmental Science"
] |
SYNERGISTIC AND COMPLEMENTARY SAFETY SUPERVISION MODE IN COAL MINES: A CASE OF COAL MINING COMPANIES IN CHINA
Original scientific paper Effective safety supervision guarantees coal mine safety production. However, the traditional supervision mode in Chinese coal mines cannot fully realize the synergistic and complementary effects of the elements (i.e., safety inspectors) of the safety supervision system, which causes information-isolated island, application-isolated island, and resource-isolated island. To overcome these deficiencies and improve the efficiency of safety supervision, a synergistic and complementary safety supervision mode (SCSSM) was explored in this study. After the applicability analysis of the synergistic and complementary management idea, this study creatively introduced the proposed idea to safety supervision in the coal mine. On the basis of the deficiencies of the existing safety supervision mode and the advantages of synergistic and complementary management, the SCSSM was then established and its definite operation method was also provided. Finally, the new safety supervision mode was applied in Xinwen Mining Group Xin Julong Co., Ltd. with a definite implementation method and comparative analysis. Results show that the new safety supervision mode uses fewer safety supervision resources than the traditional one. Meanwhile, the checking and rectification efficiency of hidden dangers and the project qualified rate are improved. Moreover, the number of accidents is reduced, which has significant safety, economic, and social benefits. This study can realize the synergistic and complementary effects of the elements of safety supervision to improve the level of coal mine safety supervision and has high value of popularization and application.
Introduction
Safety management is the foundation of coal mine safety production and its effectiveness is determined by specific management measures and regulations, as well as the safety supervision methods and mechanism that ensure the effective implementation of the first two [1]. Therefore, coal mining companies must establish an efficient internal safety supervision mode that is suitable to its own development by adopting scientific safety supervision method and systematic regulation mechanism to ensure the safety and health of coal miners [2]. This is of vital importance in avoiding the occurrence of accidents and ensuring safety production. As high-risk workplaces, coal mines have complex and diverse working environments [3,4], and unsafe operation processes cause most coal mine accidents. Thus, coal miners must comply with relevant operating rules and safety regulations in the operation process. Moreover, the safety supervision of the underground working face is particularly important.
Currently, Chinese coal mining companies adopt a safety supervision mode in which one safety inspector supervises one working face. This safety inspector takes charge of hidden danger management and project quality supervision of the working face. This traditional mode is good at consistently keeping a safe workplace; however, it cannot fully realize the synergistic effect of the elements (i.e., safety inspectors) of the safety supervision system, and it wastes safety supervision resources. All these shortcomings cause information-isolated island, application-isolated island, and resource-isolated island in the safety supervision system, as well as laziness of safety inspectors. These problems become more prominent with the improvement of the mechanization and safety management levels of coal mines in China. To address these issues, scholars have applied numerous theories, including game theory, rent-seeking theory, and principal-agent theory, to improve the safety supervision level in coal mines by improving the safety supervision institution, optimizing the supervision system, and researching safety supervision subjects [5÷7]. Subsequently, certain safety supervision modes have been proposed, such as the mixed and autonomous supervision mode and the polycentric governance mode of coal mine safety [8,9]. However, supervision work in coal mines is still conducted by individual operation, and the interaction of supervision system elements is not fully realized. Moreover, mining operations have more common characteristics and are gradually transforming from individual projects to daily operations, along with the improvement of the mining mechanization level. In view of the development trend of the coal mine safety supervision mode, safety supervision is transforming from the division of labour into division-cooperation, and individual operation is transforming into group operation. Thus, the existing safety supervision mode cannot meet the demands of the development of coal mine safety, thereby necessitating a safety supervision mode that can fully realize the synergistic and complementary effects of the elements of the safety supervision system from the perspective of team collaboration.
Based on the above mentioned conditions and given the advantages of synergistic and complementary management, this study aims to overcome the deficiencies of the existing supervision mode and improve the safety supervision level of coal mining companies by establishing a new type of synergistic and complementary safety supervision mode (SCSSM).
State of the art
As the guarantee of the safety production of coal mining companies, safety supervision has been researched and practiced by many scholars, and most of these studies have focused on the safety supervision system [10][11]. Studies abroad have mainly focused on the safety and health supervision problems in coal mines. Andrew studied the formation, evolution, and social impact of occupational safety and health administration in America and utilized the existing regulation theories in the empirical study to suggest an alternative theory that more accurately reflects the findings [12]. Mills researched the emergence and growth of state responsibility for a safer and healthier British mining industry, as well as the response of labour and industry to the expanded regulation and control [13]. The practice of worker representatives for health and safety in the Queensland coal mining industry was explored, and the vital function of worker representatives was illustrated [14]. Studies in China have also achieved success. Game theory was applied in the analysis of the interest conflict and behavior decision model in the safety supervision system in China, as well as the stability control scenarios in the coal mine safety inspection system [15,16]. The problems and impacts of the safety supervision system were analyzed in certain studies, which adopted the principle of preventing conflicts of interest [17], principal-agent theory [6], and rent-seeking theory [5,18]. These studies have focused on microcosmic problems, such as supervision subjects, stakeholders, and interrelationships of subjects, which are helpful to the continuous improvement of the safety supervision system and safety decision accuracy. However, these studies have failed to combine the practical situation and demands of safety production underground effectively and have neglected the operational problems of safety supervision implementation and the deficiencies of the operation procedure. Moreover, their findings have emphasized theoretical guiding significance, and the problems of the safety supervision implementation process in hazardous areas have not been solved effectively.
In the aspect of safety supervision mode, Ruffennach reviewed the development process of coal mine safety supervision mode and indicated the existing problems and improvement direction [19]. Alders and Wilthagen confirmed the importance of government regulation and enterprise supervision and suggested the adoption of a type of mixed and autonomous supervision mode [8]. Furthermore, the public management theory was applied in some studies on safety supervision of coal mines in China. For example, Li proposed the establishment of a polycentric governance mode of coal mine safety [9]. Yan and Liu established a game model of coal mine safety supervision, which was interfered by an insurance company [7]. However, these studies have neglected the feasibility and practicability of the supervision method and its operation model in coal mines, and the existing problems of safety supervision resource waste in China have yet to be solved effectively. Moreover, these studies have not fully considered the development demands of coal mine safety supervision mode and have neglected to investigate the interaction of the elements of the safety supervision system. Supervision work in coal mines is mainly conducted by individual operation, and the synergistic and complementary effects of the elements of the coal mine safety supervision system are not yet fully realized. Thus, the safety supervision system in coal mines is not optimized, and the efficient operation of the safety system and the effectiveness of the safety supervision function cannot be achieved.
Therefore, based on the synergistic theory and the complementary value-adding principle, this study proposes the synergistic and complementary safety supervision principle by analyzing the advantages of synergistic and complementary management, thereby addressing the insufficiencies of the existing studies. Subsequently, a new type of SCSSM is established to fully realize the synergistic and complementary effects of the elements of the safety supervision system, and its operation model is provided from the perspective of safety inspector and working face. Finally, the validity of the proposed mode is tested, and the contrastive analysis of the application results can highlight the advantages of the mode and provide the basis of popularization and application.
The remainder of this study is organized as follows. In Section 3, the proposed SCSSM is established with its operation model, and an application method is designed to test the validity of this new mode. In Section 4, the application results are comparatively analyzed and discussed. Finally, the conclusions and limitations are presented in Section 5.
Synergistic and complementary safety supervision mode (SCSSM)
Based on the synergistic theory and the complementary value-adding principle, a new type of SCSSM is established in this section.
Synergistic and complementary safety supervision principle
Efficient system operation depends on the function and coordination degree of system elements, whereas the friction, conflict, and dissociation of system elements can increase the internal consumption of the management system and reduce system performance [20,21]. To implement the overall functional effect of the system effectively, synergistic and complementary management adopts the basic ideas and methods of synergistic theory and the complementary value-adding principle to study the synergy and complementation law of management objects and its management implementation [22,23]. It highlights the ideas of synergy and cooperation among elements; the necessary condition for achieving synergy and complementation is that partners have a common goal and share in decision making, understanding, and knowledge. In this way, the synergistic and complementary effects of each element can be realized in terms of knowledge, experience, and ability, and different elements obtain synergy and complementarity in the overall system function [21÷23]. From its theoretical foundation and management idea, synergistic and complementary management attempts to utilize the synergies and complementarities among the elements of various management systems to improve overall system efficacy; thus, it is applicable to the research of safety supervision systems. Therefore, based on the deficiencies of the existing safety supervision mode and the advantages of synergistic and complementary management, the synergistic and complementary management idea can be introduced to safety supervision in the coal mine to fully mobilize the resources of the safety supervision system and promote the synergistic and complementary effects of safety inspectors. This approach is reliable in solving the problems of the existing safety supervision mode and promoting coal mine safety production.
Under the guidance of the synergistic and complementary management idea, the recombination of the time, space, and function structure of each element (safety inspector) in the safety supervision system can be realized by establishing a new type of SCSSM, and an effect of "competition-cooperation-complementation-coordination" can be generated. Through the reasonable utilization of the advantages of each element (safety inspector), the problems of information-isolated island, application-isolated island, and resource-isolated island can be solved, and the synergistic effect and complementary effect of safety inspectors can be obtained. The essence of SCSSM is the realization of the promotion, integration, and innovation of all safety inspectors through the communication and sharing of safety information. The purpose of SCSSM is to make the overall safety supervision level of the organization much greater than the sum of each individual safety inspector on the basis of the synergy and complementarity of resources and abilities, and thereby to improve the safety production level.
Establishment of SCSSM
The proposed SCSSM is based on the synergistic and complementary safety supervision idea and the existing safety supervision mode. It is an efficient, scientific, and complete safety supervision mode and method system. By having the safety inspectors circularly supervise the working faces in the work area, the proposed mode implements networked management, where N safety inspectors manage M working faces; N and M are assigned in one work area with N < M, where N is the number of safety inspectors in the work area and M is the number of working faces. SCSSM thus achieves an arrangement in which one working face is supervised by multiple safety inspectors and one safety inspector inspects many working faces (a minimal rotation sketch follows this paragraph). The entire mode is divided into two parts, namely supervision and management, which interact with each other. The management and control of hidden dangers and unsafe behaviors are conducted on the basis of the circulatory supervision. The SCSSM flowchart is shown in Fig. 1. SCSSM reasonably arranges and combines the partial powers of the safety supervision system. Through the recombination of each system element, SCSSM creates new time, space, and functional structures, all of which can fully arouse the work enthusiasm of safety inspectors and improve work efficiency. The free time of the safety inspector is decreased, whereas the operational use time is increased. This rational utilization of the advantages of each element (i.e., safety inspector) in the safety supervision system can raise the cognitive degree of safety inspectors towards safety problems, and the utilization ratio of safety supervision resources is improved, thereby synergizing information, operation, and resources. All these effects are useful in realizing the integration and innovation of the safety supervision level and obtaining the maximum benefit of the safety supervision system.
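The circulatory arrangement can be pictured as a staggered round-robin rotation of N inspectors over M faces. The sketch below is a minimal illustration with N = 2 and M = 3, matching the later case study; the time-step granularity and the offset rule are assumptions, as the paper does not specify the shift schedule.

```python
# Minimal sketch of the circulatory "N inspectors over M faces" idea
# (N < M). A plain staggered round robin; the actual SCSSM arrangement
# also involves an ensuring system and coordination mechanisms.

def rotation_schedule(inspectors, faces, steps):
    """Yield (step, inspector, face); inspectors start on distinct faces
    and all rotate one face per step, so every face sees every inspector."""
    n, m = len(inspectors), len(faces)
    assert n <= m, "SCSSM assumes fewer inspectors than working faces"
    for t in range(steps):
        for i, inspector in enumerate(inspectors):
            offset = i * m // n  # distinct starting positions for n <= m
            yield t, inspector, faces[(t + offset) % m]

# N = 2 inspectors, M = 3 working faces, as in the later case study:
for t, who, where in rotation_schedule(["A", "B"], ["#1", "#2", "#3"], steps=3):
    print(t, who, where)
# step 0: A->#1, B->#2; step 1: A->#2, B->#3; step 2: A->#3, B->#1
```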
Operation model of SCSSM
This section discusses the operation model of the SCSSM.
Supervision operation model of each working face
Each working face is simultaneously supervised by multiple safety inspectors who circularly supervise around the working faces in the work area of the coal mine. This process is supported by appropriate ensuring system and communication and coordination mechanisms. Safety inspectors supervise and inspect each working face one by one, and their knowledge, experience, ability, specialty, and thinking mode are interactive and synergistic on each working face. This cognitive sharing can deepen the cognition degree within the organization and enhance individual cognitive ability. Moreover, the synergy and complementation of cognitive abilities of different safety inspectors on each working face can be realized. The supervision operation model of each working face is shown in Fig. 2.
Supervision operation model of each safety inspector
One safety inspector manages many working faces by monitoring around the working faces in the work area and supervising the working faces one by one sequentially. In this safety supervision mode, safety inspectors can gain more experience and expand their horizon. Moreover, the habitual domains of safety inspectors are widened, and their cognitive experience of safety problems is also enriched. Through cognitive abilities such as memory, analogy, and thinking, safety inspectors can complete the accumulation of supervision and management experience on each working face, and their cognitive perspectives are widened. All these capabilities contribute to eliminating the blind spots of safety problems and finding more unsafe problems; a safe production environment can also be created. The operation model of the interactive and circulatory supervision of each safety inspector is shown in Fig. 3. This supervision and management process is also supported by appropriate ensuring system and communication and coordination mechanisms.
Application design of SCSSM
To characterize the validity of the SCSSM objectively, a one-year application was conducted in Xinwen Mining Group Xin Julong Co., Ltd., a large-scale new mine with a productivity of 6 million tons per year and the largest mine in Shandong Province. This company follows the advanced configuration idea of large-scale, heavy-duty, and intelligent main and auxiliary systems in the coal mine and actively promotes high-end equipment assembly, advanced technology integration, and high-level talent management. Moreover, it has built a highly specialized safety supervision team and developed a high-level safety management system. The safety management level and personnel quality in this coal mining company are high. However, the problem of insufficient safety resources exists, and the number of safety inspectors is insufficient to conduct the mode where one safety inspector supervises one working face. Thus, this coal mine provides suitable internal and external environments for the implementation of the new safety supervision mode.
Result analysis and discussion
Comparative analysis of the application results
According to the application results of the implementation method, the supervision operation processes of the traditional supervision mode and SCSSM were drawn. Figures 5 and 6 show the flowcharts of the supervision operation process of the traditional supervision mode and SCSSM, respectively. As shown in the figures, the vertical lines in the flowchart represent the working procedures of a safety inspector, whereas the horizontal lines represent the time of every working procedure. The working face column gives the name of each working face, the sequence of every working procedure is in the sequence number column, the working procedure column gives the name of each working procedure, working hours are in the working time column, and the time number column records how many times the working procedure occurred on that working face. In the flowcharts, the single lines represent the circulatory supervision of safety inspector A, the double amaranth lines represent the circulatory supervision of safety inspector B, and the triple red lines represent the circulatory supervision of safety inspector C. Based on the operation processes of the two safety supervision modes, SCSSM shows its advantages over the traditional supervision mode in many aspects, as shown in Fig. 7. Figs. 5 and 6 indicate that the SCSSM required two safety inspectors, whereas the traditional mode required three; thus, the number of safety inspectors decreased in the new mode. In the same working time, the number of evaluations per shift of each working face was two in the traditional mode, which increased to three in the new one. During the application of SCSSM, the range of activities of safety inspectors was extended from one working face to three working faces. In each shift, the number of supervisions of each safety inspector on each working face increased from one to two. In the traditional mode, three safety inspectors worked independently, and each working face was supervised by one safety inspector, who had relatively more free time. By contrast, in the SCSSM, two safety inspectors were present, who coordinated and communicated with each other and had less free time; each working face was supervised by different safety inspectors.
Discussion
To highlight the validity and advantages of SCSSM, the effects of the traditional supervision mode and SCSSM were compared in terms of safety, economic, and social benefits based on the data recorded in the implementation process. The data on the safety effects of the two modes shown in Tab. 1 and Fig. 8 were collected during the operation of these two modes in this work area. In Table 1, the data of the traditional safety supervision mode are the average values of the last two years, and those of the SCSSM are from the records and reports of the application year. The number of accidents per million hours of each quarter (2015÷2016) is shown in Fig. 8. As shown in Fig. 7 and Tab. 1, the number of safety inspectors in the SCSSM decreased in each work team, and the free time of the safety inspectors also decreased. Consequently, the labour cost was reduced, thereby solving the problem of insufficient safety resources in the company by saving safety supervision resources. For this work area, the number of safety inspectors decreased; the labour costs decreased by 20,000 Yuan per month and by 240,000 Yuan per year. More economic benefit can be obtained if the new mode is applied in all work areas. At the same time, SCSSM motivates the supervision initiative and enthusiasm of safety inspectors and ensures work quality and working time. Moreover, the safety benefits are obtained from the following aspects (the arithmetic behind the reported figures is replayed in a short sketch after this list): 1) After the application of SCSSM, the number of checked hidden dangers increased from 241 to 326 per month, and the rectification rate of hidden dangers also increased by 7,69 %. Thus, the circulatory operation can enhance the checking of hidden dangers and make safety inspectors manage hidden dangers timely and effectively. The defective condition in which hidden danger rectification was not timely in this company has thus been eliminated.
2) As shown in Tab. 1, the number of unsafe behaviors of coal miners decreased from 298 to 223 per month in the circulatory inspection process. These data indicate that the safety level of coal miner behaviors is improved because safety inspectors circularly supervise coal miner behaviors, interactively correct the behaviors in violation of regulations, and punish coal miners for their violations. This way, the safety awareness of coal miners is strengthened, and their safety quality is improved.
3) As shown in Tab. 1, the qualified rate of project quality increased from 86,31 % to 94,05 %. Through the exertion of synergistic and complementary effects in the supervision process, the supervision and inspection of work on-site are synergistic and complementary; thus, supervision contents are more elaborate and comprehensive. Then, the safety and quality management on-site is enhanced. At the same time, the time availability of safety inspectors is improved, and the enhancement of the work initiative and execution of safety inspectors contributes to the improvement of their professional skill.
4) The improvement of safety factors in the above three aspects directly contributes to the decrease in the number of accidents. As shown in Fig. 8, the number of accidents per million hours of each quarter was recorded from 2015 to 2016. As the production tasks in the second and third quarters of every year are relatively heavy according to the safety production plan of the mine, the two quarters have a high level of coal production and tunnel advance, and the probability of accidents is relatively high. Thus, the number of accidents in the two quarters is higher than that in other quarters. As shown in Figure 8, the number of accidents recorded in the traditional supervision mode in the four quarters of 2015 is significantly higher, and the development trend of the number of accidents is relatively smooth. In SCSSM, the number of accidents in the four quarters of 2016 significantly decreased and was lower than in the corresponding quarters of 2015, especially in the second and third quarters. Meanwhile, the results show a trend of continuous decline. These results show that the effect of SCSSM is more significant in accident-prone quarters and increases sustainably and progressively, which is helpful to the continuous enhancement of the safe production level in the coal mine.
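The arithmetic behind the figures quoted in this list can be replayed directly; the sketch below uses only the numbers reported above and introduces no new data.

```python
# Replaying the benefit arithmetic on the figures quoted in the text.

monthly_saving = 20_000                  # Yuan saved per month in this work area
print(monthly_saving * 12)               # -> 240000 Yuan per year, as reported

dangers_before, dangers_after = 241, 326          # checked hidden dangers / month
print(f"{(dangers_after - dangers_before) / dangers_before:.1%}")  # -> 35.3% more found

unsafe_before, unsafe_after = 298, 223            # unsafe behaviors / month
print(unsafe_before - unsafe_after)               # -> 75 fewer unsafe behaviors

qualified_before, qualified_after = 86.31, 94.05  # project qualified rate, %
print(f"{qualified_after - qualified_before:.2f}")  # -> 7.74 percentage points
```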
Moreover, the SCSSM has great social benefit. This mode is a new breakthrough in safety supervision and management methods. It also enriches the existing theory of safety production management of the coal mine to set the theory and method foundation for the long-term stability of safety production. The application of this mode makes safety supervision progress toward the dynamic, circular, and synergistic team supervision mode. In addition, it promotes the wide penetration of this new mode into the coal industry and provides example and experience for other coal mining companies. Then, the new mode can make the safety supervision system more integrated, systematic, and operable to gain more social benefit.
All these favorable factors contribute to the increase in profit and improvement of efficiency, and significant safety and social benefits can be gained in the coal mining company. Therefore, SCSSM has high value of popularization and application.
Conclusion
To overcome the deficiencies of the traditional coal mine safety supervision mode and fully optimize the coal mine safety supervision system in China, the SCSSM in the coal mine was researched with its application effect tested based on the synergistic theory and the complementary value-adding principle. The main conclusions are as follows.
1) Synergistic and complementary management can realize the reasonable permutation and combination of the time, space, and function structure of each element (i.e., safety inspector) in the safety supervision system. Moreover, it achieves the purpose of optimizing the structure of the safety supervision system, thereby generating the "competition-cooperation-complementation-coordination" effect.
2) The habitual domains and cognitive perspective of safety inspectors are expanded through the circulatory supervision around the working faces in the work area, and the accumulation of supervision and management experience is realized. Meanwhile, the free time of safety inspectors is decreased, whereas the operational use time is increased. Consequently, the utilization ratio of safety supervision resources is improved.
3) The interactive and circulatory supervision of safety inspectors promotes the synergy of the abilities of different safety inspectors on each working face, thereby achieving the synergistic effect. Moreover, it promotes the formation of complementary advantages, ability motivation, and mutual supervision of safety inspectors, and the complementary effect in the safety supervision system can then be generated.
4) The comparison of the application of the two modes shows that SCSSM can eliminate the blind spots of safety problems, find more unsafe problems, and improve the checking and rectification efficiency of hidden dangers, as well as the project qualified rate. Furthermore, the number of accidents decreases, even though fewer safety supervision resources are utilized. SCSSM provides remarkable safety, economic, and social benefits and has high value of popularization and application.
SCSSM can properly replace the traditional mode and improve the level of coal mine safety supervision. However, SCSSM was established to adapt to the improvement of coal mine safety production equipment and safety management levels in China, on the condition that the coal mining company has a relatively high level of safety management and production. Thus, a coal mining company with a low level of safety management cannot effectively implement this mode. Moreover, SCSSM is not completely applicable to some special mining operations, such as the mining of island coal faces and informal mining of boundary coal, because these operations have the characteristics of particularity and uniqueness. Thus, daily operation management is not suitable for these mines, and the mode where one safety inspector supervises one working face should be adopted in these operation processes.
The implementation was conducted according to the layout of the working faces and the actual situation of safety supervision in the coal mine. One typical work area with three working faces was selected to implement the SCSSM in 2016. These working faces were the −950 deepening jitty between the sump and pump chamber (#1 working face), the −950 #1 assistant entry (#2 working face), and the −950 #1 assistant entry (#3 working face). The interactive and circulatory supervision process of each work team in this work area is shown in Fig. 4.
Figure 2. Supervision operation model of each working face.
Figure 3. Operation model of the interactive and circulatory supervision of each safety inspector.
Figure 4. Interactive and circulatory supervision process of each work team in the selected work area.
Figure 5. Supervision operation process of the traditional supervision mode.
Figure 6. Supervision operation process of SCSSM.
Figure 7. Comparison of the operation effect of the two modes.
Figure 8. Number of accidents per million hours of each quarter (2015÷2016).
Table 1. Safety factors of the two modes.
"Business",
"Engineering",
"Environmental Science"
] |
Using Online Banking Among Covid-19 Pandemic: A Systematic Literature Review
The main objective of this study is to review the research conducted on online banking, focusing on the effect of the Covid-19 pandemic on the use of online banking services as a marketing channel. The study uses a systematic literature review of previous studies covering the periods before and after the Covid-19 pandemic. From reviewing the online banking research, the study finds that the Covid-19 pandemic had a positive impact on the online banking services market, as the pandemic resulted in social distancing laws that led to the use of electronic services to conduct business, including banking. In addition, the study concludes that among the most important factors affecting the quality of online banking services are the availability of security, privacy, ease of use, and trust, with trust being the most important from the customer's point of view. The study recommends more research on online banking services, focusing on their limitations and the factors affecting them in developing countries.
Introduction
Online services have crossed the luxury barrier to become one of the necessities of daily life; this change in the perception of online services occurred gradually over a period of time. However, with the outbreak of the Covid-19 pandemic, technology moved from luxury to necessity at once. It is today the engine of our daily lives, and everyone is forced to deal with online services and learn how to use them, even if they previously had the option to ignore them. At the same time, pressure is increasing on individuals and institutions: individuals have to learn how to use online services, while institutions have to adopt online services in providing their own. Moreover, it has become imperative for countries to provide the appropriate infrastructure for using online services and to provide its requirements, especially in developing countries (Benmoussa & Almaoui, 2020) (Shihadeh & Liu, 2019).
For companies and institutions, where online services were once used to reduce costs, today they have become a necessity for achieving a competitive advantage in line with the needs and desires of customers (Dauda & Lee, 2015), especially in the service sector. Banks are among the institutions that have adopted online services in providing their services. Beyond the cost-reduction advantages of online services on the one hand and the needs and desires of customers on the other, it has been shown to banks how online services increase customer retention and market share (Li, Lu, Hou, Cui, & Darbandi, 2021).
In this study, online banking services are defined as those services that banks provide via the internet (Bani Ismail & Alawamleh, 2017), (Salihu, Metin, Hajrizi, & Ahmeti, 2019). We propose that the Coronavirus pandemic, with the accompanying health and social laws, has led to the development and expansion of the online banking services market (Baicu, Gardan, Gardan, & Epuran, 2020). This development requires companies to keep pace with it for two reasons: the first is to achieve competitive advantage and growth in financial performance (Seiam & Abu Alnadi, 2014); the second is that this development is imposed by the new laws, and banks must be ready for it, as universities were in their readiness for distance education. Therefore, the current study reviews the online banking services literature and links it with the Covid-19 pandemic, focusing on how the pandemic created an urgent need for online banking services, thus expanding their market and increasing their marketing opportunities.
Methodology
This study analyzes the online banking services literature and links it with the Covid-19 pandemic, thus examining how individuals and institutions deal with the current situation to cover their needs and achieve their targets. The selection criterion for this review was papers published from 2014 to 2021.
Systematic Literature Review
Online services have become a need in our daily lives, especially after the outbreak of the Covid-19 pandemic, which imposed on society the laws of social distancing. This has forced institutions of all kinds to work remotely, including banks: banks that used to provide online banking services as an option have become obligated to provide these services in order to conduct business under the laws of social distancing.
Many studies have dealt with the subject of online banking services: they have studied these services and their impact on consumers, the nature of consumers' demand for them, the distribution channels of electronic banking services and how they are affected by consumer trends, the factors affecting the quality of these services, and the determination of the quality elements of online banking services. Many studies have found that electronic banking services are demanded by consumers and constitute an important marketing advantage for all banking institutions. Also, studies conducted after the Covid-19 pandemic have found that these services were significantly and positively affected by the pandemic, which made the pandemic a marketing opportunity for online banking services.
Online Banking Services and the Pandemic
The study of (Moşteanu, Faccia, Cavaliere, & Bhatia, 2020) analyzed changes in the demand and supply of online banking services as a result of the socio-financial disruption caused by the Covid-19 pandemic, as well as financial digitization and digital changes in the banking sector at a global level and the benefits of digitization for individuals and businesses at the same level. Using data from international databases, the study found that the current rules of social restriction helped companies and individuals learn and apply new communication technologies. While financial institutions, particularly the banking system, were afraid at the start of the pandemic, many services are now provided remotely online rather than through face-to-face communication, and financial institutions are now redesigning the structure of their operations, including digitization, in response to the demands of new customers.
When measuring the impact of the Covid-19 pandemic on consumer behavior in the online banking services market in Romania, the study of (Baicu, Gardan, Gardan, & Epuran, 2020) used the descriptive approach through a questionnaire distributed to the targeted consumers; the questionnaire measured a set of variables affecting consumer behavior in the retail banking services market in the internet and telephone fields. The study showed that there was an increase in the consumption of online banking services during the Corona pandemic period compared with the previous period in Romania. The research recommended that banks spread awareness and education about online banking services to increase the ability of customers to use them, especially the elderly, who are not immersed in technology.
These results correspond to what we mentioned previously about the need for banks to carry out awareness marketing campaigns for online banking services and the way they are used, and confirm that one of their quality factors is ease of use.
In India, the results were not different: a study by (Jindal & Sharma, 2020), using the descriptive approach through a questionnaire targeting consumers in the city of Bulandshahr in India, evaluated the role of online banking services in fighting the Covid-19 pandemic. The results showed that online banking has an important role in fighting the Corona pandemic and that people who use online banking services feel more secure under the conditions of the spread of the virus. Thus, technology has not only gone beyond daily need but has become a need for life and a sense of security, and online banking services are required of every banking institution to preserve people's lives. This is confirmed by a study by (Kushnir & Hevorhyan, 2020) on Ukrainian banks using the descriptive approach, which found that banks were forced to work in uncertain conditions and to organize ways and procedures for working with customers under difficult conditions during the imposed quarantine period; banks must provide remote services to customers, which is an appropriate and safe solution for both the bank and its customers.
The study recommended a set of measures that can be said to be appropriate for confronting any situation similar to the Covid-19 pandemic: providing convenient online services in mobile applications, increasing the number of possible transactions in these applications, canceling the commission for using point-of-sale devices and mobile applications, extending the validity of cards, and enabling payment for other services with the card. Implementing such procedures and establishing an electronic financial market that supports all these actions will bring great benefits to customers and even greater benefits to banks, which will get rid of a lot of paperwork in the branches.
The need to transition to a digital banking model has been talked about for a long time, but the Covid-19 pandemic has made the need to cut operational costs and improve digital experiences more important than ever (Kelecic, 2020). With an electronic financial system and online banking services that keep pace with the requirements of individuals and institutions, companies can provide their services better than before the Covid-19 pandemic and continue to move toward working remotely as fully as possible. (Benmoussa & Almaoui, 2020) examined the limitations that banks face in providing online banking services, using a literature review methodology, and how to overcome them in order to provide online banking services according to typical requirements. One of their results is that there is a lack of legislation and laws regulating online banking in developing countries. Likewise, (Salem, Baidoun, & Walsh, 2019) found that there is a lack of studies dealing with the status of online banking services in developing countries such as the Arab world in general; this study, which dealt with online banking services in Palestine, also recommended that the state work on formulating the laws and regulations necessary for the operation of online banking services.
Online Banking Services
However, other challenges could be added, such as people's knowledge of technology: banking services are used by everyone, and there is a segment of customers who do not master the use of online services. Therefore, the provision of online banking services requires cooperation between banks as service providers, the state as the legislator of the laws governing this service, and the customer who learns the technology; this will help build an electronic financial system that is beneficial for both parties, the banks and the customers. (Li, Lu, Hou, Cui, & Darbandi, 2021), using a descriptive questionnaire model, found that the use of online banking channels is associated with higher customer retention rates; the study also found evidence that the market share of the sampled company is systematically higher in markets with high usage rates of online banking channels. This confirms that online banking services are a basic competitive advantage for institutions and that, during the Covid-19 pandemic, they turned into a need for the institution, not just a competitive advantage.
From the financial point of view, the study of (Seiam & Abu Alnadi, 2014), using a questionnaire distributed to a sample consisting of financial managers at Jordanian commercial banks, examined the impact of online banking services on the financial performance of commercial banks in Jordan and how that performance is affected by the efficiency of online banking services, given that these services save time, effort, and daily business volume, and thus reduce expenses and employee costs. It therefore recommended that the obstacles preventing the effective application of online banking services be overcome by paying attention to the quality of the online banking services provided. (Shihadeh F., 2020) analyzed the relationship between consumer behavior and the use of online banking services in daily life, drawing on the World Bank's 2014 Global Findex Database, and found that the use of online banking services is linked to several factors: individuals' educational level, age, gender, and income level. The use of online banking services is still low in the countries of the MENAP region, and individuals with a high level of education are more likely to use online banking services. The study recommended that lawmakers work side by side with banking service providers such as banks to provide online banking services that are less expensive and easier to use. It also recommended studying the relationship between the Covid-19 pandemic and the use of online banking services.
Online Banking Services and Quality
A study by (Rawwasha et al., 2020) aimed to identify the factors affecting the use of online banking services in Jordan, through a field survey of the study population using a questionnaire. The study found four factors affecting online banking services in Jordan: expected benefits, privacy, ease of use, and trust. Based on its results, trust makes the strongest contribution to the adoption of online banking services compared with the other variables. The bank provides adequate security protection to prevent unauthorized intrusion, so that users of online banking services feel comfortable conducting their business through them. Online banking can help Jordanian banks gain a competitive advantage by lowering labor costs and improving customer service, enhancing flexibility, improving access to information, and eliminating most manual work and paperwork.
The study recommended that Jordanian banks carry out more intensive marketing campaigns aimed at increasing customers' awareness of online banking services. Further, banks could focus their efforts on security issues and provide safer online services to customers, and Jordanian banks need to re-evaluate the user interfaces of their online banking services to ensure ease of use by customers.
Previously, (Bani Ismail & Alawamleh, 2017), through a questionnaire and interviews, clarified the impact of online banking services on customer satisfaction in Jordanian banks and identified the strengths and weaknesses of online banking systems in Jordanian banks. The results of this study indicated that account access and control, ease of use, privacy, and security are important determinants of customer satisfaction with online banking services. In Saudi Arabia, a study by (Abualsauod & Othman, 2019) identified the gaps in the quality of online banking service provision: the technology used, dependence on the service, electronic knowledge, and security. Since online banking services are provided through software, the main quality issues for these programs are ease of use, security, and the ability to manage operations through them, which naturally reduces the problems associated with providing the service (Salihu, Metin, Hajrizi, & Ahmeti, 2019), along with, of course, trust in the institution providing these services. In the study of (Salem, Baidoun, & Walsh, 2019), the factors affecting Palestinian customers' use of online banking services were identified. The study is consistent with the others: the technology used, trust, customer affiliation, security, and privacy were the influencing factors.
From the previous studies, we can note that the adoption of online banking services requires laws to achieve safety and confidence in use, and thereby customer demand for these services; at the same time, the main role of the customer remains to learn the necessary technology, which banks must support through their marketing and awareness programs (Shihadeh, Gamage, & Hannon, 2019). (Dauda & Lee, 2015) analyzed customers' technology-adoption approach and its impact on the development of online banking services in Nigerian banks, using a random utility model with discrete choice models. The study found that banks, to achieve a competitive advantage, should provide smart and practical online banking services, especially services of a personal nature, and add other services that make online banking more enjoyable, effective, and fast, such as electronic wallets; smartphones and websites can increase consumers' adoption of technology in the online banking services sector.
Online Banking Services and Distribution Channels
Therefore, we return to the results of the study of (Benmoussa & Almaoui, 2020): governments in developing countries should provide the necessary capabilities to implement online banking services because customers are ready to use them. In the study of (Li, Lu, Hou, Cui, & Darbandi, 2021), the factors that affect customer satisfaction with online banking services were identified using the descriptive-analytical method and a questionnaire. The factors are the use of cloud services, security, e-learning, and the quality of services provided. The paper recommended focusing on people who do not use the internet, motivating and encouraging them to use online banking; this will be easier with a good internet infrastructure that facilitates the establishment of an effective electronic financial market, especially in developing countries.
Conclusions and Recommendations
Banking services are the main driver of the market (Shihadeh, Hannon, Guan, ul Haq, & Wang, 2018), and online banking services are the convenient method for carrying out financial operations, whether to protect against a pandemic such as Covid-19 or to accelerate business and reduce costs.
From the previous studies reviewed, we found that the Covid-19 pandemic provided the online banking services market with an opportunity, in that everyone was forced to use online banking services. This is the opportunity for which institutions had long undertaken marketing efforts to motivate customers to move to online banking, and it made customers accept these services even when there was a defect in their quality.
Banks could take advantage of the fact that everyone is compelled to use online banking services, directing their marketing efforts to facilitate this need and to create a competitive advantage in acquiring or retaining customers. They should also pay attention to the basic quality issues in banking services, namely security, ease of use, and privacy, which are strengthened by the bank's image and customers' trust in it, as well as by the existence of laws that protect all parties to these electronic transactions.
Moreover, we recommend more research to examine the relationship between the pandemic and the use of online banking services. Further research could be conducted in developing countries, and also in poor areas. | 4,208.4 | 2021-09-01T00:00:00.000 | [
"Business",
"Economics"
] |
Financial incentive programs and farm diversification with cover crops: assessing opportunities and challenges
Farmers in the Great Lakes region of the U.S. face tremendous pressure to reduce nutrient losses from agriculture. Increasing crop rotation diversity with overwintering cover crops can support ecological processes that maintain productivity while improving multiple ecosystem functions, including nutrient retention. We conducted a mixed-methods study to understand how financial incentive programs impact transitions to cover cropping in Michigan. Michigan farms span a wide range of soil types, climate conditions, and cropping systems that create opportunities for cover crop adoption in the state. We tested the relationship between Environmental Quality Incentives Program (EQIP) payments for cover crops and cover crop adoption between 2008–2019, as measured by remote sensing. We coupled this quantitative analysis with interviews with 21 farmers in the Lake Erie watershed to understand farmers’ perspectives on how incentive programs could support greater cover crop adoption. Panel fixed effects regressions showed that EQIP increased winter cover crop presence. Every EQIP dollar for cover crops was associated with a 0.01 hectare increase in winter cover, while each hectare enrolled in an EQIP contract for cover crops was associated with a 0.86–0.93 hectare increase in winter cover. In semi-structured interviews, farmers reported that financial incentives were instrumental to cover crop adoption, but that program outcomes fall short of intended goals due to policy design problems that may limit widespread participation and effectiveness. Thus, strengthening EQIP and related conservation programs could support broader transitions to diversified farming systems that are more sustainable and resilient.
Introduction
Simplified agricultural systems depend on synthetic inputs to sustain yields, causing water pollution, soil degradation, biodiversity loss, and greenhouse gas emissions [1,2]. Intensified agriculture also harms farmers who take on record levels of debt while facing narrow profit margins [3], contributing to the decline of rural communities [4]. Both research and farmer experience demonstrate that diversified farming systems can help address these crises [5]. Farmers can manage crop diversity to support beneficial ecological interactions that maintain crop yield while improving ecosystem functions, which reduces input costs and increases resilience to shocks [6][7][8].
Cover crops (i.e. non-harvested crops) are a diversification practice that supports key functions such as minimizing soil erosion, supplying and recycling soil nutrients, improving water quality, and enhancing soil health [9][10][11]. They can also mitigate greenhouse gas emissions by reducing synthetic fertilizer inputs and restoring soil organic carbon [12,13]. In temperate climates, overwintering cover crops can maximize these potential benefits by covering the soil for a longer period compared to cover crops that are killed by frost. Replacing winter bare fallows with living plant cover is essential for reducing nutrient losses that cause eutrophication and harmful algal blooms [14]. As a result, cover crops are receiving renewed attention and relatively high levels of government funding compared to other conservation practices [15]. Yet, to date, only 7.5% of U.S. farms include cover crops on about 1.7% of U.S. cropland [16]. We conducted a mixed-methods study in the Great Lakes region to understand whether financial incentive programs for cover crops are increasing cover crop presence on the ground, as well as farmer perceptions of the benefits and challenges of these programs. Our analysis focused on the state of Michigan, which contains hotspots of cover crop adoption in the northern U.S. [17,18].
A complex suite of environmental, social, economic, and political factors interacts across scales to influence, and ultimately constrain, farmers' decisions about land management [19][20][21][22]. Farmers in the U.S. face major structural barriers to adoption of diversification practices like cover cropping due to federal farm policies (i.e. the Farm Bill), concentrated markets, and corporate power that work in concert to disproportionately support simplified cropping systems [23]. Adoption of conservation practices therefore typically occurs through grassroots networks that emphasize peer-to-peer learning [24]. A large body of research has focused on behavioral and demographic factors that influence practice adoption, while structural factors, including agri-environmental policies, are understudied [20,21,25].
Over the past 30 years, new agri-environmental policies in the U.S. emerged in response to challenges to state-centered farm support alongside growing interest in environmental conservation [26,27]. These voluntary conservation programs include the Conservation Reserve Program (CRP), the Environmental Quality Incentives Program (EQIP), and the Conservation Stewardship Program (CSP). Working lands programs like EQIP and CSP provide cost-sharing for farmers to adopt conservation practices on land in agricultural production. These programs have been durable because they address multiple political goals, yet they are also criticized for limited environmental effectiveness [26,27]. For instance, the capacity for these programs to scale up practices that are new or risky, such as cover cropping, is likely constrained by their limited funding. Conservation programs comprised only 7% of projected 2018 Farm Bill spending [3], and funding for commodity and crop insurance programs is more than twofold greater than funding for conservation programs [28].
To assess patterns of cover crop adoption on the ground, a growing number of studies use satellite data products to measure cover crop adoption at large spatio-temporal scales [29,30]. Satellite data can also be used to estimate area under 'successful' cover crop performance, measured as high levels of biomass [31][32][33], which is important because realizing the benefits from cover crops depends on their biomass production [10]. Studies have used satellite estimates of cover crop area to identify temporal trends and hotspots of adoption in the U.S. [17,29,30,34] and show that corn and soybean yields are reduced, on average, following winter cover crops [35], which is a well-known barrier to their adoption [9]. Only a few studies have used remote sensing to assess relationships between cost-share programs and cover crop adoption, for both state [33] and federal programs [30,36]. To our knowledge, only one study has tested the relationship between Farm Bill conservation programs and cover crop adoption (as measured with satellite data) with panel fixed-effects regressions that address issues of omitted variable bias [36]. This previous study, however, examined the influence of all conservation payments, rather than payments specifically for cover crops, even though only a small proportion of conservation payments is allocated to cover crops [15,37].
We used panel fixed effects regression models to test the relationship between EQIP payments for cover crops and overwintering cover crop adoption in Michigan from 2008-2019, as measured by remote sensing. Our remote sensing approach distinguished cover cropped fields with high spring greenness (a proxy for cover crop biomass) to identify fields where cover crops performed well in the overwintering niche, which could inform opportunities to adapt this program for increased effectiveness [38]. We coupled this analysis with in-depth information collected from semi-structured interviews with 21 farmers in the Lake Erie watershed who are experienced with cover cropping. Our interviews aimed to understand the role that financial incentives play in cover crop adoption, and farmers' perspectives on changes to incentive programs that would increase adoption. To our knowledge, this is the first study integrating quantitative and qualitative methods, including novel remote sensing approaches, to advance an interdisciplinary understanding of the outcomes, opportunities, and limitations of financial incentive programs for supporting farm transitions to cover cropping. We hypothesized that EQIP cover crop assistance would have a moderate, positive effect on winter cover crop presence, because a complex suite of social and environmental factors influences and constrains cover crop adoption, and the growth of cover crops after they are planted.
Study area
Our remote sensing analysis encompassed Michigan's Lower Peninsula. Michigan is the second most agriculturally diverse state in the country [39], with farms spanning a wide range of soil types, cropping systems, and climate conditions, although nearly half of Michigan farmland is planted to corn and soybeans [39]. Within row crop rotations, the relatively high presence of small grain crops such as winter wheat creates a greater opportunity for cover cropping compared to farms that only produce corn and soybeans, because small grains are harvested in the summer.
EQIP is currently the largest working lands program [27], and during the study period EQIP contracts covered more farmland in Michigan than other Farm Bill conservation programs. In most years, CRP payments, which are for land set-aside, exceeded EQIP and CSP, although CRP payments declined while EQIP and CSP payments increased from 2008 to 2019 (SI, table S1). Our analysis focused on EQIP payments for cover crops specifically, which increased incrementally from 2008-2018 and had a large increase in funding in 2019 (figure 1). Data on CSP payments are not readily available at the county level; however, EQIP and CRP comprise over 80% of conservation payments in Michigan (SI, table S1).
To complement the quantitative analysis, we conducted in-depth interviews with farmers who use cover crops in the Lake Erie watershed, where nutrient losses from agriculture contribute to recurring algal blooms in Lake Erie [40]. The interviewees were in the Maumee River and the River Raisin sub-watersheds. The River Raisin watershed (2776 km²) contains parts of five counties in southern Michigan, and most of the watershed's land is in row crop agriculture [41]. The Maumee River is the largest watershed in the Great Lakes region (21 538 km²), spanning Ohio, Michigan, and Indiana [42]. Farms in our study were in the northern portion near the Ohio-Michigan border. Roughly two-thirds of the watershed is in agricultural production, and the Maumee River is the primary source of phosphorus to Lake Erie [43].
Farmers in the Lake Erie watershed face tremendous pressure to mitigate the environmental pollution caused by intensified agriculture [44]. They also increasingly experience management challenges due to climate change [45]. For instance, in spring 2019, extreme rainfall and historic flooding prevented farmers from planting large areas of agricultural land (e.g. Michigan farmers reported not planting 357 620 ha [46]). The Natural Resource Conservation Service (NRCS) created a disaster relief program for farmers to grow cover crops on unplanted land to provide financial support, maintain soil health, and reduce water pollution. Our study site thus experienced a highly visible impact of climate change, and an unusually high level of funding for planting cover crops, during the period when we interviewed farmers.
Data
We compiled the following datasets at the county scale from 2008-2019: area under successful (high biomass) overwintering cover crops, EQIP payments for cover crops (and all other EQIP payments), area associated with EQIP contracts for cover crops, CRP payments, crop subsidy and insurance payments, and two environmental factors that affect cover crop growth: growing degree days (GDD) and precipitation (table 1; SI, appendix S1) [47].
Cover crop area
To quantify the area under successful overwintering cover crops, we produced annual maps at 30 m resolution of the main classes of agricultural land cover that are present from September to April. To create these maps, we combined information from two data layers: crop information developed by the United States Department of Agriculture (USDA) National Agricultural Statistics Service, and a map of winter cover that we produced using Landsat satellite data (appendix S1). We specifically mapped five different classes of agricultural land cover: bare/fallow, winter wheat, alfalfa hay, low biomass cover that represented weedy fallow or unsuccessful cover crops, and high biomass cover that represented successful cover crops (e.g. cereal rye, ryegrass), which attained an appreciable amount of biomass during the overwintering period (referred to as overwintering cover crops in the rest of this paper). For our statistical analyses, we only considered the overwintering cover crop class, and used this as our dependent variable in all regression models. Specifically, we aggregated these 30 m satellite data to the county scale for each year in two ways (appendix S1). The first quantified the area under overwintering cover crops for all agricultural parcels, and the second quantified the overwintering cover crop area only for parcels that were classified as row crops (corn, soybeans, or wheat) in any year from 2008-2019. We used the raster [48] and exactextractr [49] packages in R project software for all data layer creation, calculations, and extraction [50].
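The authors implemented this pipeline with the raster and exactextractr packages in R. Purely as an illustrative sketch, with an assumed NDVI threshold and invented toy arrays rather than the paper's calibrated values, the threshold-and-aggregate step might look as follows in Python:

```python
import numpy as np

# Hypothetical inputs: a 30 m spring NDVI raster and a county-ID raster of the
# same shape; the NDVI threshold separating "successful" (high biomass) cover
# from weedy/unsuccessful cover is an assumed placeholder, not the paper's value.
PIXEL_AREA_HA = 30 * 30 / 10_000   # one 30 m Landsat pixel = 0.09 ha
NDVI_THRESHOLD = 0.4               # assumption for illustration only

def county_cover_crop_area(ndvi, county_id, cropland_mask):
    """Return {county: hectares of high-biomass overwintering cover}."""
    success = (ndvi >= NDVI_THRESHOLD) & cropland_mask
    areas = {}
    for county in np.unique(county_id[success]):
        n_pixels = np.count_nonzero(success & (county_id == county))
        areas[int(county)] = n_pixels * PIXEL_AREA_HA
    return areas

# Toy example on a 4x4 grid split between two counties
ndvi = np.array([[0.1, 0.5, 0.6, 0.2],
                 [0.3, 0.7, 0.1, 0.1],
                 [0.5, 0.5, 0.2, 0.6],
                 [0.1, 0.1, 0.8, 0.9]])
county_id = np.array([[1, 1, 2, 2]] * 4)
cropland = np.ones_like(ndvi, dtype=bool)
print(county_cover_crop_area(ndvi, county_id, cropland))
```

The key design point is that "successful" cover is defined by a biomass-proxy threshold rather than by the mere presence of vegetation, so weedy fallow does not inflate the cover crop estimate.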
Federal payments
County-level data on EQIP payments and contracted cover crop area (figure 1) were provided by the NRCS Resource Economics, Analysis, and Policy Data Team. From 2008 to 2019 in Michigan, about 13% ($21,709,959) of EQIP payments were for cover crops (SI, table S3). We also included conservation payments from CRP [51], and government payments from crop insurance, commodity payments, and subsidies for corn, soybeans, and wheat (provided by the Environmental Working Group [52]) (table 1).
Modeling and analysis
We used Mann-Kendall tests to identify temporal trends in overwintering cover crops for each county during the study period. We then used panel fixed effects regressions to test how changes in EQIP payments (or contracted cover crop area) over time affected the area of agricultural land under overwintering cover crops, by estimating equation (1):

cover_crop_it = β1 EQIP_i,t−1 + β2 Env_it + β3 govt_pay_i,t−1 + φ_it + ε_it,   (1)

where cover_crop_it indicates the area (ha) under overwintering cover crops in county i at time t, EQIP_i,t−1 denotes EQIP payments (or contracted cover crop area (ha)) in county i in the year of cover crop planting (t−1), Env_it denotes the environmental variables in county i during the cover crop growing season t, govt_pay_i,t−1 signifies the payment variables (conservation, subsidy, and insurance) in county i for the cover crop planting year (t−1), φ_it are the site (county) and year fixed effects, and ε_it is the error (table 2). We ran four separate models: two with overwintering cover crops on all agricultural land and two with overwintering cover crops on land in row crops (i.e. corn, soybean, or wheat in each study year). Models were run using the plm package in R 4.0.5 software and included the 58 counties with both EQIP cover crop contracts and parcel-aggregated winter cover crop data (figure S1).
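The authors ran these analyses with the plm package in R. As a minimal sketch of the two steps, a Mann-Kendall trend test and a two-way (county and year) fixed effects regression, the following Python code uses hypothetical column names; the clustered standard errors are our assumption, not a detail stated in the paper:

```python
from math import sqrt

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

def mann_kendall_z(x):
    """Normal-approximation Mann-Kendall trend statistic (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Hypothetical county-year panel; column names are illustrative, not the
# authors' actual variable names.
df = pd.read_csv("county_panel.csv")  # assumed file with the variables below

# Two-way fixed effects via county and year dummies; for this specification
# the dummy-variable OLS estimator coincides with the within estimator used
# by R's plm.
model = smf.ols(
    "cover_crop_ha ~ eqip_pay_lag + gdd + precip + crp_pay_lag"
    " + subsidy_pay_lag + C(county) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(model.params["eqip_pay_lag"])  # ha of winter cover per EQIP dollar
```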
Farmer interviews
We conducted 21 semi-structured interviews between July 2019 and March 2020 with farmers who plant cover crops in the Maumee River and River Raisin watersheds (IRB: HUM00165334). We used purposive sampling and an in-depth, qualitative approach to data collection to better understand the perspectives of our sample [53] and provide context for our quantitative results. To identify farmers, we contacted Soil and Water Conservation District personnel who connected us with row crop farmers who use cover crops. We identified additional participants by asking interviewees for other contacts. In the River Raisin we also interviewed farmers who were participating in research on cover crops led by one of the authors. We focused on farmers who grow cover crops because they could provide rich information from their personal experience using the practice. However, interviewees were heterogeneous in their years of experience with cover cropping, the size and physical characteristics of their farms, their attitudes toward conservation, and their level of engagement in government programs (SI, table S4). Sixteen interviewees had participated in EQIP, and all but two had participated in a financial incentive program to support growing cover crops. We did not exclude farmers if they had not participated in EQIP or another program; this broader approach allowed us to learn why those farmers had not participated.
Four interviews were conducted in person, and the remaining interviews were conducted by phone. Interviews lasted approximately 80 min and included questions on farm characteristics and experience with cover cropping and cost-share policies and programs (SI, appendix S2). One farmer did not consent to being recorded, so we took hand-written notes during this interview. The remaining interviews were recorded and transcribed by one of the authors. Using a standardized template, memos were written for each interview to capture emerging themes. Transcripts were imported into NVivo and thematically coded by one of the authors [54]. Our analysis was an iterative process, with interviews, transcriptions, memos, and conceptual analysis occurring simultaneously at times. Interviews were coded multiple times to ensure all relevant data were captured and consistently categorized into appropriate codes to identify salient themes.
Adoption of overwintering cover crops
Remote sensing results confirmed that most agricultural land in the lower peninsula of Michigan is bare in winter (figure 2(a)). The area planted to alfalfa/hay and winter wheat was low throughout the study period, with a slight downward trend for wheat. The overwintering cover and low biomass (weedy fallow) classes were more variable, and the area in overwintering cover crops increased in 2016 and 2017, corresponding with a decline in the area in bare fallow. Overall, the area in successful overwintering cover crops ranged from 239 261–456 242 ha on all cropland in lower Michigan during the study period, with a mean of 325 310 ha, which represents approximately 9% of cropland in lower Michigan (figure 2(b)).
At the county scale, the area in overwintering cover crops increased in just six counties in lower Michigan between 2008-2019 (figure 3), two of which are in the Lake Erie watershed.In contrast, 14 counties in central and northern Michigan exhibited decreases in overwintering cover crop area, with the remaining counties having no change.
Panel fixed effects model results
We ran four models to assess the impacts of EQIP cover crop payments, EQIP contracted area, other conservation payments, and other federal payments on the area in overwintering cover crops (table 2). Both EQIP cover crop payments and EQIP contracted area were significantly associated with overwintering cover crop area for all models. The effect of EQIP area on the area in overwintering cover crops was larger for all crops (0.93) compared to row crops only (0.86), while the effect of EQIP payments was the same for all cropland and land in row crops (0.01). All other variables considered in the models, including GDD, precipitation, and conservation and non-conservation payments, were not significant.
Financial incentives support cover crop adoption
Farmers described the financial risk of disrupting cash crop production as an overarching barrier to cover crop adoption. For instance, farmers must successfully terminate overwintering cover crops in the spring in time to plant their primary crop. Fourteen farmers cited challenges with termination, which reduced crop yields by delaying planting or immobilizing nitrogen. More broadly, ten farmers noted it can be hard to justify the cost of the practice with no guaranteed return on investment. 'All [farmers] see is the seed cost. They do not see the downstream benefits…reduced herbicide, increase in health and organic matter; those benefits are going to come down the line' [Farmer 10].
Farmers uniformly described financial incentive programs as playing an important role in buffering against the risks of cover crop adoption. They explained that, in an ideal scenario, farmers would enroll in a program for several years, gain experience with cover crop management, begin to witness soil health benefits, and thus become motivated to continue the practice once the program ends. Several farmers described starting small and gradually expanding the area they planted to cover crops. The combination of financial incentives and on-farm experimentation allowed farmers to learn a knowledge-intensive skill and observe benefits firsthand while limiting their financial vulnerability during the transition period.
Incentive program outcomes fall short of intended goals
Despite widespread agreement that financial incentive programs support cover cropping, the interviewed farmers also outlined four fundamental design problems that may limit participation in EQIP and similar programs and cause farmers to discontinue the practice once financial incentives end. First, and perhaps surprisingly given that funding for cover crops is limited, a main criticism was that incentive programs provide too much money per area. This was mentioned by nine farmers (43%), who cautioned that if farmers view these programs as a money-making venture, instead of a soil health investment, they will be less likely to learn how to manage cover crops for maximum environmental benefits. Instead, they may terminate the cover crop too early and diminish its ecological potential, making farmers less likely to continue the practice: 'People are looking at it as a moneymaker. They can put cover out there, do a haphazard job of it, throw it out there. Apply it for 25 dollars an acre, and they are gonna get paid 50, 60, 70 dollars an acre to do it' [Farmer 4]. These farmers suggested that lowering the payment would cause farmers to take the practice more seriously to reap its full benefits.
A second shortcoming, noted by eight farmers (38%) in our sample, was the burden of enrolling in incentive programs combined with their lack of flexibility. Farmers were dissatisfied with the amount of paperwork and with specific program requirements, especially for federal programs like EQIP, saying that these bureaucratic requirements can be confusing and out of touch with the realities of farming. These farmers noted that EQIP's seeding rate requirements were too high for their region and management systems, or the planting window was too narrow, making management more difficult and limiting potential benefits. Seven farmers expressed uncertainty about the expectations, as deadlines and seeding rates vary by program and year.
A third problem was related to scale. EQIP tends to favor larger-scale farmers who are familiar with government programs. Seven farmers (33%) noted the lack of guaranteed acceptance into a financial incentive program as a problem, as these are often competitive and use a ranking system. With EQIP, for example, applications that enroll more land may receive a higher score and have a greater chance of being funded. This may result in some farmers not even applying: 'I think the policy end, they do not do a very good job of sorting out people like me. You know, they would much rather give an EQIP contract to a 2500 acre farm because they knocked out a bunch of acres, and it does not even really matter if people are really passionate about it' [Farmer 15]. Seven farmers described their preference for programs administered at the local level because these have fewer 'hoops to jump through' and because district employees know their local farming community's needs. However, the smaller budget for these programs made other farmers believe the effort to apply is more work than it is worth.
Finally, four farmers (19%) suggested that program contracts should be longer to achieve the full benefits of cover crops. Cost-share contracts generally last one to three years (Claassen, 2008), but interviewees noted that short contracts may not allow farmers to gain sufficient confidence with cover crop management or experience the soil health benefits and associated financial benefits that come from reduced input costs or increased yields. Because visual indicators like weed suppression, earthworms, and soil aggregation convey to farmers that they are getting a return on their investment, they believed that longer contracts would make it more likely for farmers to witness benefits and continue cover cropping.
Relationship between EQIP and winter cover crops
The results of this study demonstrate that financial incentives support transitions to cover cropping, despite limitations of program design and other barriers that constrain widespread adoption. Our analysis focused on Michigan, which is a diverse agricultural state in a region that sits at the center of debates about how to mitigate nutrient losses from agriculture [45]. We used a fixed effects regression approach, which better identifies a causal relationship compared to traditional regressions by assessing whether inter-annual changes in EQIP payments (or contracts) and cover crop area are tightly coupled at the county level after controlling for time-invariant factors. We found a strong relationship between EQIP contracted area and cover crop area in lower Michigan, and that every $1 of EQIP cover crop payments led to a 0.01 ha increase in overwintering cover crop presence. Indeed, prior studies have shown a significant effect of EQIP payments on cover crop adoption as reported in farmer surveys [18,55], and these studies find high additionality, meaning that high levels of observed cover cropping are attributed to economic incentives [56].
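For context, a back-of-the-envelope conversion of the payment coefficient (our arithmetic, based only on the 0.01 ha-per-dollar estimate above) implies roughly $100 per additional hectare of successful winter cover, or about $40 per acre, on the order of the per-acre payments quoted by the interviewed farmers:

```python
# Implied cost of one additional hectare of successful winter cover, derived
# from the fixed-effects coefficient of 0.01 ha per EQIP dollar.
HA_PER_DOLLAR = 0.01
dollars_per_ha = 1 / HA_PER_DOLLAR          # $100 per new hectare
dollars_per_acre = dollars_per_ha / 2.471   # 1 ha = 2.471 acres -> ~$40/acre
print(round(dollars_per_ha), round(dollars_per_acre, 1))
```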
Our remote sensing approach adds a novel aspect to these findings: by using an NDVI threshold based on ground data, we focused on successful cover crops in the overwintering niche. Without appreciable biomass, environmental benefits such as nutrient retention will not be realized from the taxpayer dollars allocated to cover cropping [31,57,58]. In temperate crop rotations, it is difficult to achieve high cover crop biomass and many cover crops fail, which may help explain why we found a stronger correlation between EQIP contracted area and the area in successful winter cover for all cropland compared to land in corn, soybeans, or wheat. The U.S. Census of Agriculture estimates that approximately 13% of farms and 7% of cropland in Michigan use cover crops [16], which is similar to the area we found in the overwintering cover class (9%; figure 2). This is surprising because we focused on winter cover crops with successful spring biomass (a subset of all cover crops), suggesting that we are only capturing a partial estimate of all cover crop area. However, our estimate may be slightly higher than the Census of Agriculture because we only considered lower Michigan, which is where most cover crops are planted, or because our remote sensing algorithm may have overestimated cover crops through time.
Farmers in the Great Lakes experience increasing management challenges from climate change, such as the historic flooding in the year that we conducted farmer interviews. Prior studies have shown that crop rotation diversity, including use of cover crops, is associated with resilience to extreme weather events [8]. Overwintering cover crops support climate change resilience by building soil organic matter [12,59,60], which can improve key soil functions including water infiltration and storage [61] that buffer against drought and flooding. By using a threshold approach to identify successful cover crops, our work demonstrates how ecological information can be incorporated into remote sensing studies to better understand the ecosystem services from cropping systems, such as soil carbon storage and nutrient conservation [60,62,63]. For instance, models based on satellite imagery of legume cover crops can successfully predict nitrogen inputs from their aboveground biomass [64], which can inform reducing fertilizer inputs. More broadly, there is a lack of research evaluating EQIP and other federal agri-environmental policies [27], which have been criticized for limited environmental benefits in part because they provide incentives for practices rather than outcomes [26]. Remote sensing could be applied as a monitoring tool to distribute more payments to farms with successful cover crop growth, which is a strong proxy for environmental outcomes.
Policy implications
Using qualitative methods, we assessed how farmers interact with and perceive incentive payments [65]. The farmers we interviewed had three to more than 30 years of experience with cover cropping. However, they could all be considered innovators and early adopters of cover crops, given that cover crop adoption is still so low [66,67]. Although the earliest adopters discussed the long-term positive effects of cover crops, the more recent adopters in our sample explained how incentives supported their transitions to cover cropping by buffering against large financial risks and allowing them to experiment with new management strategies on a portion of their farm. This trial-and-error approach was reflected in the data showing a small median size (41 ha) for EQIP contracts in Michigan (SI, table S3). These farmers also suggested that there is an ideal payment per area that could broaden participation while ensuring that farmers are invested in soil health, which would likely improve cover crop management and outcomes. This suggestion, however, may reflect these farmers' ideological commitments as early adopters (i.e. their motivations beyond subsidies), while non-adopters may have different perspectives. This should be confirmed with future research that includes a broader sample of farmers. Such research may reveal additional opportunities for engaging non-adopters. For instance, previous studies suggest that communication about cover crops within farmer networks can be crafted to resonate with non-adopters [68]. Early adopters may also benefit from non-monetary rewards such as other forms of social recognition that strengthen social norms about cover cropping and conservation [69,70].
In Michigan, over $400 million was spent between 2010-2020 on projects to address nutrient pollution, with about 10% of the funds flowing to six southeastern counties in the Lake Erie Basin [71]. These targeted investments may explain the Mann-Kendall trends showing that two of the counties with consistent increases in overwintering cover crops were in the River Raisin watershed, which faces large pressure to reduce agricultural nutrient losses. Overall, it is perhaps surprising that the six counties where cover crop area increased are in the more intensively farmed region of Michigan, where corn-soybean rotations constrain the use of cover crops. However, prior studies have shown that large row-crop farms are often more likely to participate in federal conservation programs [25,72]. This was also noted by some of the smaller-scale cover croppers we interviewed, who experienced challenges with accessing these programs and expressed frustrations about ineligibility and program equity [27]. The counties that showed declines in overwintering cover crops, in contrast, had much less row crop acreage [73]. Nationally, NRCS allocates more than half of EQIP payments to livestock-related practices and facilities [15,37]. While this suggests that EQIP funds reach lands that contribute to nutrient pollution, this approach has limitations related to the potential environmental benefits, social equity, and diversity of farm types supported by working lands conservation programs. Our analysis isolated the effects of EQIP payments for cover crops; however, broader political constraints limit EQIP's transformative potential [26,28].
In the U.S., farmer and community-level innovation is the main pathway for transitions to diversified farming systems because policies and markets that promote diversification are limited [23,24]. Here, we focused on a federal conservation program that could help alleviate the large structural constraints to cover cropping. While widespread adoption of cover crops is unlikely to occur without broad changes to agricultural policy, modifying the EQIP program could increase participation and better target currently limited funds to improve cover crop success. Drawing from our interviews with farmers who were experienced with EQIP and other financial incentive programs, the following specific changes could increase program impact (SI, table S5):
• Create a 'transitional' program for farmers to learn to manage cover crops that focuses on small areas and covers just the cost of implementation. Participation during this transition period would not subsequently disqualify a farmer from enrolling larger areas once they have gained experience and confidence with cover cropping.
• Increase flexibility in program requirements, such as planting dates and seeding rates, which could then be tailored to different contexts through a more decentralized approach. Rather than stricter rules about practice implementation, program effectiveness could be enhanced by supporting peer networks to build local knowledge, offering social rewards, and strengthening networks of farmers and researchers to improve site-specific recommendations.
• Better coordinate among agencies to streamline the process of enrolling in financial incentive programs, such as reducing paperwork and the number of acronyms. Given that the labor demands for cover crop adoption are high, barriers to enrollment need to be low to facilitate scaling up the practice.
• Lengthen contracts to match the timeframe of biophysical processes, such as building soil organic matter and restoring key ecosystem functions to levels that enable reduced input costs (or higher yields), and thus higher profitability.
Overall, the experiences and perspectives of the farmers we interviewed offer broader lessons for improving conservation incentive programs in regions that have high stakes for mitigating global change.
Conclusion
This study demonstrates that EQIP incentives for cover crops are tightly coupled with cover crop presence on the ground in Michigan, and that incentive programs supported adoption of cover cropping by reducing financial risks. Our results thus suggest that shifting more resources to these programs would increase the area in cover crops, which are currently planted on a small fraction of cropland in the U.S. Our remote sensing approach focused on cover crops that performed well in the overwintering niche because their biomass plays a key role in nutrient retention and other ecosystem functions. Overall, row crop farmers in the Great Lakes region experience large structural barriers to increasing the diversity of their cropping systems. However, this context is quickly changing as the urgent need to address climate change increases the political will and resources available for cover crops as part of a 'climate-smart agriculture' policy agenda. Drawing on their years of experience with cover cropping and incentive programs, the farmers who we interviewed suggested key policy changes that could reduce barriers to enrollment and improve program outcomes. Our findings suggest that strategic, top-down changes to strengthen conservation programs could amplify farmer-led innovation and catalyze transitions to diversified farming systems that are more sustainable and resilient.
Figure 1 .
Figure 1. EQIP payments for all EQIP contracts (dashed line) and for cover crops (solid line) in Michigan.
Figure 2 .
Figure 2. (a) Area in bare fallow and four classes of agricultural land cover from 2008-2019 in lower Michigan, and (b) area in the four winter cover classes, excluding bare fallow.
Figure 3 .
Figure 3. County-scale trends in overwintering cover crop presence on row crop fields in Michigan between 2008-2019, measured with Landsat satellite imagery. Hash marks represent counties not included in the analysis due to missing parcel or EQIP data.
Table 1 .
Variables included in regression models with descriptive statistics, expected relationship to dependent variable (i.e. overwintering cover crop area on all cropland, and on land in row crops), and data sources. All data are county-scale.
Acronyms: CRP = Conservation Reserve Program; EQIP = Environmental Quality Incentives Program; EWG = Environmental Working Group; FSA = Farm Service Agency; GDD = growing degree days; NRCS = Natural Resource Conservation Service; NOAA = National Oceanic and Atmospheric Administration; PSL = physical sciences laboratory. a This variable leaves out CSP payments because those data are not readily available at the county level. However, EQIP and CRP comprise over 80% of conservation payments in Michigan (SI, table S1).
Table 2 .
Results of panel fixed effects regressions assessing the impacts of EQIP payments and EQIP contracted area ('planned cover crop') on adoption of overwintering cover crops in Michigan.
Note: * p < 0.05; ** p < 0.01; *** p < 0.001. | 7,639.8 | 2024-03-20T00:00:00.000 | [
"Agricultural and Food Sciences",
"Environmental Science",
"Economics"
] |
Strain-stabilized superconductivity
Superconductivity is among the most fascinating and well-studied quantum states of matter. Despite over 100 years of research, a detailed understanding of how features of the normal-state electronic structure determine superconducting properties has remained elusive. For instance, the ability to deterministically enhance the superconducting transition temperature by design, rather than by serendipity, has been a long sought-after goal in condensed matter physics and materials science, but achieving this objective may require new tools, techniques and approaches. Here, we report the transmutation of a normal metal into a superconductor through the application of epitaxial strain. We demonstrate that synthesizing RuO2 thin films on (110)-oriented TiO2 substrates enhances the density of states near the Fermi level, which stabilizes superconductivity under strain, and suggests that a promising strategy to create new transition-metal superconductors is to apply judiciously chosen anisotropic strains that redistribute carriers within the low-energy manifold of d orbitals.
Supplementary Note 1: ELECTRICAL TRANSPORT MEASUREMENTS ON PATTERNED RESISTIVITY BRIDGES
In Supplementary Fig. 1, we show how the electrical transport properties of an RuO 2 /TiO 2 (110) sample depend on the direction of current flow in the film when it is confined to flow along the orthogonal in-plane crystallographic axes, [001] and [110]. Prior to lithographically patterning resistivity bridges on the film, we measured the resistance versus temperature of the entire 10 mm × 10 mm × 24.2 nm thick film by wire bonding four contacts directly to the surface of the sample in an inline contact geometry. Such a contact geometry probes the geometric mean of the two diagonal components of the in-plane resistivity tensor, i.e. √(ρ 001 ρ 110 ), neglecting small finite-size corrections that depend on how the contacts are oriented relative to the edges of the wafer [1]. The results of these measurements are shown by the blue traces in Supplementary Fig. 1a-b; these are the same data plotted on a logarithmic temperature scale in Fig. 1c of the main text.
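As a minimal sketch of how the bridge geometry converts measured four-point resistances into resistivities, and how the inline-contact measurement relates to the two bridge values, the following uses placeholder resistances rather than the measured data:

```python
import numpy as np

# Bridge geometry from the text: length between voltage probes, width, and
# film thickness of the patterned resistivity bridges.
LENGTH_M = 55e-6
WIDTH_M = 10e-6
THICKNESS_M = 24.2e-9

def resistivity(R_ohm):
    """Convert a four-point bridge resistance to resistivity (ohm-m)."""
    return R_ohm * WIDTH_M * THICKNESS_M / LENGTH_M

rho_001 = resistivity(50.0)    # placeholder resistance along [001]
rho_110 = resistivity(60.0)    # placeholder resistance along [110]

# The unpatterned inline-contact measurement probes the geometric mean:
rho_mean = np.sqrt(rho_001 * rho_110)
anisotropy = rho_110 / rho_001  # text reports anisotropies of < 20%
print(rho_mean, anisotropy)
```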
Since RuO 2 has a tetragonal crystal structure in bulk (and orthorhombic or perhaps monoclinic in (110)-oriented films), ρ 001 and ρ 110 are not guaranteed by symmetry to be equal. The intrinsic transport anisotropy in bulk RuO 2 is known to be small, with differences between ρ 100 and ρ 001 that are less than 10% at 300 K [2,3]; however, in heteroepitaxial thin films it is common for highly oriented structural defects, e.g., those nucleated at step edges on the substrate, to induce sizable extrinsic anisotropies between the different in-plane components of the resistivity tensor [4,5]. To investigate this possibility in this work, we used standard lithographic techniques to pattern the same RuO 2 /TiO 2 (110) sample into four-point resistivity bridges with dimensions 55 µm (length) × 10 µm (width) × 24.2 nm (thickness), where the direction of current flow is confined (via lithography) to be aligned with specific crystallographic directions. In the course of performing the lithography, we noticed that the TiO 2 substrates became mildly conducting, possibly due to oxygen vacancies formed during ion milling, as has been reported to occur for SrTiO 3 [6]. Therefore, we annealed the wafer containing the patterned resistivity bridges in air at elevated temperatures until the substrate again read open-circuit two-point resistances (> 100 MΩ); 2 hours at 500 °C was found to be sufficient.
The results of electrical measurements on these patterned resistivity bridges are shown by the green and orange traces in Supplementary Fig. 1. The temperature dependence of ρ(T) is qualitatively consistent with the control measurements performed on the entire film before patterning, and the absolute magnitudes of the resistivity anisotropies at 300 K and 4 K are both < 20%. Furthermore, the superconducting ρ(T) and V(I) behavior does not depend strongly on the direction of current flow; this is contrary to what would be expected if the superconductivity arose purely from oriented structural defects.
In Supplementary Fig. 1b, we ascribe the substantial decrease in low-temperature resistivities observed in the patterned resistivity bridge data relative to the entire film data to the aforementioned annealing involved in preparing the bridges. We confirmed on other RuO 2 /TiO 2 (110) samples not containing bridges that post-growth annealing in air generically causes the low-temperature values of ρ to drop, by as much as a factor of four. Because of these complications and additional uncertainties involved in lithographically patterning resistivity bridges on films on TiO 2 substrates, all other electrical transport data presented in the main text and in the supplementary information were acquired by wire bonding directly to the surfaces of as-grown samples that were not subject to any post-growth annealing treatments.
Supplementary Note 2: FITTING AND EXTRAPOLATION OF SUPERCONDUCTING UPPER CRITICAL FIELDS VERSUS TEMPERATURE
In Supplementary Fig. 2, we present the results of magnetoresistance measurements for three RuO 2 /TiO 2 (110) samples with different film thicknesses; the data in Supplementary Fig. 2a, c, e show resistance versus temperature at fixed perpendicular magnetic fields through the superconducting transitions. All resistances are normalized to their zero-field values at 4 K, well above the superconducting transitions; since the normal-state R(T, H) behavior of RuO 2 /TiO 2 (110) in the absence of superconductivity is negligible in this regime of low temperatures and fields, the choice of a single normalization factor R 4 K for all data does not appreciably affect any of the results that follow. Because percolation effects imply that resistive measurements of critical fields inherently contain some ambiguity about the definition and meaning of H c⊥ relative to truly bulk-sensitive measurements of superconductivity [7], here we adopt the same convention employed in the main text: the temperature at which R crosses 50% of R 4 K (middle dashed lines in Supplementary Fig. 2a, c, e) is taken as T c for the given H c⊥ , and the error bars on the extracted T c (shown in Supplementary Fig. 2b, d, f) are the temperatures at which R crosses the 90% and 10% thresholds of R 4 K , respectively (top and bottom dashed lines) [8].
While there are considerable quantitative discrepancies in the values of H c⊥ and T c for the different-thickness samples shown in Supplementary Fig. 2, the H c⊥ (T c ) scaling behavior is remarkably linear for all samples, with no signs of H c⊥ saturation down to reduced temperatures T/T c ≈ 0.2−0.3, unlike what is expected in, e.g., Werthamer-Helfand-Hohenberg (WHH) theory [9]. For example, evaluating the right-hand side of the WHH expression

µ 0 H c⊥ (T → 0 K) = −0.693 T c µ 0 (dH c⊥ /dT)| T=Tc

yields estimates of the zero-temperature critical fields, from which Ginzburg-Landau coherence lengths follow as

ξ(T → 0 K) = √(Φ 0 /(2π µ 0 H c⊥ (T → 0 K))),

where Φ 0 is the superconducting flux quantum and µ 0 is the magnetic permeability of free space.
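A short numerical sketch of this extrapolation, using placeholder field-temperature pairs rather than the measured data, shows how a linear H c⊥ (T c ) fit translates into ξ(T → 0 K):

```python
import numpy as np

PHI_0 = 2.067833848e-15  # superconducting flux quantum (Wb)

# Placeholder (T, mu0*Hc_perp) pairs illustrating the linear scaling reported
# in Supplementary Fig. 2; these are not the measured values.
T = np.array([0.3, 0.5, 0.7, 0.9])          # K
mu0_Hc = np.array([1.8, 1.3, 0.8, 0.3])     # T

slope, intercept = np.polyfit(T, mu0_Hc, 1)  # linear fit of mu0*Hc(T)
Tc = -intercept / slope                      # temperature where Hc -> 0

# WHH dirty-limit extrapolation from the slope at Tc
mu0_Hc_0K = -0.693 * Tc * slope

# Ginzburg-Landau coherence length from the zero-temperature critical field
xi_0K = np.sqrt(PHI_0 / (2 * np.pi * mu0_Hc_0K))
print(f"mu0*Hc(0) = {mu0_Hc_0K:.2f} T, xi(0) = {xi_0K * 1e9:.1f} nm")
```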
Notably, these values of ξ(T → 0 K) are smaller than the values reported for traditional elemental superconductors with comparable T c s by almost an order of magnitude, corresponding to critical fields that are ≈ 1−2 orders of magnitude greater. While an explanation and understanding of these sizable critical field enhancements are beyond the scope of the present work, they are internally self-consistent with the large critical current densities noted in Fig. 1d of the main text and in Supplementary Fig. 1c. These results may motivate future real-space measurements of the superconducting condensate by scanning-probe techniques. In particular, an interesting question to address is whether the structural defects in RuO 2 (110) act as pinning sites for the vortices that form under applied fields, similar to what has been observed in numerous other thin-film superconductors [10], or whether the defects host regions of enhanced superfluid density that effectively act as barriers to vortex motion, akin to twin boundaries in bulk single crystals of iron-based superconductors [11,12].

In Supplementary Fig. 3, we include electrical characterization and more comprehensive lab-based x-ray diffraction (XRD) measurements for the RuO 2 (101) and RuO 2 (110) films of comparable thickness shown in Fig. 2a of the main text. Supplementary Fig. 3a,e show the zero-field ρ(T) behavior for the two films: the 18.6 nm thick RuO 2 (101) film is non-superconducting down to < 0.4 K with a residual resistivity ρ 0 < 1.7 µΩ-cm, whereas the 14.2 nm thick RuO 2 (110) film is superconducting at T c = 0.92 +0.21/−0.07 K with a residual resistivity ρ 0 < 32 µΩ-cm. Supplementary Fig. 3b,f show rocking curves for the films overlaid on rocking curves for the TiO 2 substrates they were synthesized on: in all cases the coherent components of the film peaks exhibit narrow full width at half maximum (FWHM) values that are limited by the underlying substrate FWHM, as expected for isostructural film growths. In our studies we found that the rocking curve shapes and widths of the TiO 2 substrates supplied by CrysTec GmbH can vary significantly depending on how the in-plane momentum transfer q || is oriented relative to the crystal axes of a given wafer, which may be due to the Verneuil process used to synthesize the crystals; to give some idea of the magnitude of this asymmetric mosaic spread, we show scans with q || oriented along azimuths separated by 90° for each sample.
In Supplementary Fig. 3c,d and Supplementary Fig. 3g,h we show off-specular (q || , q ⊥ ) reciprocal space maps (RSMs) for both samples in regions surrounding HKL Bragg peaks that have q || purely aligned with the crystallographic directions indicated in the labels on the horizontal axes. For reference, the peak positions that would be expected for the lattice parameters of bulk RuO 2 and bulk TiO 2 at 295 K [13,14] are shown as red and white squares, respectively; the orange squares represent the central peak positions expected for commensurately strained RuO 2 thin films calculated using appropriately constrained density functional theory structural relaxations. To give a more quantitative sense of the logarithmic false color scale used here, the solid white lines overlaid on each plot represent the scattered intensity along the crystal truncation rods (CTRs), i.e., the one-dimensional line cuts through the RSMs with q || equal to that of the substrate Bragg peaks. These results show that the 18.6 nm thick RuO 2 (101) film is coherently strained to the substrate along both in-plane directions, within the ≈ 0.1% resolution of the measurements. The variable widths of the CTRs versus q || in different RSMs are an artifact of instrumental resolution effects, namely, the "tall" incident beam profile convolved with the scattering geometries used to measure each RSM, which we do not attempt to correct for in this work. On the other hand, the 14.2 nm thick RuO 2 (110) film shows signs of partial strain relaxation. To characterize the diffuse scattering observed in RuO 2 /TiO 2 (110) samples, we measured RSMs around several Bragg peaks along the specular (q || = 0) CTR. Supplementary Fig. 4 summarizes the results of such measurements for the same 14.2 nm thick RuO 2 (110) sample for which off-specular RSMs are shown in Supplementary Fig. 3, which was also characterized by XRD and scanning transmission electron microscopy in Fig. 2 of the main text. By taking line cuts averaged over the dashed boxes, which span ranges of q ⊥ where the measured intensities are predominantly due to scattering from the film, we obtained the three rocking curves plotted in Supplementary Fig. 4d. Each rocking curve shows a sharp central peak that is resolution-limited in width (or substrate-limited, cf. Supplementary Fig. 3b,f), superimposed on a much broader, nearly Lorentzian (FWHM = 0.003−0.005 Å⁻¹) component of the scattering that is also centered at q || = 0. Furthermore, the integrated intensity of the former coherent component of the scattering decays relative to that of the diffuse component as the magnitude of |q| = q ⊥ increases in progressing from cut (a) to cut (c).

The non-vanishing intensity of the diffuse component in the film rocking curves, and the scaling behavior of how the total integrated intensity is distributed between the coherent and diffuse components as |q| is varied, are both completely consistent with published data for numerous epitaxial thin films grown on lattice-mismatched substrates where the films are thick enough to exhibit some form of strain relaxation [15][16][17][18][19][20]. In principle, by analyzing the diffuse scattering profiles around multiple Bragg peaks with q that project differently onto the Burgers vectors of the relevant misfit dislocations that relax the strain, one can obtain quantitative information on the types of dislocations that exist, the dislocation densities, etc. [21,22]. We leave a more systematic analysis of this type to future synchrotron XRD studies, where the measurement noise floor is significantly lower and the strongly q-dependent instrumental resolution effects observed here are mitigated by having a more point-like incident beam profile. We note, however, that the similar FWHM values of the diffuse scattering versus q || around the 110, 220, and 330 peaks imply that the structural defects responsible for this scattering are more translational in nature than rotational (which, in typical mosaic crystals, produce rocking curves of constant angular widths) [15,16]. Whether the inverses of these FWHM values for the fitted Lorentzians can be directly interpreted as the Fourier transform of a real-space correlation length (200−300 Å) depends on whether the film is in the limit of weak (structural) disorder, in the formalism of Refs. [16,21].
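As a sketch of the Lorentzian analysis described above, with synthetic rocking-curve data standing in for the measured diffuse component of Supplementary Fig. 4d, the FWHM-to-correlation-length estimate could be implemented as:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, amplitude, fwhm, offset):
    """Lorentzian centered at q = 0 with full width at half maximum `fwhm`."""
    gamma = fwhm / 2
    return amplitude * gamma**2 / (q**2 + gamma**2) + offset

# Synthetic rocking-curve data (q_parallel in 1/Angstrom vs intensity);
# a real analysis would fit the measured diffuse component instead.
q = np.linspace(-0.02, 0.02, 201)
intensity = (lorentzian(q, 1.0, 0.004, 0.01)
             + 0.005 * np.random.default_rng(0).standard_normal(q.size))

popt, _ = curve_fit(lorentzian, q, intensity, p0=(1.0, 0.005, 0.0))
fwhm = popt[1]
correlation_length = 1 / fwhm  # ~200-300 A for FWHM = 0.003-0.005 1/A
print(f"FWHM = {fwhm:.4f} 1/A -> L ~ {correlation_length:.0f} A")
```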
The non-vanishing intensity of the diffuse component in the film rocking curves, and the scaling behavior of how the total integrated intensity is distributed between the coherent and diffuse components as |q| is varied, are both completely consistent with published data for numerous epitaxial thin films grown on lattice-mismatched substrates, where the films are thick enough to exhibit some form of strain relaxation [15-20]. In principle, by analyzing the diffuse scattering profiles around multiple Bragg peaks with q that project differently onto the Burgers vectors of the relevant misfit dislocations that relax the strain, one can obtain quantitative information on the types of dislocations that exist, the dislocation densities, etc. [21,22]. We leave a more systematic analysis of this type to future synchrotron XRD studies, where the measurement noise floor is significantly lower and the strongly q-dependent instrumental resolution effects observed here are mitigated by having a more point-like incident beam profile. We note, however, that the similar FWHM values of the diffuse scattering versus q|| around the 110, 220, and 330 peaks imply that the structural defects responsible for this scattering are more translational in nature than rotational (rotational disorder in typical mosaic crystals produces rocking curves of constant angular width) [15,16]. Whether the inverses of these FWHM values for the fitted Lorentzians can be directly interpreted as the Fourier transform of a real-space correlation length (200−300 Å) depends on whether the film is in the limit of weak (structural) disorder, in the formalism of Refs. [16,21].

An intensity profile across the substrate-film interface (Supplementary Fig. 5b) confirms that an abrupt interface exists between TiO2 and RuO2. In particular, a sharp transition from lower intensity peaks in the substrate (Ti: Z = 22) to higher intensity peaks in the film (Ru: Z = 44) occurs over a region thinner than 1 nm surrounding the black arrow at the substrate-film interface; this indicates that any Ti/Ru chemical interdiffusion is minimal and cannot be the cause of the enhanced superconductivity observed in RuO2(110). At the lattice scale, we find that different regions from the same film exhibit varying degrees of crystalline coherence under the epitaxial strain applied by the TiO2 substrate. The lateral in-plane direction imaged in Supplementary Fig. 5 is the [110] axis of the RuO2 film, subject to +2.3% tensile strain from the TiO2 substrate. Some regions, such as the one shown in Supplementary Fig. 5b, exhibit exceptionally clean crystalline quality: all of the atomic columns of Ru stack uniformly in the projection of the STEM image to produce highly ordered atomic contrast. In other regions of the same film, strain gradients distort the RuO2 lattice such that the columns of Ru atoms are slightly misaligned with the electron beam projection. This local misalignment of the lattice causes the apparent blurring and more mottled contrast of the STEM image seen in Supplementary Fig. 5c.

Supplementary Figure 7. HAADF-STEM structural characterization of a non-superconducting RuO2(101) sample. a, As in the superconducting RuO2(110) samples, continuous film growth is observed across the entire length of the STEM lamella. b, Epitaxial growth between the RuO2 film and TiO2 substrate is again confirmed. Here, however, the observed contrast is comparatively smooth across the film, without the clear signs of high strain observed in the RuO2(110) sample. c, Atomic-resolution STEM image demonstrating the high crystalline quality of the RuO2(101) sample. Inset shows the expected structure for this projection (not to scale).
In Supplementary Fig. 6, the same sample is studied with HAADF-STEM imaging in the orthogonal projection direction. This orientation allows us to assess the crystalline response of the RuO2 film along the [001] in-plane direction, which is subject to a larger lattice mismatch with the TiO2 substrate, −4.7% compressive strain. Again, Supplementary Fig. 6a confirms the continuous and epitaxial growth of the RuO2(110) thin film over the mesoscopic and macroscopic length scales relevant for interpreting the electrical transport data shown elsewhere in the manuscript. Effects of the large compressive strain along the in-plane direction of this projection are apparent in Supplementary Fig. 6a-b as characteristic V-shaped contrast in the RuO2 film. Contributions from electron channeling in ADF-STEM imaging produce this bright/dark contrast in regions of local crystallographic strain; such contrast is a common signature of epitaxial lattice strain in many other oxide systems. Supplementary Fig. 6c shows the same structural response at atomic resolution, where, similar to Supplementary Fig. 5c, the apparent blurring of atomic columns arises from regions where the film lattice has been locally distorted.
Finally, for completeness we also performed HAADF-STEM measurements on the same non-superconducting 18.6 nm thick RuO2(101) film characterized in Fig. 1c; the results are shown in Supplementary Fig. 7. The film is comparably continuous and epitaxial to the superconducting RuO2(110) films we have studied, without any signatures of extended defects or secondary phase inclusions that might otherwise alter its electrical properties. In good agreement with the XRD and electrical transport data shown in Supplementary Fig. 3, the RuO2(101) film exhibits more coherent crystalline order than the more drastically strained superconducting RuO2(110) films, even over relatively large fields of view as shown in Supplementary Fig. 7b. Supplementary Fig. 7c shows that the lattice remains largely defect-free down to the atomic scale.
The data presented in Supplementary Figs. 3-6 indicate that the crystal structures of superconducting RuO2(110) films are not commensurately strained to the TiO2 substrates. To better visualize how this partial strain relaxation manifests in real space, we employed Fourier filtering of STEM images to find edge dislocations, following the techniques described in Ref. [23]. Specifically, Supplementary Fig. 8b (Supplementary Fig. 8d) is generated by computing the fast Fourier transform (FFT) of the STEM image in Supplementary Fig. 8a (Supplementary Fig. 8c), and then taking the inverse FFT with the contributions of all spatial frequencies smoothly masked out except for those in a narrow region of q-space surrounding the noted ±q||. The horizontal dashed lines in Supplementary Fig. 8b,d represent the boundaries of the film along the out-of-plane direction. In this representation, edge dislocations appear as topological defects in the otherwise continuous vertical streaks appearing in Supplementary Fig. 8b,d. These vertical streaks are formally the lattice points of the film and substrate crystal structures, blurred into streaks along the out-of-plane direction because we discard any high-spatial-frequency information about out-of-plane correlations of the electron density (i.e., we keep only q⊥ ≈ 0) when computing the inverse FFTs. Hereafter we loosely refer to these streaks as atomic columns, since the contrast in these HAADF-STEM data arises predominantly from electron scattering by the atomic cores with larger Z (i.e., Ti and Ru). Dislocations indicated by green markers add one atomic column to the number of columns that exist in layers beneath them (thus relaxing tensile strain in the lateral direction), whereas dislocations indicated by orange markers remove one atomic column from the number of columns that exist in layers beneath them (thus relaxing compressive strain in the lateral direction). Therefore, a fully strain-relaxed film of RuO2/TiO2(110) would show a collection of only green (only orange) dislocations accumulated at the substrate-film interface in Supplementary Fig. 8b (Supplementary Fig. 8d, respectively). The dislocation densities expected in this fully strain-relaxed scenario would be 1 per every 1/0.023 ≈ 43 vertical streaks for green dislocations in Supplementary Fig. 8b, and 1 per every 1/0.047 ≈ 21 vertical streaks for orange dislocations in Supplementary Fig. 8d.
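To make the masked-inverse-FFT procedure concrete, here is a minimal numpy sketch applied to a toy lattice image. The mask width, lattice spacing, and image are illustrative assumptions; this is a sketch of the general approach of Ref. [23], not the analysis code used for Supplementary Fig. 8:

```python
import numpy as np

def fourier_filter(image, q_par, sigma):
    """Keep only spatial frequencies near +/- q_par (in-plane, cycles/pixel)
    with q_perp ~ 0, then invert the FFT. In real STEM data, edge
    dislocations appear as terminating streaks in the filtered image."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    qy = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]   # out-of-plane axis
    qx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]   # in-plane axis
    # smooth Gaussian masks centered at (+q_par, 0) and (-q_par, 0)
    mask = (np.exp(-((qx - q_par) ** 2 + qy ** 2) / (2 * sigma ** 2))
            + np.exp(-((qx + q_par) ** 2 + qy ** 2) / (2 * sigma ** 2)))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# toy lattice image with a lattice spacing of 8 pixels along x
y, x = np.mgrid[0:256, 0:256]
image = 1 + np.cos(2 * np.pi * x / 8.0)
filtered = fourier_filter(image, q_par=1 / 8.0, sigma=0.02)
```

Because the mask discards all out-of-plane frequency content, each lattice plane is smeared into a vertical streak, exactly the representation described above in which dislocations show up as streak terminations.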
In marked contrast to this behavior, zero dislocations are observed across the 226-unit-cell-wide field of view in Supplementary Fig. 8b. Furthermore, although a significantly higher density of dislocations is present across the 204-unit-cell-wide field of view in Supplementary Fig. 8d, there are nearly equal numbers of edge dislocations having Burgers vectors of −c (orange) and +c (green), and the dislocations are rather uniformly distributed throughout the entire thickness of the film. These observations imply that throughout a sizable volume fraction of the superconducting film, the crystal structure is, on average, much closer to the commensurately strained limit than to the fully relaxed limit. We note that this agrees well with the distribution of x-ray scattering intensities plotted in the RSMs for this sample and others in Supplementary Figs. 9-10. Based on these data, we suggest that it is appropriate to consider the local strain gradients that inevitably accompany the nucleation of dislocations in RuO2(110) to be sample-dependent perturbations to significantly larger average components of the substrate-imposed strain fields that are present throughout all films shown in the manuscript. Because superconductivity is an essentially mean-field phenomenon, we believe that the latter average components of the strain fields in RuO2(110) are the key ingredients for stabilizing superconductivity with transition temperatures at least an order of magnitude larger than in bulk RuO2; finer details of the local strain gradients probably determine finer details of the superconductivity, such as the exact sample-dependent Tcs measured by non-bulk-sensitive probes of superconductivity, such as resistivity. Since our platform for applying strain enables scanning-probe measurements of the superconducting condensate, future experiments may be able to provide direct experimental evidence to support these general expectations of mesoscale or nanoscale strain inhomogeneity resulting in spatially inhomogeneous superconductivity [24].

In Supplementary Fig. 9a-b, we plot x-ray reflectivity (XRR) data taken at low incident angles, and XRD data taken near the 110 Bragg peaks of the film and substrate, both acquired along the specular CTRs using Cu-Kα radiation. Finite-thickness fringes are present over a wide range of angles in both data sets, evidencing (from reflectivity) atomically abrupt interfaces of the films with the substrates and with vacuum, and (from diffraction) comparable levels of crystallinity along the out-of-plane direction across samples. Furthermore, the spacings between secondary maxima on either side of the primary film Bragg peaks in XRD match the spacings between the low-angle XRR fringes, suggesting that the crystal structures of all films are essentially homogeneous along the out-of-plane direction. The film thicknesses t listed in Fig. 2b-c of the main text and in Supplementary Figs. 9-10 are obtained by directly fitting the XRR data in Supplementary Fig. 9a using a genetic algorithm, which yields sub-nanometer roughnesses in all cases in the refined models.
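The correspondence between XRR and XRD fringe spacings follows because both encode the total film thickness through Δq⊥ ≈ 2π/t. A minimal sketch of this estimate is given below; it is a back-of-the-envelope cross-check, not the genetic-algorithm refinement actually used for the quoted t values, and the fringe positions are hypothetical:

```python
import numpy as np

def thickness_from_fringes(two_theta_deg, wavelength=1.5406):
    """Estimate film thickness (Angstrom) from successive finite-thickness
    fringe maxima (2-theta, degrees) in specular XRR/XRD data taken with
    Cu-K-alpha radiation, using t ~ 2*pi / delta_q_z."""
    theta = np.radians(np.asarray(two_theta_deg) / 2)
    qz = 4 * np.pi * np.sin(theta) / wavelength   # specular momentum transfer
    dq = np.mean(np.diff(qz))                     # average fringe spacing
    return 2 * np.pi / dq

# hypothetical low-angle fringe positions, for illustration only
print(f"t ~ {thickness_from_fringes([1.00, 1.31, 1.62, 1.93]):.0f} A")
```

Equal fringe spacings in XRR and around the film Bragg peak therefore imply, as stated above, that the electron-density thickness and the crystalline thickness coincide, i.e., that the film is structurally homogeneous along the out-of-plane direction.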
Given that there are no obvious differences in film morphology or out-of-plane crystallinity between RuO2(110) samples with different t, an alternative explanation that may account for the thickness-dependent superconducting Tcs is the proliferation of misfit dislocations in thicker films that progressively relax the epitaxial (i.e., in-plane) strains; in this scenario, it may be that partially strain-relaxed RuO2(110) films have higher (average) superconducting Tcs compared with fully commensurately strained RuO2(110) films. To investigate this possibility, in Supplementary Fig. 9c-e we plot average line cuts versus q|| through RSMs measured near the 220, 310, and 332 Bragg reflections for films of varying thickness.

Supplementary Figure 9. Evolution of crystal structure and electrical transport behavior as a function of film thickness for RuO2/TiO2(110). a, X-ray reflectivity and b, x-ray diffraction data along the specular CTR show comparable levels of flatness and crystalline order along the out-of-plane direction for all samples. c-e, Average line cuts versus q|| through the 220, 310, and 332 RSMs (fully q-resolved data are shown in Supplementary Fig. 10) indicate that all samples with t > 5.8 nm exhibit partial strain relaxation. f-g, Zero-field ρ(T) data show that thinner films generally have higher residual resistivities ρ0 and lower superconducting Tcs. Tcs in g are taken as the temperatures at which ρ(T) crosses 50% of the normal-state ρ0, and error bars indicate the temperatures at which ρ(T) crosses 90% and 10% of ρ0, respectively. The horizontal dashed line in g represents the base temperature attainable in our cryostat (0.4 K), and the gray-shaded region indicates the range of superconducting coherence lengths (ξ = 12−22 nm) extracted from magnetoresistance measurements of the upper critical fields for ten different RuO2(110) thin films. Comparisons of these ξ with the mean free paths ℓ (top axis of g) that correspond to the measured ρ0 (bottom axis of g) indicate that superconductivity persists in the dirty limit ℓ < ξ.
The fully q-resolved RSMs are plotted in Supplementary Fig. 10 using logarithmic false color scales; the line cuts in Supplementary Fig. 9c-e are averaged over the ranges of q⊥ between the dashed white lines in Supplementary Fig. 10. All of the samples except the thinnest film exhibit diffuse scattering surrounding the CTRs, indicating that partial strain relaxation onsets between film thicknesses of 5.8 nm and 11.5 nm for the growth conditions used in this work to synthesize RuO2/TiO2(110) samples. Since the in-plane lattice mismatches between RuO2 and TiO2 are highly anisotropic for the (110) orientation, it might also be expected that the substrate-imposed compressive strain along [001] (−4.7%) starts to relax at smaller film thicknesses than the tensile strain along [110] (+2.3%) [17,19]. The off-specular RSMs in Supplementary Fig. 10b-c support this expectation: finite-thickness fringes can still be observed along the CTRs in the RSMs near 310 for films up to at least t = 17.2 nm, whereas only the t = 5.8 nm film shows a contribution to the coherent CTR scattering in the RSMs near 332 that clearly rises above the contributions of the substrate.

Supplementary Figure 10. RSMs for RuO2/TiO2(110) samples with different film thicknesses, t. a, RSMs along the specular CTR near the 220 Bragg reflections for films with increasing t, moving from left to right. q|| is aligned with [001] in all panels, although the phenomenology is similar with q|| along [110], cf. Supplementary Fig. 4. b, Thickness-dependent RSMs near the off-specular 310 Bragg reflections where q|| is purely along [110]. c, Same as b, but near the 332 Bragg reflections where q|| is purely along [001]. White, red, and orange squares represent the central peak positions expected for bulk TiO2, bulk RuO2, and commensurately strained RuO2(110), as in Supplementary Fig. 3. The line cuts plotted in Supplementary Fig. 9c-e are averaged over the ranges of q⊥ of the RSMs between the horizontal white dashed lines.
Although signatures of scattering from partially strain-relaxed RuO2(110) are manifestly present in the data for all of the superconducting samples in Supplementary Fig. 9 (namely, broader distributions of intensity versus q|| that asymmetrically gain weight towards the positions expected for bulk RuO2 as the film thickness increases), it remains somewhat ambiguous whether these data can be interpreted in a straightforward manner to gain insight into what levels of strain optimize the superconducting Tcs in RuO2. Strain relaxation in oxide thin films often occurs inhomogeneously, with a mixture of commensurately strained and partially relaxed material [25]. Indeed, examining the transport data for these same samples in Fig. 2b of the main text and in Supplementary Fig. 9f, it is tempting to ascribe the multi-stage behavior of the superconducting transitions to temperature-dependent Josephson coupling of regions of the films under different amounts of strain with correspondingly different "local" Tcs; similar behavior has been described theoretically [26] and observed experimentally in patterned niobium islands on gold substrates [27]. However, because of the close proximity of the substrate Bragg peaks along q⊥ (d110 = 3.248 Å) with the positions expected for commensurately strained RuO2(110) (d110 = 3.241 Å), it is difficult to disentangle their respective contributions to the total scattering observed in XRD.
Despite these complications in quantitatively analyzing the XRD results, we can use the values of t obtained from XRR to plot the normalized resistance versus temperature curves from Fig. 2b of the main text in terms of absolute resistivities, as shown in Supplementary Fig. 9f. From these data, a robust correlation between the superconducting Tcs and the residual resistivities ρ0 immediately becomes apparent, as displayed in Supplementary Fig. 9g. As noted in the main text, this correlation may suggest that the primary effect of reducing t is to enhance the relative importance of elastic scattering off disorder near the substrate-film interfaces, which is known to decrease Tc in numerous families of thin-film superconductors, both conventional [28] and unconventional [29]. It is largely outside the scope of this paper to contribute meaningful data to ongoing research efforts investigating the mechanism underlying thickness-induced suppressions of Tc that are ubiquitously observed for superconducting films in the two-dimensional limit [30]; however, in passing we note that the residual resistivity (ρ0 = 60 µΩ-cm) of the thinnest (t = 5.8 nm) non-superconducting (Tc < 0.4 K) RuO2(110) film shown in Supplementary Fig. 9 corresponds to a sheet resistance of Rs = ρ0/t = 0.10 kΩ/□. This value of Rs is about 60 times less than the Cooper pair quantum of resistance h/(2e)^2 = 6.45 kΩ/□ that was empirically noted to separate insulating from superconducting ground states in ultrathin films of numerous elemental metals [31,32]; this indicates that quantitatively different physics is likely operative here in suppressing Tc, which may place ultrathin films of RuO2(110) in closer proximity to the anomalous metal regime that was shown to occur at weaker levels of disorder in Ref. [33].
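As a quick numerical cross-check of the quoted values (a back-of-the-envelope sketch using only the numbers given above):

```python
h = 6.62607015e-34      # Planck constant, J*s
e = 1.602176634e-19     # elementary charge, C

rho0 = 60e-8            # residual resistivity, Ohm*m (= 60 micro-Ohm-cm)
t = 5.8e-9              # film thickness, m

Rs = rho0 / t           # sheet resistance, Ohm per square
RQ = h / (2 * e) ** 2   # Cooper-pair quantum of resistance
print(f"Rs = {Rs:.0f} Ohm/sq, h/(2e)^2 = {RQ:.0f} Ohm/sq, ratio = {RQ/Rs:.0f}")
```

This reproduces Rs ≈ 0.10 kΩ/□, h/(2e)^2 ≈ 6.45 kΩ/□, and a ratio of roughly 60, as stated in the text.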
We believe that identifying the exact mechanism underlying the strain-stabilized superconductivity in RuO2(110) is also well beyond the scope of the current paper. Phase-sensitive measurements of the superconducting order parameter, and/or momentum-resolved measurements of the superconducting gap magnitude, are notoriously challenging in multi-band materials with small (sub-meV) gaps, and a definitive answer to whether the pairing is (un)conventional will have to wait until such data become available. With that qualification in mind, it is natural to consider whether the Tc versus ρ0 behavior displayed in Supplementary Fig. 9g can shed any light on the answer to this question. To address this possibility, we need to convert measured properties of the normal-metal and superconducting states into comparable characteristic length scales. In unconventional low-temperature superconductors with sign-changing order parameters, such as Sr2RuO4 [34], superconductivity is completely suppressed by non-magnetic impurity scattering whenever the normal-state mean free path ℓ is comparable with the clean-limit superconducting coherence length ξ0; i.e., Tc → 0 K if ℓ ≈ ξ0. In conventional superconductors, by contrast, superconductivity persists even in the dirty limit ℓ ≪ ξ0.
On the top horizontal axis of Supplementary Fig. 9g, we indicate approximate values of ℓ corresponding to the measured values of ρ0 on the bottom horizontal axis; these numbers are computed following the analysis of Glassford et al. [3], who used the DFT-computed plasma frequencies and Fermi velocities for bulk RuO2 to obtain ℓ [nm] = 3.6·35/ρ [µΩ-cm]. Because of various uncertainties implicit in these estimations, we neglect any strain-dependent changes in Fermiology that will, of course, quantitatively renormalize the precise relationship between ℓ and ρ for RuO2(110). To compare with ℓ, we also include a gray-shaded region on Supplementary Fig. 9g corresponding to the range of average in-plane superconducting coherence lengths ξ we extracted experimentally from magnetoresistance measurements of the perpendicular upper critical magnetic fields Hc⊥ for ten different superconducting RuO2(110) samples, following the procedures detailed in Supplementary Note 2. As noted previously, these values of ξ likely represent a lower bound for what the clean-limit ξ0 would be in the absence of extrinsic defects in the films that impede vortex motion. In any case, we find that superconductivity robustly persists in RuO2(110) even when ℓ < ξ < ξ0; for example, the samples shown here with Tc = 0.9−1.8 K have measured residual resistivities corresponding to mean free paths ℓ = 4.0−9.6 nm, which are all less than the range of measured superconducting coherence lengths, ξ = 12−22 nm. Therefore, whatever the superconducting pairing mechanism is in RuO2(110), these empirical considerations demonstrate that it is rather insensitive to defect scattering.
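The ℓ-versus-ξ comparison above can be reproduced directly from the quoted relation. In the sketch below, the two ρ0 values are illustrative inputs chosen to span the quoted ℓ = 4.0−9.6 nm range, not tabulated sample data:

```python
def mean_free_path_nm(rho_uohm_cm):
    """Mean free path in RuO2 from the residual resistivity, using the
    ell[nm] = 3.6*35 / rho[uOhm-cm] relation quoted in the text."""
    return 3.6 * 35 / rho_uohm_cm

xi_lo, xi_hi = 12.0, 22.0          # measured coherence lengths, nm
for rho in (13.1, 32.0):           # illustrative residual resistivities
    ell = mean_free_path_nm(rho)
    dirty = ell < xi_lo            # ell below even the smallest xi
    print(f"rho0 = {rho:5.1f} uOhm-cm -> ell = {ell:4.1f} nm, "
          f"dirty limit (ell < xi = {xi_lo}-{xi_hi} nm): {dirty}")
```

Every sample in the quoted resistivity range lands in the dirty limit ℓ < ξ, which is the empirical basis for the insensitivity to defect scattering stated above.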
Conceptually, perhaps the most straightforward test of our proposal that substrate-imposed strains are an essential ingredient in stabilizing superconductivity in RuO2(110) would be to perform electrical transport measurements on a thick RuO2(110) film where the epitaxial strains are almost completely relaxed. In reality, however, such efforts are complicated by the observation that the +2.3% tensile strain along [110] is not released in sufficiently thick films of RuO2(110) by the nucleation of misfit dislocations; instead, cracks form in such samples (in our studies, for t ≳ 30 nm) that propagate through the entire thickness of the film and even tens of nanometers into the substrate.
In Supplementary Fig. 11a-b, we show HAADF-STEM images of such oriented micro-cracks for the same 48 nm thick RuO 2 (110) film characterized by ARPES and LEED in Fig. 4 of the main text and in Supplementary Figs. 14-15. Although these images evidence strong interfacial bonding between film and substrate-which is certainly crucial to maintain high levels of strain throughout the thinner RuO 2 (110) films we characterize elsewhere in the manuscript-the strongly anisotropic nature of the cracks makes the distribution of current flow through such samples extremely non-uniform. Accordingly, putative measurements of electrical "resistance" R(T, H) displayed in Supplementary Fig. 11c-e should be more pedantically interpreted as the voltage difference measured between two voltage contacts, divided by the total current sourced through two other contacts placed elsewhere on the film. Unsurprisingly, the results thus obtained for R depend in an essential way on the orientation of the voltage contacts relative to the cracks in the film: in Supplementary Fig. 11c-d we observe either a downturn (green trace) or upturn (purple trace) in the apparent R as the temperature is decreased below T c = 1.3 − 1.5 K, followed by plateaus at lower temperatures. In Supplementary Fig. 11e, we show that these temperature-induced anomalies in R can be suppressed in both cases by applying small magnetic fields H c⊥ < 3 kOe at fixed T = 0.45 K, confirming that they result from (inhomogeneous) patches of superconductivity. Irrespective of whether the measured R decreases or increases at (T c , H c ), we emphasize that the fractional changes in resistance induced by superconductivity in this 48 nm thick RuO 2 (110) film are less than 1% of the residual normal-state resistances R 4 K , in marked contrast to the full 100% drops to zero resistance exhibited by all thinner, more highly strained, uncracked RuO 2 (110) films shown in other figures.
Structural relaxations
One of the central themes of this work is the exploration of strain-induced changes to the electronic structure in epitaxial thin films of RuO2 subject to biaxial epitaxial strains imposed by differently oriented rutile TiO2 substrates. To model this situation computationally within the framework of density functional theory (DFT), we started by using the Vienna Ab initio Simulation Package (VASP) [35,36] to perform full structural relaxations (of lattice parameters and internal coordinates) to minimize the DFT+U-computed total energy of RuO2 in the ideal tetragonal rutile crystal structure (space group #136, P42/mnm). Structural relaxations employed the same exchange-correlation functional and calculational parameters as for the DFT+U (U = 2 eV) calculations described in the Methods section of the main text, and forces were converged to < 1 meV/Å. Throughout the main text and supplementary information, we refer to DFT results for this minimum energy structure as "bulk RuO2". The lattice parameters for this structure, (a_bulk = 4.517 Å, c_bulk = 3.130 Å), overestimate the experimentally measured lattice parameters at 295 K for RuO2 single crystals, (a = 4.492 Å, c = 3.106 Å), by < 1%, due to well-established deficiencies of the generalized gradient approximation. With the former as the bulk reference structure, we then simulated biaxial epitaxial strains from (110)-oriented TiO2 substrates by performing constrained structural relaxations for RuO2 in which the in-plane lattice parameters c = (1 − 0.047) × c_bulk and d11̄0 = (1 + 0.023) × d11̄0,bulk were held fixed, while the out-of-plane lattice constant d110 and all internal coordinates of the structure were allowed to relax so as to minimize the total energy. The fixed compression and expansion of c and d11̄0, respectively, correspond to the experimentally measured lattice mismatches between TiO2 and RuO2 single crystals at 295 K [13,14].
Within this scheme, DFT+U predicts that commensurately strained RuO2(110) thin films will have an out-of-plane lattice constant d110 = (1 + 0.017) × d110,bulk, which compares reasonably well with the 2.0% expansion of d110 measured experimentally on a 5.8 nm thick RuO2(110) film. Because the splitting of d110 and d11̄0 in strained RuO2(110) breaks the non-symmorphic glide plane symmetry of the parent rutile structure, we used a base-centered orthorhombic structure (space group #65, Cmmm) with lattice constants of c × 2d11̄0 × 2d110 for DFT simulations of RuO2(110). The primitive unit cell of this Cmmm structure contains the same number of atoms as the parent rutile unit cell, so there is no apparent doubling and/or folding of the bands in spaghetti plots that compare the band structures of RuO2(110) and bulk RuO2, such as in Supplementary Fig. 12 or Fig. 4a of the main text.
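The numbers entering these constrained relaxations follow directly from the bulk parameters quoted above. A short sketch, assuming only the standard rutile relation d110 = a/√2 (everything else is taken verbatim from the text):

```python
import math

# DFT-relaxed bulk rutile RuO2 lattice parameters (see text)
a_bulk, c_bulk = 4.517, 3.130              # Angstrom
d110_bulk = a_bulk / math.sqrt(2)          # (110) interplanar spacing

# in-plane constraints imposed by the TiO2(110) substrate
c_film = (1 - 0.047) * c_bulk              # [001], -4.7% compression
d_ip = (1 + 0.023) * d110_bulk             # in-plane, +2.3% tension

# DFT-predicted out-of-plane response (Poisson-like expansion)
d_oop = (1 + 0.017) * d110_bulk

# lattice constants of the Cmmm cell used for RuO2(110) simulations
print(f"Cmmm cell: {c_film:.3f} x {2*d_ip:.3f} x {2*d_oop:.3f} Angstrom")
```

Running this reproduces the c × 2d11̄0 × 2d110 cell dimensions implied by the strain values quoted above.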
To simulate the electronic structure of commensurately strained (101)-oriented RuO2 thin films, we adopted a slightly different approach, since it is not straightforward to perform constrained structural relaxations with DFT in such a low-symmetry situation. Specifically, we took the rutile b axis to be under +2.3% tension, i.e., b = (1 + 0.023) × b_bulk, as dictated by the lattice mismatch of RuO2 with TiO2 along this direction (cf. Supplementary Fig. 3d). The rutile a and c axes, on the other hand, are free to adjust their lengths, but are subject to simultaneous constraints of the form

√(a^2 + c^2) = √(a_TiO2^2 + c_TiO2^2),   (3)
1/d_hkl^2 = h^2/a^2 + k^2/b^2 + l^2/c^2 = (measured value), for hkl = 202, 103, 402.   (4)-(6)

Supplementary Eq. (3) ensures that the film is lattice matched to the TiO2 substrate along the [101] in-plane direction (cf. Supplementary Fig. 3c), and Supplementary Eqs. (4)-(6) ensure that the d-spacings for the HKL = 202, 103, and 402 Bragg reflections reproduce the values we measured experimentally for a commensurately strained RuO2(101) film in Fig. 2a of the main text, in Supplementary Fig. 3c, and in analogous RSM data taken around 402 (a proper, i.e., highly overconstrained, lattice constant refinement would of course include data for many more reflections). Note that in deriving these equations, we assumed for simplicity that the angle between the rutile a and c axes remains 90° in epitaxially strained films; small deviations away from this limit should be expected in reality, since this orthogonality is not guaranteed by any symmetry or constraint of the system. Nonetheless, finding the best-fit solution to Supplementary Eqs. (3)-(6) gives lattice constants of (a = 4.501 Å, c = 3.077 Å) in absolute units; dividing through by the experimentally measured lattice constants of bulk RuO2 yields a = (1 + 0.002) × a_bulk and c = (1 − 0.009) × c_bulk as appropriately scaled inputs for DFT simulations. With a, b, and c all unequal and all angles between the primitive unit cell translations equal to 90°, the crystal structure for RuO2(101) DFT simulations was taken as the primitive orthorhombic space group #58, Pnnm. Supplementary Table 1 summarizes all parameters of the crystal structures used in DFT simulations for bulk RuO2, RuO2(110), and RuO2(101).
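A best-fit solution of this overconstrained system can be obtained with ordinary least squares. The sketch below assumes the orthorhombic d-spacing relation written above; the "measured" d-spacings are placeholder values (back-computed here so that the fit lands near the quoted result), since the actual RSM-derived numbers are not reproduced in this supplement:

```python
import numpy as np
from scipy.optimize import least_squares

def d_ortho(hkl, a, b, c):
    """Orthorhombic d-spacing: 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    h, k, l = hkl
    return 1.0 / np.sqrt(h**2 / a**2 + k**2 / b**2 + l**2 / c**2)

b_fixed = 1.023 * 4.492            # b axis under +2.3% tension (= TiO2 a axis)
a_tio2, c_tio2 = 4.594, 2.959      # bulk rutile TiO2 lattice constants

# placeholder "measured" d-spacings for the 202, 103, and 402 reflections
d_meas = {(2, 0, 2): 1.270, (1, 0, 3): 1.000, (4, 0, 2): 0.908}

def residuals(p):
    a, c = p
    res = [d_ortho(hkl, a, b_fixed, c) - d for hkl, d in d_meas.items()]
    # lattice matching along the in-plane direction, Supplementary Eq. (3)
    res.append(np.hypot(a, c) - np.hypot(a_tio2, c_tio2))
    return res

fit = least_squares(residuals, x0=[4.49, 3.11])
print(f"a = {fit.x[0]:.3f} A, c = {fit.x[1]:.3f} A")   # near (4.501, 3.077)
```

With realistic inputs this compromise fit necessarily distributes the small inconsistencies among all four constraints, which is why the text describes the refinement as overconstrained.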
Effects of adding +U
In Supplementary Fig. 12, we show the effects of including an ad hoc static mean-field +U term on the Ru sites in DFT calculations. Adding such a phenomenological term to the Kohn-Sham Hamiltonian shifts the bands relative to each other (up/down in energy) so as to force the orbital occupancies towards integer fillings, rather than also shrinking the bandwidths of the quasiparticle excitations, as would occur in a more realistic theory that includes dynamical electron-electron interactions. The blue and red traces are reproduced from the non-magnetic DFT+U (U = 2 eV) results presented in Fig. 4a of the main text; the purple and orange traces are the results of repeating GGA + SOC calculations for the same RuO2(110) and bulk RuO2 crystal structures, respectively, but now setting U = 0. Irrespective of U, both sets of calculations show a shift of the d||-derived flat bands towards EF and a concomitant enhancement of the density of states (DOS) near EF when the amount of c-axis compression is increased upon going from bulk RuO2 to RuO2(110), as indicated by the gray arrows. While these strain-dependent trends in the electronic structure are robust against fine-tuning of parameters employed in the calculations, Supplementary Fig. 12 also suggests that the calculated positions of the peaks in the DOS and the exact values of the DOS near EF should not be taken too seriously, as there are considerable theoretical uncertainties in these quantities, depending on the choice of U. We leave a complete treatment of the effects on the electronic structure of commensurate Q_AFM = (1 0 0) antiferromagnetic spin-density wave order [13,37] to future studies [38], because it is not possible in standard DFT-based approaches to stabilize self-consistent solutions for the spin densities that have small values of the ordered magnetic moment comparable to those measured experimentally (≈ 0.05 µB/Ru) [13].

Supplementary Figure 12. Strain dependence of the electronic structure of RuO2, according to DFT (+U).
Supplementary Note 6: DETERMINATION OF OUT-OF-PLANE MOMENTA PROBED BY ANGLE-RESOLVED PHOTOEMISSION SPECTROSCOPY
Figure 3d-f of the main text compares the electronic structure of a 7 nm thick RuO2/TiO2(110) sample experimentally measured by angle-resolved photoemission spectroscopy (ARPES) with the results of DFT+U simulations. To make this comparison, it is necessary to determine what range of out-of-plane momenta kz = k110 in the initial state are probed by ARPES at a given final-state kinetic energy and momentum. This was established by plotting the DFT-computed E(k) dispersions on top of the experimentally measured spectra along several one-dimensional cuts through momentum space, measured over a small range of kinetic energies corresponding to near-EF states at the given photon energy (21.2 eV), and allowing kz to vary in the calculations so as to best match the experimental data. Supplementary Fig. 13 shows representative examples of this procedure. For the cuts shown in Supplementary Fig. 13a, the kFs and electron-like character of the band crossing EF are best fitted by calculations with kz in the range −0.2 → 0.0 π/d110. Likewise, for the panels in Supplementary Fig. 13b, kz values in the range −0.6 → −0.3 π/d110 best reproduce the measured spectra, although the results here are more ambiguous because of the insensitivity of the flat-band energies to the precise value of k110. Therefore, we took the range of reduced initial-state out-of-plane momenta probed at normal emission (kx = ky = 0) to be kz,i = −0.1 ± 0.1 π/d110. Assuming a free-electron-like model of final states, the final-state kz,f is given by the expression

kz,f = (1/ħ)·√(2me(Ek cos^2θ + V0)) = 2πN/(2d110) + kz,i,   (7)

where me is the free-electron mass, Ek is the kinetic energy of the photoelectrons, θ is the emission angle relative to the surface normal, V0 is the inner potential, and 2d110 is the spacing between equivalent lattice points along the out-of-plane direction (N can adopt any integer value). Substituting Ek = 16.6 ± 0.3 eV, θ = 0°, kz,i = −0.1 ± 0.1 π/d110, and d110 = 3.23 Å into Supplementary Eq. (7), we find that an inner potential of V0 = 13.7 ± 2.3 eV is compatible with our determination of kz,i. Taking this same value of V0 and setting θ = 30−35° in Supplementary Eq. (7), as is appropriate for the experimental data in the panels displayed in Supplementary Fig. 13b, yields kz,i = −0.35 ± 0.17 π/d110; visual inspection of the DFT bands for this range of kz,i shows that the calculations also reproduce the experimental spectrum reasonably well in this region of the Brillouin zone. The curved green planes drawn in the Brillouin zone schematic in Fig. 3c of the main text are constructed by evaluating Supplementary Eq. (7) with V0 = 13.7 eV and N = 3 for all (kx, ky), and accounting for an intrinsic uncertainty of ≈ 0.2 π/d110 in kz owing to the finite elastic escape depth of photoelectrons, which we take to be ≈ 5 Å.
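The inner-potential determination reduces to one line of algebra once Supplementary Eq. (7) is inverted for V0. A minimal sketch using only the numbers quoted above (central values, ignoring the error bars):

```python
import math

HBAR2_2ME = 3.8099821  # hbar^2/(2 m_e), in eV * Angstrom^2

def kz_final(E_k_eV, theta_deg, V0_eV):
    """Free-electron final-state out-of-plane momentum (1/Angstrom),
    per Supplementary Eq. (7) as reproduced above."""
    ct2 = math.cos(math.radians(theta_deg)) ** 2
    return math.sqrt((E_k_eV * ct2 + V0_eV) / HBAR2_2ME)

d110 = 3.23                       # Angstrom
G = math.pi / d110                # out-of-plane spacing 2*pi/(2*d110)
E_k, N, kz_i = 16.6, 3, -0.1 * G  # values quoted for normal emission

# solve kz_final(E_k, 0, V0) = N*G + kz_i for the inner potential V0
target = N * G + kz_i
V0 = HBAR2_2ME * target**2 - E_k
print(f"V0 = {V0:.1f} eV")        # ~13.7 eV, matching the text
```

Propagating the quoted uncertainties in Ek and kz,i through the same expression yields the ±2.3 eV spread on V0.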
Supplementary Note 7: SURFACE LATTICE CONSTANT REFINEMENT
In Supplementary Fig. 14, we present the results of a surface lattice constant refinement for two RuO2/TiO2(110) films of different thicknesses, 7 nm and 48 nm. For both samples we acquired many low-energy electron diffraction (LEED) images at normal incidence using incident energies ranging from E = 100−300 eV in 2 eV steps; for examples of the raw data, the insets in Supplementary Fig. 14b contain representative images taken at 200 eV. For each image, we located the positions of all visible spots and indexed the spots according to their in-plane momentum transfer values q|| = 2π(H/c, K/(2d110)), where H and K are integers and, by our convention, H defines the magnitude of q|| along [001] (nearly horizontal in the images in Supplementary Fig. 14) and K defines the magnitude of q|| along [110] (nearly vertical). We then calculated the distance of all spots from the specular q|| = (0, 0) reflection and converted these image distances D (in pixel space) to the sines of the scattering angles, sin(θ) (where θ is the angle of each diffracted electron beam relative to the surface normal), based on D → sin(θ) calibrations that were independently determined from reference measurements on SrTiO3(001) surfaces having a known lattice constant. Note that these calibrations absorb the overall scaling factor that depends on the working distance between the LEED screen and the sample (and the camera image magnification factor), as well as some higher-order distortions of the spot patterns that result from the sample not being positioned precisely at the center of curvature of the LEED screen (and the screen itself being slightly aspherical).
From these values of sin(θ), the electron energies E at which each LEED pattern was recorded, and the (H, K) indices, we compiled lists of lattice constants corresponding to each fitted spot position. For simplicity in analysis, we restricted our attention to spots having q|| purely aligned with [001] or [110]. Elastic scattering and conservation of momentum modulo translations of the surface reciprocal lattice together require that

sin(θ) = |q|||/|k|, with |k| = √(2meE)/ħ,

which for Bragg spots of the type q|| = 2π(H/c, 0) and 2π(0, K/(2d110)) reduces to

c = Hλ/sin(θ) and 2d110 = Kλ/sin(θ), with λ = h/√(2meE).

The resulting distributions of extracted lattice constants are plotted in Supplementary Fig. 14. The 48 nm thick RuO2(110) sample in red shows broader LEED spots, indicating lower surface crystallinity than the 7 nm thick sample, which results in wider distributions of the extracted lattice constants. Furthermore, the centers of mass of the red distributions, (c = 3.07 ± 0.06 Å, 2d110 = 6.39 ± 0.11 Å), are displaced away from the blue distributions towards the values expected for bulk RuO2; this is why in Fig. 4b-c of the main text we suggest that the surface electronic structure of this 48 nm thick sample measured by ARPES, which probes the film over a comparable depth to LEED, within < 1 nm from the top film surface, should be more representative of bulk RuO2.
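The spot-to-lattice-constant conversion written above is a one-liner in practice. A minimal sketch, using the standard electron wavelength relation λ[Å] = √(150.4/E[eV]); the sin(θ) value below is a hypothetical example, not a measured one:

```python
import math

def lattice_constant(E_eV, sin_theta, n):
    """Surface lattice constant (Angstrom) from an n-th order LEED spot at
    scattering angle theta and incident energy E, via d = n*lambda/sin(theta)
    with the electron wavelength lambda = sqrt(150.4 / E) Angstrom."""
    lam = math.sqrt(150.4 / E_eV)
    return n * lam / sin_theta

# hypothetical first-order (H = 1) spot along [001] at 200 eV
print(f"c = {lattice_constant(200.0, sin_theta=0.283, n=1):.2f} A")
```

Repeating this for every indexed spot across the full 100−300 eV energy series is what builds the lattice-constant distributions whose centers of mass are quoted above.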
Supplementary Note 8: EXTRACTING THE NEAR-EF DENSITY OF STATES FROM ARPES MEASUREMENTS
In Supplementary Fig. 15 we present LEED and ARPES data taken on three different RuO2 thin-film samples: 19 nm thick RuO2(101), 7 nm thick RuO2(110), and 48 nm thick RuO2(110). In Supplementary Fig. 15a, all of the samples show LEED spot patterns with the periodicities expected for unreconstructed (101)- and (110)-oriented rutile surfaces, respectively; furthermore, the sharpness of the patterns suggests high degrees of surface crystallinity, such that in-plane momentum should be a nearly conserved parameter in photoemission. Given that strained RuO2(110) samples superconduct at measurable Tcs, while RuO2(101) samples and bulk RuO2 do not, the question we wanted to address using ARPES was: how does the density of states near the Fermi level, N(EF), evolve between these samples? Recall that based on the LEED lattice constant analysis described in Supplementary Fig. 14, most of the substrate-imposed epitaxial strains are relaxed at the top surface of the 48 nm thick RuO2(110) sample, such that its electronic structure probed by ARPES is a reasonable proxy for that of bulk RuO2. Specifically, the quantity of interest as it relates to the low-energy physics is

N(EF) ∝ ∫_BZ d^3k ∫_{EF−δ}^{EF+δ} dω A(k, ω),   (10)

where A(k, ω) is the single-particle spectral function, integrated over all momenta k in the Brillouin zone (BZ) and over some limited range of energies ω near EF (δ is some small parameter). Two separate factors make it extremely challenging to quantitatively extract the total N(EF) directly from data taken with our lab-based ARPES system. First, our inability to continuously vary the photon energy, or equivalently, the kinetic energy of the photoelectrons at EF, implies that only regions of the Brillouin zone with specific kz can be probed, cf. Supplementary Eq. (7). Therefore the full integration over k in Supplementary Eq. (10) cannot be performed using a lab-based ARPES setup, which is especially problematic in a material such as RuO2 that has a highly three-dimensional electronic structure depending strongly on kz (cf. Supplementary Fig. 13). Second, even if the entire Brillouin zone could be mapped exhaustively, the intensity measured in ARPES is not the initial-state spectral function A(k, ω), but rather this quantity multiplied by probabilities (i.e., matrix elements) for photoemission, which are difficult to account for theoretically.
With these qualifications in mind, there is a route to answering the simpler question of whether N(EF) increases in strained RuO2(110) compared with strained RuO2(101) or bulk RuO2: we simply need to determine where the flat bands with d|| orbital character are located in energy relative to EF. DFT calculations suggest that if these bands move closer to (further away from) EF, the total N(EF) will increase (decrease), respectively. To approximately determine the positions of these bands experimentally, we integrated the photoemission intensity over the color-coded slabs in the Brillouin zone schematic in Supplementary Fig. 15a, plotted the resulting energy distribution curves (EDCs) in Supplementary Fig. 15b, and found the maxima in the EDCs as indicated by the dashed lines. The regions colored yellow in the Brillouin zone denote where the near-EF wavefunctions have greater than 90% d|| orbital character, according to our DFT + Wannier90 calculations; since all slabs lie in this region, we expect that the dominant contributions to the measured EDCs are from d|| initial states. Note that the region of kz = k110 probed by ARPES with He-Iα (21.2 eV) photons on the (110)-oriented samples is well-constrained by analysis of the E(k) dispersions as outlined in Supplementary Fig. 13; however, for the (101)-oriented sample the region of kz = k101 probed by ARPES with He-IIα (40.8 eV) photons is merely calculated from the free-electron final state model in Supplementary Eq. (7), using the same inner potential as for RuO2(110), and thus is subject to greater experimental uncertainties. Nonetheless, the results of this analysis qualitatively agree with the strain-dependent trends anticipated by DFT (Supplementary Fig. 15c): in highly strained RuO2(110) films (blue), the flat bands move closer to EF compared with either more strain-relaxed RuO2(110) films (red) or commensurately strained RuO2(101) films (purple). This modification of the effective d orbital degeneracies boosts N(EF), which, as proposed in the main text, likely contributes to the enhanced superconducting Tcs observed in highly strained RuO2(110) samples.

As described in the Methods section of the main text, we performed first-principles DFT-based electron-phonon coupling calculations of the isotropic Eliashberg spectral function α²F(ω) and total electron-phonon coupling constant λel-ph (integrated over all phonon modes and wavevectors) for bulk RuO2 and commensurately strained RuO2(110). From these quantities, we estimated the superconducting transition temperatures using the semi-empirical McMillan-Allen-Dynes formula

Tc = (ωlog/(1.2 kB)) · exp[−1.04(1 + λel-ph)/(λel-ph − µ*(1 + 0.62λel-ph))].   (11)

McMillan obtained a formula resembling Supplementary Eq. (11) by numerically solving the equations of finite-temperature Migdal-Eliashberg theory using the experimentally measured spectral function of niobium [40]; Allen and Dynes improved the agreement of McMillan's formula with experimentally measured α²F(ω) and Tcs for a variety of conventional superconductors by introducing an appropriately weighted average over α²F(ω) in the prefactor of the exponential, rather than using the Debye temperature [41]. In essence, Supplementary Eq. (11) can be considered an extension of Eq. (1) from the main text that identifies phonons as the bosonic modes that mediate Cooper pairing (i.e., λ = λel-ph), and which remains valid even in the limit of stronger couplings (i.e., λ > 1) by virtue of Migdal's theorem for the electron-phonon interaction.
For bulk RuO2 (strained RuO2(110), respectively), we obtained λel-ph = 0.685 and ωlog = 34.1 meV (λel-ph = 1.97 and ωlog = 7.59 meV). The large strain-induced enhancement in λel-ph and shift of ωlog to lower frequencies are caused by substantial phonon softening that occurs under c-axis compression in the rutile structure. In fact, we found that some of the calculated zone-boundary phonon frequencies even become imaginary under strain in RuO2(110), possibly indicating an incipient structural instability. A more detailed account of this phenomenon will be given in a future publication; for the purposes of this work, we omitted such phonon modes in subsequent electron-phonon coupling calculations, and neglected any errors this may cause in λel-ph and ωlog.
To convert the calculated values of λel-ph and ωlog to Tcs via Supplementary Eq. (11) requires knowledge of the appropriate value(s) of the screened Coulomb interaction µ* between quasiparticles. Typically µ* is chosen in an ad hoc fashion to match the experimentally measured Tc of a given material. Because bulk RuO2 is not known to be superconducting at experimentally accessible temperatures, we cannot employ such a prescription here; nevertheless, we can use the experimentally measured least upper bound on Tc for bulk RuO2 (Tc < 0.3 K [39]) to place a lower bound on µ* (µ* > 0.30), as illustrated by the dashed lines in Supplementary Fig. 16. For this range of µ*, inserting the values of λel-ph and ωlog calculated for RuO2(110) into Supplementary Eq. (11) predicts Tc < 7 K, which agrees reasonably well with the experimentally measured values. Because several uncontrolled approximations enter into these estimates of Tc, we consider this level of agreement to be suggestive, although not conclusive, evidence for a phonon-mediated mechanism of superconductivity; in any case, it is clear that reducing the axial ratio c/a in appropriately strained variants of RuO2 robustly boosts λel-ph, in good agreement with expectations based on steric trends for other rutile compounds [42,43]. Any effects of strain on µ* are ignored for the purposes of this work.
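The Tc estimates are straightforward to reproduce from Supplementary Eq. (11). A minimal sketch, using the λel-ph and ωlog values quoted above and µ* fixed at its quoted lower bound; the bulk result landing near the 0.3 K bound and the strained result near 7 K is consistent with the discussion above:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def tc_allen_dynes(lam, omega_log_meV, mu_star):
    """McMillan-Allen-Dynes formula (Supplementary Eq. (11))."""
    denom = lam - mu_star * (1 + 0.62 * lam)
    if denom <= 0:
        return 0.0                             # Coulomb repulsion kills Tc
    w_K = omega_log_meV * 1e-3 / K_B           # omega_log in kelvin
    return (w_K / 1.2) * math.exp(-1.04 * (1 + lam) / denom)

for label, lam, wlog in (("bulk RuO2", 0.685, 34.1),
                         ("strained RuO2(110)", 1.97, 7.59)):
    print(f"{label}: Tc = {tc_allen_dynes(lam, wlog, mu_star=0.30):.2f} K")
```

This evaluates to roughly 0.4 K for bulk RuO2 (hence the need for µ* slightly above 0.30 to respect the Tc < 0.3 K bound) and roughly 7 K for strained RuO2(110).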
"Physics",
"Materials Science"
] |
Quantum cryptography beyond quantum key distribution
Quantum cryptography is the art and science of exploiting quantum mechanical effects in order to perform cryptographic tasks. While the most well-known example of this discipline is quantum key distribution (QKD), there exist many other applications such as quantum money, randomness generation, secure two- and multi-party computation and delegated quantum computation. Quantum cryptography also studies the limitations and challenges resulting from quantum adversaries—including the impossibility of quantum bit commitment, the difficulty of quantum rewinding and the definition of quantum security models for classical primitives. In this review article, aimed primarily at cryptographers unfamiliar with the quantum world, we survey the area of theoretical quantum cryptography, with an emphasis on the constructions and limitations beyond the realm of QKD.
Introduction
The relationship between quantum information and cryptography is almost half a century old: a 1968 manuscript of Wiesner (published more than a decade later [233]) proposed quantum money as the first-ever application of quantum physics to cryptography, and is also credited with the invention of oblivious transfer, a key concept in modern cryptography that was re-discovered years later by Rabin [195]. Still today, the two areas are closely intertwined: for instance, two of the most well-known results in quantum information are directly related to cryptography: quantum key distribution (QKD) [24] and Shor's factoring algorithm [206].
There is no doubt that QKD has taken the spotlight in terms of the use of quantum information for cryptography (in fact, so much so that the term "quantum cryptography" is often equated with QKD, a misconception that we aim to rectify here!); yet there exist many other uses of quantum information in cryptography. What is more, quantum information opens up the cryptographic landscape to allow functionalities that do not exist using classical information alone, for example uncloneable quantum money. We note, however, that the use of quantum information in cryptography has its limitations and challenges. For instance, we know that quantum information alone is insufficient to implement information-theoretically secure bit commitment; and that a proof technique called rewinding (which is commonly used in establishing a zero-knowledge property for a protocol) does not directly carry over to the quantum world and must be re-visited in light of quantum information.
In this paper, prepared on the occasion of the 25th anniversary edition of Designs, Codes and Cryptography, we offer a survey of some of the most remarkable theoretical uses of quantum information for cryptography, as well as a number of limitations and challenges that cryptographers face in light of quantum information. We assume that the reader is familiar with cryptography, but we do not assume any prior knowledge of quantum information. Quantum cryptography is a flourishing area of research, and we have chosen to give an overview of only a limited number of topics. The reader is, of course, encouraged to follow up by consulting the references. To the best of our knowledge, prior survey work on the topic of "quantum cryptography beyond key exchange" is limited to a 2006 survey by Müller-Quade [186] and a 1996 survey by Brassard and Crépeau [45]; see also an interesting personal account by Brassard [43]. A number of surveys that focus on QKD exist and are listed in Sect. 3.2.
Overview
The predictions of quantum mechanics defy our everyday intuition: concepts such as superposition (a particle can be in multiple places or states at the same time), entanglement (particles are correlated beyond what is possible classically) and quantum uncertainty (observing one property of a particle intrinsically degrades the possibility of observing another) are partly responsible for the bewildering possibilities in the quantum world. Section 2 of this survey contains a brief introduction to the mathematical formalism of quantum information as it pertains to quantum cryptography (no prior knowledge of quantum mechanics is assumed). Topics covered in this section include the mathematical formalism for the representation and manipulation of qubits (the fundamental unit of quantum information). We also include a brief survey of concepts such as the quantum no-cloning theorem, entanglement and nonlocality, all of which play an important role in quantum cryptography. Section 3 of this survey is devoted to quantum cryptographic constructions. The principal appeal in using quantum information for cryptography is in establishing a qualitative advantage. More precisely, the goal is to develop quantum cryptographic protocols that achieve some functionality in a way that is fundamentally advantageous compared to using classical information alone. This quantum advantage can be of the following types:

• a quantum protocol achieves statistical (information-theoretic) security; any classical protocol achieves this task with computational security at best;
• a quantum protocol achieves computational security; no classical protocol can achieve this task, even with computational security.
Many of the quantum constructions that we cover in this survey (whether they are of the first or second type) are inspired by the original proposal of Wiesner called conjugate coding. This construction embodies many unique features of quantum information, as we explain in Sect. 3.1. In particular, conjugate coding is used in constructions for physically unforgeable quantum money (Sect. 3.1), as well as in QKD, a method that allows the information-theoretically secure expansion of shared keys (Sect. 3.2). Another application of conjugate coding in quantum cryptography is in showing that two basic cryptographic primitives, bit commitment and oblivious transfer, are (information-theoretically) equivalent in the quantum world (Sect. 3.3), an equivalence that is provably false in the classical world. Technologically speaking, perfect quantum communication and storage is a challenge; in building protocols in the bounded- and limited-storage quantum models (collectively known as limited-storage models), ingenious cryptographers have turned this challenge to their advantage (Sect. 3.4), once again with conjugate coding as the key ingredient in the construction. Motivated by the perspective that quantum computations will, in the future, be outsourced to remote locations (again, because of technological challenges involved in building quantum computers), cryptographers have studied protocols for the delegation of quantum computations, which we cover in Sect. 3.5. In Sect. 3.6, we review the possibility of quantum primitives that accomplish a security against malicious participants that is typically too weak for cryptographic applications (since it does not provide exponential security), yet is still of interest due to the advantage that quantum information provides: this is embodied in quantum protocols for weak coin flipping and imperfect bit commitment. Finally, we survey device-independent cryptography (Sect. 3.7), which can be seen as a culmination of many of the constructions already mentioned: thanks to this sophisticated technique, it is possible to achieve cryptographic tasks such as QKD and randomness expansion/amplification with untrusted devices, i.e., quantum devices that are assumed to have originated from an adversary. The very possibility of achieving this result stems from one of the most mysterious quantum phenomena, namely nonlocality (which is introduced in Sect. 2.5).
While quantum information provides a number of advantages for cryptography, it also has its unique limitations and challenges, which we survey in Sect. 4. The first limitations that we survey are in terms of impossibility results, namely the impossibility of information-theoretically secure quantum bit commitment (Sect. 4.1) and of information-theoretically secure two-party quantum computation (Sect. 4.2). Next, we cover two topics that are applicable to purely classical protocols, in which essentially the only concern is that the adversary is capable of quantum information processing: quantum rewinding (Sect. 4.3) and superposition attacks (Sect. 4.4). We emphasize that the cryptographic challenges encountered here are not related to the superior computational power of a quantum adversary, but rather stem from quantum phenomena such as the no-cloning theorem (which forces us to develop an alternative to the common rewinding method used in order to establish the zero-knowledge property of interactive protocols) and quantum superposition (which requires a new framework describing interactions with oracles, namely in the quantum random oracle model). Finally, in Sect. 4.5, we survey the research area of position-based quantum cryptography, where players use their geographical position as a cryptographic credential; while the current main result in this area is a no-go theorem for quantum protocols for the task of position verification, the possibility of position-based quantum cryptography against resource-bounded adversaries remains a tantalizing open question.
One of the lessons learned from all these impossibility results is that quantum security is definitely a tricky business; quantum cryptographers should take this as a warning: the desire to find a "quantum advantage" (i.e. an application where quantum information outperforms all classical solutions) is extremely strong, and cryptographers must be vigilant since the quantum world comes with an abundance of subtleties.
We note that the bibliographic entries in this survey are available as an open-source BibTeX file at https://github.com/cschaffner/quantum-bib.
Further topics
Already, the literature on quantum cryptography is vast, and in this survey we have chosen to focus on only a few topics. We briefly mention here some topics that are not included in this survey:

• Everlasting security. A protocol has everlasting security if it is secure against adversaries that are computationally unlimited after the protocol execution. This type of security is very difficult to obtain classically, even under strong setup assumptions such as a common reference string or signature cards [219]. Even though we do not treat the notion explicitly in this survey, many quantum-cryptographic protocols such as QKD (Sect. 3.2) and the limited-quantum-storage protocols (Sect. 3.4) come with the important benefit of everlasting security. In fact, everlasting security might be the most important reason to use QKD in the first place [211].
• Quantum functionalities. In this survey, we focus mainly on classical functionalities, but quantum functionalities are of course also of interest: assuming full quantum computers for all parties, one can study the secure realization of a quantum ideal functionality. This includes topics such as the encryption of quantum messages in the information-theoretic [9,41,134,153], entropic [95,96] and computational [52] settings, quantum secret sharing [77], multi-party quantum computation [22], authentication of quantum messages [15], two-party secure function evaluation [98,99], quantum anonymous transmission [49,74], quantum one-time programs [54] and quantum homomorphic encryption [52,239].
• Key recycling. Using quantum information, it is possible to detect eavesdropping such that key re-use is possible (if no eavesdropping is detected), while maintaining information-theoretic security [87].
• Quantum uncloneability. Because quantum information cannot, in general, be duplicated (see Sect. 2.4), we can achieve functionalities related to copy-protection that cannot be obtained in the classical world. These include uncloneable encryption [125], quantum copy-protection [1] and revocable time-release encryption [221].
• Isolation assumptions. The multi-prover interactive proof scenario [21] enables the information-theoretic implementation of primitives that are unachievable in the single-prover setting. However, the study of quantum information has shed new light on the "isolation" assumptions that are required in order to establish security [84]. Related to this is the study of protocols which are secure against provers sharing correlations that are very strong, yet do not allow signalling [110,139]. One particular way to enforce isolation is to spatially separate the players by a far enough distance, the relativistic no-signalling principle ensuring that no information can travel faster than the speed of light between the two sites. A relativistic (classical) bit-commitment scheme was first proposed by Kent [142] and was later improved and experimentally implemented [160]. See also related work [159].
• Leakage resilience using quantum techniques. In leakage-resilient computation, we are interested in protecting computations from attacks according to various leakage models. In one of these models (the "split-state" model), it was shown [93] that quantum information allows a solution to the orthogonal-vector problem, while no classical solution exists. Also, related work [151] shows that techniques from fault-tolerant quantum computation can be used to construct novel leakage-resilient classical protocols.
• Quantum cryptanalysis. Historically, the study of quantum algorithms is closely related to quantum cryptanalysis, the study of quantum algorithms for cryptanalysis. This is evidenced by some of the very early work on quantum algorithms, including Shor's algorithm for computing discrete logarithms and factoring integers in quantum polynomial time [206], and Grover's search algorithm [40,126] (which provides a square-root speedup in terms of query complexity for searching in an unstructured database). Recent work in the area of quantum cryptanalysis includes [33,71,127,128,150,196]. See also [13,184] for surveys on quantum algorithms, as well as the Quantum Algorithm Zoo.
• Merkle puzzles in a quantum world. The first unclassified proposal for secure communication over insecure channels was made by Merkle in 1974 (published years later [176]).
The main idea is that honest parties can establish a secure communication channel by expending work proportional to N, yet any successful attack requires a computational effort proportional to N². In a nutshell, Grover's search algorithm [126] implies that quantum computers break the security of Merkle's scheme. However, [50] show how to restore security in the quantum context: either by using a new classical protocol (in which case an adversary can break the scheme only by expending work proportional to N^(5/3), which is shown to be optimal), or by using a quantum protocol (in which case the quadratic security gap can essentially be restored).
• From classical to quantum security What can we say about the relationship between security in the classical setting versus the quantum setting? In this context, Unruh [216] shows that if a protocol is statistically secure in the universal composability (UC) framework [63,216], then the same protocol is quantum UC secure as well. Fehr, Katz, Song, Zhou and Zikas [113] classified the feasibility of cryptographic functionalities in the UC framework, 3 and showed that feasibility in the quantum world is equivalent (for a large family of functionalities) to classical feasibility, both in the computational and the statistical setting. See also [129,210].
• Post-quantum cryptography The area of post-quantum cryptography [32] 4 finds alternatives to the RSA and discrete-log assumptions in (classical) cryptography, in order to circumvent quantum attacks stemming from Shor's algorithm. This area is traditionally considered a topic of classical cryptography, but the issues arising from the problem of quantum rewinding (Sect. 4.3) and the superposition model for oracle access (Sect. 4.4) may be considered part of post-quantum cryptography as well.
Basics of quantum information
This section contains the rudiments of quantum information that are used in the main text; we assume of the reader only a basic knowledge of linear algebra. The reader should be warned that quantum theory is actually much richer, more subtle and more beautiful than this brief overview can convey! Textbook references on quantum information include [141,177,190,228,234].
State space
The bit is the fundamental unit of information for classical information processing. In quantum information processing, the corresponding unit is the qubit, which is described mathematically by a vector of length one in a two-dimensional complex vector space. We use notation from physics to denote vectors that represent quantum states, enclosing them in a "ket", e.g. |ψ⟩. We can write any state of one qubit as |ψ⟩ = α|0⟩ + β|1⟩, where the states |0⟩ and |1⟩ form a basis for the underlying two-dimensional vector space, and where α, β are complex numbers satisfying |α|² + |β|² = 1. If neither α nor β is zero, then we say that |ψ⟩ is in a superposition (linear combination) of both |0⟩ and |1⟩. The quantum state of two or more qubits is described by a tensor product. Hence, the four basis states for two qubits are |0⟩⊗|0⟩, |0⟩⊗|1⟩, |1⟩⊗|0⟩, |1⟩⊗|1⟩, usually abbreviated as |00⟩, |01⟩, |10⟩, |11⟩. Extending the concept of superposition to multiple qubits, we see that a system of n qubits can be in any superposition of the n-bit basis states |00...0⟩, |00...1⟩, ..., |11...1⟩. Hence, an n-qubit state is described by 2ⁿ complex coefficients. In the case of bipartite quantum states shared between Alice and Bob, subscripts are used to indicate which player holds which qubits. For instance, the 2-qubit state |0⟩_A ⊗ |0⟩_B = |00⟩_AB means that Alice and Bob each hold a qubit in state |0⟩.
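To make the vector-space picture concrete, the following minimal Python sketch (using numpy; all names are our own, purely illustrative) represents |0⟩, |1⟩ and a superposition as unit vectors, and builds multi-qubit states via the Kronecker product:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)          # |0>
ket1 = np.array([0, 1], dtype=complex)          # |1>

# A superposition alpha|0> + beta|1> must satisfy |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = alpha * ket0 + beta * ket1
assert np.isclose(np.linalg.norm(psi), 1.0)

# Multi-qubit states are tensor (Kronecker) products: |01> = |0> tensor |1>.
ket01 = np.kron(ket0, ket1)                     # a 4-dimensional vector
print(ket01)                                    # [0, 1, 0, 0]
```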
Unitary evolution and circuits
Basic evolutions of a quantum system are described by linear operations that preserve the norm; formally, these operations can be expressed as unitary complex matrices (a complex matrix U is unitary if UU† = I, where U† is the complex-conjugate transpose of U). Quantum algorithms are commonly described as circuits (rather than by quantum Turing machines) consisting of basic quantum gates from a universal set. Commonly used single-qubit gates are the negation (X), phase (Z) and Hadamard (H) gates, expressed by the following unitary matrices:

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.

An example of a two-qubit gate is the controlled-not operation (CNOT), which flips the second (target) qubit exactly when the first (control) qubit is 1:

CNOT = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.
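As an illustrative check (numpy again; variable names are ours), the sketch below verifies unitarity, applies H to the first qubit of |00⟩ and then CNOT, producing the maximally entangled state (|00⟩ + |11⟩)/√2 that reappears throughout this survey:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Unitarity check: U U† = I.
for U in (X, Z, H):
    assert np.allclose(U @ U.conj().T, np.eye(2))

# (H tensor I) followed by CNOT on |00> yields (|00> + |11>)/sqrt(2).
epr = CNOT @ np.kron(H, np.eye(2)) @ np.kron(ket0, ket0)
print(np.round(epr, 3))   # [0.707, 0, 0, 0.707]
```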
Measurement
In addition to unitary evolution, we specify an operation called measurement, which, in the simplest case, takes a qubit and outputs a classical bit. If we measure a qubit |ψ⟩ = α|0⟩ + β|1⟩, we get as outcome a single bit, which takes the value 0 with probability |α|² and the value 1 with probability |β|². We further specify that, after the process of measurement, the quantum system collapses to the measured outcome. Thus, the quantum state is disturbed and becomes classical: any further measurement has a deterministic outcome. We have described measurement with respect to the standard basis; of course, a measurement can be performed with respect to an arbitrary basis; the probabilities of the outcomes can be computed by first applying the corresponding change of basis, followed by a standard-basis measurement. Measurements can actually be described much more generally: e.g. we can describe the outcomes of measurements of a strict subset of a quantum system; the mathematical formalism for this uses density matrices, which we do not describe here. As a simple example of quantum measurement, consider the states |ψ₁⟩ = |0⟩ and |ψ₂⟩ = (|0⟩ + |1⟩)/√2. Measuring the state |ψ₁⟩ yields the outcome 0 with unit probability and the state remains |0⟩, while measuring |ψ₂⟩ yields the outcome 0 or 1, each with probability 1/2, and the post-measurement state is |0⟩ if we observed outcome 0 and |1⟩ if we observed 1.
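The following sketch (numpy; our own illustrative helper, not a library API) simulates a standard-basis measurement, including the collapse to the observed outcome:

```python
import numpy as np

rng = np.random.default_rng()

def measure(state):
    """Standard-basis measurement of one qubit.
    Returns (outcome, post-measurement state)."""
    p0 = abs(state[0]) ** 2               # Pr[outcome 0] = |alpha|^2
    if rng.random() < p0:
        return 0, np.array([1, 0], dtype=complex)
    return 1, np.array([0, 1], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
outcomes = [measure(plus)[0] for _ in range(10000)]
print(sum(outcomes) / len(outcomes))      # approx. 0.5

# After collapse the state is classical: repeated measurement is deterministic.
b, post = measure(plus)
assert all(measure(post)[0] == b for _ in range(5))
```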
Quantum no-cloning
One of the most fundamental properties of quantum information is that it is not physically possible, in general, to clone a quantum system [236] (i.e. there is no physical process that takes as input a single quantum system and outputs two identical copies of its input). A simple proof follows from the linearity of quantum operations: assume a quantum operation A that takes as input a qubit in state |ψ⟩ (together with a "helping" qubit in state |0⟩) and outputs |ψ⟩|ψ⟩. Hence, A|0⟩|0⟩ = |0⟩|0⟩ and A|1⟩|0⟩ = |1⟩|1⟩. By linearity of A, it must hold that A((|0⟩ + |1⟩)/√2)|0⟩ = (|0⟩|0⟩ + |1⟩|1⟩)/√2, which is not equal to the state ((|0⟩ + |1⟩)/√2) ⊗ ((|0⟩ + |1⟩)/√2) that we would expect as output from a perfect copying operation. Hence, such an A does not exist. At the intuitive level, this principle is present in almost all of quantum cryptography, since it prevents the classical reconstruction of the description of a given qubit system. For instance, given a single copy of a general qubit α|0⟩ + β|1⟩, it is not possible to "extract" a full classical description of α and β, because measuring disturbs the state. At the formal level, however, we generally require more sophisticated tools to prove the security of quantum cryptographic protocols (see Sect. 3.1).
Quantum entanglement and nonlocality
A crucial and rather counter-intuitive feature of quantum mechanics is quantum entanglement, a physical phenomenon that occurs when quantum particles behave in such a way that the quantum state of each particle cannot be described individually. A simple example of such an entangled state is two qubits in the state (|00⟩_AB + |11⟩_AB)/√2. When Alice measures her qubit (in system A), she obtains a random bit a ∈ {0, 1} as outcome and her qubit collapses to the state |a⟩_A she observed. At the same time, Bob's qubit (in system B) also collapses to |a⟩_B, and hence a subsequent measurement by Bob yields the same outcome b = a. It is important to realize that this collapse of the state on Bob's side occurs simultaneously with Alice's measurement, but it does not allow the players to send information from Alice to Bob.
It simply provides Alice and Bob with a shared random bit. In general, quantum entanglement does not contradict the fundamental non-signaling principle of the theory of relativity, which states that no information can travel faster than the speed of light. (This "spooky action at a distance" puzzled Einstein a lot, and was the inspiration for his very influential paper co-authored with Podolsky and Rosen [103]. Nowadays, we often call two qubits in the maximally entangled state (|00⟩_AB + |11⟩_AB)/√2 an EPR pair.) It turns out that by measuring entangled quantum states, Alice and Bob are able to produce correlations that are stronger than all correlations they could obtain when sharing only classical randomness. In this case, physicists say that the correlations violate a Bell inequality [20]. The most well-known example of such an inequality was proposed by Clauser, Horne, Shimony and Holt [76]. It can be described as a so-called non-local game between two players Alice and Bob. In this CHSH game, Alice and Bob can initially discuss in order to establish a joint strategy. Once the game starts, they are separated and cannot communicate. They receive as input uniformly random bits x and y and have to output bits a and b, respectively. They win the game if and only if a ⊕ b = x ∧ y (imagine a third party, called a referee, who chooses x and y, receives a and b, and checks whether the relation a ⊕ b = x ∧ y holds). A possible classical strategy for Alice and Bob is to ignore their inputs and always output a = b = 0. This strategy lets them win the game with probability 3/4, and it can be checked that there exists no better strategy for two classical players who are not allowed to communicate. In other words, we have the Bell inequality Pr[classical players win CHSH] ≤ 3/4. However, if Alice and Bob share a maximally entangled state (e.g. an EPR pair (|00⟩ + |11⟩)/√2), they can perform quantum measurements which allow them to win the CHSH game with probability cos²(π/8) ≈ 0.85, which is strictly larger than 3/4, hence violating the Bell inequality. Many experimental tests of this inequality have been performed and have consistently found violations, thereby showing that the world is more accurately described by quantum mechanics than by classical mechanics.
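A short numerical check of the quantum advantage in the CHSH game (numpy; the measurement angles are the textbook optimal ones, and the helper names are ours): measuring an EPR pair in suitably rotated bases wins with probability cos²(π/8) ≈ 0.854.

```python
import numpy as np

epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def basis(theta):
    """Orthonormal single-qubit measurement basis rotated by angle theta."""
    return [np.array([np.cos(theta),  np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)])]

# Standard optimal CHSH measurement angles for Alice and Bob.
alice = {0: 0.0,       1: np.pi / 4}
bob   = {0: np.pi / 8, 1: -np.pi / 8}

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a, va in enumerate(basis(alice[x])):
            for b, vb in enumerate(basis(bob[y])):
                p = abs(np.kron(va, vb).conj() @ epr) ** 2
                if a ^ b == (x & y):
                    win += p / 4          # inputs x, y are uniform
print(win, np.cos(np.pi / 8) ** 2)        # both approx. 0.8536 > 3/4
```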
Physical representations
The mathematical model of quantum mechanics is currently the most accurate description of the physical world. This theory is without doubt the most successful and well-tested physical theory of all time; it describes a wide range of physical systems, and thus offers a large number of possible physical systems that can serve as quantum devices. These possibilities include photonic quantum computing, superconducting qubits, nuclear magnetic resonance, ion-trap quantum computing and atomic quantum computing (see, e.g., [94]). For our purposes, at the abstract level, all of these systems are described by the same formalism; however, there may be experimental reasons to prefer one implementation over another (e.g. photons are well suited for long-distance quantum communication, while other systems, such as superconducting qubits, are better suited for quantum interactions).
Quantum cryptographic constructions
In this section, we survey a number of quantum cryptographic protocols (see Sect. 1.1 for a brief overview of these topics). Many of these protocols share the remarkable feature of being based on a very simple pattern of quantum information called conjugate coding. Because of its paramount importance in quantum cryptography, we first present this notion in Sect. 3.1. We then show how conjugate coding is the crucial ingredient in the quantum-cryptographic constructions for quantum money (Sect. 3.1), QKD (Sect. 3.2), a quantum reduction from oblivious transfer to bit commitment (Sect. 3.3), the limited-quantum-storage model (Sect. 3.4) and delegated quantum computation (Sect. 3.5). Further topics covered in this section are quantum coin-flipping (Sect. 3.6) and device-independent cryptography (Sect. 3.7).
Conjugate coding
Conjugate coding [233] is based on the principle that we can encode classical information into conjugate quantum bases. This primitive is extremely important in quantum cryptography; in fact, the vast majority of quantum cryptographic protocols exploit conjugate coding in one way or another. Conjugate coding is also called quantum coding [30] and quantum multiplexing [26]. The principle of conjugate coding is simple: for clarity of presentation and consistency with commonly used terminology, we associate a qubit with a photon (a particle of light), and use photon polarization as a quantum degree of freedom. Among others, photons can be polarized horizontally (|↔⟩), vertically (|↕⟩), diagonally to the left (|⤡⟩), or diagonally to the right (|⤢⟩). Photon polarization is a quantum property, and by associating |↔⟩ = |0⟩ and |↕⟩ = |1⟩ (the diagonal polarizations then correspond to the superpositions (|0⟩ ± |1⟩)/√2), we can apply quantum operations to these states, as in Sect. 2.
Each of the sets R = {|↔⟩, |↕⟩} and D = {|⤡⟩, |⤢⟩} forms a basis (called the rectilinear and the diagonal basis, respectively; we use the abbreviation R for the rectilinear basis and D for the diagonal basis), and can thus be used to encode a classical bit (see Table 1). R and D are conjugate bases.
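The following sketch (numpy; the assignment of basis states to bit values within D is one common convention, not necessarily Table 1's) encodes a classical bit in basis R or D: measuring in the matching basis recovers the bit with certainty, while measuring in the conjugate basis yields a uniformly random result.

```python
import numpy as np

rng = np.random.default_rng()
s2 = 1 / np.sqrt(2)
BASES = {
    "R": [np.array([1.0, 0.0]), np.array([0.0, 1.0])],   # |0>, |1>
    "D": [np.array([s2,  s2]),  np.array([s2, -s2])],    # (|0> +- |1>)/sqrt(2)
}

def encode(bit, basis):
    return BASES[basis][bit]

def measure(state, basis):
    p0 = abs(BASES[basis][0] @ state) ** 2
    return 0 if rng.random() < p0 else 1

# Matching basis: the encoded bit is recovered with certainty.
assert all(measure(encode(b, "R"), "R") == b for b in (0, 1))

# Conjugate basis: the outcome is a uniform coin; the information is destroyed.
hits = sum(measure(encode(0, "R"), "D") for _ in range(10000))
print(hits / 10000)    # approx. 0.5
```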
The relevance of conjugate coding to cryptography is summarized by two key features that were, remarkably, already mentioned and exploited in Wiesner's work [233]: 1. Measuring in one basis irrevocably destroys any information about the encoding in its conjugate basis. 2. The originator of the quantum encoding can verify its authenticity; however, without knowledge of the encoding basis, and given access to a single encoded state, no third party can create two quantum states that pass this verification procedure with high probability.
In order to explain the first property, recall the well-known Heisenberg uncertainty relation [135], which forbids learning both the position and the momentum of a quantum particle precisely and simultaneously. In terms of photon polarization, and for a single photon, let us denote by P_X the distribution of outcomes when measuring the photon in the rectilinear basis and by Q_X the distribution when measuring in the diagonal basis. Following Heisenberg, Maassen and Uffink [162] showed the uncertainty relation H(P_X) + H(Q_X) ≥ 1, where H is the Shannon entropy, an information-theoretic measure of uncertainty given by H(P_X) = −Σ_x p_x log₂ p_x. Intuitively, such a relation quantifies the fact that one can know the outcome exactly in one basis, but consequently has complete uncertainty about the outcome in the other basis. Looking ahead, we will see that such uncertainty relations play a key role in proving the security of quantum cryptographic protocols, e.g. in the limited-quantum-storage setting (Sect. 3.4).
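As a quick numerical illustration (numpy; the sample states are our own choices), the Maassen-Uffink bound H(P_X) + H(Q_X) ≥ 1 can be checked for single-qubit measurement in the rectilinear and diagonal bases:

```python
import numpy as np

s2 = 1 / np.sqrt(2)
R = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
D = [np.array([s2,  s2]),  np.array([s2, -s2])]

def H(probs):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    return -sum(p * np.log2(p) for p in probs if p > 1e-12)

def dist(state, basis):
    return [abs(v @ state) ** 2 for v in basis]

for psi in (np.array([1.0, 0.0]),                 # |0>: certain in R, uniform in D
            np.array([s2, s2]),                   # |+>: uniform in R, certain in D
            np.array([np.cos(0.3), np.sin(0.3)])):
    print(H(dist(psi, R)) + H(dist(psi, D)))      # always >= 1
```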
The second property above is explained by noting that a quantum encoding can be verified by measuring each qubit in its encoding basis and checking that the measurement result corresponds to the correct encoded bit. Intuitively, the no-cloning theorem (Sect. 2.4) prevents a third party from forging a state that would pass this verification procedure; however, formalizing this concept requires more work (see Sect. 3.1).
What is more, the technological requirements of conjugate coding are very basic: the single-qubit "prepare-and-measure" paradigm of conjugate coding is feasible with today's technology. Thus, many protocols derived from conjugate coding inherit this desirable property (which is, in fact, considered the gold standard for "feasible" quantum protocols).
In the late 1960s, Wiesner [233] had the visionary idea that quantum information could be used to create unforgeable bank notes. His ideas were in fact so far ahead of their time that it took years to publish them! (According to [30], Wiesner's original manuscript was written in 1968.) In a nutshell, Wiesner's proposal consists of quantum banknotes created by encoding quantum particles using conjugate coding (Sect. 3.1), with both the classical information and the basis choice being chosen as random bitstrings. Thus, a banknote consists of a sequence of single qubits, each chosen randomly from the states {|↕⟩, |↔⟩, |⤡⟩, |⤢⟩}. As discussed in Sect. 3.1, the originator of the quantum banknote (typically called "the bank") can verify that a quantum banknote is genuine, yet quantum mechanics prevents essentially any possibility of counterfeiting. Clearly, such a functionality is beyond what classical physics can offer: since any digital record can be copied, classical information simply cannot provide uncloneability (not even computational assumptions will help).
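A minimal simulation (numpy; helper names are ours) of the naive measure-and-resend counterfeiting attack: the counterfeiter guesses each encoding basis, measures, and prepares two copies of the observed state. Both forged copies pass the bank's verification with probability 5/8 per qubit, i.e. about (5/8)^n per note; the optimal attack, whose analysis via semi-definite programming is given in [181], does better but still decays exponentially (to our knowledge, as (3/4)^n).

```python
import numpy as np

rng = np.random.default_rng()
s2 = 1 / np.sqrt(2)
STATES = {("R", 0): np.array([1.0, 0.0]), ("R", 1): np.array([0.0, 1.0]),
          ("D", 0): np.array([s2,  s2]),  ("D", 1): np.array([s2, -s2])}

def measure(state, basis):
    p0 = abs(STATES[(basis, 0)] @ state) ** 2
    return 0 if rng.random() < p0 else 1

def forge_and_verify(n):
    """Measure-and-resend attack on an n-qubit note; True iff BOTH copies pass."""
    for _ in range(n):
        basis, bit = rng.choice(["R", "D"]), rng.integers(2)
        guess = rng.choice(["R", "D"])              # counterfeiter guesses the basis
        observed = measure(STATES[(basis, bit)], guess)
        copy = STATES[(guess, observed)]            # both forged qubits carry this state
        if not all(measure(copy, basis) == bit for _ in range(2)):
            return False
    return True

trials = 20000
rate = sum(forge_and_verify(4) for _ in range(trials)) / trials
print(rate, (5 / 8) ** 4)   # naive attack succeeds at roughly (5/8)^n
```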
Wiesner's work was improved and extended in many ways: early work of Bennett, Brassard, Breidbart and Wiesner [26] showed how to combine computational assumptions with conjugate coding in order to achieve a type of public verifiability for the encoded states (they coined their invention unforgeable subway tokens). Further work on publicly-verifiable (also called public-key quantum money) includes schemes based on the computational difficulty of some knot-theory related problems [107] (see also [3]), verification "oracles" [1] and hidden subspaces [2].
Returning to Wiesner's scheme (which is often called private-key quantum money in order to distinguish it from the public-key quantum money schemes), we note that the first proof of security in the case of multiple qubits is based on semi-definite programming, and appeared only recently [181] (this result is tight, since it also gives an explicit optimal attack). We also note work on variants of Wiesner's scheme in which quantum encodings are returned after validation: in all cases (whether the post-verification state is always returned [108], or the post-verification state is returned only for encodings that are deemed valid [55]), the resulting protocol has been found to be insecure.
We also note that further work has studied the possibility of private-key quantum money that can be verified using only classical interaction with the bank [116,181], quantum coins [185] (which provide a perfect level of anonymity), as well as noise-tolerant versions of Wiesner's scheme [191].
Quantum key distribution
QKD is by far the most successful application of quantum information to cryptography. By now, QKD is the main topic of a large number of surveys (see, for instance, [23,45,56,109,120]). Due to the abundance of very good references on this topic, we survey it only briefly here.
The "BB84" protocol [24,25] was the first to show how conjugate coding could be used for an information-theoretically secure key agreement protocol. In a nutshell, the protocol consists in Alice sending a sequence of single qubits, chosen randomly from the states {| , |↔ , | , | }. Bob chooses to measure them according to his own random choice of measurement bases. They communicate their basis choice for each encoded qubit; eavesdropper detection is performed by comparing the measurement results on a fraction of the bases on which their choices coincide-if successful, this procedure gives a bound on the secrecy and similarity of the remaining shared string, which can be used to distill an almostperfect shared secret between Alice and Bob. In order to prevent man-in-the-middle attacks, this procedure requires authenticated classical channels. Usually, authentication is achieved by an initial shared classical secret between Alice and Bob. Thus, QKD is more accurately described as a key-expansion primitive. We note that, as a theoretical or experimental tool, it is often useful to consider a protocol equivalent to BB84, where the random choice of encoding basis (rectilinear of diagonal) is delayed; thus a quantum source would produce a sequence of maximally entangled states (|00 + |11 )/ √ 2, with both Alice and Bob then measuring in their random choice of bases. That an entangled system could be used in lieu of single qubits was suggested by Ekert [104], but note that Ekert's idea was to base security on the observation of a Bell-inequality violation-which implies a set of different measurements than in the rectilinear/diagonal bases. The entanglement-based ("purified") BB84 protocol was introduced by Bennett, Brassard and Mermin in [28].
We briefly mention that the formal security of QKD was originally left open, and that a long sequence of works (e.g. [158,171]) culminated in a relatively accessible proof by Shor and Preskill, based on the use of quantum error correction [207]. Further work by Renner [198] showed a very different approach to proving the security of QKD, based on exploiting the symmetries of the protocol (applying a de Finetti-style representation theorem) and splitting the security analysis into the information-theoretic steps of error correction and privacy amplification. Other proofs of QKD are based more directly on the complementarity of the measurements [148]. It is a sign of the complexity of QKD security proofs that most articles on this topic focus only on subparts of the security analysis, and only very recently did a first comprehensive analysis of security appear [213].
The huge success of QKD is due in part to the fact that it is readily realizable in the laboratory (the first demonstration appeared in 1992 [27]). In light of practical implementations, security proofs for QKD need to be re-visited in order to obtain concrete security parameters-this is the realm of finite-key security [133,204,213,214]. Furthermore, we note that when it comes to real-world implementations, QKD is vulnerable to side-channel attacks, which are due to the fact that physical implementations deviate from the idealized models used for security proofs (this is often referred to as quantum hacking [161]).
We further note that the assumption of an initial short shared secret (for authenticating the classical channel) in the implementation of QKD can be replaced with a computational assumption or an assumption about the storage capabilities of the eavesdropper (see Sect. 3.4). The result is everlasting [219] or long-term [211] security: information-theoretic security is guaranteed except during the short period of time during which we assume a computational (or memory) assumption holds.
Bit commitment implies oblivious transfer
Oblivious transfer (OT) and bit commitment (BC) are two basic and important primitives for cryptography. In the classical case, it is easy to show that OT implies BC (in the information-theoretic setting), but the implication in the other direction does not hold. 7 In stark contrast, OT and BC are known to be equivalent in the quantum world. In the following sections, we introduce these primitives (Sect. 3.3.1) and describe a quantum reduction from oblivious transfer to bit commitment (Sect. 3.3.2).
Oblivious transfer (OT) and bit commitment (BC)
Wiesner's paper about quantum cryptography [233] introduced "a means for transmitting two messages either but not both of which may be received". This classical cryptographic primitive was later rediscovered (in a slightly different form) by Rabin [195], and was given in the form of 1-out-of-2 Oblivious Transfer (OT) by Even, Goldreich and Lempel [106]. In OT, Alice sends two messages m₀, m₁ to Bob, who receives only one of the messages m_c according to his choice bit c. Security for Alice (against dishonest Bob) guarantees that Bob receives only one of the two messages, whereas security for Bob (against dishonest Alice) ensures that Alice does not learn anything about Bob's choice bit. 8 In the version by Rabin [195], this primitive is essentially a secure erasure channel where Alice sends a single bit to Bob. This bit gets erased with probability 1/2 (in this case, Bob receives ⊥), but Alice does not learn whether the bit was erased. In fact, it is known that Rabin OT is equivalent to 1-out-of-2 OT [81].
The importance of OT is embodied by the fact that it is universal for secure two-party computation [145] (i.e. using several instances of 1-out-of-2 OT, any function can be securely evaluated among two parties such that no dishonest player can learn any information about the other player's input-beyond what can already be inferred from the output of the computed function). 9 Due to this universality, the innocent-looking OT primitive gives an excellent indicator for the cryptographic power of a model.
Bit Commitment (BC) is a cryptographic primitive that captures the following two-party functionality: Alice has a bit b that she wants to commit to Bob, but she wants to prevent Bob from reading b until she chooses to reveal it (concealing or hiding). Although Bob should not be able to determine b before Alice reveals it, Alice should be unable to change the bit after it is committed (binding). A physical-world implementation of bit commitment would be for Alice to write b on a piece of paper, lock it in a safe, and send the safe to Bob. Since Bob cannot open the safe, he cannot determine b (concealing), and since Alice has physically given the safe to Bob, she cannot change b after the commitment phase (binding). When Alice wishes to reveal the bit, she sends the key to Bob.
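To fix intuition with running code, here is a minimal classical stand-in for the commitment functionality: a hash-based commitment in the random-oracle style. This is entirely our illustration (computationally hiding and binding, unlike the information-theoretic notions discussed elsewhere in this survey):

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit; returns (commitment, opening)."""
    opening = secrets.token_bytes(32) + bytes([bit])
    return hashlib.sha256(opening).digest(), opening

def verify(commitment: bytes, opening: bytes) -> int:
    """Check an opening against a commitment; returns the revealed bit."""
    assert hashlib.sha256(opening).digest() == commitment
    return opening[-1]

c, op = commit(1)
# Bob learns nothing from c alone (hiding); Alice cannot produce a different
# valid opening without breaking the hash function (binding).
print(verify(c, op))   # 1
```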
Quantum protocol for oblivious transfer
In [29], Bennett, Brassard, Crépeau and Skubiszewska suggested a very natural quantum protocol for OT (assuming BC): suppose Alice would like to obliviously send m₀ and m₁ so that Bob receives the message m_c according to his choice bit c. She uses conjugate coding to send n quantum states, each chosen randomly from the states {|↕⟩, |↔⟩, |⤡⟩, |⤢⟩}, to Bob. Let us denote by x ∈ {0, 1}ⁿ the string of encoded bits and by θ ∈ {R, D}ⁿ the string of basis choices. Bob measures the received qubits in a random basis string θ′ ∈ {R, D}ⁿ of his choice, resulting in outcomes x′ ∈ {0, 1}ⁿ. After Alice tells Bob the bases θ she was using, Bob can partition the set of indices into two disjoint sets I₀ ∪ I₁ = {1, 2, . . . , n}: depending on his OT choice bit c, he puts all the indices where he measured in the correct basis (i.e. where θ′ᵢ = θᵢ) into I_c and the rest into I₁₋c. Bob then informs Alice about I₀, I₁ (in this fixed order, independently of c). Alice picks two independent hash functions f₀, f₁ (mapping n/2 bits to 1 bit) and sends s_i = f_i(x|_{I_i}) ⊕ m_i for i = 0, 1 to Bob. Here, x|_I denotes the substring of x with bit indices in I. Bob is able to recover m_c by computing f_c(x′|_{I_c}) ⊕ s_c.
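To make the data flow concrete, here is an honest-execution sketch of this protocol (numpy; the one-bit "hash" is a toy XOR-parity stand-in for f₀, f₁, and the commitment step that forces Bob's measurement, discussed next, is omitted):

```python
import numpy as np

rng = np.random.default_rng()
s2 = 1 / np.sqrt(2)
STATES = {("R", 0): np.array([1.0, 0.0]), ("R", 1): np.array([0.0, 1.0]),
          ("D", 0): np.array([s2,  s2]),  ("D", 1): np.array([s2, -s2])}

def measure(state, basis):
    p0 = abs(STATES[(basis, 0)] @ state) ** 2
    return 0 if rng.random() < p0 else 1

def ot(m0, m1, c, n=32):
    # Alice's conjugate-coded transmission of random bits x in random bases theta.
    x = rng.integers(2, size=n)
    theta = rng.choice(["R", "D"], size=n)
    # Bob measures every qubit in a random basis of his own.
    theta_b = rng.choice(["R", "D"], size=n)
    x_b = np.array([measure(STATES[(b, v)], bb)
                    for v, b, bb in zip(x, theta, theta_b)])
    # Alice announces theta; Bob partitions indices by where he measured correctly.
    good = [i for i in range(n) if theta[i] == theta_b[i]]
    bad  = [i for i in range(n) if theta[i] != theta_b[i]]
    I = {c: good, 1 - c: bad}
    # Alice masks each one-bit message with a "hash" of her bits on each set.
    f = lambda bits: int(np.bitwise_xor.reduce(bits))   # toy 1-bit hash
    s = {i: f(x[I[i]]) ^ m for i, m in ((0, m0), (1, m1))}
    # Bob can unmask only the message at his choice bit.
    return f(x_b[I[c]]) ^ s[c]

print(ot(m0=0, m1=1, c=1))   # recovers m1 = 1
```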
While it is easy to show that the above protocol is correct and secure against dishonest Alice (i.e. Alice does not learn anything about Bob's choice bit c), it is clearly insecure against a dishonest Bob who is able to store all quantum states until Alice announces the basis string θ. Such a Bob can then measure all positions in the correct basis and hence recover both m₀ and m₁. The idea of [29] was to force Bob to perform the measurement by requiring him to commit to the bases θ′ and outcomes x′. Alice then checks a fraction of these commitments before announcing the basis string θ.
A long line of research [29,82,168,172,238] has worked towards proving the security of this protocol. However, the crucial tools for an actual proof were eventually developed by Damgård, Fehr, Lunemann, Salvail, and Schaffner [91] nearly two decades after the original protocol was proposed; Unruh subsequently used these techniques to formally establish the equivalence of BC and OT in the quantum UC model [216].
Limited-quantum-storage models
As we will see in Sect. 4.1, bit commitment is impossible to construct in the quantum world. More generally, it has been shown (see Sect. 4.2) that secure two-party computation is impossible in the plain quantum model, without any additional restrictions on the adversaries. One option in order to obtain security is to make computational assumptions. However, as we discuss below, it is also possible to obtain information-theoretic security, while making instead some reasonable assumptions about the storage capabilities of the adversary.
One of the challenges in building quantum devices is the difficulty of storing quantum information in a physical system (such as an atomic or photonic system) under stable conditions over a long period of time; building a reliable quantum memory is a major research goal in experimental quantum physics (see e.g. [208] for a review produced by the European integrated project Qubit Applications (QAP)). Despite continuous progress over recent years, large-scale quantum memories that can reliably store quantum information are currently out of reach. As we discuss in this section, ingenious quantum cryptographers have turned this technological challenge into an advantage for quantum cryptography!
The bounded-quantum-storage model, introduced by Damgård, Fehr, Salvail and Schaffner in [86], assumes that an adversary can only store a limited number of qubits. Generally, protocols in this model require no quantum storage for the honest players, and are secure against adversaries that are unable to store a constant fraction of the qubits sent in the protocol. This model is inspired by the classical bounded-storage model, as introduced by Maurer [62,167]. In the classical model, honest parties are required to store Θ(n) bits, but the protocols (for OT and key agreement) are insecure against attackers with storage capabilities of Θ(n²) bits. Unfortunately, this gap between the storage requirements of honest and dishonest players can never be more than quadratic [101,102]. Combined with the fact that classical storage is constantly getting smaller and cheaper, this quadratic gap puts the classical-bounded-storage assumption on rather weak footing. In sharp contrast, the quantum bounded-storage model gives an unbounded gap between the quantum-storage requirements of the honest and dishonest players, making this model robust to technological improvements.
In the bounded-quantum storage model, a protocol for OT was proposed [86]. Again, it is based on conjugate coding and is essentially identical to the protocol outlined in the previous Sect. 3.3.2, except that there is a waiting time t (say, 1 s) right after the quantum phase, before Alice sends her basis string θ to Bob. In this time, a dishonest receiver Bob is forced to use his (imperfect) quantum memory and therefore loses some information about Alice's string x which intuitively leads to the security of the oblivious transfer. In a subsequent series of works [88][89][90], protocols for BC, OT and password-based identification (i.e. the secure evaluation of the equality function) were presented. For an overview of these results, see [109,205].
The noisy-quantum-storage model, introduced by Wehner, Schaffner and Terhal [231], captures the difficulty of storing quantum information more realistically: whereas in the bounded-quantum-storage model the physical number of qubits an attacker can store is limited, in the noisy-quantum-storage model dishonest players are allowed arbitrary (but imperfect) quantum storage.
Beyond limited quantum storage Continuing the idea of assuming realistic technological restrictions on the adversary, researchers have developed protocols that are secure under the assumption that certain classes of quantum operations are hard to perform. A natural class to study consists of adversaries who can store perfectly all qubits they receive, but who cannot perform any quantum operations, except for single-qubit measurements (adaptively in arbitrary bases) at the end of the protocol. Such a model was first studied by Salvail [201], and later by Bouman, Fehr, Gonzáles-Guillén and Schaffner [39] and Liu [138,154,155] under the name of "isolated-qubit model".
Cryptographic proof techniques In Sect. 3.1, we mentioned uncertainty relations (and in particular entropic uncertainty relations, which quantify uncertainty in information-theoretic terms). These relations play a key role in the security proofs for protocols in the limited-quantum-storage models. We refer to [229] for a survey by Wehner and Winter on this topic. In fact, one can argue that the areas of limited-quantum-storage models and entropic uncertainty relations have benefited a lot from each other, as research questions in one area have led to results in the other and vice versa. This fruitful co-existence is witnessed by a series of publications [34,35,39,100,187].
Composability It is natural to ask whether limited-quantum-storage protocols for basic tasks such as OT can be composed to yield more involved two-or multi-party secure computations. This question was answered in the positive in a number of works, including: Fehr and Schaffner [111], Wehner and Wullschleger [230] (for sequential composition) and Unruh [217] (for bounded concurrent composition).
Implementations
The technological requirements to implement limited-quantum-storage protocols in practice are modest and rather similar to already available QKD technology (often, the actual quantum phase is the same as in QKD). A small but significant difference is that it makes sense to run secure computations among players which are located within a few meters of each other, whereas the task of distributing keys demands large separations between players. This difference allows experimenters to optimize some parameters (such as the rate) differently for secure-computation protocols. The experimental feasibility of these protocols was analyzed theoretically in [232] and demonstrated practically in [105,188].
Delegated quantum computation
Quantum computers are known to enable extraordinary computational feats unachievable by today's devices [126,184,206]. However, technologies to build quantum computers are currently in their infancy; the current state of the art suggests that, when quantum computers become a reality, these devices are likely to be available at a few locations only. In this context, we envisage the outsourcing of quantum computations from computationally weak quantum clients to universal quantum computers (a type of quantum cloud architecture). This scenario has appealing cryptographic applications, such as the delegated execution of Shor's algorithm [206] for factoring, and thus breaking RSA public keys [200]. From the cryptographic point of view, this scenario raises many questions about the possibility of privacy in delegated quantum computation.
Pioneering work of Childs [70] and Arrighi and Salvail [12] studied this problem for the first time. The first practical and universal protocol for private delegated quantum computation, called "universal blind quantum computation" (uBQC) was given by Broadbent, Fitzsimons and Kashefi [53]. In uBQC, the client only needs to be able to prepare random single-qubit auxiliary states (the client requires no quantum memory or quantum processor). Via a classical interaction phase, the client remotely drives a quantum computation of her choice, such that the quantum server cannot learn any information about the computation that is performed-with only the client learning the output. The uBQC protocol has been demonstrated experimentally [17].
Remarkably, uBQC is also based on conjugate coding! It is the first application where the states derived from conjugate coding are used to directly achieve computational cryptographic tasks (in contrast with other applications of conjugate coding, which essentially measure these states directly in order to extract classical information). This relationship with conjugate coding is even more clearly apparent in a related protocol called "quantum computing on encrypted data" (QCED) [51,114]. Here, the computation (as given by a quantum circuit) is public, but it is executed remotely on an encrypted version of the data (reminiscent of the work on classical fully homomorphic encryption [117,199]). In this situation, QCED shows that it is possible to achieve delegated quantum computation where the client only needs to send random states in {|↔⟩, |↕⟩, |⤡⟩, |⤢⟩} (hiding of the computation itself can be achieved via a universal circuit construction).
We mention further that the verifiability of delegated quantum computations has been addressed in [5,54], and has been the object of an experiment [18], and that security has been analyzed in terms of a strong notion of composability [97]. Furthermore, work of Reichardt, Unger and Vazirani shows that delegated quantum computation is achievable for a purely classical client, if we are willing to make the assumption of two universal quantum computers that cannot communicate [197] (see also Sect. 3.7).
Quantum protocols for coin flipping and cheat-sensitive primitives
In a classic cryptography paper [36], Blum describes how to "flip a coin over the telephone" with the help of bit commitment: Alice commits to a random bit a, Bob tells Alice another random bit b, and Alice opens the commitment to a. The outcome of the coin is a ⊕ b which cannot be biased by any of the two players (intuitively, because at least one random bit of an honest player was involved in determining the outcome). A coin flip with this property is called a strong coin flip. In contrast, for a weak coin flip, Alice and Bob have a desired outcome, i.e. Alice "wins" if the outcome is 0, and Bob "wins" if the outcome is 1. A weak-coin-flipping protocol with bias ε guarantees that no dishonest player can bias the coin towards his or her desired outcome with probability greater than ε. In the classical world, coin-flipping can be achieved under computational assumptions. However, in the information-theoretic setting, it was shown [130,136] that one of the players can always achieve his desired outcome with probability 1.
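Given any commitment scheme, Blum's coin flip is only a few lines. The sketch below reuses the illustrative hash commitment from Sect. 3.3.1 above (again our own stand-in, not a protocol from the literature):

```python
import hashlib
import secrets

def commit(bit):
    opening = secrets.token_bytes(32) + bytes([bit])
    return hashlib.sha256(opening).digest(), opening

def verify(commitment, opening):
    assert hashlib.sha256(opening).digest() == commitment
    return opening[-1]

# Alice commits to a random bit a and sends only the commitment.
a = secrets.randbelow(2)
com, op = commit(a)
# Bob, seeing only com, replies with his own random bit b.
b = secrets.randbelow(2)
# Alice opens; the coin is a XOR b, unbiased as long as one party is honest.
coin = verify(com, op) ^ b
print(coin)
```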
In the quantum world, we note that the general impossibility results for quantum two-party computation (Sect. 4.2) do not apply to coin flipping, since the participants in a coin-flipping protocol have no inputs and instead aim to implement a randomized functionality. Nevertheless, Kitaev showed [146] (see also [10]) that any quantum protocol for strong coin flipping is insecure, since it can be biased by a dishonest player: formally, the bias of any strong coin-flipping protocol is bounded from below by 1/√2 − 1/2. Interestingly, Mochon [180] managed to expand Kitaev's formalism of point games to prove the existence of a weak coin-flipping protocol with arbitrarily small bias ε > 0. Unfortunately, Mochon's 80-page proof has never been peer-reviewed and is rather difficult to follow. Aharonov, Chailloux, Ganz, Kerenidis and Magnin [6] have managed to simplify this proof considerably.
Based on this result, Chailloux and Kerenidis [66] derived an optimal strong-coin-flipping protocol with the best possible bias 1/√2 − 1/2, matching Kitaev's lower bound. Also based on weak coin flipping, Chailloux and Kerenidis [67] gave the best possible imperfect quantum bit commitment. For the optimality, they prove that in any quantum bit-commitment protocol, one of the players can cheat with significant probability. (Formally, max{P*_A, P*_B} ≥ 0.739, where P*_A is the average of the probabilities that a dishonest committer Alice successfully reveals bit b = 0 and successfully reveals b = 1, and P*_B is the probability that a dishonest verifier Bob guesses the committed bit b after the commitment phase; P*_A = P*_B = 1/2 holds for a perfect bit-commitment protocol.) Such a result shows that an imperfect bit commitment cannot be amplified to a perfect one, which severely limits the applicability of the scheme to the cryptographic setting.
Cheat sensitivity Quantum mechanics offers the possibility to construct imperfect cryptographic primitives in the sense that they are correct (as long as the players are honest), but they are insecure, i.e. they do allow one of the players (say Alice) to cheat. However, the other player Bob has the possibility to check if Alice has been cheating (possibly by sacrificing the protocol outcome he would have obtained if he followed the protocol without checking). Hence, a cheating Alice has non-zero probability to be detected. These protocols are called cheat sensitive [4,57,118,119,132,137]. In this context, it is argued that one could set up a game-theoretic environment: a player caught cheating has to pay a huge fine (or undergo another punishment) and is therefore deterred from actually doing it.
We note, however, that the applications of cheat-sensitive protocols to the cryptographic setting are limited: while quantum protocols for imperfect and cheat-sensitive primitives can provide nice examples of separations between the classical and quantum worlds, they fulfill their purpose only as long as they are considered "final products", for instance in the case of private information retrieval. It is difficult to argue that a strong coin flip with constant bias, an imperfect bit commitment, or an imperfect OT is a cryptographically useful primitive, because these do not inherit the cryptographic importance of their perfect counterparts, which can be used as building blocks for more advanced cryptographic primitives. In the case of cheat sensitivity, it is often unclear how such primitives behave under composition. In fact, it is a challenging open question to come up with a composability framework for cheat-sensitive quantum primitives.
Device-independent cryptography
An exciting feature of quantum cryptography is that it allows the possibility of device-independent cryptography, in the sense that protocols can be run on untrusted devices which have possibly been constructed by the adversary. The crucial insight is that the "quantumness" of two (or more) devices can be tested and guaranteed by using the devices to violate a Bell inequality, i.e. to produce correlations that are stronger than those allowed by classical mechanics. As outlined in Sect. 2.5, the most well-known example of such an inequality is the CHSH game [76]. The key observation of device-independent cryptography is that, in order to violate the CHSH inequality, a certain amount of intrinsic quantum randomness has to be present in the players' outputs. That we could exploit this relationship for cryptography was originally pointed out by Ekert [104], and further studied by Mayers and Yao [173] and Barrett, Hardy and Kent [16]. In fact, this latter work shows not only how to accomplish cryptography with untrusted devices, but also how to do away completely with assumptions about the validity of quantum mechanics: instead, it shows how to accomplish QKD solely based on the non-signaling principle [131,166]!
The relation between the CHSH violation and the amount of entropy in the outcomes of the measurements can be quantified exactly [194]. In fact, on the topic of self-testing quantum devices [163,174,175,178], Reichardt, Unger and Vazirani have shown a strong robustness result [197] in the sense that being close to winning the CHSH-game with optimal probability implies that the players must essentially be in possession of a state which is close to an EPR pair. This is an extremely powerful result which has various applications.
The two qubits of an EPR state are maximally entangled. Quantum mechanics forbids any third party to be entangled with such a state (a phenomenon called monogamy of entanglement). Hence, measurements on an EPR state result in shared randomness which is guaranteed to be unknown to any eavesdropper. 11 In a similar vein, one can argue that the measurement outcomes of Alice and Bob while successfully playing the CHSH game cannot be known to any adversary even if this adversary has built the devices herself and is possibly still entangled with them.
This effect leads to the interesting cryptographic applications of device-independent randomness amplification and expansion, and of device-independent QKD. In randomness amplification, the task at hand is to obtain near-perfect randomness from a weak random source using untrusted quantum devices (without using any additional randomness); this idea was originally proposed by Colbeck and Renner [80]. In randomness expansion, one wants to expand a few truly random bits into more random bits, again using untrusted quantum devices; this idea was originally proposed by Colbeck and Kent [67,78]. Providing formal security proofs has turned out to be rather challenging; security was first established against classical adversaries [112,193,194], and later also against quantum adversaries [179,224]. A combination of the latest protocols makes it possible to arbitrarily amplify very weak sources of randomness in a device-independent fashion. Experimental realizations of device-independent randomness include [75,194].
In device-independent QKD, we make the additional assumption that there is no communication between the adversary and the quantum devices. The first formal proof for a device-independent QKD scheme was given by Vazirani and Vidick [225]. Current research in this area aims to propose more practical device-independent QKD schemes that retain their functionality at realistic levels of noise.
Quantum cryptographic limitations and challenges
In this section, we survey a number of limitations and challenges of quantum cryptography (see Sect. 1.1 for a brief overview of these topics). We cover the impossibility of information-theoretically secure quantum bit commitment (Sect. 4.1) as well as the impossibility of information-theoretically secure two-party quantum computation (Sect. 4.2). Next, we survey the challenges imposed by quantum information in the context of quantum rewinding (Sect. 4.3) and superposition access to oracles in a quantum world (Sect. 4.4). Finally, we discuss impossibility results for position-based quantum cryptography (Sect. 4.5).
Impossibility of quantum bit commitment
The 10-year period following the publication of the first QKD protocol [24] saw only a handful of cryptographers working in quantum cryptography. This era was a period of vivid optimism. Indeed, the concept that quantum mechanics could allow unconditionally secure key expansion is mind-boggling, so why stop there? The next natural step to examine was oblivious transfer, which is an important building block for cryptography [145] (see Sect. 3.3.1 for definitions of bit commitment and oblivious transfer).
From this early period of quantum cryptography, we know of a quantum reduction from bit commitment to oblivious transfer [29] (see Sect. 3.3). Hence, the holy grail of oblivious transfer is achievable, if only we have access to a bit commitment! Thus, researchers explored the possibility of quantum bit commitment (i.e. of using quantum information in order to build bit commitment), with the hope of founding all of cryptography on the unique assumption of quantum mechanics. This line of work started in [44], culminating in a claim of an unconditionally secure quantum bit commitment protocol [46]. However, the optimism for quantum cryptography lasted only a few years, as Mayers [169] found a subtle flaw in the original argument of security. This result was generalized to rule out all quantum protocols for bit commitment by Mayers, and by Lo and Chau [157,170]. Note that the possibility of bit commitment in the limited-quantum-storage model (Sect. 3.4) introduces an extra physical assumption, and does not contradict the impossibility discussed here! We now briefly review the main impossibility argument [47] (for ease of presentation, we focus on the exact case; see also a more recent proof exploiting "quantum combs" [72], as well as a more general result about the impossibility of "growing" quantum bit commitments [235]). First, consider the following sketch of the impossibility of perfectly secure classical bit commitment: suppose such a protocol exists. Then, by the information-theoretic security requirement, at the end of the commitment phase, Bob's view of the protocol must be independent of b (since otherwise the protocol would not be perfectly hiding). But this independence implies that Alice can choose to reveal either b = 0 or b = 1 in the reveal phase, with both being accepted by Bob. Hence, the bit commitment cannot be binding. It is interesting that the same proof structure is applicable to the quantum case, albeit by invoking slightly more technical tools. Namely, we first consider a purified version of the protocol, which consists of all parties acting at the quantum level (measurements are replaced by a unitary process via a standard technique). Next, by the information-theoretic hiding property, the reduced quantum state that Bob holds at the end of the commit phase must be identical whether b = 0 or b = 1. This condition is enough to break the binding property, since it means [215] that Alice can locally perform a unitary quantum operation on her system in order to re-create a joint state consistent with either b = 0 or b = 1, at her choosing. 13 Hence, she can choose to open either b = 0 or b = 1 at a later time, and Bob will accept: the commitment scheme cannot be binding.
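The key step can be stated compactly. The display below is our own paraphrase of the standard argument (the notation |φ_b⟩ for the joint pure state after a purified commitment to bit b is ours; the implication is an instance of Uhlmann's theorem, cf. [215]):

```latex
\rho_B^0 \;=\; \operatorname{Tr}_A |\varphi_0\rangle\langle\varphi_0|
\;=\; \operatorname{Tr}_A |\varphi_1\rangle\langle\varphi_1| \;=\; \rho_B^1
\quad\Longrightarrow\quad
\exists\, U_A \text{ (acting on Alice's system alone)}:\;
(U_A \otimes \mathbb{1}_B)\,|\varphi_0\rangle = |\varphi_1\rangle .
```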
Going back to the original paper on quantum bit commitment [46], we note that a subtlety in the definition of the binding property is the origin of the false claim of security: while it is true that the protocol is such that Alice is unable to simultaneously hold messages that would unveil a commitment to b = 0 and as b = 1 (and thus, to be able to choose to open b = 0 and b = 1), this is insufficient to prove security, since in fact Alice is able to delay her choice of commitment until the very end of the protocol-at which point she can choose to open as either b = 0 or b = 1 (but not necessarily both at the same time!).
Impossibility of secure two-party computation using quantum communication
Given the impossibility of quantum bit commitment, the next question to ask is: are there any classical primitives that may be implemented securely using quantum communication? In fact, the possibility of OT was stated as an open problem in [45]. Unfortunately, this hope was shattered rather quickly, as impossibility results were given by Lo in [156] for one-sided computations (where only one party receives output). This result already shows the impossibility of 1-out-of-2 OT; the proof technique closely follows the technique developed for the impossibility of quantum bit commitment (see Sect. 4.1).
It took almost 10 years until Colbeck showed the first impossibility result for two-sided computations, namely that Alice can always obtain more information about Bob's input than what is implied by the value of the function [79]. In a similar vein, Salvail, Schaffner and Sotakova proved in [202] that any quantum protocol for a non-trivial primitive leaks information to a dishonest player. What is worse, even with the help of a trusted party, the cryptographic power of any primitive cannot be "amplified" by a quantum-communication protocol.
Buhrman, Christandl and Schaffner [58] have strengthened the above impossibility results by showing that the leakage in any quantum protocol is essentially as bad as one can imagine: even in the case of approximate correctness and security, if a protocol is "secure" against Bob, then it is completely insecure against Alice (in the sense that she can compute the output of the computation for all of her possible inputs). For impossibility results in the universal composability (UC) framework, see [113].
Zero-knowledge against quantum adversaries: "quantum rewinding"
Zero-knowledge interactive proofs, as introduced by Goldwasser, Micali, and Rackoff [124] are interactive proofs with the property that the verifier learns nothing from her interaction with the honest prover, beyond the validity of the statement being proved. These proof systems play an important role in the foundations of cryptography, and are also fundamental building blocks to achieve cryptographic functionalities (see [121] for a survey).
In zero-knowledge interactive proofs, the notion that the verifier "learns nothing" is formalized via the simulation paradigm: if, for every cheating verifier (interacting in the protocol on a positive instance), there exists a simulator (who does not interact with the prover) such that the output of the verifier is indistinguishable from the output of the simulator, then we say that the zero-knowledge property holds. In the classical world, a common proof technique used for establishing the zero-knowledge property is rewinding: a simulator is typically built by executing the given verifier-except that some computation paths are culled if the random choices of the verifier are not consistent with the desired effect. This selection is done by keeping a trace of the interaction, thus, if the interaction is deemed to have followed an incorrect path, the simulation can simply reset the computation ("rewind") to an earlier part of the computation (see [121] and references therein).
In the quantum setting, such a rewinding approach is impossible: the no-cloning theorem tells us that it is not possible, in general, to keep a secondary copy of the transcript in order to return to it later on. This problem is further aggravated by the fact that, in the most general case, the verifier starts with some auxiliary quantum information (which we do not, in general, know how to re-create)-thus even a "patch" that would emulate the rewinding approach in the simple case would appear to fail in the case of auxiliary quantum information. We emphasize that the above concerns about the zero-knowledge property are applicable to purely classical protocols: honest parties are completely classical, but we wish to establish the zero-knowledge property against a verifier that may receive, store and process quantum information (these concerns are independent of the computational power of the verifier-they simply relate to the computational model!).
The fundamental difficulty in proving the zero-knowledge property in the quantum world was first discussed by van de Graaf [223]; while some progress was made on this question [85], it is the breakthrough result of Watrous [227] that restored confidence that the zero-knowledge property of many standard classical zero-knowledge proofs is maintained in a quantum world.
In a nutshell, Watrous introduced the technique of quantum rewinding, which establishes that, under some reasonable (and commonly satisfied) conditions, the success probabilities of certain processes with quantum inputs and outputs can be amplified. This technique therefore provides an alternative to the classical rewinding paradigm, and is used to show that the Goldreich-Micali-Wigderson graph 3-coloring protocol [122] is zero-knowledge against quantum attacks. We briefly mention that quantum rewinding is established using a technique resembling amplitude amplification [48], which is related to Grover's quantum search algorithm [126].
Further work on quantum rewinding has extended its domain of applicability to proofs of knowledge [218]. However, [11] shows limitations of this technique (so that, in fact, relative to an oracle, there exist classical protocols that are insecure against quantum adversaries). See also [85].
Superposition access to oracles: quantum security notions
Post-quantum cryptography [32] (see Sect. 1.2) investigates classical cryptographic schemes which remain secure in the presence of quantum adversaries. In classical cryptography, security is often defined in terms of an interactive game between an adversary and a challenger: a scheme is deemed secure if the adversary can win the game only with negligible probability. When such notions are used to prove post-quantum security, one must consider quantum adversaries which are potentially able to communicate quantumly with the challenger. An example is the chosen-plaintext-attack (CPA) learning phase present in game-based security definitions, for instance for defining indistinguishability (IND) security of encryption schemes [123,140], where it is natural to consider attackers that can query superpositions of plaintexts to be encrypted and are returned superpositions of the corresponding ciphertexts from the challenger.
Another important example relates to the random-oracle (RO) model. A common technique used in classical cryptography is to assume that hash functions are perfect random oracles which adversaries can evaluate. It is well-known in classical cryptography that the RO-methodology comes with a plethora of techniques that can be employed in order to give formal proofs. Unfortunately, most of these tricks do not work in a quantum context for the following reason: a quantum adversary can always evaluate a classical hash function on an arbitrary superposition of inputs. Therefore, in the quantum random oracle (QRO) model, it is necessary to give the adversary superposition access to the oracle. As a consequence, standard techniques from the classical RO model (such as planting the challenge in a random one of the RO queries of the adversary) fail in the more realistic QRO model setting (the adversary might make a single quantum query with all input values in superposition).
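The point about superposition queries can be made concrete in a few lines (numpy; the toy "hash" f and the tiny domain are our own choices): the standard oracle unitary U_f|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩, applied to a uniform superposition, evaluates f on every input at once, which is exactly what defeats classical query-tracking proof techniques.

```python
import numpy as np

# Toy "hash" f : {0,1}^3 -> {0,1}, given as a truth table.
f = [0, 1, 1, 0, 1, 0, 0, 1]
N = len(f)

# Oracle unitary U_f |x>|y> = |x>|y XOR f(x)>, built as a permutation matrix
# (basis ordering: index 2*x + y).
U = np.zeros((2 * N, 2 * N))
for x in range(N):
    for y in (0, 1):
        U[2 * x + (y ^ f[x]), 2 * x + y] = 1

# A single quantum query on the uniform superposition over all x, with y = 0.
state = np.zeros(2 * N)
state[0::2] = 1 / np.sqrt(N)     # equal amplitude on |x>|0> for every x
out = U @ state

# The output now carries f(x) for ALL x simultaneously.
for x in range(N):
    y = np.argmax(np.abs(out[2 * x: 2 * x + 2]))
    assert y == f[x]
```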
Boneh, Dagdelen, Fischlin, Lehmann, Schaffner, and Zhandry [38] first showed how to correctly define the random-oracle model in the quantum setting. They also showed a separation between the classical and quantum RO models. Zhandry [240] showed how to plant challenges in the QRO model at the beginning of the execution, and Unruh [222] showed how to reprogram the RO during runtime. Security definitions allowing superposition access have subsequently been studied by Boneh and Zhandry [37,240] in the context of encryption, digital signatures and the construction of pseudo-random functions. See also related work by Damgård, Funder, Nielsen and Salvail [92], who study superposition attacks on secret-sharing and multi-party protocols.
Position-based quantum cryptography
In cryptography, digital keys or biometric features are used to verify the identity of a person. The goal of position-based cryptography is to use the geographical position of an entity as a cryptographic credential. As a physical analogy, consider the scenario of a bank, where typically, the mere fact that a bank teller is behind the counter (her position) suffices as a credential in order to initiate the exchange of sensitive information.
A central building block of position-based cryptography is the task of position verification, a problem previously studied in the field of wireless security [42,61,64,65,203,209,226,241]. The goal is to prove to a set of verifiers that one is at a certain geographical location. Protocols typically exploit the relativistic no-signaling principle that messages cannot travel faster than the speed of light. By responding to a verifier in a timely manner, one can guarantee that one is within a certain distance of that verifier [42]. It was shown in [69] that classical position-verification protocols based only on this relativistic principle can be broken by multiple attackers who simulate being at the claimed position while physically residing elsewhere in space. Because of the no-cloning property of quantum information (see Sect. 2.4), it was believed that with the use of quantum messages one could devise protocols that were resistant to such collaborative attacks. Several schemes were proposed [68,144,152,164,165] that later turned out to be insecure. Finally, Buhrman, Chandran, Fehr, Gelles, Goyal, Ostrovsky and Schaffner showed that also in the quantum case, no unconditionally secure schemes are possible [60], as long as the colluding adversaries share a large enough amount of entanglement: attackers can break the protocol if the number of pre-shared EPR pairs is exponential in the size of the messages of the protocol [19]. This exponential overhead in resources (in terms of entanglement and quantum memory) leads to the main open problem in this research area, namely to find quantum protocols which remain secure under the assumption that adversarial resources are restricted to a polynomial amount, while at the same time, honest players can perform the schemes efficiently.
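The timing argument can be made concrete with a back-of-the-envelope calculation (our illustration, not part of any cited protocol): a reply that arrives after round-trip time Δt places the prover within distance c·Δt/2 of the verifier.

```python
# Toy timing check used in distance bounding / position verification:
# no-signaling implies messages travel at most at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_prover_distance(round_trip_seconds: float,
                        processing_seconds: float = 0.0) -> float:
    """Upper bound on the prover's distance, given the observed round trip."""
    return C * max(round_trip_seconds - processing_seconds, 0.0) / 2

print(max_prover_distance(1e-6))   # ~150 m for a 1 microsecond round trip
```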
Historically, position-based schemes were first studied by Kent, Munro and Spiller in 2002 under the name of "quantum tagging". A US patent was granted in 2006 [143], but the results appeared in the scientific literature only in 2010 [144]. Some simple position-verification schemes are studied in [59,144]. The only quantum ingredient in these protocols is a single qubit sent to the prover, who is required to route this qubit back to the correct verifier depending on the classical information he also receives from the verifiers. Note that the actions of the honest players are simple enough that they can be implemented using current quantum technology.
In order to analyze how much entanglement colluding adversaries need to break these simple schemes, a new model of (classical) communication complexity (called the garden-hose model) was introduced by Buhrman, Fehr, Schaffner and Speelman [59]. This model connects attacks on position-based quantum protocols to various interesting problems in classical complexity theory and communication complexity, as witnessed in related work [73,147]. In particular, the garden-hose complexity of a function gives an upper bound on the amount of entanglement required to break the security of the position-verification protocol based on that function. However, it is an open question whether more advanced techniques will also make it possible to prove lower bounds on the entanglement required to break these simple position-verification protocols.
In [220], Unruh introduces a helpful methodology for analyzing quantum circuits in space-time and gives a position-verification protocol in three dimensions which is secure in the quantum-random-oracle model. Furthermore, [60,220] give schemes for position-based authentication which allow the verifiers to be convinced that a message originated from a certain location. However, it remains an open problem to find efficient schemes which do not use random oracles.
Conclusion and open problems
Since its inception almost 50 years ago, quantum cryptography has developed into an active and exciting multidisciplinary area of research that combines state-of-the-art techniques from cryptography, quantum physics, complexity theory, information theory and beyond. While experimental implementations are still at the prototype level, our theoretical understanding of the power and limitations of quantum cryptography is continuously expanding.
Ongoing work on quantum cryptography consists in improving existing schemes as well as finding further applications and proof techniques. As final words, we mention here some open problems of interest.
• What types of cryptosystems can quantum algorithms break? The area of post-quantum cryptography bases classical cryptography on computational problems which are hard even for quantum computers. More research on quantum algorithms for quantum cryptanalysis is needed to fully understand how difficult these problems are. This understanding is also crucial when choosing the security parameters for post-quantum cryptographic schemes.
• Can we make device-independent protocols that are feasible in practice? It is a challenging open problem to develop device-independent protocols (for key distribution, but possibly also for other applications) which can tolerate a realistic amount of noise.
• Can quantum protocols verify the position of a player? As outlined in Sect. 4.5, one of the main open questions in the area of position-based cryptography is to find a protocol which can be executed efficiently (with current technology) by honest players, but requires an exponential amount of resources (such as entangled qubits) for attackers to break it.
• How can we construct quantum-secure pseudorandom permutations (qPRP)? Related to the topic of quantum security notions (Sect. 4.4), Zhandry [240] has shown how to construct quantum-secure pseudo-random functions (qPRF). Classically, it is well-known that using a PRF in a three-round Feistel network yields a pseudo-random permutation. However, this construction is provably insecure in the quantum setting [149].
It is an open question how to construct quantum-secure pseudo-random permutations.
• Which cryptographic functionalities can be achieved by quantum protocols? The impossibility results from Sect. 4.2 are all concerned with deterministic classical functionalities. In Sect. 3.6, we have seen that quantum protocols for (weak or biased strong) coin flipping exist. Hence what exactly is the set of randomized classical functionalities that can be implemented by quantum protocols? More generally, can these impossibility results be extended to quantum functionalities?
• Does quantum information allow for devices that hide the inner workings of a computer program? Due to its diverse and far-reaching applications, program obfuscation has long been considered a holy grail of cryptography. However, hopes of attaining highly secure obfuscation were diminished in 2001 by an impossibility proof [14] (note, however, that weaker security notions are attainable [115]). The situation is completely different in the quantum case, since the proof technique is not applicable (essentially due to the no-cloning theorem). As such, a positive result establishing that quantum information allows program obfuscation would unleash a number of powerful primitives, and would yield another qualitative advantage of quantum information over its classical counterpart.
• What are the limits of the delegated quantum computation scenario? In Sect. 3.5, we reviewed results on how a quantum computationally weak client can outsource a quantum computation. An open question that remains is to establish the ultimate limits in terms of the power of the client: can a fully classical client delegate a private quantum computation to a single quantum server, while ensuring privacy and/or verifiability? In the computational setting, this question is related to that of quantum fully homomorphic encryption: can we encrypt quantum data such that any quantum circuit can be applied to the encrypted data (without revealing the key, of course!)?
• Can we build quantum public-key money from standard assumptions? Current techniques for quantum public-key money rely on ad hoc assumptions (see Sect. 3.1). An open problem is to construct these primitives from standard cryptographic assumptions (such as the existence of quantum-secure one-way functions [7,183,192]).
| 17,609.2 | 2015-10-21T00:00:00.000 | [ "Computer Science", "Mathematics", "Physics" ] |
Creatine, guanidinoacetate and homoarginine in statin-induced myopathy
Our study evaluated the effect of creatine and homoarginine in AGAT- and GAMT-deficient mice after simvastatin exposure. Balestrino and Adriano suggest that guanidinoacetate might explain the difference between AGAT- and GAMT-deficient mice in simvastatin-induced myopathy. We agree with Balestrino and Adriano that our data show that (1) creatine possesses a protective potential to ameliorate statin-induced myopathy in humans and mice and (2) homoarginine did not reveal a beneficial effect in statin-induced myopathy. Third, we agree that guanidinoacetate can be phosphorylated and partially compensate for phosphocreatine. In our study, simvastatin-induced damage showed a trend to be less pronounced in GAMT-deficient mice compared with wildtype mice. Therefore, (phospho)guanidinoacetate cannot completely explain the milder phenotype of GAMT-deficient mice, but we agree that it might contribute to ameliorate statin-induced myopathy in GAMT-deficient mice compared with AGAT-deficient mice. Finally, we agree with Balestrino and Adriano that AGAT metabolites should further be evaluated as potential treatments in statin-induced myopathy.
Dear editor,
We thank Drs. Balestrino and Adriano for their insightful comments on our publication about the effects of AGAT- and GAMT-deficiency in simvastatin-induced myopathy (Balestrino and Adriano 2020). Among all our findings, Balestrino and Adriano point out the increased vulnerability to simvastatin-induced myopathy in AGAT-deficient (AGAT −/− ) mice compared with wildtype and GAMT-deficient (GAMT −/− ) mice (Sasani et al. 2020). AGAT −/− mice are devoid of creatine, homoarginine and guanidinoacetate, the only known products of the AGAT and GAMT pathway.
First, we have shown that creatine reduces simvastatin-induced muscle damage in creatine-deficient AGAT −/− mice. This finding in mice is in line with their and other previous work in patients (Balestrino and Adriano 2018; Shewmon and Craig 2010). In mice, we have shown that creatine-deficient AGAT −/− and GAMT −/− mice reveal a severe myopathy and reduced muscle strength (Schmidt et al. 2004). Therefore, creatine possesses a protective potential to ameliorate statin-induced myopathy in humans and mice.
Second, homoarginine was studied in statin-induced myopathy, given that we were the first to show that AGAT is mandatory for homoarginine synthesis in mice and humans. However, our current experiments revealed that homoarginine supplementation in homoarginine-deficient AGAT −/− mice did not affect simvastatin-induced myopathy, which is in contrast to homoarginine's
role in cerebrovascular and cardiac function (Faller et al. 2018). In mouse models of ischemic stroke and heart failure, homoarginine, but not creatine, was able to improve cerebrovascular damage and normalize cardiac dysfunction in AGAT-deficient mice. We agree with Balestrino and Adriano that although we hypothesized a protective effect of homoarginine in statin-induced myopathy, we did not find any. The third product of AGAT activity is guanidinoacetate. GAMT-deficient mice are devoid of creatine, as are AGAT-deficient mice, but have increased AGAT expression and, therefore, higher guanidinoacetate levels. Therefore, Balestrino and Adriano hypothesized that the milder phenotype of GAMT- versus AGAT-deficient mice might rather be explained by elevated guanidinoacetate levels and not AGAT expression itself. Early studies with GAMT-deficient mice and patients revealed that guanidinoacetate can also be phosphorylated in the absence of creatine to partially substitute for phosphocreatine (Kan et al. 2004; Schulze et al. 1997). However, enzyme kinetics and recovery rates after depletion of phospho-guanidinoacetate are considerably slower compared to phosphocreatine. Phospho-guanidinoacetate is not able to fully compensate for the lack of phosphocreatine, because GAMT-deficient mice reveal reduced muscle strength and histological signs of myopathy (Schmidt et al. 2004). Moreover, in addition to AGAT- and GAMT-deficient mice, we also used wild-type controls in our experiments with normal creatine, normal homoarginine and normal guanidinoacetate levels (Sasani et al. 2020). Interestingly, statin-induced muscle damage and dysfunction showed a trend to be less pronounced in GAMT-deficient mice compared with wild-type mice. Although phospho-guanidinoacetate cannot completely explain the milder phenotype of GAMT-deficient mice, (phospho)guanidinoacetate might contribute to ameliorate statin-induced myopathy in GAMT-deficient mice compared with AGAT-deficient mice, as suggested by Balestrino and Adriano (Balestrino and Adriano 2020). In addition to their protective roles as energy buffers, (phospho)guanidinoacetate might have similar pleiotropic effects as (phospho)creatine (Wallimann et al. 2011), such as being an osmolyte that protects muscle against exercise-induced hypertonic stress (Alfieri et al. 2006). Finally, we agree with Balestrino and Adriano that AGAT metabolites should further be evaluated as potential treatments in statin-induced myopathy.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
| 1,171.8 | 2020-06-27T00:00:00.000 | [ "Biology" ] |
A Heterogeneous Image Fusion Method Based on DCT and Anisotropic Diffusion for UAVs in Future 5G IoT Scenarios
Unmanned aerial vehicles, with their inherent fine attributes, such as flexibility, mobility, and autonomy, play an increasingly important role in the Internet of Things (IoT). Airborne infrared and visible image fusion, which constitutes an important data basis for the perception layer of IoT, has been widely used in various fields such as electric power inspection, military reconnaissance, emergency rescue, and traffic management. However, traditional infrared and visible image fusion methods suffer from weak detail resolution. In order to better preserve useful information from source images and produce a more informative image for human observation or unmanned aerial vehicle vision tasks, a novel fusion method based on discrete cosine transform (DCT) and anisotropic diffusion is proposed. First, the infrared and visible images are denoised by using DCT. Second, anisotropic diffusion is applied to the denoised infrared and visible images to obtain the detail and base layers. Third, the base layers are fused by using weighted averaging, and the detail layers are fused by using the Karhunen–Loeve transform, respectively. Finally, the fused image is reconstructed through the linear superposition of the base layer and detail layer. Compared with six other typical fusion methods, the proposed approach shows better fusion performance in both objective and subjective evaluations.
Introduction
The Internet of Things (IoT) has attracted extensive attention in academia and industry ever since it was first proposed. IoT aims at integrating various technologies, such as body domain network systems, device-to-device (D2D) communication, unmanned aerial vehicles (UAVs) and satellite networks, to provide a wide range of services in any location by using any network. This makes it highly useful for various civil and military applications. In recent years, the proliferation of intelligent devices used in IoT has given rise to massive amounts of data, which brings its own set of challenges to the smooth functioning of the wireless communication network. The emergence of fifth-generation (5G) wireless communication technology has provided an effective solution to this problem. Many scholars are committed to the study of key 5G characteristics such as quality-of-service (QoS) and connectivity [1,2]. With the development of UAV technology and 5G wireless systems, the application field of IoT has expanded further [3][4][5]. Due to their characteristics of dynamic deployment, convenient configuration, and high autonomy, UAVs play an extremely important role in IoT. As some wireless devices suffer from limited transmission ranges, UAVs can be used as wireless relays to improve the network connection and extend the coverage of the wireless network. Meanwhile, due to their adjustable flight altitude and mobility, UAVs can easily and efficiently collect data from the users of IoT on the ground. At present, there exist intelligent UAV management platforms that can operate several UAVs at the same time through various terminal devices, customize flight routes as needed, and obtain the required user data. Intelligent transportation systems (ITS) can use UAVs for traffic monitoring and law enforcement. UAVs can also be used as base stations in the air to improve wireless network capacity. In 5G IoT, UAV wireless communication systems will play an increasingly important role [6,7].
The perception layer, which is the basic layer of IoT, consists of different sensors. The UAVs collect the data of IoT users by means of airborne IoT devices, including infrared (IR) cameras and visible (VI) light cameras [8]. One of the key technologies affecting the reliability of the perception layer is the accurate acquisition of multisource signals and reliable fusion of data. In recent years, heterogeneous image fusion has become an important topic regarding the perception layer of IoT. IR and visible light sensors are the two most commonly used types of sensors. IR images taken by an IR sensor are usually less affected by adverse imaging conditions such as bright sunlight and smog [9]. However, IR images lack sufficient details of the scene and have a lower spatial resolution than VI images. In contrast, VI images contain more detailed scene information, but are easily affected by illumination variation. IR and VI image fusion can produce a composite image that is more interpretable to both human and machine perception. The goal of image fusion is to combine images obtained by different types of image sensors to generate an informative image. The fused image is more consistent with the human visual perception system than either source image individually, which facilitates subsequent processing and decision-making. Nowadays, the image fusion technique is widely used in such fields as military reconnaissance [10], traffic management [11], medical treatment [12], and remote sensing [13,14].
In recent decades, a variety of IR and VI image fusion approaches have been investigated. In general, based on the different levels of image representation, fusion methods can be classified into three categories: pixel-level, feature-level, and decision-level [15]. Pixel-level fusion, conducted on raw source images, usually generates more accurate, richer, and more reliable details than the other fusion methods. Feature-level image fusion first extracts various features (including colour, shape, and edge) from the multisource information of different sensors. Subsequently, the feature information obtained from multiple sensors is analysed and processed synthetically. Although this method can reduce the amount of data and retain most of the information, some details of the image are still lost. Decision-level fusion is used to fuse the recognition results of multiple sensors to make a global optimal decision on the basis of each sensor independently, thus completing the decision or classification. Decision-level fusion has the advantages of good real-time performance, self-adaptability, and strong anti-interference; however, the fault-tolerance ability of the decision function directly affects the fusion classification performance. In this study, we focus on the pixel-level fusion method.
The remainder of this paper is organised as follows. In Section 2, we introduce related works and the motivation behind the present work. The proposed fusion method is described in Section 3. Experimental results on public datasets are covered in Section 4. The conclusions of this study are presented in Section 5.
Related Works
In general, the pixel-level fusion approach can be divided into two categories: space-based fusion methods and transform domain techniques. Space-based fusion methods usually address the fusion issue via pixel grayscale or pixel gradient. Although these methods are simple and effective, they easily lose spatial details [16]. Liu et al. [17] observed that this type of method is more suitable for fusion tasks involving the same type of images. The transform domain technique usually takes the transform coefficients as the features for image fusion; the fused image is obtained by the fusion and reconstruction of the transform coefficients. Image fusion based on multiscale transformation has been widely investigated because of its compatibility with human visual perception. In recent years, a variety of fusion methods based on multiscale transforms have been proposed, such as the ratio of low-pass pyramid (RP) [18], gradient pyramid (GP) [19], nonsubsampled contourlet transform (NSCT) [20], discrete wavelet transform (DWT) [21], and dual-tree complex wavelet transform (DTCWT) [22]. However, image fusion methods based on multiscale transforms are usually complex and suffer from long processing times and energy consumption issues, which limit their application.
Due to the aforementioned reasons, many researchers implemented image fusion methods by using the discrete cosine transform (DCT). In [23], the authors pointed out that image fusion methods based on DCT were efficient due to their fast speed and low complexity. Cao et al. [24] proposed a multifocus image fusion algorithm based on spatial frequency in the DCT domain. The experimental results showed that it could improve the quality of the output image visually. Amin-Naji and Aghagolzadeh [25] employed the correlation coefficient in the DCT domain for multifocus image fusion, proving that the proposed method could improve image quality and stability in noisy images. In order to provide better visual effects, Jin et al. [26] proposed a heterogeneous image fusion method by combining DSWT, DCT, and LSF, and proved that it was superior to conventional multiscale methods. Although image fusion methods based on DCT have achieved superior performance, the fused results can show undesirable side effects such as blocking artifacts [27]. When performing the DCT, the image is usually divided into small blocks beforehand, which causes discontinuities between adjacent blocks in the image. In order to address this problem, several filtering techniques have been proposed, such as the weighted least squares filter [28], the bilateral filter [29], and the anisotropic diffusion filter [30]. Xie and Wang [31] pointed out that anisotropic diffusion processing of images can retain image edge contour information. Compared with other filtering-based fusion methods, image fusion based on anisotropic diffusion retains more edge profiles, suppresses noise better, and obtains better visual evaluation. However, most of the proposed anisotropic diffusion models are based on the diffusion equation itself, ignoring the image's own feature information, which may lead to loss or blurring of image details (textures, weak edges, etc.). Inspired by the above research, a heterogeneous image fusion method based on DCT and anisotropic diffusion is proposed. The advantages of the proposed method mainly lie in the following three aspects: (1) due to the use of the DCT, the proposed fusion algorithm shows good denoising ability; (2) the final fused images show satisfactory detail resolution; and (3) the proposed algorithm is easy to implement and has very good real-time performance, making it suitable for real-time requirements.
Proposed Fusion Method
In this section, the operation mechanism of the proposed algorithm is described in detail. The proposed image fusion framework can be divided into three components, as shown in Figure 1. In the first step, in order to eliminate the noise in the original images, the DCT and inverse DCT are performed on the IR and VI images, respectively. In the second step, anisotropic diffusion is adopted to decompose the IR and VI images into detail and base layers. In the third step, the base layers are fused by weighted averaging, and the detail layers are fused by the Karhunen-Loeve transform. Finally, the fused base and detail layers are linearly superimposed to obtain the final fusion result.
3.1. DCT. As an effective transform tool, the DCT can transform image information from the time domain to the frequency domain so as to effectively reduce the spatial redundancy of the image. In this study, the 2D DCT of an N × N image block f(i, j) is defined as follows [32]:

$$F(u,v) = c(u)\,c(v)\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} f(i,j)\cos\left[\frac{(2i+1)u\pi}{2N}\right]\cos\left[\frac{(2j+1)v\pi}{2N}\right], \quad (1)$$

where f(i, j) denotes the (i, j)-th image pixel value in the spatial domain and F(u, v) denotes the (u, v)-th DCT coefficient in the frequency domain; c(k) is a multiplication factor, defined as follows:

$$c(k) = \begin{cases}\sqrt{1/N}, & k = 0,\\ \sqrt{2/N}, & k \neq 0.\end{cases} \quad (2)$$

Similarly, the 2D inverse discrete cosine transform (IDCT) is defined as:

$$f(i,j) = \sum_{u=0}^{N-1}\sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v)\cos\left[\frac{(2i+1)u\pi}{2N}\right]\cos\left[\frac{(2j+1)v\pi}{2N}\right]. \quad (3)$$

When performing the DCT, most of the image information is concentrated in the DC coefficient and the nearby low-frequency spectrum. Therefore, the coefficients close to 0 are deleted, and the coefficients containing the main information of the image are retained for the inverse transform. In this way, the influence of noise can be effectively removed without losing the main content of the image.
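A brief sketch of this denoising step might look as follows; for brevity it applies a whole-image DCT via SciPy rather than the paper's block-wise N × N transform, and the threshold value is an illustrative assumption:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Sketch of DCT-based denoising: transform the image, zero out coefficients
# close to 0 (mostly noise), and invert. Threshold is illustrative only.
def dct_denoise(image: np.ndarray, threshold: float = 10.0) -> np.ndarray:
    coeffs = dctn(image.astype(float), norm='ortho')   # 2D DCT, cf. Eq. (1)
    coeffs[np.abs(coeffs) < threshold] = 0.0           # drop near-zero coefficients
    return idctn(coeffs, norm='ortho')                 # 2D IDCT, cf. Eq. (3)
```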
3.2. Anisotropic Diffusion.
In computer vision, anisotropic diffusion is widely used to reduce noise while preserving image details. The anisotropic diffusion of an image I can be modelled as follows [33]:

$$\frac{\partial I}{\partial t} = \operatorname{div}\big(c(x,y,t)\,\nabla I\big) = c(x,y,t)\,\Delta I + \nabla c \cdot \nabla I, \quad (4)$$

where c(x, y, t) is the rate of diffusion, ∇ and Δ denote the gradient and Laplacian operators, respectively, and t is time. Equation (4) can be discretized as:

$$I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda\big[c_N\,\nabla_N I + c_S\,\nabla_S I + c_E\,\nabla_E I + c_W\,\nabla_W I\big]_{i,j}^{t}. \quad (5)$$

In (5), $I_{i,j}^{t+1}$ is the image at coarser resolution at scale t + 1, and λ denotes a stability constant satisfying 0 ≤ λ ≤ 1/4. The local image gradients along the north, south, east, and west directions are:

$$\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j},\quad \nabla_S I_{i,j} = I_{i+1,j} - I_{i,j},\quad \nabla_E I_{i,j} = I_{i,j+1} - I_{i,j},\quad \nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}. \quad (6)$$

Similarly, the conduction coefficients along the four directions are:

$$c_N = g(|\nabla_N I|),\quad c_S = g(|\nabla_S I|),\quad c_E = g(|\nabla_E I|),\quad c_W = g(|\nabla_W I|). \quad (7)$$

In (7), g(·) is a decreasing function. In order to balance smoothing and edge preservation, in this paper we choose g(·) as:

$$g(\nabla I) = \exp\left[-\left(\frac{\|\nabla I\|}{T}\right)^{2}\right], \quad (8)$$

where T is an edge magnitude parameter. Let $I_{IR}(x, y)$ and $I_{VI}(x, y)$ be the coregistered IR and VI images, respectively, and denote the anisotropic diffusion of an image I by AD(I). The base layers $I^{B}_{IR}(x, y)$ and $I^{B}_{VI}(x, y)$ are obtained by applying anisotropic diffusion to $I_{IR}(x, y)$ and $I_{VI}(x, y)$:

$$I^{B}_{IR}(x,y) = AD\big(I_{IR}(x,y)\big),\qquad I^{B}_{VI}(x,y) = AD\big(I_{VI}(x,y)\big). \quad (9)$$

Then, the detail layers $I^{D}_{IR}(x, y)$ and $I^{D}_{VI}(x, y)$ are obtained by subtracting the base layers from the respective source images, as shown in (10):

$$I^{D}_{IR}(x,y) = I_{IR}(x,y) - I^{B}_{IR}(x,y),\qquad I^{D}_{VI}(x,y) = I_{VI}(x,y) - I^{B}_{VI}(x,y). \quad (10)$$
The results obtained by the anisotropic diffusion of IR image and VI image are shown in Figure 3.
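A minimal implementation of the diffusion scheme of Eqs. (5)-(8) could look like the following sketch; the iteration count, λ, T, and the periodic boundary handling via np.roll are our own illustrative choices:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, lam=0.2, T=20.0):
    """Perona-Malik diffusion, cf. Eqs. (5)-(8); returns the base layer AD(I).
    lam must satisfy 0 <= lam <= 1/4 for stability."""
    I = img.astype(float).copy()
    g = lambda d: np.exp(-(np.abs(d) / T) ** 2)   # conduction function, Eq. (8)
    for _ in range(n_iter):
        # local gradients along north/south/east/west, Eq. (6)
        # (np.roll wraps at the borders, a simplification for brevity)
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # update step, Eq. (5), with coefficients from Eq. (7)
        I += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return I

# base and detail layers, Eqs. (9)-(10):
# base = anisotropic_diffusion(img); detail = img - base
```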
3.3.1. Base Layer Fusion.
The weighted average is adopted to fuse the base layers of the IR and VI images. The fused base layer $I^{B}_{F}$ is calculated by:

$$I^{B}_{F}(x,y) = \omega_1 I^{B}_{IR}(x,y) + \omega_2 I^{B}_{VI}(x,y), \quad (11)$$

where ω₁ and ω₂ are normalized weighting coefficients, both set to 0.5 in this paper.
3.3.2. Detail Layer Fusion.
The KL-transform can make the new sample set approximate the original sample set distribution with minimum mean square error, and it eliminates the correlation between the original features. We use the KL-transform to fuse the detail layers of the IR and VI images. Let $I^{D}_{IR}(x, y)$ and $I^{D}_{VI}(x, y)$ be the detail layers of the IR and VI images, respectively. The fusion process based on the KL-transform is described as follows.
Step 1. Arrange $I^{D}_{IR}(x, y)$ and $I^{D}_{VI}(x, y)$ as column vectors of a matrix, denoted as X.
Step 2. Calculate the autocorrelation matrix C of X.

Step 3. Calculate the eigenvalues λ₁, λ₂ and the corresponding eigenvectors of C.
Step 4. Calculate the uncorrelated coefficients K₁ and K₂ corresponding to the largest eigenvalue, defined as λ_max = max(λ₁, λ₂). Denoting the eigenvector corresponding to λ_max as μ_max, K₁ and K₂ are given by:

$$K_1 = \frac{\mu_{\max}(1)}{\sum_i \mu_{\max}(i)},\qquad K_2 = \frac{\mu_{\max}(2)}{\sum_i \mu_{\max}(i)}. \quad (12),(13)$$
Step 5. The fused detail layer $I^{D}_{F}$ is calculated using:

$$I^{D}_{F}(x,y) = K_1 I^{D}_{IR}(x,y) + K_2 I^{D}_{VI}(x,y).$$

3.3.3. Reconstruction. In this study, the final fused image is obtained by the linear superposition of $I^{B}_{F}$ and $I^{D}_{F}$, as shown in equation (14):

$$F(x,y) = I^{B}_{F}(x,y) + I^{D}_{F}(x,y). \quad (14)$$
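Putting Sections 3.3.1-3.3.3 together, a compact sketch of the fusion stage might read as follows; note that np.cov is used here as a stand-in for the autocorrelation matrix of Step 2, and the weight defaults mirror the stated ω₁ = ω₂ = 0.5:

```python
import numpy as np

def kl_fuse(detail_ir, detail_vi, base_ir, base_vi, w1=0.5, w2=0.5):
    """Fusion sketch following Sections 3.3.1-3.3.3."""
    # base layers: weighted average, Eq. (11)
    base_f = w1 * base_ir + w2 * base_vi
    # detail layers: KL-transform weights from a 2x2 second-moment matrix
    X = np.stack([detail_ir.ravel(), detail_vi.ravel()])   # Step 1
    C = np.cov(X)                                          # Step 2 (covariance as stand-in)
    vals, vecs = np.linalg.eigh(C)                         # Step 3
    mu = vecs[:, np.argmax(vals)]                          # eigenvector of lambda_max
    K1, K2 = mu / mu.sum()                                 # Step 4, Eqs. (12)-(13)
    detail_f = K1 * detail_ir + K2 * detail_vi             # Step 5
    return base_f + detail_f                               # reconstruction, Eq. (14)
```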
Experiment
In this section, coregistered IR and VI images from the TNO image fusion dataset are used to evaluate our algorithm. This database provides a large number of coregistered infrared and visible images. The proposed method is compared with six typical fusion methods: MSVD [34], discrete harmonic wavelet transform (DCHWT) [35], discrete wavelet transform (DWT) [21], two-scale fusion (TS) [36], dual-tree complex wavelet transform (DTCWT) [22], and curvelet transform (CVT) [37]. In order to verify the advantages of the proposed method, the experimental verification is divided into two parts. Subjective evaluation results of the fused images are shown in the first part. In the second part, we compare the objective evaluation results of the proposed algorithm with the six comparison algorithms. The six pairs of coregistered source images used in this experiment are depicted in Figure 4.
Subjective Evaluation.
The fusion results obtained by the proposed method and the six compared methods are shown in Figure 5. The fusion experimental results from pair 1 to pair 6 are shown from top to bottom, respectively. For better comparison, the details in the fused images are highlighted with red boxes. As can be seen from Figure 5, our method preserves more detail information and contains less artificial noise in the red window. The image details in the fused images obtained by MSVD, DWT, and TS are blurred, as is clearly seen in the first three pairs of the experiment. Compared with these three fusion methods, the fusion results based on DCHWT preserve more detail information, while showing obvious VI artifacts (clearly visible in the last pair of the experiment). The fused images obtained by DTCWT, CVT, and the proposed method all preserve more detail information. Compared with the DTCWT and CVT fusion methods, the fusion results of the proposed algorithm look more natural. In the next section, several objective quality metrics are evaluated to demonstrate the advantages of the proposed method.
Objective Evaluation.
In order to verify the advantages of the proposed method, cross entropy (CE) [38], mutual information (MI) [39], average gradient (AG) [39], relative standard deviation (RSD) [39], mean gradient (MG) [39], and running time are used as objective evaluation metrics. CE represents the cross entropy between the fused image and the source image; the smaller the cross entropy, the smaller the difference between the images. MI measures the mutual information between the fused image and the source image; the larger the value of MI, the higher the similarity between the two images. The average gradient measures the sharpness of the image and reflects its ability to express detail contrast. RSD represents the relative standard deviation between the source image and the fused image, which reflects the degree of deviation from the true value; the smaller the relative standard deviation, the higher the fusion accuracy. The mean gradient represents the clarity of the fused image, i.e., how sharply each detail and its boundary are rendered. The objective evaluation results for the 6 pairs of the experiment in the "Subjective Evaluation" section are shown in Tables 1-3, with the best results highlighted in bold.
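As an illustration of how such metrics are computed, the sketch below implements one common definition of the average gradient; the paper's exact formulas are not reproduced here, so this should be read as an assumption rather than the authors' implementation:

```python
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """One common definition of the average gradient (AG) sharpness metric."""
    img = img.astype(float)
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal first differences
    dy = np.diff(img, axis=0)[:, :-1]   # vertical first differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))
```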
The comparison results of the objective evaluation indexes of the first two pairs of experiments are given in Table 1. As seen from Table 1, the proposed method outperforms other fusion methods in terms of all metrics except for CE. Although the CE value of the proposed method is not the minimum value, it is very close to the minimum.
The comparison results of objective evaluation indexes of the third and fourth pairs of the experiment are shown in Table 2. As can be seen from Table 2, the fusion result of the proposed algorithm in this paper contains more information. The proposed method outperforms other methods as regards AG, RSD, MG, and running time. The CE values of the proposed method in the third pair of the experiment are very close to the best values produced by MSVD. In the fourth pair of the experiment, all fusion quality indexes of image fusion generated by the proposed method are the best.
The comparison results of the objective evaluation indexes of the last two pairs of the experiment are given in Table 3. As can be seen from Table 3, the proposed method again achieves the best values on most of the evaluation indexes. Taken together, Tables 1-3 show that the proposed method attains the best overall objective performance among the compared methods.
Conclusions
Data fusion, which is a key technology in IoT, can effectively reduce the amount of data in the communication network and thus reduce the energy loss. In this article, we focused on the fusion of IR and VI images and proposed a novel heterogeneous fusion approach. Considering that DCT shows good denoising ability and anisotropic diffusion shows satisfactory detail resolution, we combined the two algorithms through effective fusion strategies. Experimental results show that the proposed method achieves better performance than the other six state-of-the-art fusion approaches with regard to both subjective and objective indexes. However, the IR and VI images in this experiment are all coregistered; actual multisource data may be unregistered. In the future, we will focus on unregistered image fusion.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
| 4,537.4 | 2020-06-27T00:00:00.000 | [ "Engineering", "Computer Science", "Environmental Science" ] |
RNA sequencing of corneas from two keratoconus patient groups identifies potential biomarkers and decreased NRF2-antioxidant responses
Keratoconus is a highly prevalent (1 in 2000), genetically complex and multifactorial, degenerative disease of the cornea whose pathogenesis and underlying transcriptomic changes are poorly understood. To identify disease-specific changes and gene expression networks, we performed next generation RNA sequencing from individual corneas of two distinct patient populations: one from the Middle East, as keratoconus is particularly severe in this group, and the second from an African American population in the United States. We conducted a case-control RNA sequencing study of 7 African American subjects, 12 Middle Eastern subjects, and 7 controls. A principal component analysis of all expressed genes was used to ascertain differences between samples. Differentially expressed genes were identified using Cuffdiff and DESeq2 analyses, and over-represented signaling pathways were identified by Ingenuity Pathway Analysis. Although separated by geography and ancestry, key commonalities in the two patient transcriptomes speak of disease-intrinsic gene expression networks. We identified an overwhelming decrease in the expression of anti-oxidant genes regulated by NRF2 and those of the acute phase and tissue injury response pathways in both patient groups. Concordantly, NRF2 immunofluorescence staining was decreased in patient corneas, while KEAP1, which helps to degrade NRF2, was increased. Diminished NRF2 signaling raises the possibility of NRF2 activators as future treatment strategies in keratoconus. The African American patient group showed increases in extracellular matrix transcripts that may be due to underlying profibrogenic changes in this group. Transcripts increased across all patient samples include Thrombospondin 2 (THBS2), encoding a matricellular protein, and the cellular proteins GAS1, CASR and OTOP2, and are promising biomarker candidates. Our approach of analyzing transcriptomic data from different populations and patient groups will help to develop signatures and biomarkers for keratoconus subtypes. Further, RNA sequence data on individual patients obtained from multiple studies may lead to a core keratoconus signature of deregulated genes and a better understanding of its pathogenesis.
Results
Donor and patient samples. We recruited keratoconus patients of African American ancestry (KC) from Baltimore (17-69 years old) and of Middle Eastern ancestry (KJ) from Saudi Arabia (18-36 years old) (Supplemental Table S1). As controls for the KC samples, we used corneas from deceased African American donors (25-75 years old). For the KJ corneas, we used European American donor corneas for two reasons. First, corneal tissue donation in Saudi Arabia is extremely rare, making such corneas unavailable for research 42 . Second, Middle Eastern individuals are genetically much closer to individuals of European than African ancestry 43 .
A majority of the samples retained the epithelial layer but were missing all or part of the single-cell layered endothelium. Paraffin sections stained with H&E showed the corneas to have an attached epithelium and a mostly intact stroma. The KJ sample showed an irregular and slightly thickened epithelium, but otherwise intact epithelia and stroma (Fig. 1). This indicated that the quality and epithelial retention were similar for KCN and donor corneas, although the former were placed immediately at −80 °C or processed for RNA extraction, while donor corneas were extracted one week after their receipt in Optisol at 4 °C. Further analyses of RNA quality, read depth, and yield of transcripts and genes showed no difference between donor and KC samples, indicating that Optisol storage did not have a detectable negative impact (discussed below).

RNA quality, sequence quality and patient-donor differences. The average RNA yield and RIN value among all 36 samples was 1.88 μg and 8.02, respectively (Supplemental Table S2). Across all samples, we detected 15,950 transcripts encoded by 11,286-13,025 unique protein-coding genes per sample with FPKM ≥ 1, and 6,900-9,155 transcripts encoded by 6,507-8,559 unique protein-coding genes per sample at FPKM ≥ 5 (Table 1). For this study, we focused only on coding transcripts to identify gene-level changes between the patient groups. The data have been deposited in the Gene Expression Omnibus at NCBI and are accessible through GEO Series accession number GSE151631 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE151631). To assess sequencing and analysis accuracy, the same RNA preparation for each of eight samples (7 KC, 1 donor) was sequenced twice, and the results show very high correlation between these technical replicates (Supplemental Fig. S1).
To determine the relationships between all 36 samples, we conducted principal component analysis (PCA) using 10,652 expressed genes with FPKM ≥ 5 in at least one sample (Fig. 2). The first principal component (PC1) accounted for 31.2% of the expression variance and separated all control healthy tissues from diseased corneal tissues. The second principal component (PC2) accounted for 14.5% of the expression variance and was largely explained by one outlier (KC406) which was nevertheless consistent with its replicate. A scree plot shows the variance represented by each principal component (Supplemental Fig. S1B). Pairwise comparison of all samples is shown in Supplemental Table S3. The mean distance between technical duplicates is small (2.57 ± 0.97), while comparisons of cases to controls show ~30-fold larger mean distances: 74.99 ± 27.96 and 73.15 ± 18.36 between the 4 DN (control) versus the 7 KC, and the 3 LE (control) versus the 12 KJ samples, respectively. Additionally, mean distances between cases and controls are 2 and 4.8 standard deviations greater than distances within cases or controls. Repeat PCA using a smaller set of 4,787 differentially expressed genes (DEG) with FPKM ≥ 5 in all samples (see later) showed similar relationships, albeit with a significantly smaller separation along the minor axis (Supplemental Fig. S2). Keratoconus associated with atopy, hay fever or vernal keratoconjunctivitis may represent a subtype within keratoconus; in our patient groups there were 4 such cases in the KJ and one in the KC group. However, these five showed no clustering or clear separation from other cases: the mean distance between the allergy and non-allergy patients was 11.55 ± 7.53, and within the non-allergy group it was 14.55 ± 8.26. In addition, there were no DEGs that were significantly different between the allergy and non-allergy KCN, and a larger sample size in the future may reveal gene signatures for these subtypes of KCN (Supplemental Table S4).
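A sketch of this PCA step, under the assumptions that expression is held in a hypothetical (samples × genes) FPKM matrix and that a log2 transform is applied (a common choice, not stated in the text), might look as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

def run_pca(fpkm: np.ndarray):
    """PCA on genes with FPKM >= 5 in at least one sample, as in the text."""
    keep = (fpkm >= 5.0).any(axis=0)          # expressed in >= 1 sample
    X = np.log2(fpkm[:, keep] + 1.0)          # log-transform (our assumption)
    pca = PCA(n_components=2)
    pcs = pca.fit_transform(X)
    # explained_variance_ratio_ corresponds to the ~31.2% / ~14.5% in the text
    return pcs, pca.explained_variance_ratio_
```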
Differentially expressed genes (DEGs) in patient and donor groups. We tested expression differences between donor and patient corneas of 13,397 genes 44,45 , retaining those with q ≤ 0.01, ≥2 or ≤−2-fold change and with genes expressed at FPKM ≥5 in at least one sample. This filtered set contained 819 (232 upregulated, 587 downregulated) and 993 (213 upregulated, 780 downregulated) unique protein-coding DEGs in the Baltimore KC and Saudi Arabian KJ groups, respectively (Supplemental Table S4). Analysis using a second independent method showed high concordance, 80-81% of the DEGs in each canonical pathway (discussed later) were identical (Supplemental Table S5) 46 .
Upon comparing the DEGs in the two patient groups, we detected 53 commonly up-regulated and 380 commonly down-regulated genes, and an additional 179 and 160 uniquely up-regulated genes in KC and KJ, respectively (Table 2). Only a few genes showed opposite expression trends in the two groups: LGALS7 was elevated in KJ but decreased in KC, while 12 genes (CD34, CHST6, COLGALT2, DKK2, KDR, KERA, MAMDC2, MME, NANOS, PDGFD, RECK, STEAP4) were decreased in KJ and increased in KC. Thus, patient corneas from two distinct populations demonstrate highly concordant differences. This is further supported by similar distributions of DEGs when categorized by cellular localization and annotated functions of the encoded proteins (Supplemental Fig. S3A,B). All DEGs with FDR-adjusted p-values for the two KCN groups are shown in two volcano plots in Fig. 3; we detected greater similarities in down-regulated than up-regulated genes in the two patient groups (detailed list in Supplemental Table S6). For example, both volcano plots show significant decreases in ATF3, CA3, FOSL1, PLAUR and UBAP1 gene expression. Many of these genes are connected to the stress response, as discussed later.
Fewer similarities in most increased transcripts in the two patient groups. There are few overlaps in the ten most elevated transcripts between the two groups of samples. The most increased transcripts unique to KC corneas include 7 that encode ECM and matricellular proteins (Fig. 3A, Supplemental Table S6A). By contrast, the ten most increased DEGs in KJ samples (Supplemental Table S6B) include genes that regulate inflammation (S100A9, S100A8, LGALS9C, LCN2), oxidative deamination of histamine and allergic response (AOC1), eye development and chloride channel functions (CLIC), and ADAMTS14, encoding a collagenase required for the production of assembled mature collagen fibrils 47 . The few highly upregulated genes shared by both patient groups require further studies, as these may harbor promising biomarkers for keratoconus. Thrombospondin 2 (THBS2/TSP2), a matricellular protein-encoding gene, was significantly increased in both patient groups (Supplemental Table S6A). THBS2 is involved in cell-ECM interactions and collagen fibrillogenesis, as evidenced by thin corneas in Thbs2-null mice [48][49][50] . Within the eye, it is present in the lens epithelium, injured corneal epithelium and the stroma 51 . Other significant DEGs increased in both sample sets, with FPKM ≥ 5, include genes encoding cellular proteins that regulate growth (GAS1), calcium sensing (CASR), exocytosis (RAB3D), the cytoskeleton (KRT78) and a proton channel (OTOP2) (Fig. 3A,B, Supplemental Table S4).
In contrast, the ten most decreased DEGs (Supplemental Table S6C) show considerable overlaps between the two patient groups and point to shared pathogenesis. SLC2A3 encodes a glucose transporter; its decrease may be upstream of nutrient deprivation-related stress. Decreased transcripts for PLAUR, involved in plasmin formation, and CHI3L1, encoding a chitinase-like protein and possibly a promoter of tissue remodeling, may both be part of a tissue injury response program. The gene encoding the cyclic AMP-dependent transcription factor ATF3, decreased in both groups (Fig. 3A,B, Supplemental Table S6C), regulates cell proliferation and differentiation, and is a stress response gene that interacts with JUN and FOS; additionally, the related genes JUND and FOSL1 are both downregulated in the keratoconus samples. Decreased expression of NR4A1, encoding a steroid-retinoid hormone receptor known to promote apoptosis, may also be part of a dysregulated injury response. Other notable decreases with FPKM ≥ 5 in the KC cases include SERPINB2, PPP1R15A (GADD34), KRT17, APOBEC3A, PTGS2, GADD45A, GADD45B, TM4SF1, NEDD9 and PIM1. Among the KJ samples similar decreases were seen for PPP1R15A, PIM1, NEDD9 and GADD45B (Supplemental Table S4).

Decreased acute phase and NRF2-mediated oxidative stress in keratoconus. To identify the biological processes potentially important to corneal functions and perturbed in KCN, we used DAVID Bioinformatic Resources and Ingenuity Pathway Analyses, yielding five significant canonical pathways in each patient group (Fig. 4) 52,53 . First, acute phase (tissue injury) responses and IL-6 signaling related DEGs were significant in both patient groups. Second, NRF2-regulated oxidative stress response DEGs were significantly decreased in the KJ samples, with a similar but non-significant trend in KC samples. Third, two networks, often active in arthritis, that essentially regulate immune signals, inflammation and ECM, were decreased in KJ. Finally, PPAR signaling, RAR and ECM-related DEGs were significant in KC. In the following we examine these pathways and their implications for KCN.
Multiple acute phase and tissue injury-response genes were decreased in both patient groups (Fig. 4). These included CEBPB/TCF5, a transcription factor that regulates immune and inflammatory responses, and IL1A, a pleiotropic cytokine that regulates hematopoiesis, inflammatory and injury responses. Other decreases from this network included IL6R (interleukin 6 receptor subunit alpha), IL6ST (a signal transducer for IL6), SOCS3 and SOCS8 (suppressors of cytokine signaling), and SAA1, encoding an apolipoprotein. NRF2 (nuclear factor erythroid 2-related factor 2) is a transcription factor that promotes the induction of antioxidants and other components that control reactive oxygen species (ROS) to counteract their harmful effects 54 . NRF2-mediated antioxidant protection is important to the eye, and NRF2-boosting treatments are being considered for ocular wound healing and macular degeneration 55 . Multiple antioxidant genes, positively regulated by NRF2, were decreased in KJ, and to a lesser extent in KC (Fig. 5A). These NRF2 target genes include glutathione S-transferase (GSTM3), thioredoxin reductase (TXNRD1), producers of reducing factors (GCLC, GCLM), a transcription regulator (FOSL1), chaperone-encoding genes (DNAJA1, DNAJB4, DNAJB9, UBAP1) involved in protein folding, degradation and trafficking, and HMOX1, a heme oxygenase that regulates iron metabolism. NRF2 is itself regulated post-transcriptionally, which may explain why we did not detect it by differential transcriptome analyses. Under homeostatic conditions, NRF2 protein is retained in the cytoplasm and maintained at low levels through its rapid turnover by KEAP1, which mediates interactions with CUL3 leading to ubiquitin proteasomal degradation of NRF2 54 . Elevated ROS leads to dissociation of the KEAP1-NRF2 complex and escape of NRF2 from ubiquitination and degradation (Fig. 5B). A few studies have reported expression of NRF2 target genes, or NRF2 itself, in cultured corneal cells, but these did not examine NRF2 in tissue layers of the cornea. We therefore performed immunofluorescence (IF) staining of KEAP1 and NRF2 (Fig. 6A,B) in 3 different DN and KCN corneas (Supplemental Fig. S4A,B). We find that both KEAP1 and NRF2 are primarily present in the corneal epithelial layers. In KCN corneas, KEAP1 staining is generally decreased with areas of focal increase in the epithelium (Fig. 6A). IF staining of NRF2 itself is decreased in KCN; in particular, DN corneas show more nuclear NRF2 staining (Fig. 6B). Earlier we reported increased KEAP1 and CUL3 proteins in a comparative proteomics study of KCN and donor corneas 56 .

[Figure 4 legend: The p value next to each bar was calculated for that pathway by the IPA Core Analysis. "Acute Phase" is the Acute Phase Response Signaling pathway; "NRF2-mediated" is the NRF2-mediated Oxidative Stress Response pathway; "ECM related" is the Hepatic Fibrosis / Hepatic Stellate Cell Activation pathway; "Rheumatoid Arthritis" is the Role of Macrophages, Fibroblasts and Endothelial Cells in Rheumatoid Arthritis pathway; and "Osteoarthritis" includes genes associated with osteoarthritis.]

Validations of RNA seq findings. We validated select findings from our transcriptome analyses in three ways: testing RNA expression by qRT-PCR, protein localization and presence by IF staining of donor and keratoconus corneal sections, and comparison with the literature. We selected ICAM1 (cell adhesion), TUBB2A and TUBB3 (cytoskeletal), SOD2 (belonging to the NRF2-mediated antioxidant pathway), RXRA (nuclear retinoid receptor) and MFAP3L (microfibril associated protein 3 like) for testing by qRT-PCR because RNA sequencing showed robust expression at FPKM ≥ 5, which in our experience allows reliable detection by qRT-PCR (Supplemental Fig. S5 and Table S7). In agreement with the RNA seq data, qRT-PCR detected elevated MFAP3L and decreased TUBB3 expression in KCN. We detected robust expression of RXRA by qPCR, but it was not significantly different between DN and KCN groups. However, RXRA by RNA seq was elevated in both the KC (mean 35 FPKM) and the KJ (mean 43 FPKM) patient groups compared to donor samples (17 FPKM). Expression of GAS1 by qRT-PCR was slightly increased in KCN, without reaching significance, although by RNA seq it was markedly increased in both patient groups (Supplemental Table S3). HSP40/DNAJA1, detected as markedly decreased by RNA seq in patients, was decreased by qPCR as well, without reaching significance. Taken together, the qPCR data are in general agreement with the RNA seq results.
We selected DNAJA1 and GAS1 for localization of their respective proteins in the cornea by IF staining. DNAJA1 (Hsp40 member A1), not previously studied in the eye, belongs to the heat shock protein family; other members of this family are known to serve molecular chaperone functions in the lens and the cornea 57 . In agreement with the RNA seq and qRT-PCR results, DNAJA1/HSP40 IF staining was decreased in KCN (Fig. 7A). GAS1 is known to be expressed in the retinal pigment epithelium (RPE) and the cornea, regulates cell growth and eye development, and Gas1-null mice, which are viable at birth, develop microphthalmia 58 . GAS1 shows very little IF staining in KCN corneas (Fig. 7B) compared to uniform strong staining of all donor corneal epithelial layers. While IF staining is not a reliable indication of protein levels, the qualitatively dramatically decreased staining of KCN corneal sections contrasts with the GAS1 transcript-level increases in both patient groups (Fig. 3). It remains to be seen whether decreased GAS1 staining is due to (1) an actual decrease in protein synthesis, or (2) shedding or loss of cell surface or vesicular GAS1 from the corneal layers. IF staining of RXRA showed a dramatic increase in nuclear staining in epithelial layers of KCN corneas (Fig. 7C). This is consistent with increases in its transcript.
Overall our transcriptome results also agree with findings from earlier studies. For example, CTGF, SMAD7, TGFB1 gene expressions were decreased in both KC and KJ samples and were also reported to be decreased as a part of the Hippo, Wnt and TGFß signaling pathways 40 . Consistent with ER stress identified as important by our earlier proteomic studies 30,31,56 , here we detected changes in translation-related factors (EIF1, EIF1B, EIF4E) and proteasomal degradation components (UBAP1).
Discussion
Using deep transcriptome analysis of keratoconus and control corneas from African American and Saudi Arabian patients, we respectively identified 819 and 993 differentially expressed genes. As tissue donation is an extremely rare practice in Saudi Arabia, we selected donor controls of European ancestry, as it has previously been shown that these individuals are genetically closer to Middle Eastern individuals than Africans 43 . In addition, on average the donor groups were older than the patient groups; however, there was no age-dependent distribution of samples in the principal component analysis. Remarkably, despite differing ancestry and geography (ecology), and the use of generally older control corneas from an eye bank, patient transcriptomes were clustered and clearly separated from control transcriptomes by principal component analysis. Previous transcriptomic analyses have also examined corneas from keratoconus patients 40,41 : in one study, keratoconus corneas were compared to non-keratoconus corneas from patients undergoing cornea grafting for other conditions. These patients had overt inflammation, infection and other eye diseases, which are likely reasons why case versus control corneas were poorly distinguished. The internal consistency of our data, therefore, allowed us to demonstrate decreases in NRF2-mediated oxidative stress and acute phase tissue injury responses as core pathogenic changes in keratoconus in both patient groups. HMOX1 is an important NRF2-downstream antioxidant target gene that catabolizes heme and resolves iron imbalance and toxicity, and the decrease in its transcript in both keratoconus patient groups affirms NRF2 dysfunction. Reduced HMOX1 may also contribute to the iron lines or Fleischer's Ring observed in keratoconus corneas, and the long-held belief of iron imbalance in this disease 3,59 . Additional support for NRF2 impairments in KCN comes from our earlier proteomic study of pooled KCN and donor corneas, where NRF2-mediated oxidative stress response proteins were decreased in the epithelium 30 . Given that samples were pooled in that earlier study, we were not fully appreciative of the significance of NRF2 in keratoconus at that time. In a recent proteomic study of individual corneas from keratoconus patients we detected decreases in proteins that regulate complement pathways and tissue injury responses, and other proteins encoded by NRF2-target genes, emphasizing a failure in protective mechanisms in the cornea 56 . The healthy cornea routinely encounters and resolves reactive oxygen species (ROS) produced in response to restricted nutrient supply, UV exposure, mechanical injury and infections 60 . This ability to dissipate ROS was suspected to be malfunctioning in keratoconus, causing oxidative damage and consequent degenerative changes in corneal cells and the ECM. A "cascade hypothesis" was proposed to tie this oxidative damage to abnormal stromal ECM and keratocyte cell death, as observed in keratoconus 23 . Therefore, it is pathogenetically significant that we have now identified NRF2 signal disruptions as an important early regulatory step in the poor antioxidant functions in keratoconus.
Concordantly, sulforaphane, an NRF2 activator, reduces oxidative stress in an ex vivo cell culture model of Fuchs dystrophy 61 and corneal damage in a rabbit model of corneal thinning 62 .
The two patient groups do harbor some differences, as discussed below. First, certain growth factor and inflammation gene networks are decreased in the Saudi Arabian group, although there are more conjunctivitis cases in this group. The connection between inflammation and keratoconus is, however, not clear. Historically, KCN is described as a non-inflammatory corneal ectasia, although increases in tear fluid IL-6, and both decreases 63 and increases 64 in TNF-α, suggest otherwise. The African American cases show significant changes in RAR/vitamin A signaling, PPAR and ECM-related networks. Interestingly, the increased nuclear staining of RXRA in KCN corneas and the increase in its transcript in our RNA-seq data may tie in with its potential status as a candidate gene. Increased transcripts for ECM-related genes encoding COL1A1, COL3A1 and FN potentially implicate fibrosis-related changes in the African American group, which we will investigate in a larger set of patients. In a proteomic study of pooled corneas, although on a different set of patients, we noted decreases in some of these same ECM proteins 30 . However, the relationship between transcript-level changes in ECM genes and ECM protein accumulation in a tissue is often not direct, as it is influenced by feedback regulation, by the secretion and correct assembly of ECM proteins, and by their degradation and remodeling. A vast body of work has investigated the TGF-β–ECM remodeling axis in the cornea in the context of wound healing and fibrosis [65][66][67][68][69] . Along these lines, we have previously probed TGF-β signals in cultured stromal cells from keratoconus corneas and detected increases in non-canonical TGF-β–SMAD1/5/8 signals, and poor activation of AKT, which controls cell survival and cellular metabolic and biosynthetic capabilities 31 . Yet others have reported decreased regulatory SMAD7 in cultured keratoconus stromal cells 70 , which is consistent with our observation that the SMAD7 transcript is decreased in African American cases. Other transcriptomic studies have also reported decreased gene expression of members of the TGF-β–SMAD pathway 40 . Finally, this study shows changes in multiple epithelial genes, providing credence to an existing hypothesis of epithelial damage as a potential driver of stromal degeneration in keratoconus.
Importantly, a small set of genes (THBS2, GAS1, OTOP2, and others) was upregulated across patients in both groups compared to controls. These may yield early biomarkers for keratoconus. In addition, a combination of genes significantly upregulated in the KJ samples only (S100A8, S100A9, ADAMTS14, LYPD2 and AOC1) may yield distinguishing biomarkers for subtypes of keratoconus. This will require additional careful phenotyping of the disease and its progression, and studies of larger groups for transcript- and protein-level changes.
Our study provides three major concepts in keratoconus. First, the degenerative ECM thinning phenotype of the cornea is an outcome of the loss of cellular functions that are geared to counteract oxidative stress and tissue injury. Second, the NRF2-regulated gene network has a significant role in this cellular response to oxidative stress in the cornea. Finally, using our approach, analysis of transcriptomic data from different populations and patient groups will help to develop signatures and biomarkers for different subtypes of keratoconus.
Materials and Methods
Patient and donor samples. This study was conducted following the Principles of the Declaration of Helsinki on Biomedical Research involving human subjects. We obtained written informed consent from all participants following a protocol approved by the Institutional Review Boards of the Johns Hopkins Medical Institutions, Baltimore, MD, and the King Khaled Eye Specialist Hospital (KKESH), Riyadh, Saudi Arabia. Further analyses were performed by study team members under the additional guidance of a protocol approved by the Institutional Review Board at NYU Langone Health. A total of 7 and 12 corneas were obtained from patients undergoing cornea transplantation at the Wilmer Eye Institute (WEI) at Johns Hopkins and at KKESH, respectively (Supplemental Table S1). The patients from WEI and KKESH were of African American and Saudi Arabian ancestry, respectively, and are referred to as KC and KJ throughout the study. Keratoconus diagnosis was performed by trained physicians at both sites and included slit-lamp biomicroscopy, fundus evaluations and corneal topography measurements by Pentacam. Patients at both sites were graded as severe, with central keratometry (K) > 52 diopters (D) or not measurable (WEI), or K > 55 D (KKESH). The Lions Eye Institute for Transplant and Research, Florida, provided control corneas, deemed unsuitable for transplant and without keratoconus or other inflammatory diseases, obtained from deceased individuals of African American (DN) and European American (LE) ancestry. An additional 2 control corneas of unknown ancestry were obtained from WEI (DN373 and DN401). Donor corneas with an intact limbal ring, packed in Optisol solution at 4 °C, were shipped to Baltimore for these studies, with death-to-Optisol storage time being approximately 9.5 hours. The donor corneas were received within a week of their procurement and processed immediately for RNA extraction or paraffin embedding.
Tissue preparation. Samples received from patients comprised the central cornea only, without peripheral limbal tissue. Central cornea halves received from KKESH patients were snap-frozen in liquid nitrogen and stored at −80 °C until shipment on dry ice to WEI for RNA extraction and sequencing. At WEI, patient cornea halves obtained immediately after surgery were placed directly in TRIzol and brought to the laboratory for RNA extraction. The limbal ring was removed from donor corneas, which were then halved; one half was used for RNA extraction.
RNA extraction, quantification, sequencing and analysis. Total RNA was isolated from individual corneas by homogenization in 1 ml TRIzol Reagent (Life Technologies, 15596-026) at room temperature and chloroform extraction. The RNA in the aqueous phase was precipitated using 0.5 ml of 100% isopropanol, collected by centrifugation at 12,000 × g, washed with 75% ethanol and resuspended in RNase-free water.
A fluorescence-based method (RiboGreen, Life Technologies) was used for RNA quantification. cDNA synthesis, fragmentation, library preparation and sequencing were performed by Macrogen Inc. using the TruSeq stranded RNA library kit. A template size check was performed by running the samples on an Agilent Technologies 2100 Bioanalyzer with a DNA chip. The samples were sequenced at 101 bp using the Illumina HiSeq 2500 platform. Raw images were generated using the HiSeq Control software v2.2.38, base calling was performed using an integrated primary analysis software, Real Time Analysis v1.18.61.0, and results were converted into FASTQ files using the Illumina package bcl2fastq v1.8.4. The sequencing library size, total reads, GC content and Q20/Q30 values were comparable for all samples and demonstrated high quality (Supplemental Table S2). After Cufflinks quantification (files "isoforms.fpkm_tracking" and "genes.fpkm_tracking"), we obtained transcript-level and gene-level expression values (Table 1); expressed transcripts were used to obtain summed FPKM (fragments per kilobase of transcript per million mapped reads) counts per gene.
Alignment of reads against the human reference genome hg19 was performed using the STAR 2.6.0a software. During alignment, we removed non-canonical junctions (option: --outFilterIntronMotifs RemoveNoncanonical) and generated XS strand attributes for spliced alignments (option: --outSAMstrandField intronMotif). Gene expression quantification was completed using Cufflinks 2.2.1 45 with the following settings: (1) all annotated rRNA and mitochondrial transcripts were ignored (option: --mask-file); (2) the bias detection and correction algorithm was enabled (option: --frag-bias-correct); (3) reads mapping to multiple locations in the genome were weighted during an initial estimation procedure (option: --multi-read-correct); (4) the UCSC hg19 annotation file was supplied (option: --GTF-guide). We first checked for contamination, adaptor sequences, or other overrepresented sequences in the raw reads (FASTQ files) of all 36 sequenced samples using FastQC 0.11.7 and found no evidence of these. Next, differential expression analysis was conducted using Cuffdiff with bias correction and multi-read correction 44 . Genes identified as significantly different were confirmed by reanalysis using DESeq2 (version 1.24.0) with default parameters 46 . In DESeq2, for final statistical significance of a gene, we used Benjamini-Hochberg adjusted p ≤ 0.01, absolute fold change ≥ 2, and FPKM ≥ 5 in at least one group.
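For orientation, the two command-line steps can be scripted, e.g. from Python; the flags below are those named in the text, while all paths and file names (hg19_index, rRNA_chrM.gtf, sample_R1/2.fastq.gz, and so on) are hypothetical placeholders, so this is a sketch rather than the original pipeline:

```python
import subprocess

# Align paired-end reads with STAR, with the junction/strand options described above.
subprocess.run([
    "STAR", "--genomeDir", "hg19_index",
    "--readFilesIn", "sample_R1.fastq.gz", "sample_R2.fastq.gz",
    "--readFilesCommand", "zcat",
    "--outFilterIntronMotifs", "RemoveNoncanonical",
    "--outSAMstrandField", "intronMotif",
    "--outSAMtype", "BAM", "SortedByCoordinate"], check=True)

# Quantify with Cufflinks, masking rRNA/chrM, with bias and multi-read correction.
subprocess.run([
    "cufflinks", "-o", "cufflinks_out",
    "--mask-file", "rRNA_chrM.gtf",
    "--frag-bias-correct", "hg19.fa",
    "--multi-read-correct",
    "--GTF-guide", "hg19_annotation.gtf",
    "Aligned.sortedByCoord.out.bam"], check=True)
```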
The summed FPKM over its expressed transcripts was used as each gene's FPKM value. We defined significant differential expression by q value ≤ 0.01 (FDR-adjusted p), |fold change| ≥ 2, and FPKM ≥ 5 in the control or patient group. Next, we used Ingenuity Pathway Analysis (IPA) software (Qiagen) on these significant differentially expressed genes (DEG) to identify enriched biological pathways and networks among them.
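As a concrete illustration of these cutoffs, a short pandas filter; the column names here are assumptions for the sketch, not those of the original pipeline:

```python
import pandas as pd

def significant_degs(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the significance criteria described above to a per-gene table.
    Assumed (hypothetical) columns: 'q_value' (FDR-adjusted p), 'log2_fc',
    and group-level expression 'fpkm_control' and 'fpkm_patient'."""
    return df[
        (df["q_value"] <= 0.01)
        & (df["log2_fc"].abs() >= 1.0)  # |fold change| >= 2
        & (df[["fpkm_control", "fpkm_patient"]].max(axis=1) >= 5.0)
    ]
```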
For Principal Component Analysis (PCA) of the sequenced samples, we used two definitions: (a) genes expressed at FPKM ≥ 5 in any one sample, and (b) genes expressed at FPKM ≥ 5 in all samples. PCA was performed using the "prcomp()" function in the R "stats" package version 3.5.0 after expression values were log2(FPKM + 1) transformed. The coordinates of each sample in the principal components (PC1 and PC2) were used to calculate the Euclidean distance between samples with the function "pointDistance()" in the R "raster" package version 3.0-2.
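A numpy equivalent of this R workflow might look as follows (a sketch, assuming an FPKM matrix already restricted to genes passing definition (a) or (b)):

```python
import numpy as np

def pca_sample_distances(fpkm: np.ndarray) -> np.ndarray:
    """fpkm: samples x genes matrix, pre-filtered to genes with FPKM >= 5.
    Returns pairwise Euclidean distances between samples in the PC1/PC2 plane,
    mirroring prcomp() (column-centering, no scaling) plus pointDistance()."""
    x = np.log2(fpkm + 1.0)
    x = x - x.mean(axis=0)                # center each gene, as prcomp() does
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pcs = x @ vt[:2].T                    # sample coordinates on PC1 and PC2
    diff = pcs[:, None, :] - pcs[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```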
Gene expression assays by qRT-PCR. cDNA was prepared from each RNA sample using a cDNA reverse transcription kit (Bio-Rad). Each cDNA (20 ng) was subjected to qRT-PCR using Applied Biosystems TaqMan assays for selected genes on a StepOnePlus instrument (Applied Biosystems). The number of cycles (Ct) needed to reach the midpoint of the linear phase was noted, and all observations were normalized against the housekeeping gene GAPDH.
Immunofluorescence (IF) staining. Paraffin-embedded cornea sections were stained with H&E (NYU School of Medicine Center for Biospecimen Research & Development). For immunofluorescence (IF), slides were blocked with 10% animal serum or BSA in PBS, followed by overnight incubation at 4 °C in primary antibody, washed three times with Tris-buffered saline, incubated with a secondary antibody for 2 hours, and the nuclei counterstained with DAPI. The following primary antibodies were used from Biorbyt Research Products: NRF2 (orb128433, 1:50), KEAP1 (orb48426, 1:100), Hsp40 (orb520080, 1:100) and GAS1 (orb414757, 1:100); the RXRA antibody was from Novus Biologicals (NBP2-75653, 1:50). A fluorescently conjugated anti-rabbit secondary antibody (Invitrogen, Cat# A21206) was used at a concentration of 5 μg/ml. Images were acquired with a Zeiss LSM 700 microscope. | 6,861.4 | 2020-06-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
The stellar cusp around the Milky Way's central black hole
The existence of stellar cusps in dense clusters around massive black holes is a fundamental, decades-old prediction of theoretical stellar dynamics. Yet, observational evidence has been difficult to obtain. With a new, improved analysis of high-angular resolution images of the central parsecs of the Galactic Center, we are finally able to provide the first solid evidence for the existence of a stellar cusp around the Milky Way's massive black hole. The existence of stellar cusps has a significant impact on predicted event rates of phenomena like tidal disruptions of stars and extreme mass ratio inspirals.
Introduction
It is a standard paradigm of modern astrophysics that the majority of galaxies - possibly with the exception of very low mass galaxies and/or dwarf irregulars - contain massive black holes (MBHs) at their centres (see, e.g., [1] and references therein). Over the past two decades, mainly thanks to studies with the Hubble Space Telescope and high-angular resolution observations with Adaptive Optics at major ground-based telescopes, we have also learned that the majority of galaxies contain so-called nuclear star clusters (NSCs) at their photometric and dynamical centres. These clusters are the densest and most massive clusters that can be found in the present-day Universe. They have masses between a few 10^5 and 10^8 M⊙ and half-light radii of a few to a few tens of parsecs. NSCs do not consist of single-age stellar populations, but rather show signs of repeated star formation along their host galaxies' lifetimes, i.e. they are in their properties significantly different from globular clusters. Most intriguingly, NSCs have been found to coexist with massive black holes (for the properties of NSCs, see [2,3,4] and references therein).
The evolution of a star cluster containing an MBH is a decades-old problem of stellar dynamics that has been studied by a large number of authors with a wide range of techniques (analytical, Fokker-Planck, N-body). Peebles (1972) [5] proved that, while statistical thermal equilibrium must be violated close to an MBH in a galactic nucleus due to tidal disruptions, stellar collisions and gravitational captures, there exists a steady state with a net inward flux of stars. This is a quasi-steady state solution, where the stellar density takes a power-law form ρ ∝ r^−γ. This finding was corroborated for a single-mass population by Bahcall and Wolf [6] and subsequently for multi-mass stellar populations with realistic number fractions (e.g., [7,8,9,10,11,12,13,14] and references therein). This so-called stellar density cusp will be fully developed after a so-called relaxation time, the time necessary for the randomisation of the cluster phase space via close encounters between stars (so-called two-body relaxation), which is typically below a Hubble time for an NSC in a Milky Way-like galaxy. For a cluster composed only of stars of a single mass, the predicted value is γ = 1.75. For realistic, multi-mass clusters, the lower-mass stars have γ = 1.5, while the heavier stars (and in particular stellar-mass black holes) will follow a steeper distribution (γ_BH ≈ 2; see [13,14]). The cusp will be well developed inside the radius of influence of the MBH, which is roughly the radius of a sphere that contains once to twice the black hole mass in the form of stars (or stellar remnants) [11]. Although stellar cusps around MBHs are a robust theoretical prediction, they have been rather elusive observationally. That is mainly because of the great distances of MBHs: their radii of influence are often resolved only by a few pixels in imaging or spectroscopy, which means that a handful of rare bright stars can bias the measured cluster structure significantly, as can be seen very well in the case of our own Milky Way (see discussion in [15]). When comparing theoretical predictions with observations in nature, it is also important to be aware that theoretical models are often highly simplified, e.g. they are typically set up with clusters of a single-age stellar population, and do not suffer from observational effects such as strong and spatially variable interstellar extinction.
[Fig. 1 caption fragment: ... [17] are over-plotted onto the image. Sgr A* is visible as a near-infrared source at the image centre.]
The best case for testing the stellar cusp hypothesis is the centre of the Milky Way. In the Galactic Centre (GC), an MBH of about 4×10^6 M⊙, called Sagittarius A* (Sgr A*), is surrounded by an NSC of about 2.5×10^7 M⊙, at a distance of ∼8 kpc ([18,19,17,20] and references therein). With high angular resolution observations we can therefore probe the cluster structure and dynamics on scales of milli-parsecs, which means that the GC is a prime laboratory for studying the interaction of a stellar cluster with an MBH [21]. Figure 1 provides an overview of the GC and a progressive zoom toward the MBH Sgr A*. It is important to keep in mind the limitations on observational studies of the GC: (1) Due to the extremely high interstellar extinction, stars can only be detected with reasonable sensitivity in the near-infrared. In this regime, however, intrinsic stellar colours are small, and in combination with the strong differential reddening caused by the small-scale variability of extinction it is therefore very hard to classify the stars. For example, it is not trivial to distinguish between an old, cool giant star and a young, hot massive star by means of imaging and stellar colours. (2) Due to the extreme density of the NSC, one needs to work at high angular resolution, typically at the diffraction limit of ground-based 8-10 m-class telescopes, which can only be achieved with special techniques (Adaptive Optics and speckle imaging). Even then, most faint stars (main sequence stars of less than two solar masses) cannot be detected because of the source crowding. Hence, we can only observe the tip of the iceberg, on the order of a few to 10% of all the stars suspected to exist at the GC.
Because of the observational limitations, evidence on the existence of a stellar cusp at the GC was elusive. Although the detection of a cusp was claimed from the first high angular resolution observations [15,22], the actual situation turned out to be more complex. Only stars older than the cluster relaxation time can serve as suitable tracers of a cusp, but there are many massive young stars present in the central parsec around Sgr A*. These are too young to be dynamically relaxed. Once the pollution from young stars was taken into account, it appeared that the stellar surface number density within a few 0.1 pc of the MBH at the GC was close to flat, or even decreasing [23,24,25]. The lack of late-type giants close to the MBH had been reported before (see, e.g., the discussion of this topic in the review by [18]), but these more detailed studies were now considered to be incompatible with the existence of a stellar cusp. This is the origin of the missing cusp problem.
Whether or not a stellar cusp exists around the MBH at the GC has implications for other galaxies as well, assuming that the Milky Way's centre is representative of the nuclei of normal galaxies. Observational confirmation or rejection of a stellar cusp would not only be of fundamental importance for our understanding of stellar dynamics; it is also of particular importance for gravitational wave astronomy. So-called Extreme Mass Ratio Inspirals (EMRIs) are considered the most exquisite probes of General Relativity and of the related astrophysics of stellar remnant plus MBH systems [26]. The possibility of observing EMRIs with future gravitational wave observatories in space, such as LISA (Laser Interferometer Space Antenna) [27,28] or Taiji [29], is directly linked to the question of whether stellar cusps exist around MBHs or not [30,14].
Motivated by the fundamental importance of this topic, we have recently undertaken new observational and theoretical studies. Observationally, we pushed the detection limit to fainter stars. The observations were then compared to new, more realistic N-body simulations. We have found that a stellar cusp does indeed exist at the GC and that its properties agree with our expectations. The details of our work can be consulted in these three papers:
2. Surface density of stars around the MBH Sgr A*
Before interpreting the stellar surface and light densities, it is important to have an approximate understanding of the stellar population we expect to observe at the GC. While we are still far from having a detailed understanding of the stellar population and its formation history in the inner few parsecs of the Milky Way, the following assumptions appear to be robust, according to our current best knowledge: The NSC has undergone repeated star formation episodes throughout its lifetime, with the most recent events occurring a few 10^6 yr and a few 10 Myr ago. The majority of the stars are old, with an estimated 80% having formed more than 5 Gyr ago [18,31].
[Fig. 2 caption fragment — Left: The completeness limit due to source crowding of previous studies was K_s ≈ 17.5, while the sensitivity limit of spectroscopic studies is K_s ≈ 16. At K_s < 17.5, number counts of old stars in the GC will therefore be dominated by horizontal branch/red clump (HB/RC) stars, core-helium burning giants. Right: K_s luminosity function of stars within a projected distance of R ≲ 10" of Sgr A*, extracted from a deep NACO/VLT S13 image. The bump centred at K_s ≈ 15.75 corresponds to the RC. The sharp drop in detected stars at K_s > 18 is due to extreme source crowding, which impedes the detection of fainter stars. While RC stars and brighter giants have dominated the stellar density measurements of all previous work, we use measurements of resolved and unresolved stars at K_s ≳ 17.5 in our new work.]
The left panel in Fig. 2 shows a simple model of the stellar population of the NSC assuming a continuous star formation history (taken from Fig. 16 in [15]). As mentioned above, it is very hard to classify stars at the GC, because photometric studies do not usually cover a sufficient wavelength range and are usually not accurate and precise enough to break the degeneracy between intrinsic stellar colours and interstellar reddening, and because spectroscopic studies are extremely time-consuming and limited to the brightest stars. Therefore, the diagram in the left panel of Fig. 2 is valuable because it can give us an approximate idea of the ages and masses of stars of a given observed brightness at the GC. If we are interested in identifying a stellar cusp, then it is important to focus on stars that are at least a few Gyr old. We can do this by selecting spectroscopically classified stars and/or by limiting the brightness range of the tracer population. Previous studies of the NSC structure ([23,24,25]) were dominated by stars of K-band magnitudes brighter than ∼17.5, in particular by Red Clump (RC) stars, helium-core-burning giants with observed K-band magnitudes 15-16 at the distance and extinction of the GC.
RC stars are a convenient tracer population because they are, on average, a few Gyr old. We can also see in the left panel of Fig. 2 that stars of K-magnitude 18 or fainter would also be a suitable tracer population because they have low mean masses and are therefore probably, on average, at least a few Gyr old. In our new work, we focused explicitly on this faint, old, low-mass stellar population. We used K-band high angular resolution observations from the camera NACO at the ESO VLT 8 m telescope. Several epochs of high-quality images were stacked in order to increase the sensitivity. Subsequently, correction factors for stellar crowding and interstellar extinction were determined and applied to the data. Systematic effects arising from different choices of key parameters in the software packages that identify the stars in images were explored. Finally, the stellar surface densities were determined as a function of distance from the central MBH. Spherical symmetry of the cluster was assumed, which is not strictly correct, but accurate at the 10-20% level. In parallel, we also explored the diffuse light density from unresolved stars that could be measured in the images after subtracting all identified, resolved stars and the emission from diffuse, ionised gas.
In order to constrain the structure of the entire NSC and to be able to deproject the observed quantities, these data were complemented, at projected distances R ≳ 1 pc from Sgr A*, by star density/surface brightness measurements from other sources, with lower angular resolution and/or sensitivity, but taken over a larger field ([20,32]). The latter were scaled to our data in the overlapping regions. In Fig. 3 we show the surface brightness from unresolved stars from 0.01 pc to 20 pc as well as the star counts for faint resolved stars (K-magnitudes ∼18) and RC stars (K-magnitudes ∼15-16). The diffuse flux from the unresolved stars traces sub-giants and main sequence stars of K-band magnitude 19-20, which have masses of about 0.8-1.8 M⊙. All three tracer populations have similar masses and can be old enough to be dynamically relaxed.
3. Properties of the stellar cusp at the GC
As Fig. 3 shows, the three studied tracer populations show a very similar distribution. The projected densities can all be approximated well by single power-laws at projected distances R < 1 pc. Only the brightest population, the red clump giants, shows some systematic deviations at around R = 0.2 pc and a possible decline at R < 0.05 pc. The red line in Fig. 3 indicates a fit with a Nuker law [33]:
$$\rho(r) = \rho(r_b)\, 2^{(\beta-\gamma)/\alpha}\, \left(\frac{r}{r_b}\right)^{-\gamma}\left[1 + \left(\frac{r}{r_b}\right)^{\alpha}\right]^{(\gamma-\beta)/\alpha},$$
where r is the 3D distance from the central MBH, r_b is the break radius, ρ is the 3D density, γ is the exponent of the inner and β the one of the outer power-law, and α defines the sharpness of the transition. We carried out fits with different parameters (e.g., for the value of α, which was kept fixed at a value of 10 during the fits) and data (data at large R from [20] or [32]) and determined mean parameters and uncertainties from the resulting best-fit parameters of the different tries (the formal uncertainties were always very small). We obtain, for the unresolved stellar light, r_{b,diffuse} = 3.2 ± 0.2 pc, γ_diffuse = 1.16 ± 0.02, β_diffuse = 3.2 ± 0.3, and ρ_diffuse(r_b) = 0.67 ± 0.05 mJy arcsec^−3; for the star counts of faint, resolved stars, r_{b,faint} = 3.0 ± 0.4 pc (77 ± 10″), γ_faint = 1.29 ± 0.02, β_faint = 2.1 ± 0.1, and a density at the break radius of ρ_faint(r_b) = 3900 ± 900 pc^−3; and for the RC stars, r_{b,RC} = 3.4 ± 0.5 pc (110 ± 13″), γ_RC = 1.28 ± 0.07, β_RC = 2.26 ± 0.04, and ρ_RC(r_b) = 1700 ± 500 pc^−3.
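For illustration, the fits just described could be set up with scipy's curve_fit as below; r_data, rho_data and rho_err are hypothetical stand-ins for the measured density profiles, and α is held fixed at 10 as in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def nuker(r, rho_b, r_b, gamma, beta, alpha=10.0):
    """3D Nuker profile in the convention used above (alpha held fixed at 10)."""
    return (rho_b * 2.0 ** ((beta - gamma) / alpha)
            * (r / r_b) ** (-gamma)
            * (1.0 + (r / r_b) ** alpha) ** ((gamma - beta) / alpha))

# With p0 of length 4, curve_fit fits (rho_b, r_b, gamma, beta) and leaves
# alpha at its default value of 10.
popt, pcov = curve_fit(nuker, r_data, rho_data, sigma=rho_err,
                       p0=[4e3, 3.0, 1.3, 2.2])
```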
We cannot directly compare the values of the normalisation parameters ρ. As concerns β, it is not very well constrained. On the other hand, the break radius r_b is well constrained around 3 pc, and the inner power-law index γ as well. The break radius corresponds roughly to the radius of influence of Sgr A*, i.e. the radius inside which we expect the stellar cusp to appear [11]. Both the stellar number and light densities may suffer from not well constrained systematic biases (uncertainties in sky background subtraction or completeness corrections, etc.). If we adopt conservative values from averaging the three values, we obtain r_b = 3.2 ± 0.2 pc and γ = 1.24 ± 0.07, taking the standard deviations as uncertainties. Obviously, the stellar density displays a cusp, and a core-like (flat) density law can be excluded with very high confidence.
[Fig. 3 caption fragment — In order to constrain the shape of the NSC at distances > 2 pc, we used the data from [32] and scaled them in the overlap region to our data. A constant flux of 6.4 mJy arcsec^−2 was subtracted to remove the contamination by the surrounding nuclear bulge and Galactic bar, which can be approximated by a constant on these scales. The red line is a best-fit Nuker model. The blue and green lines are scaled stellar number densities from Gallego-Cano et al. (submitted to A&A, arXiv:1701.03816) for RC giants and faint subgiants/main sequence stars.]
Discussion and conclusions
We find that the stars at the GC follow a cusp-like density distribution inside the radius of influence of the central MBH. Our findings resolve the ambiguity from previous work. The cusp is less steep than what is expected for a single-age population that is older than the relaxation time; in the latter case one would expect to observe a so-called Bahcall-Wolf cusp with γ ≥ 1.5. Our N-body simulations instead model a cluster that is built up through repeated star formation episodes every 1 Gyr. This means that most stars will not have had the time to fully relax dynamically, which will flatten the cusp. The results of the simulations agree very well with the data. There appears to be a lack of giant stars at small projected distances of R ≲ 0.1 pc from Sagittarius A*. The missing few dozen giants could be explained if their envelopes were removed in the past, thus rendering them invisible to observations. While collisions between stars or between stellar remnants and stars are probably not effective enough, collisions with high-density clumps in a formerly existing star-forming gas disc around Sagittarius A* provide a viable mechanism [34]. We know that such a disc must have existed in the past because of the presence of many young, massive stars in this region that move in a coherent, disc-like pattern (e.g., [35,36,18]).
We conclude that a stellar cusp exists around Sagittarius A* and that its properties agree with what we expect theoretically. Densities in excess of a few 10^7 M⊙ pc^−3 are reached at distances r < 0.01 pc from the central black hole (Schödel et al., submitted to A&A, arXiv:1701.03817). This has significant implications for gravitational wave astronomy. The existence of a stellar cusp in the Milky Way implies the existence of such structures in other galaxies with similar or smaller MBHs. Therefore we can expect to observe significant numbers of EMRIs with future space-based gravitational wave observatories [30,37,27,28]. | 4,296.6 | 2017-02-01T00:00:00.000 | [
"Physics"
] |
SEPPA 3.0—enhanced spatial epitope prediction enabling glycoprotein antigens
Abstract B-cell epitope information is critical to immune therapy and vaccine design. Protein epitopes can be significantly affected by glycosylation, yet no prediction method has considered this until now. Based on previous versions of Spatial Epitope Prediction of Protein Antigens (SEPPA), we here present an enhanced tool, SEPPA 3.0, enabling glycoprotein antigens. Parameters were updated based on the latest and largest dataset. Then, additional micro-environmental features of glycosylation triangles and glycosylation-related amino acid indexes were added as important classifiers, coupled with a final calibration based on neighboring antigenicity. The logistic regression model was retained, as in SEPPA 2.0. An AUC value of 0.794 was obtained through 10-fold cross-validation on internal validation. Independent testing on general protein antigens resulted in an AUC of 0.740 with a BA (balanced accuracy) of 0.657 as the baseline of SEPPA 3.0. Most importantly, when tested on independent glycoprotein antigens only, SEPPA 3.0 gave an AUC of 0.749 and a BA of 0.665, the top performance among peers. As the first server enabling accurate epitope prediction for glycoproteins, SEPPA 3.0 shows significant advantages over popular peers on both general protein and glycoprotein antigens. It can be accessed at http://bidd2.nus.edu.sg/SEPPA3/ or at http://www.badd-cao.net/seppa3/index.html. Batch query is supported.
INTRODUCTION
Antigens can be specifically bound by corresponding antibodies through interactions with epitope residues. The identification of B-cell epitopes is of primary importance to vaccine design, immuno-diagnostics and antibody production (1). Pioneering work was started by CEP in 2005 (2), and continuous efforts have been devoted to epitope prediction in recent years. Currently, several popular tools are available online, such as PEPITO (3), DiscoTope 2.0 (4), Epitopia (5) and SEPPA (6), which were reviewed previously (7). In 2014, SEPPA 2.0 (7) introduced the new features of 'relative ASA preference of unit patch' and 'consolidated amino acid index' to establish a logistic regression model that considers the immune host and subcellular localization, demonstrating both increased accuracy and a low false positive rate. Recently, BepiPred 2.0 (8) employed a random forest algorithm to predict sequence-based epitopes, achieving an AUC value of 0.596 on an external validation set. Besides antigen structures, several methods considered additional information for epitope prediction, such as antibody information (9) or experimental data (10), which is only beneficial to antigens with such prior knowledge. Thus, despite substantial efforts, conformational epitope prediction for protein antigens is still challenging and worthy of further effort.
In a subsequent analysis of 897 immune complexes from the PDB database, we found that almost 70% of protein antigens contain N-linked glycosylation sites, indicating the potential for post-translational modification (PTM) by N-linked glycosylation. It is known that the antigenicity of protein antigens, particularly virus antigens, can be dramatically affected by N-linked glycosylation (11)(12)(13)(14). For example, it was reported that a key glycan on the HIV-1 gp120 surface could block antibody binding and inhibit antibody recognition (15). Also, a group of broadly neutralizing antibodies could directly target the glycan-dependent epitopes located in the V1/V2 region of HIV-1 gp120 (16). In addition, effective tumor vaccines are under development targeting tumor-associated carbohydrate antigens (TACA), which result from aberrant glycosylation (17).
It is noted that N-linked glycosylation is the most common form of protein glycosylation (18), and it may influence the local environment physio-chemically, particularly for neighboring amino acids on the protein surface. Yet how to describe the micro-environment featured by glycosylation sites is highly challenging, which has largely hindered the study of epitope prediction involving glycoprotein antigens. So far, no algorithm has considered the effect of glycosylation sites for epitope prediction. Here, we present the latest and enhanced version, SEPPA 3.0, filling this gap for N-linked glycoproteins. In this version, parameters were refreshed based on the largest dataset to date. Then, the ratio of glycosylation triangles and glycosylation-related amino acid indexes were developed and further integrated as new classifiers for glycoprotein antigens. Finally, a calibration parameter was introduced to reduce the false positive rate. The performance of SEPPA 3.0 was rigorously evaluated on both glycoprotein and general protein antigens and compared with popular peers.
DATASET
All datasets were extracted from antigen-antibody complexes in the Protein Data Bank (PDB) (19). For quality control, only protein antigens with over 50 residues were retained. Solvent accessible surface areas (SASA) were calculated, and epitope residues were defined in the same way as in SEPPA 2.0 (7). Finally, 832 PDB entries with 897 unique epitope patches were retained. For model construction, 767 protein antigens (520 with N-glycosylation sites) deposited before the year 2016 were selected as the internal training dataset. In total, the 767 unique epitope structures contributed 16,544 epitope residues as the positive training dataset and 172,975 non-epitope residues as the negative training dataset. Two sets were adopted as independent external testing datasets. The first consists of general protein antigens, without discriminating between those with and without N-linked glycosylation; it contains 130 conformational epitope structures deposited after 2016, including 2,598 positive residues and 31,144 negative ones. Those with N-glycosylation sites were referred to as the second testing dataset of glycoprotein antigens, covering 106 antigens (Supplementary Table S1).
MATERIALS AND METHODS
Parameters in SEPPA 2.0 (7) were first updated based on the latest training dataset. Two critical parameters, the ratio of glycosylation triangles and glycosylation-related amino acid indexes, were added as new classifiers to improve the prediction performance. Then, different machine learning approaches were screened, and the logistic regression algorithm was selected to establish our model. Finally, calibration was introduced to reduce the false positive rate. The algorithm of SEPPA 3.0 and the definition of each parameter are given below.
Algorithm of SEPPA 3.0. The functions of sub-model recommendation remained the same as in SEPPA 2.0 (7). In addition, the prediction algorithm of SEPPA 3.0 was updated as below:
Step 1: determine all the surface residues in the protein antigen. Then, for each surface residue r:
Step 2: search all possible unit triangles within 15 Å atom distance and calculate three triangle-related parameters: propensity index avg_r (see SEPPA 1.0), relative ASA Apref_r (see SEPPA 2.0) and the ratio of glycosylation triangles Glytri_r using Equation (2);
Step 3: calculate the clustering coefficient CC_r (see SEPPA 1.0), consolidated AAindex value Index_r (see SEPPA 2.0) and glycosylation-related AAindex (Glyindex_r) using Equation (3);
Step 4: integrate the above six parameters via the logistic regression model to produce a raw antigenicity score for each residue;
Step 5: calibrate the raw prediction score using Equation (4) to give the final antigenic score for residue r;
Step 6: output the final antigenicity score, and highlight those residues with scores higher than the defined threshold. Visualize the subset of predicted epitopes graphically.
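Step 4 is a standard logistic combination of the six per-residue features; a minimal sketch (the feature extraction, fitted weights and bias are assumed to be given, and the function name is illustrative only):

```python
import numpy as np

def seppa3_raw_score(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Raw antigenicity score of one residue from its six per-residue features
    (avg_r, Apref_r, Glytri_r, CC_r, Index_r, Glyindex_r) via logistic regression."""
    z = float(np.dot(weights, features)) + bias
    return 1.0 / (1.0 + np.exp(-z))  # logistic (sigmoid) output in (0, 1)
```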
Ratio of glycosylation triangles
The N-glycosylation sites of Asn were defined by the sequon Asn-X-Ser/Thr, where X can be any amino acid apart from proline (20). A surface residue triangle involving the Asn of a glycosylation sequon was defined as a glycosylation triangle (glytri). Further, an epitope glycosylation triangle (epi_glytri) was defined when it contains at least two epitope residues. By consolidating the 20 amino acids into 13 functional subgroups (21), 71 different glycosylation triangle patterns of subgroups were observed in the training dataset. The ratio of glycosylation triangles (ratio_i) was intended as a pattern's general enrichment in epitope areas, as Equation (1) describes:
$$\mathrm{ratio}_i = \frac{N^{epi\_glytri}_{i}}{N^{glytri}_{i}}, \qquad (1)$$
where $N^{epi\_glytri}_{i}$ represents the number of epitope glycosylation triangles of pattern i in the training dataset, while $N^{glytri}_{i}$ is the number of glycosylation triangles of pattern i in the training dataset.
For each surface residue r, all possible glytri within 15 Å atom distance were searched, and the number of occurrences of a specific glycosylation triangle pattern i was defined as $N_{occur(i)}$. The weighted score for glycosylation triangle pattern i (glytri_i) was defined as $W_{ri} = \mathrm{ratio}_i \times N_{occur(i)}$, and the scores of the 71 existing glytri patterns were integrated by artificial neural networks (ANNs) into a consolidated ratio of glycosylation triangles, as Equation (2) describes:
$$\mathrm{Glytri}_r = f_{\mathrm{ANN}}\left(W_{r1}, W_{r2}, \ldots, W_{r71}\right), \qquad (2)$$
where $W_{ri}$ is the weighted score of glytri pattern i, while $\mathrm{Glytri}_r$ indicates the consolidated ratio of glycosylation triangles obtained via a two-layer, ten-node ANN.
Glycosylation related amino acid indexes
Statistical analysis was performed to derive four glycosylation-related amino acid indexes from the AAindex database (22) as initial features classifying epitope residues from non-epitope surface ones: KIMC930101, LAWE840101, RICJ880102 and ROBB760113 (Supplementary Figure S1). Then, a shell structure model was generated to summarize the different amino acid indexes within each layer (23). Based on a radius of 10 Å and a step size of 2 Å, five layers were generated for each residue r. For each index i of residue r, the amino acid index values of all residues within layer j were averaged to generate a glycosylation-related index $glyindex_{ij}$. Finally, 20 indexes (4 indexes × 5 layers) were optimized through iterative filtering, and the optimized combination of glycosylation-related amino acid indexes was consolidated by ANNs, as Equation (3) illustrates:
$$\mathrm{Glyindex}_r = f_{\mathrm{ANN}}\left(\{glyindex_{ij}\}\right), \qquad (3)$$
where i represents the four types of AAindexes and j represents the five layers of the shell model, $glyindex_{ij}$ is the averaged index i within layer j of the shell model, and $\mathrm{Glyindex}_r$ indicates the glycosylation-related AAindex of residue r.
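A sketch of the shell averaging for one index (residue coordinates and per-residue index values are assumed precomputed; the handling of empty shells is an assumption of this sketch):

```python
import numpy as np

def glyindex_layers(coords: np.ndarray, index_values: np.ndarray, r: int,
                    radius: float = 10.0, step: float = 2.0) -> list:
    """Average one amino-acid index over concentric 2-A shells (out to 10 A)
    around residue r, yielding one glyindex_{ij} value per layer."""
    d = np.linalg.norm(coords - coords[r], axis=1)
    edges = np.arange(0.0, radius + step, step)      # [0, 2, 4, 6, 8, 10]
    layers = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (d >= lo) & (d < hi)
        layers.append(float(index_values[mask].mean()) if mask.any() else 0.0)
    return layers                                    # five layer averages
```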
Calibration parameter
The idea of calibration is to adjust the raw score of an individual residue by the overall tendency of its neighboring residues, such as a low-score/high-score residue sitting in a high-epitope/low-epitope environment. The adjusted score of residue r was defined as the average score of all neighboring surface residues, as Equation (4) illustrates:
$$adjust\_score_r = \frac{\sum_i raw\_score_i}{N}, \qquad (4)$$
where $raw\_score_i$ represents the raw predicted score of each neighboring surface residue within 5 Å atom distance of the target residue r (summed over all such residues), while N is the total number of these residues.
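Equation (4) translates directly into code; in this sketch the target residue is included among its own neighbors, which is an assumption:

```python
import numpy as np

def calibrate(coords: np.ndarray, raw_scores: np.ndarray, cutoff: float = 5.0) -> np.ndarray:
    """Replace each residue's raw score by the mean raw score of the
    surface residues within 5 A of it (Equation 4)."""
    adjusted = np.empty_like(raw_scores)
    for r in range(len(raw_scores)):
        d = np.linalg.norm(coords - coords[r], axis=1)
        adjusted[r] = raw_scores[d <= cutoff].mean()
    return adjusted
```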
Internal validation of SEPPA 3.0
The performance of SEPPA 3.0 was rigorously assessed through both internal and external validation. As illustrated in SEPPA 2.0 (7), the area under the curve (AUC) value and balanced accuracy (BA) were adopted as evaluation parameters. For internal validation, five commonly used machine learning approaches were screened: logistic regression, naïve Bayes, random forest, support vector machine and decision tree. The results of 10-fold cross-validation showed that logistic regression gave the best prediction results, with an AUC value over 0.79 for general protein antigens (Supplementary Figure S2), and it was therefore chosen for the model construction of SEPPA 3.0.
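Such a screen can be reproduced schematically with scikit-learn; here X and y are assumed to be the residue feature matrix and epitope labels built from the training dataset, and the resulting AUC will of course depend on the actual features:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: residue-by-feature matrix; y: 1 for epitope, 0 for non-epitope surface residue.
clf = LogisticRegression(max_iter=1000)
aucs = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(aucs.mean())  # 10-fold cross-validated AUC
```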
Comparing with peer methods
Further, SEPPA 3.0 was evaluated on independent testing datasets and compared with six peers available online using two sets of independent testing data. The peers include Epitopia (5), Discotope-2.0 (4), Pepito (3), CBTOPE (24), SEPPA 2.0 (7) and BepiPred-2.0 (8). The two independent testing sets are: (i) group 1, containing 130 general protein antigens (G1), and (ii) group 2, containing 106 glycoprotein antigens (G2). Note that since there is no recommended threshold for Epitopia (5), the optimized threshold with maximum BA on each testing set was purposely selected for Epitopia (5). The results of the evaluation on the G1 general protein antigens are illustrated in Supplementary Figure S3. Further performance was evaluated on the G2 glycoprotein antigens (Figure 1). The AUC was pushed up to 0.749 by SEPPA 3.0, providing the most accurate prediction for glycoprotein antigens, while the second best is Pepito (3) with an AUC of 0.676. Thus, SEPPA 3.0 shows significant advantages over popular peers on both general protein and glycoprotein antigens. Detailed performance of SEPPA 3.0 and available peers can be found in Supplementary Table S2.
Case study
Here, the well-known gp120 glycoprotein antigen (PDB ID: 5IF0, chain G) was selected as input, and the results are illustrated in Figure 2. Gp120 contains 30 epitope residues according to the complex structures from PDB (Figure 2A), out of which 29 were predicted as true positives by SEPPA 3.0. Under the default threshold of 0.089, although 60 residues were predicted as potential epitope residues, most of the true epitope residues were ranked at the top of the list (Figure 2B). For example, by raising the threshold from 0.089 to 0.29, the top 16 residues marked in red contain 15 true epitope residues, indicating the excellent performance of SEPPA 3.0. The prediction results for 5IF0 chain G reach an AUC of 0.936 with a BA of 0.872. Also, the predicted epitopes of SEPPA 2.0 and other peers, including Bepipred-2.0, CBTOPE, Discotope-2.0, Epitopia and Pepito, are illustrated in Figure 2D to I, under their default cut-offs. The results illustrate that, compared with the other peers including SEPPA 2.0, SEPPA 3.0 gave the results most similar to the reference epitopes. Meanwhile, the false positives of discrete candidates predicted by SEPPA 2.0 can be successfully adjusted through the algorithm of SEPPA 3.0. Besides the case of HIV-1 gp120, another example is also illustrated (Supplementary Figure S4), based on the human glycoprotein antigen CD27 (PDB ID: 5TLK, chain X), which is an important antibody drug target for autoimmune diseases and cancers (25). Model performance for 130 individual structures from the testing dataset can be found in Supplementary Tables S3 and S4.
USAGE
Input. SEPPA 3.0 (http://bidd2.nus.edu.sg/SEPPA3/ or http://www.badd-cao.net/seppa3/index.html) accepts two types of input: (i) existing PDB IDs with chain name, and (ii) local files in PDB format. As in SEPPA 2.0 (7), users are recommended to select the subcellular localization of the protein antigen and the species of the immune host if available. Also, batch query submission is encouraged. Users can submit multiple PDB IDs in a batch query, with the specified PDB IDs, subcellular localization, species of immune host and chain name as input. After successful submission, SEPPA 3.0 will automatically recommend the best model based on the user's specifications.
Output
The output results of SEPPA 3.0 are either presented in .html format or sent back to users via email (Figure 3). The .html output provides a result summary of the input files at the sequence level, in which capital and lowercase letters represent surface and core residues, respectively. Predicted epitope residues are marked in red capital letters (Figure 3A), and the detailed antigenicity scores calculated for each residue in the query antigen are recorded in a score file (Figure 3B). Users can visualize the prediction results through the online JSmol plug-in or via downloadable local PDB files. Here, HIV-1 gp120 (PDB ID: 5IF0, chain G) is shown as an example to illustrate the predicted results of SEPPA 3.0. Residues are colored according to their predicted scores (Figure 3C). More information can be found on the HELP page of SEPPA 3.0.
DISCUSSION
In the recent decade, progress in conformational epitope prediction has been steady but slow. The room for improvement may lie in several aspects, including the accumulation of more epitope structures, appropriate classifying features, more refined models for specific datasets, and so on. In order to maintain top performance, SEPPA 3.0 updated to the largest dataset and the latest features, and designed two new classifiers for N-linked glycoprotein antigens. Our statistical analysis showed that N-glycosylation sites are significantly enriched in epitope areas rather than in non-epitope surface regions (paired t-test, P = 3.577e-14, 95% CI = [0.032, 0.051]), suggesting that antibodies seem to prefer certain surface regions containing N-glycosylation sites. In other words, compared to those in non-epitope surface areas, N-glycosylation sites in epitope regions might have a different layout of local context residues, as well as different physio-chemical fields resulting from neighboring residues. Thus, two types of parameters were designed to test the potential difference of the micro-environment near glycosylation sites between epitope areas and non-epitope surface regions. After statistical analysis, the unit patch of residue triangles (6) involving N-glycosylation sites and four amino acid indexes were selected to indicate the residue layout and the physio-chemical fields, respectively. Meanwhile, by introducing the shell structure to construct the glycosylation-related AAindexes, the layers of micro-environmental variation were fully considered and summarized for each residue. Despite the diverse features devised to date, prediction results can only be lifted by combining multiple of them, suggesting the inherent complexity of the epitope nature.
Another task for epitope prediction is to reduce the high false positive rate (FPR). Most available algorithms calculate epitope scores based on each individual residue. As structural epitope areas are surface patches recognized and bound by the CDR regions of antibodies, collective effects might play much more important roles than previously expected. Unlike other popular peers, SEPPA 3.0 designed parameters for the local structural layout and micro-environment around each target residue to describe the collective effects in epitope regions. More importantly, a calibration procedure was elaborated to further reduce the FPR based on neighboring influence. In fact, after calibration, the FPR of SEPPA 3.0 decreased significantly, from 0.35 to 0.26 for general protein antigens and from 0.34 to 0.25 for glycoprotein antigens.
For binary classification problems, the AUC value is often adopted to evaluate model performance across different thresholds. The other evaluation parameter we adopted is balanced accuracy, which fully considers the balance between sensitivity and specificity. Using these two parameters, the performance of different tools can be fairly compared on independent datasets. It is noted that structural information on conformational epitopes is accumulating rapidly. Coupled with new classifiers and FPR-reducing methods, it is expected that epitope prediction models will better serve biological needs and assist the development of potential vaccines and immune therapeutics.
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online. | 3,983 | 2019-05-22T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Transmutation of a Trans-series: The Gross-Witten-Wadia Phase Transition
We study the change in the resurgent asymptotic properties of a trans-series in two parameters, a coupling $g^2$ and a gauge index $N$, as a system passes through a large $N$ phase transition, using the universal example of the Gross-Witten-Wadia third-order phase transition in the unitary matrix model. This transition is well-studied in the immediate vicinity of the transition point, where it is characterized by a double-scaling limit Painlevé II equation, and also away from the transition point using the pre-string difference equation. Here we present a complementary analysis of the transition at all coupling and all finite N, in terms of a differential equation, using the explicit Tracy-Widom mapping of the Gross-Witten-Wadia partition function to a solution of a Painlevé III equation. This mapping provides a simple method to generate trans-series expansions in all parameter regimes, and to study their transmutation as the parameters are varied. For example, at any finite N the weak coupling expansion is divergent, with a non-perturbative trans-series completion; on the other hand, the strong coupling expansion is convergent, and yet there is still a non-perturbative trans-series completion. We show how the different instanton terms 'condense' at the transition point to match with the double-scaling limit trans-series. We also define a uniform large N strong-coupling expansion (a non-linear analogue of uniform WKB), which is much more precise than the conventional large N expansion through the transition region, and apply it to the evaluation of Wilson loops.
Fixed N; expansion in the coupling g^2:
  Weak coupling: • divergent (non-alternating) • trans-series completion • imaginary trans-series parameter
  Strong coupling: • convergent • trans-series completion • real trans-series parameter
Large N 't Hooft limit: N → ∞, t ≡ N g^2/2 fixed; expansion in 1/N:
  Weak coupling: • divergent (non-alternating) • trans-series completion • imaginary trans-series parameter
  Strong coupling: • divergent (alternating) • trans-series completion • real trans-series parameter
Double-scaling limit: N → ∞, t ∼ 1 + κ/N^{2/3}; expansion in κ:
  Weak coupling (κ ≤ 0): • divergent (non-alternating) • trans-series completion • imaginary trans-series parameter
  Strong coupling (κ ≥ 0): • divergent (alternating) • trans-series completion • real trans-series parameter
Table 1. The basic structure of the weak-coupling and strong-coupling expansions of the free energy ln Z(g^2, N) in three physical limits. At fixed N, the weak coupling expansion is divergent, with a trans-series completion having a pure imaginary trans-series parameter. In contrast, at fixed N the strong coupling expansion is convergent; nevertheless it has a non-perturbative trans-series completion as an instanton expansion: see Section 5.2. In the 't Hooft large N limit, both weak-coupling and strong-coupling expansions are divergent, with non-perturbative trans-series completions, but the form of the trans-series is quite different on either side of the Gross-Witten-Wadia phase transition, which occurs at 't Hooft parameter t = 1, where t ≡ N g^2/2. In the double-scaling limit, the trans-series match smoothly to those of the 't Hooft limit, but also to the fixed N expansions, in a way discussed in Section 6.
This paper addresses the question of how a trans-series in two parameters, g^2 and N, rearranges itself at weak and strong coupling, and at large and small values of the parameter N. The trans-series expression for the partition function (or the free energy, or the specific heat, or a Wilson loop) involves both perturbative and non-perturbative contributions, in a unified self-consistent form that encodes the full analytic structure. Often, perturbative coupling expansions may be convergent in one regime (weak or strong coupling), but divergent in the other [53]. Divergent expansions are naturally related to non-perturbative contributions beyond perturbation theory, but we can ask how such non-perturbative terms arise for a convergent expansion, as indeed occurs in the GWW model. This behavior becomes considerably richer when another parameter, N, is introduced. A generic phenomenon is the possible appearance of a phase transition in the N → ∞ limit at a particular critical value of a scaled "'t Hooft" parameter t = N g^2/2. The large N expansion is divergent on either side of the transition, where the trans-series has very different structures. The immediate vicinity of the transition point is described by a double-scaling limit (N → ∞ and t → t_c in a correlated way such that N is scaled out), in terms of a solution to the Painlevé II equation. This universal behavior is captured in a simple and explicit form by the Gross-Witten-Wadia unitary matrix model. In order to probe more finely the full rearrangement of the trans-series, we study this transition keeping the full N dependence. This has been done long ago by Mariño using the pre-string equation [33], and a detailed study of finite N Hermitian matrix models appeared in [41]. Here we use a different approach, based on the explicit connection of the GWW unitary matrix model to the Painlevé III equation for all N and all coupling.
The essential trans-series structure in various limits is sketched in Table 1, and in slightly more detail in equations (1.1)-(1.2) and (1.3)-(1.4). The weak coupling perturbative expansion, for fixed N, is divergent and becomes a trans-series when the associated non-perturbative terms are included. In contrast, the strong coupling expansion, for fixed N, is convergent. Nevertheless, it also has a non-perturbative completion as a trans-series, as shown in Section 5.2. When expressed as large N expansions, these are divergent at both weak and strong coupling, although the form of the expansion is radically different in the two coupling regimes, passing through the GWW phase transition. For example, in the large N 't Hooft limit, the free energy ln Z(t, N) has the trans-series structure of equations (1.1)-(1.2) (for details see Section 5 below). At weak coupling, the perturbative expansion is in powers of 1/N^2, while higher instanton terms have fluctuation expansions in powers of 1/N. All these perturbative expansions are factorially divergent and non-alternating in sign, and correspondingly the trans-series parameter appearing in the instanton sum has an imaginary part (in fact, with the appropriate boundary conditions it is pure imaginary). At strong coupling, there is only one perturbative term, and all the fluctuation expansions in the instanton sum are in powers of 1/N; the fluctuation coefficients are factorially divergent and alternating in sign. Correspondingly the trans-series parameter is real, with a value fixed by boundary conditions. These large N expansions break down at the transition point, because the actions S_weak(t) and S_strong(t) vanish there, and also because the prefactors P^{(k)}(t) diverge as t → 1. This can be improved by introducing a uniform large N expansion at strong coupling, expanding in instanton factors of $\mathrm{Ai}\!\left(\left(\tfrac{3}{2} N S_{\rm strong}(t)\right)^{2/3}\right)$ and $\mathrm{Ai}'\!\left(\left(\tfrac{3}{2} N S_{\rm strong}(t)\right)^{2/3}\right)$, instead of the usual exponential instanton factors $e^{-N S_{\rm strong}(t)}$. Then the prefactors and fluctuation terms are smooth all the way through the transition point, and provide a much better approximation all the way into the double-scaling region. This is a non-linear extension of the familiar uniform WKB approximation (see Sections 5.4 and 7). In the double-scaling limit, the parameters t and N are scaled together as t ∼ 1 + κ/N^{2/3}, which effectively zooms in on a narrow window, of width 1/N^{2/3}, around the transition point. The associated Hastings-McLeod solution of the Painlevé II equation has the trans-series structure of equations (1.3)-(1.4) [33] (see Section 6 below); notice the factor of √2 difference between the instanton exponents at strong and weak coupling. These double-scaling limit solutions match onto the weak-coupling and strong-coupling large N expansions in (1.1)-(1.2). But the expansions (1.3), (1.4) also blow up at the transition point, where κ → 0, because the exponential factors become of order 1 and the prefactors diverge. As above, this can be cured by a uniform approximation, expressing the double-scaling strong-coupling expansion as a trans-series in powers of the Airy functions Ai and Ai′. See for example the Wilson loop expressions in Section 7.
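To see why the Airy-type instanton factors stay smooth through the transition, note that with z = ((3/2) N S_strong(t))^{2/3} one has (2/3) z^{3/2} = N S_strong(t), so that far from the transition Ai(z) reproduces e^{−N S_strong} up to an algebraic prefactor, while at S_strong → 0 it remains finite. A minimal numerical illustration (a sketch; scipy is assumed, and the values of N and S are arbitrary):

```python
import numpy as np
from scipy.special import airy

def uniform_instanton_factor(N: float, S: float) -> float:
    """Uniform replacement for exp(-N S): the Airy function evaluated at
    z = ((3/2) N S)^(2/3), so that (2/3) z^(3/2) = N S."""
    z = (1.5 * N * S) ** (2.0 / 3.0)
    Ai, Aip, Bi, Bip = airy(z)
    return Ai

# Far from the transition, Ai(z) ~ exp(-N S) / (2 sqrt(pi) z^(1/4)); as S -> 0
# (i.e. t -> 1) the exponential factor tends to 1 while Ai(0) stays finite,
# which is what keeps the uniform expansion smooth through the transition.
for S in (0.5, 0.1, 1e-3):
    print(S, np.exp(-20 * S), uniform_instanton_factor(20, S))
```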
In Section 2 we describe the basic structure of the trans-series expansions for the partition function Z, using the resurgent trans-series expansions for the individual entries of the Toeplitz determinant representation of Z. While this reveals the essential form, it is not sufficiently explicit to study the large N limit. Sections 3 and 4 show how the partition function can be obtained from simpler functions, such as the expectation value ∆ ≡ ⟨det U⟩, which satisfy Painlevé III type equations with respect to the inverse coupling, with N appearing as a parameter in the equation. These Painlevé III equations make it straightforward to generate trans-series expansions in any parameter regime. Section 5 develops the detailed trans-series expansions for ⟨det U⟩, and the corresponding expansions for other physical quantities such as the partition function, the free energy and the specific heat. The connection to the double-scaling limit, in terms of the coalescence of the Painlevé III equation to the Painlevé II equation, is described in Section 6. Section 7 studies new uniform large N expansions for winding Wilson loops in the GWW model.
2 Towards trans-series expansions of the partition function
The Gross-Witten-Wadia (GWW) model [1][2][3] is a unitary matrix model whose partition function is defined by an integral over N × N unitary matrices,

Z(x, N) = ∫_{U(N)} dU exp[ (x/2) Tr(U + U†) ] .   (2.1)

This matrix integral can be evaluated as a Toeplitz determinant, for any N and any coupling g^2 [8,11,54,55],

Z(x, N) = det[ I_{j-k}(x) ]_{j,k=1,...,N} ,   (2.2)

where I_j(x) is the modified Bessel function, evaluated at twice the inverse coupling, x = 2/g^2. We will also use a "'t Hooft" parameter, t = N/x = N g^2/2, when studying the large N limit. The GWW model is known to have a third order phase transition at the critical value t_c = 1, in the N → ∞ limit [1][2][3].
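For orientation, the determinant representation (2.2) is straightforward to evaluate numerically. A minimal sketch (assuming the normalization conventions above; `scipy.special.iv` supplies the modified Bessel functions I_j(x)):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_j(x)

def gww_Z(x, N):
    """Z(x, N) as the N x N Toeplitz determinant det[I_{j-k}(x)] of (2.2)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.det(iv(j - k, x))

for N in (2, 3, 5):
    print(N, gww_Z(3.0, N))
```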
Other important functions considered in this paper include the expectation value ∆(x, N) ≡ ⟨det U⟩ [56], defined in (4.1) below. We first discuss the basic trans-series structure of the weak-coupling and strong-coupling expansions for Z(x, N) at fixed finite N, which follow from the large and small x expansions of the Bessel functions appearing in the determinant expression (2.2). These 'brute-force' expansions reveal much of the essential structure, but also illustrate that more sophisticated methods are needed in order to probe the full details, in particular in the strong-coupling regime.
2.1 Coupling expansions of the partition function Z(x, N), at fixed N
The partition function is a function of two variables: the inverse coupling x = 2/g^2 and the integer N of the U(N) group. We first consider expansions in the coupling, for fixed but arbitrary N.
2.1.1 Weak coupling expansion of the partition function Z(x, N), at fixed N

At weak coupling, x → +∞, we need the large argument asymptotics of the modified Bessel functions [58]. At fixed index j, the large x resurgent asymptotic expansion of the modified Bessel function involves two exponential terms, e^{+x} and e^{-x} [see (2.7)], with fluctuation coefficients α_n(j) defined in (2.8). Note that we need to keep both exponential terms in (2.7) in order to have direct access to the non-perturbative terms in the weak-coupling expansion. The fluctuation factors about each of the two exponentials in (2.7) are related by resurgence. First, they are clearly related since they only differ from one another by an alternating sign. Second, they exhibit the generic Berry-Howls [31,59] type of resurgence relation connecting the large order growth of the coefficients about one non-perturbative term to the low orders of the expansion coefficients about the other non-perturbative term. Indeed, at large order n (for a fixed but arbitrary Bessel index j) we have the remarkable relation (2.9): the large order coefficients are factorially divergent in the expansion order n, with leading and sub-leading coefficients expressed in terms of their own low order terms, α_0(j), α_1(j), α_2(j), ..., for any Bessel index j. This is a strong form of "self-resurgence". Furthermore, notice that when the Bessel index j is a half-odd-integer, the large-order expression (2.9) vanishes, consistent with the fact that in this case the asymptotic expansions in (2.7) truncate. Physically, this is an illustrative example of a cancellation due to interference between different saddle contributions [60,61]. We now consider how this kind of large-order/low-order resurgence behavior manifests itself in the partition function Z(x, N) in (2.2), which is a determinant involving Bessel functions I_j(x) with indices ranging from j = 0 to j = N - 1. The partition function thus consists structurally of a sum of products of individual Bessel functions, with varying indices. It is therefore clear that, since each Bessel function has two different exponential terms in its resurgent asymptotic expansion (2.7), the expanded determinant has an expansion involving (N + 1) different exponential terms; this gives the weak-coupling trans-series (2.10). The leading piece, corresponding to taking for each Bessel function the leading growing exponential factor, e^x/√(2πx), together with its first fluctuation correction, is easily deduced [8]: it is the expression Z_0(x, N) in (2.11), where G is the Barnes G-function [62]. For a given integer N there is only a finite number of "instanton" terms in the trans-series (2.10). Also notice that, because of the simple relation between the expansion coefficients for the fluctuations about the two different exponentials in (2.7), there is a symmetry between the fluctuation coefficients in the different instanton sectors [see (2.12)], involving a factor ξ_N which is 1 (i) when N is even (odd), respectively. By brute-force expansion of the Toeplitz determinant (2.2) in the weak-coupling limit, x → ∞, for various values of N, we can deduce the early coefficients in these expansions. The first few terms of the zero-instanton, one-instanton and two-instanton fluctuation terms are recorded in (2.13)-(2.15). For a given N, each of the fluctuation series in (2.12) is divergent. Resurgence relations appear in the large order (in n) behavior of the fluctuation expansion coefficients (for a given instanton sector k).
For example, the large order growth of the zero-instanton fluctuation series (2.13) is given in (2.16). In this expression for the large order behavior of the zero-instanton series (2.13) we recognize the low order coefficients of the one-instanton series (2.14). See Fig. 1. Similarly, the large-order growth of the fluctuation coefficients in the one-instanton sector is given (for N ≥ 3) by (2.17), with coefficients that appear in the low order expansion of the two-instanton series in (2.15). See Fig. 2. This pattern continues for higher instanton sectors. In fact, these results simply confirm well-known general results for the resurgence properties of solutions to (higher order) linear differential equations [30], following immediately from the fact that Z(x, N) satisfies a linear ODE of order (N + 1).
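The factorial divergence underlying these resurgence relations is easy to exhibit numerically at the level of the individual Bessel functions. A small sketch, assuming the standard fluctuation coefficients a_n(j) = ∏_{k=1}^{n} [4j^2 - (2k-1)^2]/(8^n n!) of the I_j(x) asymptotics (our assumption for the coefficients in (2.8)): the ratio |a_{n+1}/a_n| grows like n/2, i.e. factorial growth with Borel singularity at 2, the separation of the two exponentials e^{±x} in (2.7).

```python
from fractions import Fraction

def a(n, j):
    """a_n(j) = prod_{k=1}^{n} (4 j^2 - (2k-1)^2) / (8^n n!), computed exactly."""
    c = Fraction(1)
    for k in range(1, n + 1):
        c *= Fraction(4 * j * j - (2 * k - 1) ** 2, 8 * k)
    return c

j = 0
for n in (5, 10, 20, 40):
    ratio = abs(a(n + 1, j) / a(n, j))
    print(n, float(ratio), n / 2)  # ratio -> n/2: factorial growth, "action" 2
```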
To summarize so far: direct expansion of the Toeplitz determinant expression (2.2), using the full resurgent Bessel asymptotics in (2.7), explains in elementary terms the structural form of the weak-coupling trans-series in (2.10) and (2.12), and confirms its resurgence properties. However, this approach becomes less practical at large N , so we need to develop a more powerful approach in order to study the full trans-series structure at general N .
2.1.2 Strong coupling expansion of the partition function Z(x, N), at fixed N

The strong coupling limit is x → 0^+, so we use the convergent small argument expansion [63] of the modified Bessel functions in the determinant expression (2.2) [see (2.18)]. As noted long ago by Wadia [2], a direct strong-coupling expansion of the partition function determinant produces a leading factor whose first N terms coincide with the expansion of e^{x^2/4}. But there are further corrections to this, which we can write as an expansion in even powers of x, as in (2.19). In contrast to the divergent weak-coupling fluctuation expansions (2.13)-(2.15), the strong coupling fluctuation expansion in (2.19) is a convergent expansion, for a given N. Naively, this suggests the absence of further non-perturbative terms. However, there are in fact additional non-perturbative contributions to (2.19), which are difficult to deduce from this kind of brute-force determinant expansion for small x, with fixed N. The non-perturbative trans-series completion of this convergent strong-coupling expansion (2.19) is discussed below in Sections 3 and 5.
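Numerically, the leading strong-coupling statement is easy to check with the determinant representation: ln Z(x, N) - x^2/4 is tiny at small x, with the first deviation appearing only at order x^{2N+2}. A sketch, reusing the Toeplitz determinant construction from the earlier code block:

```python
import numpy as np
from scipy.special import iv

def ln_Z(x, N):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.slogdet(iv(j - k, x))[1]

N = 4
for x in (0.2, 0.4, 0.8):
    # deviation from the leading factor e^{x^2/4} starts at O(x^{2N+2})
    print(x, ln_Z(x, N) - x**2 / 4)
```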
2.2 Large N expansions of the partition function Z(t, N) in the 't Hooft limit

The large N expansion in the 't Hooft limit involves taking N → ∞, keeping fixed the 't Hooft coupling t = N g^2/2 = N/x. The form of the large N expansion depends on the magnitude of t relative to the critical value t_c = 1, and we will see that the large N trans-series structure changes radically through this GWW phase transition.
2.2.1 Large N expansion of Z(t, N) at weak coupling: t < 1

When N → ∞ and x → ∞, such that t = N/x < 1 is fixed, we can formally rearrange the weak coupling expansion (2.12) as follows. First, note that the overall factor can be rearranged using the well-known large N asymptotics of the Barnes function [62]. The zero-instanton fluctuation term in (2.13) becomes a series in inverse powers of N^2, as in (2.22). The one-instanton (and higher instanton) fluctuation terms that are obtained by re-arranging the weak-coupling fluctuation series in (2.14)-(2.15) have a different structure: they are expansions in all inverse powers of N (not N^2). Furthermore, this rearrangement procedure produces terms with factors of N in the numerator. For example, rewriting (2.14) in the small t and large N limit leads to (2.24). This structure is an artifact of changing the order of limits, as the expansions (2.14)-(2.15) were generated at large x with fixed N, rather than in a strict 't Hooft large N limit. The proper interpretation of these extra terms is that they are associated with the small t and large N expansion of large N instanton factors exp[-N S_weak(t)], as illustrated below in Section 9.1. (This is analogous to the situation in quantum mechanical models [44,68-72], such as the double-well potential or the Mathieu cosine potential, where the fluctuations in the perturbative sector, as functions of ℏ and (N + 1/2), with N the perturbative level number, can be re-arranged into a large (N + 1/2) expansion of the form in (2.22) in inverse powers of (N + 1/2)^2, while the fluctuations in the one-instanton sector re-arrange into a large (N + 1/2) series of the form in (2.24) in all inverse powers of (N + 1/2), but also with powers of (N + 1/2) in the numerators.) In order to exhibit this properly, we first need to develop a simpler and more direct approach that reveals the full trans-series structure of the large N expansion.
A similar rearrangement of the strong coupling expansion (2.19), in the large N and large t limits, also produces an unusual structure, shown in (2.25). Again we note that the fluctuation term involves positive powers of N. As in the weak-coupling case (2.24), these terms should be interpreted in terms of the expansion of a large N instanton factor, exp[-N S_strong(t)], here in the strong-coupling region. See Section 9.1 below for details.
3 Tracy-Widom mapping to the Painlevé III equation
The previous Section showed that some features of the trans-series structure of the partition function are simply inherited from the trans-series structure (2.7) of the individual Bessel functions appearing in the Toeplitz determinant (2.2), but that in the strong-coupling and large N limits there are additional contributions which are rather difficult to probe using this determinant form. In order to probe the large N limit in more detail we make use of classic results of Tracy and Widom from random matrix theory [11, 26-28, 73, 74], which relate the GWW partition function Z(x, N) to the solution of a particular integrable non-linear ODE, the Painlevé III equation. This connection with the Painlevé III equation holds for all N and for all coupling, and so it provides a simple and direct probe of the GWW partition function Z(x, N) for all N and all x, far from the phase transition as well as close to it. Given an explicit differential equation, it is then a straightforward exercise to generate trans-series expansions [30]. This differential equation approach is complementary to the difference equation approach (based on the pre-string equation) to trans-series developed for the GWW model in [33], and for other matrix models in [75]. The Painlevé III equation is a differential equation with respect to the inverse coupling parameter x, with the unitary group index N appearing as a parameter, so this can be used to define an analytic
continuation in N away from the positive integers, into the complex plane, which is necessary in order to understand fully the resurgent properties of the large N expansion. The Tracy-Widom mapping relation [27] can be expressed by defining s = x^2 and

E_N(s) ≡ e^{-s/4} Z(x, N) ,   s = x^2 ,   (3.1)

which effectively pulls out the leading exponential of the strong-coupling expansion (2.19). This quantity is of interest in random matrix theory in connection with the fluctuations of the distribution of the largest eigenvalue in the Gaussian unitary ensemble (GUE) [11]. Then, further defining the function σ_N(s) as the logarithmic derivative

σ_N(s) ≡ -s (d/ds) ln E_N(s) ,   (3.2)

one finds that σ_N(s) satisfies the Okamoto form of the Painlevé III equation, (3.3), in which the prime means differentiation with respect to s [11,27,74,76,77]. This Painlevé III equation is satisfied for all N, and for all coupling (i.e. for all s). So, we can generate trans-series expansions for σ_N(s) in any regime, and map them back to corresponding trans-series expansions for the partition function Z(x, N) using (3.1)-(3.2). From this Painlevé III connection, we can then "zoom in" on the vicinity of the GWW phase transition region at t_c = 1 using the known coalescence of the Painlevé III equation to the Painlevé II equation [78]. This coalescence limit is the well-known double-scaling limit [1,2,8], taking N → ∞ with the 't Hooft parameter t ≡ N/x scaled in a particular way with N (see Section 6 below) close to the critical value t_c = 1. In this case, as is also well known, the free energy is related to a particular solution (the Hastings-McLeod solution [79,80]) of the Painlevé II equation (with zero parameter).
Here we wish to study the full trans-series structure in both parameters, x and N (or t and N ), not just in the double-scaling region. The explicit Tracy-Widom mapping (3.1-3.2) of Z(x, N ) to the Painlevé III equation (3.3) provides a simple way of generating the various trans-series expansions in all parameter regions. These can then be used to study how the different forms of the trans-series expansions re-arrange themselves through the GWW phase transition.
3.1 Weak coupling expansion, for all N, from Painlevé III
Given the explicit differential equation (3.3) for σ_N(s), with N appearing as a parameter, it is a straightforward exercise to develop weak coupling trans-series expansions in terms of the variable s. Tracy and Widom [27] give the formal perturbative weak-coupling expansion for E_N(s) in the form (3.4). Recalling that s = x^2, we recognize the fluctuation factor there as the perturbative fluctuation factor Γ^{(0)}(x, N) in (2.12)-(2.13). Further terms in the weak-coupling trans-series can be generated by inserting a weak-coupling trans-series ansatz into the Painlevé III equation (3.3) for σ_N(s). The perturbative weak-coupling expansion is given in (3.5). The linearized form of (3.3) also reveals non-perturbative exponentially small corrections, based on all powers of the basic exponential factor e^{-2√s}, leading to the trans-series ansatz (3.6), where the weak-coupling trans-series parameter ξ_weak formally counts the "instanton order", and is set to 1 at the end to match the appropriate boundary conditions. Inserting this ansatz into the differential equation (3.3) and expanding in powers of ξ_weak produces recurrence relations which iteratively determine the fluctuation coefficients σ_n^{(k)}(N), along with the prefactors d_k(√s, N), for each instanton sector k. The trans-series (3.6) can then be mapped back to a trans-series expansion for the partition function Z(x, N) using the relations (3.1)-(3.2). This confirms the general weak-coupling trans-series structure in (2.10). For example, the leading s/4 term in (3.6) corresponds to the e^{-s/4} exponential factor in (3.1); the subleading -(1/2) N √s term in (3.6) generates the e^{Nx} factor in Z_0(x, N) in (2.11); and the next N^2/4 term in (3.6) generates the x^{-N^2/2} factor in Z_0(x, N) in (2.11). The N-dependent overall normalization factor in Z_0(x, N) is fixed by comparison with the explicit Toeplitz determinant expression (2.2).
3.2 Strong coupling expansion, for all N, from Painlevé III
Tracy and Widom [27] expressed the strong coupling (small s) expansion of σ_N(s) in the form (3.7) (here we add a few more terms), where the numerical coefficient C_N is defined in (3.8). As a cross-check, the reader can verify that the leading term, in the first parentheses in (3.7), agrees with the first terms in (2.19) when mapped back to the partition function using the relations (3.1)-(3.2).
We have found a new closed-form expression for the leading term of σ_N(s) in (3.7). We differentiate (3.3) with respect to s to find a simpler equation, (3.9) (the overall factor of σ_N may be dropped since the solution is monotonic for s ≥ 0). The linearized part of this equation can be solved in closed form, giving (3.10). Choosing the multiplicative constant to be 1, the small s expansion of (3.10) agrees with the first parentheses term in the strong-coupling expansion (3.7), to all orders. This makes it clear that this expansion is convergent, with an infinite radius of convergence. Figure 3 shows that at strong coupling (small s) the "linearized" solution in (3.10) is an excellent approximation. And yet, despite the fact that this expansion is convergent, there are extra non-perturbative terms in the strong-coupling trans-series (3.7). Indeed, the remaining terms in (3.7) should be understood as a strong-coupling trans-series for σ_N(s), of the form (3.11): a sum over strong-coupling instanton factors, multiplied by (convergent!) fluctuation factors f_strong^{(k)}(s, N) in each instanton sector. (A similar phenomenon occurs in the re-arrangement of the convergent strong-coupling expressions for gap edges of the Mathieu equation into trans-series expressions [42].) In (3.11), ξ_strong^σ is the strong-coupling trans-series parameter, whose value is determined by the boundary conditions to be ξ_strong^σ = 1. We will understand more details about this unusual strong-coupling trans-series structure (3.11) in Section 5 below, using properties of the Painlevé III equation.
3.3 Large N expansions in the 't Hooft limit, from Painlevé III
We can convert these weak- and strong-coupling expansions into large N expansions in the 't Hooft limit, as before. But a more direct way to generate such expansions is to rescale the Painlevé III equation (3.3) in terms of the 't Hooft parameter t, giving an equation in which the prime now means differentiation with respect to t. One can then develop trans-series expansions with an appropriate ansatz form. But we defer this step to Section 5, where we study the trans-series structure of an even simpler form of the Painlevé III equation, from which all these other trans-series can be derived in a much more explicit manner.
4 Mapping to a simpler form of the Painlevé III equation
It turns out to be significantly easier to work with another function,

∆(x, N) ≡ ⟨det U⟩ ,   (4.1)

the expectation value of det U [56]. The function ∆(x, N) is directly related to σ_N(s) [see Equation (4.5) below], and also to the partition function Z(x, N) [see Equation (4.7) below], but it is easier to work with ∆(x, N) because it satisfies a much simpler nonlinear equation, for all N. ∆(x, N) satisfies a differential-difference equation and a difference equation [8,56], equations (4.2) and (4.3). These two equations can be combined into a single nonlinear differential equation for ∆(x, N), which we refer to as Rossi's equation [8,56], equation (4.4). Note that this Rossi equation (4.4) is valid for all x and all N; i.e., for all N, and for all coupling. From results of Tracy and Widom [27], it can be shown that there is a simple relation between ∆(x, N) and σ_N(s), and therefore also between ∆(x, N) and the partition function Z(x, N). To make this relation completely explicit, one defines σ_N(s) in terms of ∆, for all N and s, as in (4.5) (recall the identification s = x^2). If ∆(x, N) satisfies the Rossi equation (4.4), then σ_N(s) defined in (4.5) satisfies Okamoto's Painlevé III equation (3.3), with the identification s = x^2. Therefore, a trans-series expansion for ∆(x, N) can immediately be converted into a trans-series expansion for σ_N(s), and hence for the partition function Z(x, N), for all N and for any coupling. The relation between ∆(x, N) and σ_N(s) can also be expressed in difference form, as in (4.6). Furthermore, ∆(x, N) is also related directly to the partition function Z(x, N), as in (4.7). Finally, there is yet another (related) connection to the Painlevé III equation: a change of variables maps the Rossi equation (4.4) to an equation for a function c(s, N) which satisfies the Painlevé V equation, with parameters α = 0, β = -N^2/2, γ = 1/2, δ = 0; for such parameters Painlevé V can be converted to Painlevé III [76,77].

The weak-coupling trans-series for ∆(x, N) could be generated directly from the determinant representation (4.1), analogous to the treatment of the partition function in Section 2.1.1. However, a much simpler method is to insert an appropriate trans-series ansatz into the Rossi differential equation (4.4) and match terms. We find a trans-series of the form (5.1). This weak-coupling trans-series structure for ∆(x, N) clearly matches the weak-coupling trans-series form in (2.10), which was deduced from the Toeplitz determinant expression (2.2). As was the case for the partition function in Section 2.1.1, we observe resurgence relations in the large order behavior of the fluctuation coefficients appearing in the weak-coupling trans-series (5.1) for ∆(x, N). For example, for a given N, the perturbative coefficients in (5.2) exhibit factorially divergent large order growth (see Figure 4), in which we recognize typical resurgent behavior: the low order coefficients A_n^{(1)}(N) of the one-instanton sector appear in the large-order growth of the perturbative coefficients.

The strong-coupling expansion (3.7) for σ_N(s) suggests a suitable strong-coupling trans-series form (3.11). However, for ∆(x, N) this is a much simpler question, because ∆(x, N) satisfies a relatively simple differential equation (4.4).
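A direct numerical handle on ∆(x, N) is useful in what follows. A sketch, assuming the standard shifted-Toeplitz representation ⟨det U⟩ = det[I_{j-k+1}(x)]/det[I_{j-k}(x)] (our assumed determinant form of (4.1)); as a consistency check we also verify the Desnanot-Jacobi determinant identity Z(x, N+1) Z(x, N-1)/Z(x, N)^2 = 1 - ∆(x, N)^2, one explicit relation of the advertised type between ∆ and the partition function:

```python
import numpy as np
from scipy.special import iv

def toeplitz_det(x, N, shift=0):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.det(iv(j - k + shift, x))

def delta(x, N):
    """Assumed determinant form of Delta(x, N) = <det U>."""
    return toeplitz_det(x, N, shift=1) / toeplitz_det(x, N)

x, N = 2.5, 4
lhs = toeplitz_det(x, N + 1) * toeplitz_det(x, N - 1) / toeplitz_det(x, N) ** 2
print(lhs, 1 - delta(x, N) ** 2)  # Desnanot-Jacobi: the two numbers agree
```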
In the strong coupling limit we notice from explicit computations that ∆(x, N) ∼ (x/2)^N/N!. Therefore, in this regime we can linearize the differential-difference and difference equations (4.2), (4.3), and we recognize the linearized equations as defining relations for the Bessel functions. Correspondingly, in this limit the nonlinear equation (4.4) linearizes to become the Bessel equation (5.7) [8,56]. Thus, the leading strong coupling behavior is given by

∆^{(1)}(x, N) = ξ_strong^∆ J_N(x) ,   (5.8)

where the overall multiplicative constant ξ_strong^∆ must be chosen to be ξ_strong^∆ = 1 in order to match a direct computation of the strong-coupling expansion of the determinant ratio in (4.1), as well as to match the strong-coupling expansion of the partition function in (2.19), via (3.1) and (3.2) [or via (4.7)]. Notice that already this leading-order term, ∆^{(1)}(x, N), in the strong-coupling expansion is a non-perturbative "one-instanton" contribution: there is no "perturbative" contribution to ∆(x, N). Using the relation (4.6) we deduce from (5.8) that the leading strong-coupling expression for σ_N(s) agrees precisely with the expression in (3.10), which was obtained by solving the linearized part of (3.9). We can then use the relation (4.7) to derive a closed-form expression, (5.9), for the leading strong-coupling contribution to ln Z(x, N), in terms of a generalized hypergeometric function of argument -x^2. The first "perturbative" term, x^2/4, is general, coming simply from the relation (3.1) between Z(x, N) and E_N(s) [recall that s ≡ x^2]. The leading correction term in (5.9) can be re-written as a sum, -∑_{l=1}^∞ l (J_{N+l}(x))^2, an expression which had been observed empirically to order x^{4N+2} in [8]. We now show that these higher power corrections should be understood as the non-perturbative trans-series completion of the strong-coupling expansion. The parameter ξ_strong^∆ also has an interpretation as a strong-coupling trans-series parameter, which effectively counts the instanton sectors, in addition to imposing the appropriate physical boundary condition. Therefore, returning to the full nonlinear Rossi equation (4.4), we find a strong-coupling (small x) trans-series ansatz of the form (5.10) (with ξ_strong^∆ set equal to 1 at the end). The trans-series (5.10) is a sum over all odd instanton powers of ξ_strong^∆. Matching these powers of ξ_strong^∆ converts Rossi's nonlinear equation (4.4) into a tower of linear equations, the first of which is the Bessel equation (5.7) satisfied by ∆^{(1)}(x, N), and the second of which is the inhomogeneous Bessel equation (5.11) for ∆^{(3)}(x, N). Since the homogeneous part of this equation is just the Bessel operator, we can immediately write the exact solution to (5.11) as the nested integral (5.13), where h^{(3)} is the source term in (5.12), and the factor πx/2 arises from the Bessel Wronskian. From (5.13) we deduce the (convergent) strong-coupling expansion (5.14) of this next correction term. Also notice that while ∆^{(1)}(x, N) is linear in the Bessel function J_N(x), the next term in the trans-series, ∆^{(3)}(x, N), is essentially cubic in J_N(x), multiplied by a convergent fluctuation factor. Figure 5 shows the x dependence, for N = 5, of the exact ∆(x, N) computed from (4.1), compared to the leading non-perturbative expression ∆^{(1)}(x, N) in (5.8), and also including the next contribution, ∆^{(3)}(x, N) in (5.13). The agreement is excellent, even in the transition region where x ≈ N.
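Both limits quoted above are easy to confirm with the determinant ratio from the previous sketch: ∆(x, N)/[(x/2)^N/N!] → 1 as x → 0, and the single Bessel function J_N(x) tracks the exact ∆(x, N) remarkably well even in the transition region x ≈ N (cf. Figure 5):

```python
import numpy as np
from scipy.special import iv, jv, factorial

def delta(x, N):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.det(iv(j - k + 1, x)) / np.linalg.det(iv(j - k, x))

N = 5
print(delta(0.1, N) / ((0.1 / 2) ** N / factorial(N)))  # -> 1 as x -> 0
for x in (1.0, 3.0, 5.0, 7.0):
    print(x, delta(x, N), jv(N, x))  # exact Delta vs the leading term J_N(x)
```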
Given the sub-leading term ∆^{(3)}(x, N) in (5.13), we can immediately deduce the associated sub-leading term (5.15) for ln Z(x, N). While the first non-perturbative correction in (5.9) is quadratic in J_N(x), the sub-leading term in (5.15) is quartic in J_N(x). Figure 6 shows the x dependence, for N = 5, of the exact ln Z(x, N) computed from (2.1), compared to the leading non-perturbative expression in (5.9), and also including the next contribution in (5.15). The agreement is excellent, even in the transition region where x ≈ N. Even though the correction ∆^{(3)}(x, N) in (5.14) looks like it contributes innocent (and convergent) further powers of x in the strong coupling expansion, this analysis shows that it is really the next term in a trans-series expansion (5.10). Similarly, the next term, ∆^{(5)}(x, N), in the strong-coupling trans-series (5.10) satisfies an inhomogeneous linear (Bessel) equation, with source term h^{(5)} given in (5.17). The homogeneous part involves the Bessel operator, so the solution has the same form as (5.13), but with the source term h^{(3)}(x', N) replaced by h^{(5)}(x', N) from (5.17). Given ∆^{(5)}(x, N), we can deduce the next contribution to the trans-series expansion for ln Z(x, N), as a nested integral. This structure clearly continues at higher orders: all higher terms of the strong-coupling trans-series (5.10) can be written as solutions to an inhomogeneous Bessel equation, with source terms expressed in terms of the previous trans-series solutions.
In general, this suggests expressing the strong-coupling trans-series in the form (5.18)-(5.19): odd powers of the Bessel function J_N(x), multiplied by prefactors. This structure is particularly well-suited for studying the large N 't Hooft limit at strong coupling, as discussed below in Section 5.3.2.
5.3 Large N expansions for ∆(t, N) in the 't Hooft limit
To generate the large N expansions in the 't Hooft limit, we rescale the Rossi equation (4.4) to express it in terms of the 't Hooft parameter t = N/x, giving (5.20). The GWW phase transition in the large N limit can be seen directly in the behavior of ∆(t, N) at large N. At weak coupling, the leading large N solution for ∆(t, N) is obtained algebraically from the vanishing of the N^2 terms in (5.20). The fluctuations about this zero-instanton part of the weak-coupling large N trans-series for ∆(t, N) are obtained by inserting an expansion in inverse powers of N, which gives a recursive solution for the coefficient functions; this produces the perturbative series (5.24). The non-perturbative large N instanton parts of the trans-series are then obtained by inserting into the differential equation (5.20) the weak-coupling trans-series ansatz (5.26). At leading order in ξ_weak^∆ we obtain equations which determine the weak-coupling action S_weak(t), given in (5.28), and the one-instanton prefactor. The fluctuation terms in the large N expansion in this one-instanton sector are given in (5.30). Note that the perturbative fluctuation series (5.24) is a series in inverse powers of N^2, while the one-instanton fluctuation series (5.30) [and also all higher order instanton fluctuation series] is a series in all inverse powers of N. This explains the structure found in Section 2.2.1 for the partition function Z(t, N). The divergent weak-coupling large N expansions in (5.24), (5.30) also display characteristic large order/low order resurgent behavior. For example, for a given t < 1, the perturbative coefficients in (5.24) grow factorially with the order n, and in this large-order behavior we recognize the low-order coefficients of the one-instanton fluctuations from (5.30). See Figure 8 for an illustrative plot. Similar relations connect the fluctuations in higher order instanton sectors.
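The emergence of the algebraic large N solution at weak coupling can be watched numerically. In the following sketch we compare the exact determinant ratio, at fixed t < 1 and growing N, with √(1 - t), which we take to be the solution of the leading algebraic equation (an assumption on our part, consistent with the double-scaling matching of Section 6.2):

```python
import numpy as np
from scipy.special import iv

def log_toeplitz_det(x, N, shift=0):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.slogdet(iv(j - k + shift, x))  # (sign, log|det|)

t = 0.5
for N in (5, 10, 20, 40):
    x = N / t  # 't Hooft scaling: t = N/x held fixed
    s1, l1 = log_toeplitz_det(x, N, shift=1)
    s0, l0 = log_toeplitz_det(x, N, shift=0)
    print(N, s1 * s0 * np.exp(l1 - l0), np.sqrt(1 - t))  # -> sqrt(1 - t)
```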
5.3.2 Large N expansions for ∆(t, N) at strong coupling: t > 1
The leading strong-coupling contribution to ∆(x, N) is (5.8), i.e. ∆^{(1)}(t, N) = J_N(N/t) [recall t ≡ N/x], as recorded in (5.32). Thus, the large N 't Hooft limit at strong coupling is based on the Debye expansion [83] of the Bessel function, (5.33), where we identify 1/t ↔ sech α, and where the polynomials U_n(p) are generated by a recursion relation (with U_0(p) ≡ 1). In the strong coupling regime t > 1, and from (5.33) we identify the strong-coupling large N action as (α - tanh α):

S_strong(t) = arccosh(t) - √(1 - 1/t^2) .   (5.35)

In fact, the Debye expansion (5.33) is derived by inserting a trans-series ansatz into the Bessel equation, and matching terms in order to determine the functional form of S_strong(t), the prefactor P(t, N), and a recursive expression for the fluctuation coefficients U_n(q(t)), where q(t) ≡ coth α = 1/√(1 - 1/t^2). We observe resurgent large order behavior in the coefficients U_n(coth α) ≡ U_n(q(t)) of the large N expansion (5.33): as n → ∞ (for any given t > 1) they diverge factorially, with Borel singularity set by 2S_strong(t), and with subleading corrections governed by U_1(q(t)) (2S_strong(t))/(n - 1) + U_2(q(t)) (2S_strong(t))^2/((n - 1)(n - 2)) + ..., as recorded in (5.37). Analogous to the situation of resurgence in the Bessel function asymptotics in (2.9), we observe the phenomenon of "self-resurgence", whereby the large-order growth (5.37) of the coefficients U_n(q(t)) in the large N Debye expansion (5.33) involves the same low-order coefficients, U_0(q(t)), U_1(q(t)), U_2(q(t)), ..., for any given t > 1. This is illustrated in Figure 9. The factor of 2 multiplying S_strong(t) in the large-order behavior (5.37) implies that the next term in the large N strong-coupling trans-series has exponential factor exp(-3N S_strong(t)). Matching terms in the differential equation (5.20), we find the result recorded in (5.38). Given the differential equation (5.20), it is straightforward to generate the fluctuations about higher instanton sectors, producing a strong-coupling large N trans-series in odd powers of the leading instanton factor, with prefactors and fluctuations at each instanton order. This is consistent with the strong coupling form in (5.18).
Another interesting comparison consists of plotting the N dependence of the first non-perturbative terms of the strong-coupling expansion (5.10), converted to expressions as functions of t and N, rather than x and N. In Figure 10 we plot the leading strong-coupling term ∆^{(1)}(t, N) = J_N(N/t) as a function of N, compared with the exact ∆(t, N) from (4.1), for a fixed strong-coupling value, t = 3. There is excellent agreement, even at N = 1, between the exact values (the black dots) and the function ∆^{(1)}(3, N) = J_N(N/3) (solid blue curve). But in fact, this agreement is not perfect: there is a tiny non-perturbative difference. Figure 11 compares the difference, ∆(3, N) - J_N(N/3), with the next strong-coupling trans-series term, ∆^{(3)}(3, N), from (5.13), with the conversion x = N/3. The deviation is very small, and again there is excellent agreement. This procedure can be continued to higher order in the strong-coupling large N trans-series expansion.

Figure 10. The N dependence of the exact ∆(3, N) (black dots) compared with ∆^{(1)}(3, N) = J_N(N/3) (solid blue curve). The agreement appears to be excellent, but in fact there are tiny non-perturbative corrections, shown in Figure 11, coming from the next term ∆^{(3)} in the strong-coupling trans-series (5.10).
5.4 Uniform Large N Strong Coupling Expansion for ∆(t, N)
The strong-coupling large N expansion in (5.38), based on the Debye expansion (5.33) of the Bessel function J_N(N/t), has the disadvantage that when truncated at any finite order it diverges at t = 1, which is the N = ∞ critical point. This is an unphysical divergence, associated with the Debye approximation itself, and it persists even at very large N. Physically, the full contribution to ∆(t, N) is finite and smooth at t = 1, at any instanton order. The fact that the strong coupling trans-series (5.18)-(5.19) is an expansion in powers of Bessel functions means that we can interpret the phase transition as the coalescence of saddle points. Therefore, this deficiency of the conventional large N expansion can be overcome by using the uniform large N expansion [84] for the Bessel functions, (5.41). This large N expansion is valid uniformly in t, for all t ∈ (0, ∞), including both small t (weak coupling, 0 < t ≤ 1) and large t (strong coupling, 1 ≤ t < ∞). In (5.41), ζ(z) is defined as [84]

(2/3) ζ(z)^{3/2} = ln[(1 + √(1 - z^2))/z] - √(1 - z^2) ,   0 < z ≤ 1 ,

and A_k(ζ) and B_k(ζ) are functions defined in [84], with A_0(ζ) ≡ 1. Note that ζ(z) is related to the strong coupling action (5.35) as (2/3) ζ(1/t)^{3/2} = S_strong(t). Also note that for large argument, Ai'(x) ∼ -√x Ai(x), so the second sum in (5.41) could be re-expanded and combined with the first sum, giving inverse odd powers of N in the large N expansion. Figure 12 illustrates the result: the leading uniform approximation is indistinguishable from the exact function, while the Debye large N expansion diverges at t = 1, and this plot is for the very small value N = 5. In fact, the agreement of the uniform large N approximation to the Bessel function is remarkably good even for N = 1 or N = 2. On the other hand, the divergence of the Debye expansion at t = 1 is responsible for an un-physical divergence of ∆(t, N) as t → 1^+, as plotted in Figure 13. Figure 13 also shows that the weak-coupling large N expansion for ∆(t, N) diverges as t → 1^- from the weak-coupling side, where we have plotted the leading large N contribution from the weak-coupling expression (5.24). These divergences persist at large N. In contrast, the uniform approximation is smooth through the transition, and even the leading term is remarkably accurate at t = 1.
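The leading term of the uniform approximation is simple to reproduce. A sketch for J_N(N/t) at strong coupling, computing ζ from the action via (2/3)ζ^{3/2} = S_strong(t), and contrasting with the leading Debye term, which blows up as t → 1^+ while the Airy form stays smooth (only the leading term of (5.41) is kept here; the A_k, B_k corrections are dropped):

```python
import numpy as np
from scipy.special import jv, airy

def S_strong(t):
    return np.arccosh(t) - np.sqrt(1 - 1 / t**2)

def J_uniform(N, t):
    """Leading uniform (Airy) approximation to J_N(N/t), for t >= 1."""
    z = 1 / t
    zeta = (1.5 * S_strong(t)) ** (2 / 3)
    Ai = airy(N ** (2 / 3) * zeta)[0]
    return (4 * zeta / (1 - z**2)) ** 0.25 * Ai / N ** (1 / 3)

def J_debye(N, t):
    """Leading Debye approximation: diverges as t -> 1+."""
    return np.exp(-N * S_strong(t)) / np.sqrt(2 * np.pi * N * np.sqrt(1 - 1 / t**2))

N = 5
for t in (1.01, 1.1, 1.5, 3.0):
    print(t, jv(N, N / t), J_uniform(N, t), J_debye(N, t))
```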
These un-physical divergences (at t = 1) of the conventional large N approximations for ∆(t, N) are inherited by the analogous conventional large N approximations for physical quantities such as the free energy, ln Z(t, N), and the specific heat, which are discussed below in Section 9. In Section 7 we show how the uniform large N approximation (5.41) can be applied to Wilson loop expectation values.
The coalescence of the Painlevé III equation to the Painlevé II equation is achieved by rescaling the variable z, the function w(z), and the parameters (α, β, γ, δ) and (a) in a correlated manner [78], as in (6.3). Then in the limit ε → 0, the Painlevé III equation (6.2) reduces to the Painlevé II equation (6.1). Since the Rossi equation (4.4) for ∆(x, N) is directly related to the Painlevé III equation, we can use this same coalescence rescaling (6.3) in the Rossi equation (4.4), in order to probe more finely the vicinity of the GWW phase transition at t_c = 1. We define a new variable κ and a new function W(κ) by (6.4), which effectively uses the N → ∞ limit as the ε → 0 limit. The parameter κ measures the deviation from the critical 't Hooft coupling, t_c = 1, rescaled by N^{-2/3}. Then it is straightforward to check that the t form (5.20) of Rossi's equation reduces as N → ∞ to the Painlevé II equation (6.1) with parameter a = 0, and with the argument κ and function W(κ) each rescaled by a factor of 2^{1/3}; see (6.5). This coalescence limit is precisely the double-scaling limit [1,2,11], in which we zoom in on the region close to the critical point t_c = 1, scaled by the factor 1/N^{2/3}. With our conventions here, the weak-coupling (t < 1) side of the double-scaling limit is κ < 0, and the strong-coupling (t > 1) side is κ > 0, with the GWW phase transition occurring at κ = 0. It is well known that the character of the Hastings-McLeod solution to Painlevé II changes at κ = 0 [33,79,80]. See for example the illustrative plots at [85].
6.2 Matching to the Double-scaling region from the weak-coupling side: t → 1^-

Approaching the GWW phase transition from the weak-coupling side, as t → 1 from below, in the double-scaling limit (6.4) with N → ∞, the leading term (5.24) of the large N trans-series for ∆(t, N) scales as in (6.6) (recall that κ < 0 at weak coupling). This agrees (see for example equation (4.1) in [26], or equation (4.45) in [33]) with the leading perturbative term in the weak-coupling trans-series expansion of the double-scaling limit solution to the Painlevé II equation (6.5). Similarly, the first non-perturbative term in the weak-coupling trans-series expansion (5.26) scales as in (6.7). Here we have used the fact that the weak-coupling action (5.28) has a simple scaling behavior, vanishing in the t → 1^- and N → ∞ double-scaling limit. Expression (6.7) agrees with the leading one-instanton term in the weak-coupling trans-series expansion of the (appropriately scaled) double-scaling solution to the Painlevé II equation: see for example equation (4.2) in [26], or equation (4.52) in [33].
In fact, the correspondence of these and all higher trans-series terms follows immediately from the coalescence reduction of the Rossi equation (5.20) to the Painlevé II equation (6.5). Thus, we see that the weak-coupling trans-series expansion (5.26) for ∆(t, N ), which is valid for all 't Hooft coupling t < 1, and for all N , matches smoothly to the weak-coupling trans-series expansion for the double-scaling solution W (κ) to the Painlevé II equation (6.5), in the weak-coupling κ < 0 region.
6.3 Matching to the Double-scaling region from the strong-coupling side: t → 1^+

On the strong-coupling side of the GWW phase transition, the leading term (5.8) of the trans-series expression for ∆(t, N) is a Bessel function: ∆(t, N) ∼ J_N(N/t). In the large N double-scaling limit (6.4) this reduces to a (re-scaled) Airy function due to the identity [86]

J_N(N - κ N^{1/3}) ∼ (2/N)^{1/3} Ai(2^{1/3} κ) ,   N → ∞ .

Allowing for the rescaling of the argument and function by 2^{1/3} in (6.5), this shows that the choice of multiplicative constant ξ_strong^∆ = 1 in (5.8) coincides precisely with the unit multiplicative constant that specifies the Hastings-McLeod solution of the standard Painlevé II equation [79,80,85]. This is also consistent with the large N expansion for ∆(t, N) in (5.38). Expanding the strong-coupling action S_strong(t) in (5.35) near the transition region t → 1^+ (note that κ > 0 in the strong-coupling region) gives

S_strong(t) ≈ (2√2/3) (t - 1)^{3/2} = (2√2/3) κ^{3/2}/N ,

so that N S_strong(t) → (2√2/3) κ^{3/2}. Therefore, the leading large N behavior in (5.38) reproduces the leading large κ behavior of the Hastings-McLeod solution to the scaled Painlevé II equation (6.5). Furthermore, the sub-leading strong-coupling trans-series term in (5.38) reproduces the next term in the trans-series expansion of the Hastings-McLeod solution [79,80] on the strong-coupling side. Thus the strong-coupling trans-series for ∆(t, N), valid for all N and all t > 1 on the strong-coupling side of the GWW phase transition, matches smoothly to the double-scaling limit form of the strong-coupling trans-series expansion for the solution to Painlevé II.
6.4 Instanton condensation
As we approach the GWW phase transition, either from the strong-coupling or the weak-coupling side, the large N trans-series expansions (5.26) and (5.38) break down numerically. This is in part because the relevant instanton factors, e^{-N S_strong(t)} and e^{-N S_weak(t)}, approach 1 and so are no longer exponentially small. Physically, this is the phenomenon of "instanton condensation" [16,87], whereby the one-instanton approximation is no longer a good approximation, and all instanton orders must be treated together. But the breakdown of the large N expansion at t = 1 also occurs because the prefactor terms diverge as t → 1. This means we must have a better treatment of the fluctuations about the instanton factors, as well as of the full effect of multi-instanton interactions at higher instanton order. In the case of the GWW model we can probe this condensation in great detail on the strong-coupling side of the transition, because we have simple closed-form expressions, in terms of Bessel functions, for the various orders of the instanton expansion. While (5.38) is indeed the structure of the strong-coupling large N trans-series as t → ∞, we also know the structure of the strong-coupling large N trans-series for all t ≥ 1. The leading one-instanton term (5.32) is exactly ∆^{(1)}(t, N) = J_N(N/t), while the next, three-instanton, term is exactly ∆^{(3)}(t, N) given in (5.13), with the replacement x → N/t. This is illustrated in Figure 14, where we see that the one-instanton and three-instanton terms, ∆^{(1)}(t, N) and ∆^{(3)}(t, N), give an excellent approximation to ∆(t, N), all the way down to the transition point at t = 1, and even below [compare with Figure 13, which shows the leading instanton terms]. In fact, the agreement at the transition point t = 1 is better than 3.4% with just the leading term, and better than 0.1% when the next term is included. See Figure 15. On the other hand, the corresponding terms from the usual large N expansion (5.38) diverge as we approach t = 1 from above, as shown in Figure 14. This condensation phenomenon can be probed even more precisely, very close to the transition point, using the double-scaling limit [79,80,88,89]. Analogous to the above interpretation for ∆(t, N), we can consider W(κ) in the vicinity of the transition point at κ = 0. The standard weak-coupling and strong-coupling trans-series expansions diverge at κ = 0 for the same reason: the exponential factors are no longer small, and the prefactors diverge. On the strong-coupling side we can uniformize this behavior by regarding the Airy functions Ai(κ) as the basic trans-series building blocks, much as the Bessel functions J_N(N/t) are the basic trans-series building blocks at finite N [see equation (5.18)].
In order to facilitate comparison with numerical results, we consider the Painlevé II equation with its conventional normalization (6.1), and with a = 0. Then, we write a trans-series expansion in the strong-coupling (χ > 0) region, (6.14), where the physical Hastings-McLeod solution has the trans-series parameter ξ_strong^W = 1. Expanding in instanton order, we find the explicit low-order terms recorded in (6.15); the remaining terms can be generated by iterating the integral equation form of the Painlevé II equation. In Figure 16 we plot the contributions of the first two instanton terms, W^{(1)}(χ) and W^{(3)}(χ) in (6.15). Taking into account the first two instanton orders of the trans-series (6.14) yields an approximation for W(0) which agrees to better than 0.1% precision with the actual numerical Hastings-McLeod value [88,89], W(0) = 0.367061552... . (Here we have used the integrals ∫_0^∞ Ai^4(χ) dχ = ln 3/(24π^2) and ∫_0^∞ Bi(χ) Ai^3(χ) dχ = 1/(24π).) Similarly, the derivative at χ = 0 is approximated well by the first two terms of the uniformized instanton (trans-series) expansion (6.14), compared to the precise numerical value W'(0) = -0.295372105... [88,89].
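The quoted values are easy to reproduce by direct integration: the Hastings-McLeod solution of w'' = 2w^3 + χw can be obtained by integrating backwards from large χ with Airy initial data, which is numerically stable in this direction since the growing Bi-type error mode decays, relative to the solution, as χ decreases. A sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import airy

def pii(chi, y):  # Painlevé II with a = 0: w'' = 2 w^3 + chi w
    w, wp = y
    return [wp, 2 * w**3 + chi * w]

chi0 = 8.0  # start far out, where the Hastings-McLeod solution is Airy-like
Ai, Aip, _, _ = airy(chi0)
sol = solve_ivp(pii, [chi0, 0.0], [Ai, Aip], rtol=1e-12, atol=1e-14,
                dense_output=True)
w0, wp0 = sol.sol(0.0)
print(w0, wp0)  # ~ 0.367061552..., ~ -0.295372105...
```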
7 Uniform Large N Expansion for Wilson Loops
Wilson loop expectation values in the GWW model can also be expressed in terms of the function ∆(x, N) ≡ ⟨det U⟩ defined in (4.1). For example, the (normalized) one-winding Wilson loop W_1 can be written in terms of ∆, as in (7.1)-(7.3). Here we have used the relations (3.1)-(3.2) between the partition function Z(x, N) and σ_N, and the relation (4.6) between σ_N and ∆(x, N). This means we can immediately deduce trans-series expansions for the Wilson loop from the trans-series expansions for ∆(x, N) described in Section 5. For example, using the leading instanton strong coupling expression ∆(x, N) ≈ J_N(x) in (5.8), we deduce a simple strong coupling approximation, (7.4), for the Wilson loop [recall t = g^2 N/2 = N/x]. The first term is the familiar perturbative expression at strong coupling [recall that the strong coupling large N expansion truncates after just one term]. The second term is the full leading instanton contribution, with all fluctuation factors resummed to all orders. Expanding the Bessel functions according to the Debye expansion (5.33) leads to the usual leading instanton expression (7.5) at large N (see [51,52]), which diverges at t = 1 when truncated at any fluctuation order. On the other hand, the uniform large N approximation introduced in Section 5.4 leads to a uniform large N approximation, (7.6), for W_1(t, N) [we use also the uniform approximation for J_N(N/t) from [84]]. This uniform approximation is indistinguishable from the Bessel expression (7.4) even for very small N, and both are in excellent agreement with the exact result, even through the transition point. This is illustrated in Figure 17. Furthermore, it is a non-trivial check that the uniform approximation (7.6) reduces to the Debye large N expansion in (7.5) if we expand the Airy functions at large N, in which case all the ζ dependence in the fluctuation terms cancels.
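The strong-coupling approximation for W_1 can be compared against an exact evaluation. A sketch, assuming the relation W_1 = (1/N) ∂_x ln Z for the action normalization of (2.1) (an assumption on our part, checked implicitly below against the known strong-coupling value W_1 → 1/(2t)):

```python
import numpy as np
from scipy.special import iv

def ln_Z(x, N):
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.slogdet(iv(j - k, x))[1]

def W1(x, N, h=1e-5):
    """One-winding Wilson loop via a central finite difference of ln Z."""
    return (ln_Z(x + h, N) - ln_Z(x - h, N)) / (2 * h * N)

N = 5
for t in (1.5, 2.0, 3.0):
    x = N / t
    print(t, W1(x, N), 1 / (2 * t))  # exact vs perturbative strong-coupling term
```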
This analysis of Wilson loops in terms of the function ∆(t, N) extends also to higher winding Wilson loops. It is interesting to compare with the recent large N instanton computations in [51,52] for these higher winding Wilson loops. Using the results of [57], higher winding Wilson loops are related to W_1. For example (see [52]), the double-winding Wilson loop W_2 can be expressed in terms of W_1, and therefore, via (7.3), W_2 can also be expressed in terms of ∆.

Figure 18. The solid blue curve shows the Wilson loop W_2(t, N) as a function of 't Hooft coupling t, with N = 5. The red solid curve is the leading strong coupling approximation in (7.4), from which the uniform strong coupling approximation is indistinguishable. The grey and black dashed lines show the leading terms of the conventional large N approximations for the weak and strong coupling regimes, respectively. Note that the conventional large N approximations diverge at t = 1, while the uniform approximation is smooth and in excellent agreement with the exact result, even at this small value of N.
This is the full leading instanton contribution at strong coupling, with all fluctuation factors resummed in closed form. It is plotted in Figure 18, showing excellent agreement, in contrast to the standard large N approximations, which diverge at the transition point. The leading uniform large N approximation, using the uniform approximation for the Bessel functions in the two-instanton contribution (7.9), is indistinguishable from the Bessel function expression. Similarly, all higher W_p(x, N) may be expressed in terms of ∆(x, N), and the trans-series expressions for ∆(x, N) in Section 5 produce corresponding trans-series expressions for the winding Wilson loops. The Wilson loop W_4(x, N) has been studied in [52], where it is expressed in terms of W_1(x, N) as in (7.10). Using the leading strong coupling instanton approximation (7.4) for W_1, we obtain the leading instanton approximation for W_4, which can be written in terms of the 't Hooft coupling as in (7.11). These expressions include all fluctuations in this instanton sector, resummed to all orders. In Figure 19 we plot W_4(x, N) for N = 5, comparing the exact result, from equations (7.10) and (7.3), with the leading instanton contribution (7.11). We also plot the expression including the effect of the next instanton term, ∆^{(3)}(x, N), from the strong-coupling trans-series (5.10) for ∆(x, N). The black dotted and dashed curves show the conventional large N approximation, including just the leading fluctuation term, or also including five fluctuation terms: see expression (4.8) in [52]. The approximation (7.11), and its uniform large N version, are much better, all the way to the transition point x = N. We can also compare directly to Figure 1 of [52]: in Figure 20 we show a log plot of the Wilson loop W_4(x, N) with its exponential prefactor removed, Ŵ_4(x, N) ≡ 4πN^2 e^{2N S_strong} W_4(x, N). Notice that including the higher 1/N corrections improves the agreement at strong coupling (small x/N) but yields worse agreement near the transition point x = N. On the other hand, using just the leading instanton expression (7.4) for W_1(x, N) in the expression (7.10) for W_4(x, N) produces much better agreement all the way to x = N. The leading large N uniform approximation is indistinguishable, already for N = 5, with even better accuracy at larger N.

Figure 20. The solid blue curve shows the exact numerically calculated Wilson loop Ŵ_4 ≡ 4πN^2 e^{2N S_strong} W_4 as a function of the inverse 't Hooft coupling 1/t ≡ x/N. Compare with Figure 1 of [52]. The large dashed red curve shows Ŵ_4 calculated from the leading approximation for ∆ in (5.10) (∆ ≈ ∆^{(1)}). This leading approximation is excellent all the way to the transition point at x = N. The black curves with medium and small dashes show Ŵ_4 calculated from the one-instanton series in the large N strong coupling expansion (5.38), with just the leading term and with the first five terms, respectively (see, for example, expression (4.8) in [52]). The latter agrees well at very small x (very strong coupling) but diverges well before the transition point. All quantities are calculated at N = 5.
8 Conclusions
In this paper we have studied in detail how the trans-series structure in the Gross-Witten-Wadia (GWW) unitary matrix model changes as the coupling g^2 and the gauge index N are varied. The trans-series has a very different form in the different parameter regions, and in different limits. The appearance of trans-series expansions is often thought to be a 'consequence' of divergent perturbative expansions. However, the fixed N strong coupling expansion in the GWW model is convergent, and it still has a non-perturbative instanton completion as a trans-series. The fact that resurgent relations, connecting the fluctuation expansions about different non-perturbative sectors in the trans-series, appear at finite N is a simple consequence of the fact that the partition function Z(x, N) (where x = 2/g^2 is the inverse coupling variable) satisfies an (N + 1)-th order linear differential equation with respect to x. It is, nevertheless, instructive to see how these resurgent relations work in practice. To probe the transition to the large N limit we have used the Tracy-Widom mapping to connect the GWW model partition function to solutions of simple nonlinear ordinary differential equations with respect to x, with N appearing as a parameter in the equation. These nonlinear equations are of Painlevé III type, and the well-known double-scaling limit of the GWW model is the coalescence of Painlevé III to Painlevé II. This is a non-linear analogue of the reduction of the Bessel equation to the Airy equation. Having an explicit differential equation permits straightforward algorithmic generation of trans-series expansions. The large N trans-series have a different form, and different actions, in the weak coupling and strong coupling phases, but in each phase we again observe resurgent relations between different non-perturbative sectors. The Painlevé III differential equation leads to a uniform large N approximation which connects accurately and smoothly to the exact solution even in the immediate vicinity of the transition point. We explored implications of this uniform large N approximation for winding Wilson loops. The uniform large N approximation identifies the GWW phase transition as the merging of saddle points, and it would be interesting to understand this in more physical detail in terms of semiclassical saddle configurations, especially in associated finite N gauge theories.
9 Appendix: Large N trans-series expansions for physical quantities

In this Appendix we record trans-series expansions for the partition function, the free energy and the specific heat, which can all be derived directly from the trans-series expansions for the function ∆(t, N) studied in Section 5.

9.1 Large N trans-series expansions for the partition function

The weak-coupling large N trans-series for the partition function is recorded in (9.1). Expanding the first perturbative term at weak coupling (small t), we obtain precisely the large N expansion in (2.22). To analyze the first non-perturbative term in (9.1) we use the small t expansion of S_weak(t) to find the expression (9.3) (suppressing the obvious prefactor terms). Compare this with the first instanton term in (2.12), (2.14), which was found from the large x expansion at fixed N, again suppressing the obvious prefactor terms; this is recorded in (9.4). Comparing the expressions (9.3) and (9.4), we see that the -1/(12N) term in (9.3) arises in (9.4) from the large N expansion of the factorial; the -Nt/4 term in (9.4) is generated in (9.3) from the e^{-Nt/4} factor coming from expanding S_weak(t) at small t; the 7t/8 term in (9.4) is generated by the prefactor in (9.3); and the -3t/(4N) term in (9.3) is generated from the fluctuation factor in (9.4). The large N strong-coupling expansion for Z(t, N) can be derived from the corresponding large N strong-coupling expansion for ∆(t, N) in (5.38); the result is recorded in (9.6). Here we have used the fact that the strong-coupling action (5.35) behaves at large 't Hooft coupling as

S_strong(t) ∼ -1 + ln(2t) + 1/(4t^2) + 1/(32t^4) + ... ,   t → ∞ .   (9.7)

The expression (9.6) should be compared with the strong coupling result from (2.25) in the large N limit. Once again we see that these leading terms match, coming from very different origins: from the expansion of the action, from prefactors, from combinatorial large N factors, or from fluctuation terms.
9.2 Large N trans-series expansions for the free energy
The free energy per degree of freedom is F(t, N) = (1/N^2) ln Z(t, N), and its large N trans-series expansions at weak and strong coupling follow directly from those of the partition function. Structurally, these have the same form as the strong coupling large N trans-series expansions for ∆(t, N) in (5.38). Recall that the factor of 2 in the exponent in (9.12) explains why the strong coupling action in (5.35) differs by a factor of 2 from the strong-coupling action for the partition function and free energy [33].
9.3 Large N trans-series expansions for the specific heat
The specific heat is given by [1,8]

C = (x^2/2) ∂^2 F/∂x^2 .   (9.14)

This can be expressed in terms of the function σ_N defined in Section 3. And since σ_N is expressed in terms of ∆(x, N) via the explicit mapping (4.5), we can use the trans-series expansions for ∆ to write trans-series expressions for the specific heat. The leading strong coupling expression, following from the leading strong coupling approximation ∆(x, N) ≈ J_N(x), resums, in closed form, all fluctuations about the two-instanton sector. Further, it can be converted to a uniform large N expression using uniform approximations for the Bessel functions.
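A finite-difference evaluation of (9.14) directly from the Toeplitz determinant makes the smooth finite N behavior of the specific heat through t = 1 easy to see. A sketch, assuming the free energy per degree of freedom F = ln Z/N^2 (our reading of the definition above):

```python
import numpy as np
from scipy.special import iv

def F(x, N):
    """Free energy per degree of freedom, assuming F = ln Z / N^2."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.linalg.slogdet(iv(j - k, x))[1] / N**2

def specific_heat(x, N, h=1e-3):
    d2F = (F(x + h, N) - 2 * F(x, N) + F(x - h, N)) / h**2
    return 0.5 * x**2 * d2F  # equation (9.14)

N = 5
for t in (0.5, 1.0, 2.0):
    x = N / t
    print(t, specific_heat(x, N))  # smooth at finite N through t = 1
```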
"Physics",
"Mathematics"
] |
Precise yield of high-energy photons from Higgsino dark matter annihilation
The impact of electroweak Sudakov logarithms on the endpoint of the photon spectrum for wino dark matter annihilation was studied intensively over the last several years. In this work, we extend these results to Higgsino dark matter χ_1^0. We achieve NLL′ resummation accuracy for narrow and intermediate spectral energy resolutions, of order m_W^2/m_χ and m_W, respectively. This is the most accurate prediction to date for the yield of high-energy γ-rays from χ_1^0 χ_1^0 → γ + X annihilation for the energy resolutions realized by current and next-generation telescopes. We also discuss for the first time the effect of power corrections in m_W/m_χ in this context and argue why they are not sizeable.
Introduction
In recent years much attention has been devoted to understanding the nature of the dark matter (DM) in the Universe. This type of matter is not accounted for in the Standard Model (SM) of particle physics, and its existence is supported by compelling evidence from galactic [1] to cosmological scales [2]. Due to their accidental relationship with the electroweak scale, DM candidates known as weakly interacting massive particles (WIMPs) [3] have received most of the attention. Negative results from direct and indirect searches [4,5] for these WIMPs, especially in the electroweak (EW) mass range, suggest that more attention should be paid to less explored avenues such as multi-TeV WIMP realizations.
A benchmark example for such heavy candidates is the Higgsino-like neutralino in supersymmetric extensions of the SM [6] and the limit of the pure Higgsino DM model, which extends the SM by a single fermionic SU(2) doublet. Its mass has to be approximately 1 TeV if its cosmological abundance is to be explained by a freeze-out process. As a consequence of this large mass and the fact that in this scenario the Higgsino is then the lightest supersymmetric particle (LSP), the Higgsino DM model does not quite address the naturalness problem. The attractiveness of the model is its simplicity. However, direct searches for TeV-mass Higgsinos with the LHC are almost impossible, and their scattering cross section with nucleons, $\sigma_{\text{SI}} \sim 10^{-48}\,\text{cm}^2$ [7,8], is below the so-called neutrino floor in direct DM detection experiments.
Nevertheless, DM models of the Higgsino type can be discovered indirectly if a line signature in the gamma-ray spectrum from, for example, the innermost region of our Galaxy, is observed. The quasi-monochromaticity of this part of the spectrum is a consequence of the kinematics of pair annihilation of non-relativistic WIMPs, a key prediction of many WIMP models including the Higgsino. This signal is particularly interesting because disentangling it from the uncertain astrophysical foregrounds is easier than for other types of spectra. Moreover, there is no known astrophysical mechanism that features all the aforementioned properties, rendering the observation of such spectral-line signals a smoking-gun discovery of annihilating DM.
Searches for this type of signature have already been performed by existing gamma-ray telescopes such as Fermi-LAT [9], H.E.S.S. [10] and MAGIC [11]. Particularly promising is the next-generation Cherenkov Telescope Array (CTA) [12], expected to improve the existing limits on spectral lines by one order of magnitude for TeV-scale gamma-ray energies. In these searches, though, it is assumed that the endpoint gamma-ray spectrum from DM annihilation is dominated by the fully-exclusive $\chi_1^0\chi_1^0 \to \gamma\gamma$ process. Under this assumption, the endpoint signal would be perfectly monochromatic at the gamma-ray energy $E_\gamma = m_\chi$, as required by kinematics.
Even though the monochromatic approximation might be a reasonable assumption for some (light) WIMP models, it is not for generic models. The unavoidable finite energy resolution and the fact that only a single photon is observed at a time imply that the proper observable is the flux that originates from the $\chi_1^0\chi_1^0 \to \gamma + X$ process, where $X$ is any type of (undetected) primary and secondary radiation that can be emitted in association with the observed gamma ray, and where the energy of the detected photon is close to the maximal value $m_\chi$ within the energy resolution.
Theoretical predictions of spectral-line signals and the semi-inclusive photon spectrum near maximal photon energy for heavy WIMPs are not as straightforward as it might appear, since the perturbative expansion in the EW coupling constants breaks down. The effects giving rise to this are twofold. First, one has to take into account the so-called Sommerfeld effect, which is generated by the electroweak Yukawa force acting on the DM particles prior to their annihilation [13][14][15][16]. Secondly, for heavy DM annihilation into energetic particles, electroweak Sudakov (double) logarithms $\mathcal{O}\big(\alpha_2 \ln^2(m_\chi/m_W)\big)$ are large and need to be resummed to all orders in the coupling constant [17][18][19][20][21][22][23]. The treatment of the Sommerfeld effect for Higgsino DM is well known. In this work, we focus on the resummation of large logarithmic corrections in the annihilation process $\chi_1^0\chi_1^0 \to \gamma + X$ for Higgsino DM, using soft collinear effective theory (SCET).
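To illustrate the size of these corrections (a schematic estimate of ours, not a formula from the papers cited above), the double logarithms exponentiate into a Sudakov-type factor
$$\sigma \sim \sigma_{\text{tree}}\, \exp\Big[ -\frac{\alpha_2}{4\pi}\, A \ln^2\frac{4 m_\chi^2}{m_W^2} + \mathcal{O}(\alpha_2 \ln) \Big],$$
with $A$ a process-dependent coefficient of order one. For $m_\chi = 1$ TeV one finds $\frac{\alpha_2}{4\pi}\ln^2(4m_\chi^2/m_W^2) \approx 0.1$, so these terms are far too large to be treated at fixed order.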
In recent papers we [22,23] and others [17][18][19][20][21] discussed these effects for the pure electroweak Majorana triplet model (wino-like neutralino) [24]. In those papers we established factorization formulas and obtained precise predictions for the photon yield for two different telescope energy-resolution regimes by using SCET methods [25][26][27][28]. We also provided the necessary functions for the evaluation of the factorization formulas at next-to-leading logarithmic prime (NLL′) accuracy.
Here we extend these results to the case of Higgsino DM, which is slightly more involved due to the non-zero hypercharge of the Higgsino. As already mentioned, if the thermally produced Higgsino is to make up all the DM in the Universe, its mass should be approximately 1 TeV. This mass is indeed much larger than the masses of the EW gauge bosons, and resummation of large Sudakov logs is necessary. Nevertheless, it is three times smaller than the mass of the thermally produced wino DM. As a consequence, $m_W/m_\chi$ mass corrections that are systematically neglected in the SCET leading-power resummations developed up to now might be as large as (or even larger than) the percent-level accuracy of NLL′ resummation. We thus include for the first time a quantitative discussion of the next-to-leading-power mass correction for Higgsino DM (with direct applicability to wino DM as well).
The resummation of large Sudakov logarithms is achieved by breaking down the annihilation cross section into a few calculable factors representing the different scales and momentum configurations that build up the large logarithms. In order to prove such factorization formulas, some assumptions on the resolution $E_\gamma^{\text{res}}$ of the photon energy measurement are necessary. In [23] we distinguished three parametrically different resolution regimes (1.1).1 Given the projected energy resolution of the upcoming CTA experiment [12] and the thermal Higgsino DM mass $m_\chi \approx 1$ TeV, it is clear that the narrow and intermediate resolution regimes defined above are the most appropriate when computing the resummed cross section (see figure 1 in [23]). The article is organized as follows. In section 2 we briefly review the Higgsino model, summarize the SCET annihilation operator basis, and give the factorization formula for the intermediate energy resolution. In section 3 we consider mass corrections. In section 4 we present the numerical calculation of the semi-inclusive annihilation cross section, after which we conclude in section 5. The main text is kept short and focuses on non-technical results. In a series of appendices we give the potential used for the computation of the Sommerfeld effect, provide details of the factorization formula in the intermediate and narrow energy resolution regimes, and summarize the complete NLO expressions for all relevant functions that enter the factorization formula.

1 The theoretical framework for the "narrow" regime also applies to $E_\gamma^{\text{res}} \ll m_W^2/m_\chi$, including the classic $\gamma\gamma$ line with $E_\gamma^{\text{res}} = 0$.
2 Endpoint spectrum of $\chi_1^0\chi_1^0 \to \gamma + X$

The Higgsino model consists of the simple extension of the SM by an EW vector-like fermion SU(2) doublet with hypercharge $Y = 1/2$, such that one component is electromagnetically neutral after EW symmetry breaking (EWSB) [24,29]. The Lagrangian of the model is given in (2.1). With our conventions (C.14) the choice of EW charges is such that the lower component of the multiplet is neutral, that is, in terms of components $\chi = (\chi^+, \chi_D^0)$, where the superscript denotes the electric charge. The mass eigenstates after EWSB are two self-conjugate (Majorana) particles ($\chi_1^0$ and $\chi_2^0$) defined in such a way that $\chi_D^0 = (\chi_1^0 + i\chi_2^0)/\sqrt{2}$, and an electromagnetically charged Dirac (chargino) particle $\chi^+$.
A higher-dimension effective operator, for example $\mathcal{L}_{\text{dim-5}} = \frac{1}{\Lambda}(\chi\Phi)\, i\gamma_5\, (\Phi^\dagger \chi)$, where $\Phi$ is the standard Higgs doublet, is necessary to provide the $\chi_2^0$ particle with a slightly (at least $\mathcal{O}(100)$ keV) larger mass than the $\chi_1^0$ particle. Otherwise, Z-boson mediated tree-level couplings of the Higgsinos to the light quarks would induce a large nucleon-DM cross section already ruled out by direct-detection experiments. On the other hand, the mass splitting $\delta m = m_{\chi^+} - m_{\chi_1^0}$ between the charged and neutral component of the Higgsino doublet is induced radiatively after EWSB, as given at the one-loop order in [30]. Due to the Sommerfeld effect, the annihilation cross section is very sensitive to small variations of the mass splittings. However, the resummation of the Sudakov logarithms is insensitive to these variations. For instance, we checked that the effect of NLL resummation changes by less than 1% when the mass splitting of the neutral particles is varied by a factor of 10. Keeping this in mind, we will fix the chargino-to-Higgsino mass splitting to $\delta m = 355$ MeV, and further adopt $\delta m_N = m_{\chi_2^0} - m_{\chi_1^0} = 20$ MeV for the mass splitting of the two neutral fermions. This value is small enough that dimension-5 operators responsible for it do not modify $\delta m$ appreciably but, at the same time, is large enough that, for the range of DM masses considered here, $\chi_2^0\chi_2^0$ cannot be produced for typical Galactic DM velocities $v \sim 10^{-3}$. We refer to [31] for a comprehensive discussion of the rich Higgsino mass splitting phenomenology.
The derivation of the factorization of the photon spectrum in the $\gamma + X$ final state from DM annihilation near maximal photon energy $E_\gamma = m_\chi$ has been thoroughly discussed in section 2.1 of [23], and holds for generic weakly interacting DM. For the sake of brevity, we will therefore provide here only the model-specific expressions for the Higgsino and refer the reader to [23] for theoretical background information.
Since the Higgsino multiplet has non-vanishing hypercharge, we must extend the basis ($O_{1-3}$) of short-distance annihilation operators with three more operators ($O_{4-6}$). For the annihilation process investigated here, only four out of the six operators are relevant (see appendix C.1 for the complete set of operators),3 where $B_\mu$ is the SCET building block for the U(1)$_Y$ gauge field, and the spin-singlet matrix $\Gamma_{\mu\nu}$ is defined in [23]. In accordance with [23] the non-relativistic fields are represented in an unbroken-index notation. However, unlike in the wino case, in the unbroken Higgsino multiplet particles are not their own antiparticles. Thus, we adopt the nomenclature of [32], where the $\eta_v$ fields represent particles and $\zeta_v$ the corresponding antiparticles. Additionally, the operators $O_1$ and $O_2$ are degenerate for Higgsino DM. For $j > 1/2$ SU(2) multiplets the operators $O_1$ and $O_2$ are linearly independent. Hence, the factorization formula can be simplified by introducing the Wilson coefficients $\widetilde{C}$ defined in (2.6) and (2.7). The photon energy spectrum in $\chi_1^0\chi_1^0$ annihilation takes the form (2.8),4 where $S_{IJ}$ captures the Sommerfeld effect, $\Gamma_{IJ}(E_\gamma)$ is the Sudakov-resummed annihilation rate, and the indices $I, J$ run over Higgsino two-particle states $\chi_i\chi_j$. The Sommerfeld factor is computed with the leading-order potential in non-relativistic effective theory. The method of computation is discussed in [33] and the potential encapsulating the long-range force between the DM particles is given in appendix A.

3 As in [32], the fields $\zeta_v$ and $\eta_v$ are destruction operators for non-relativistic particles and antiparticles in the 2 and 2* representations of SU(2), respectively (before EWSB). Note that after interchanging $\zeta_v$ and $\eta_v$, the operators remain the same (up to a sign in some cases). In contrast to [32] we do not include these redundant operators, and hence a factor of 2 appears relative to the operators defined in [22,23] for the wino case. With this convention the tree-level matching coefficient of the operator $O_2$ is the same for Majorana and Dirac DM.

4 The overall factor of 2 is necessary in the method-2 computation of the Sommerfeld effect [33] for the annihilation of two identical particles to compensate for the method-2 factor $1/(\sqrt{2})^{n_{\text{id}}}$ in (2.9). This factor of 2 has been missed in [22,23], and hence the absolute values of $\sigma v$ shown in figure 4 in [22] and figures 3-5 in [23] (published and arXiv version 1) must be multiplied by two.

For gamma-ray energies $E_\gamma$ in a bin of (intermediate) energy resolution $E_\gamma^{\text{res}} \sim \mathcal{O}(m_W)$ near the endpoint of the spectrum,
the Sudakov-resummed annihilation rate can be further factorized as in (2.9). This factorization formula resembles the one obtained in [23], and the meaning of the functions (and their arguments $\omega$, $\mu$ and $\nu$) is analogous: $\widetilde{C}$ are the short-distance coefficients of the hard annihilation processes, $Z_\gamma$ is the photon jet function describing the detected gamma-ray, $J_{\text{int}}$ is the jet function describing the unobserved hard-collinear radiation, and $W$ describes the soft radiation. The indices $W, Y = 3, 4$ appearing in the soft functions and the photon-jet function are adjoint indices of the SU(2)$\otimes$U(1)$_Y$ gauge group. Specific expressions of these functions and their resummation are discussed in appendix C. Relative to the wino case, a new component due to the hypercharge of the Higgsino appears in (2.9).
In appendix B, we provide a more general version of the intermediate resolution factorization theorem, valid for general SU(2) multiplets, and more discussion of the functions appearing therein. We also provide the corresponding Higgsino-DM factorization formula for the narrow energy resolution regime $E_\gamma^{\text{res}} \sim m_W^2/m_\chi$.
Mass corrections
The factorization formulas (2.8), (2.9) and (B.4), and the ones obtained in [22,23], are valid up to power corrections, i.e. terms proportional to $v^2$, $\delta m/m_\chi$ or $\lambda \sim m_W/m_\chi$ and higher powers. The former two can safely be neglected as $v^2 \sim 10^{-6}$ for DM annihilation in Milky Way-sized galaxies, and $\delta m/m_\chi \sim 10^{-4}$ for the Higgsino and wino models. However, $m_W/m_\chi \sim 0.1 \times (1\,\text{TeV}/m_\chi)$ is larger than the percent-level accuracy of NLL′ resummed annihilation rates. In this section we shall investigate whether such linear $\mathcal{O}(\lambda)$, next-to-leading-power mass corrections may be important. In order to assess the impact of power corrections on (2.8) we calculate the $\chi_1^0\chi_1^0 \to \gamma\gamma$ amplitude at the lowest, one-loop order in the coupling expansion. The computation of $\chi_1^0\chi_1^0 \to \gamma Z$ would be similar. We find the amplitude at vanishing relative velocity $v = 0$, expanded in $\lambda = m_W/m_\chi$ up to $\mathcal{O}(\lambda)$.5 In order to arrive at this result, we used FeynRules [34] for the model implementation, FeynArts [35] for the Feynman rules and diagram generation, FormCalc [36] for amplitude processing, and Package-X [37] for the evaluation of the loop integrals. The finite and logarithmic pieces of the amplitude are all included in the Sudakov term $\Gamma_{(11)(11)}$ ($\Gamma^{\text{wino}}_{(00)(00)}$). This can be verified by removing the $m_\chi/m_W$ and $m_W/m_\chi$ terms, squaring the resulting amplitude and comparing the result with eq. (313) in [23]. The term that is proportional to $\lambda^{-1} = m_\chi/m_W$ is the familiar one-loop Sommerfeld-enhancement factor.
Of particular interest for this discussion is the linear mass correction $-2\pi m_\chi/m_W \times m_W^2/(24 m_\chi^2)$. We find that this term is also associated with the non-relativistic dynamics of the problem. We verified this by showing that it arises exclusively from expanding the diagram displayed in figure 1 (and the one with the photon lines crossed) to subleading power in the potential loop-momentum region. In the context of non-relativistic effective theory, the coefficient $1/24$ originates from subleading-power potentials6 ($-1/8$) and (at the squared-amplitude level) the matrix element of the dimension-8 derivative S-wave operator $P({}^1S_0)$ introduced in [33,39] ($+1/6$).7 Since in the non-relativistic theory leading-power contributions are $\mathcal{O}(\lambda^{-1})$, such $\mathcal{O}(\lambda)$ contributions should be counted as $\mathcal{O}(\lambda^2)$ corrections to the leading Sommerfeld enhancement. Together with the small coefficient $1/24$, which results in a $3\times 10^{-4}$ correction, we may conclude that we do not expect mass corrections to the leading-power factorization formula to degrade the percent-level accuracy of the NLL′ resummation in the Higgsino mass range of interest.
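As a quick consistency check of the quoted numbers (our own arithmetic, using $m_W \approx 80.4$ GeV and $m_\chi = 1$ TeV):
$$-\frac{1}{8} + \frac{1}{6} = \frac{-3+4}{24} = \frac{1}{24}, \qquad \frac{1}{24}\,\frac{m_W^2}{m_\chi^2} \approx \frac{(0.08)^2}{24} \approx 3\times 10^{-4},$$
in agreement with the size of the correction stated above.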
Numerical results
As in [22,23] we turn our attention to the cumulative endpoint annihilation rate defined in (4.1).

5 We also repeated this calculation for wino DM and obtained an almost identical result for $i\mathcal{M}_{\chi^0\chi^0 \to \gamma\gamma}$.

For the numerical results given in this section we use the couplings at the scale $m_Z = 91.1876$ GeV in the $\overline{\text{MS}}$ scheme as input. The upper panel in figure 2 shows the integrated spectrum $\langle\sigma v\rangle(E_\gamma^{\text{res}})$ plotted as a function of the DM mass $m_\chi$, for the intermediate telescope resolution (2.8) set to $E_\gamma^{\text{res}} = m_W$ for definiteness. The displayed DM mass range includes the first Sommerfeld resonance. The different lines refer to the calculation of $\Gamma_{IJ}$ at tree level (black-dotted), and to the LL (magenta dotted-dashed), NLL (blue-dashed) and NLL′ (red-solid) resummed expressions for $\Gamma_{IJ}$. The latter represents the result with the highest accuracy. For better visibility of the resummation effect, the lower panel of figure 2 shows the LL, NLL and NLL′ resummed annihilation rates normalized to the tree-level, i.e. Sommerfeld-only, result. We see that resummation leads to a reduction of the cross section for large mass, as is generally expected for Sudakov resummation. In the low-mass regime around $m_\chi = 1$ TeV and below, however, resummation enhances the NLL and NLL′ annihilation rates.
This behaviour can be understood from the following observations. 1) At large masses the entries of the Sommerfeld matrix $S_{IJ}$ in (2.8) have similar magnitude and the sum over $I, J$ is dominated by the $(+-),(+-)$ term. The annihilation rate $\Gamma_{(+-)(+-)}$ in the diagonal $\chi^+\chi^- \to \gamma\gamma$ channel starts at tree level and has a standard series of exponentiated negative double-logarithmic corrections. On the other hand, for masses smaller than about 1 TeV the Sommerfeld effect is not very effective in mixing the various channels, and $S_{(11)(11)} \approx 1$, while the other elements are much smaller. Since, however, $\Gamma_{(11)(11)}$ is non-vanishing only from $\mathcal{O}(\alpha_{1,2}^3)$, all terms in the sum over $I, J$ can contribute equally to the annihilation rate and partial cancellations may occur. 2) Quite generally, at small masses it is not guaranteed that the leading logarithms dominate. Moreover, the leading logarithms in the neutral annihilation channels are positive, as we discuss below. This effect dominates over the negative interference term from $\Gamma_{(11)(+-)}$ and the Sudakov suppression of $\Gamma_{(+-)(+-)}$, resulting in an enhanced annihilation rate at masses below 1 TeV.
The resummed predictions are shown with theoretical uncertainty bands computed from a parameter scan over the scales, with simultaneous variations of all scales by a factor of two (see also [23]). In the large-mass region the scale dependence decreases as higher orders are successively added from LL to NLL to NLL′, but in the low-mass region the error band of the NLL prediction exceeds the error of the LL result and is very large in absolute terms. For small masses the inclusion of the non-logarithmic one-loop corrections to the hard, jet and soft functions is therefore necessary to gain control over the scale uncertainty, as is done in the NLL′ approximation. We then find that the residual theoretical uncertainty at the NLL′ order, given by the width of the red-solid curve in figure 2, is small, and practically negligible for high masses; numerically this holds for both mass values $m_\chi = 1$ TeV and 10 TeV. The pattern of scale dependence at large masses is similar to the case of the wino [23], but the large NLL uncertainty at small masses was not seen there. An analysis of the analytic expressions for the resummed Higgsino annihilation matrix allows us to trace this behaviour to the following feature of the Higgsino model. The vanishing of the tree-level amplitude for the annihilation of a pair of neutral Higgsinos into $\gamma\gamma$ and $\gamma Z$, which implies the vanishing of the tree-level annihilation matrix elements in the $(00),(00)$ and off-diagonal $(00),(+-)$ ($(+-),(00)$) components,8 is caused by a cancellation between the short-distance coefficients of the three operators $O_{1,4,6}$. This cancellation is not preserved by the electroweak renormalization-group evolution, since the three operators have different anomalous dimensions and do not mix.9 As a consequence there is a double-logarithmic enhancement proportional to $L^2 = \ln^2(4m_\chi^2/m_W^2)$ in the one-loop amplitudes despite the absence of a tree amplitude. Then the lowest non-vanishing order in $\Gamma^{\text{NLL}}_{(00),(00)}$, which is $\mathcal{O}(\alpha_{1,2}^4)$,10 already carries an $L^4$ enhancement. This is in stark contrast to the wino model, where $\Gamma_{(00),(00)}$ contains at most $L^2$ at $\mathcal{O}(\alpha_2^4)$ (see appendix E of [23]). The different logarithmic structure is related to the hypercharge of the Higgsino, which allows the decay. The higher power of logarithms in the neutral annihilation matrix channel, which is relevant at low masses, is also the origin of the large scale dependence of the NLL approximation. Defining $l_{\mu_s} \equiv \ln(\mu_s^2/m_W^2)$ and $l_{\mu_h} \equiv \ln(\mu_h^2/4m_\chi^2)$, the first non-vanishing, $\mathcal{O}(\alpha_2^4)$ contribution $\Gamma^{\text{NLL}}_{(00),(00)}$ takes the form (4.2), where we write explicitly only the terms relevant to the discussion. The existence of an $L^4$ term implies that the coefficients of $L^2$ and $L$ are dependent on the matching scales, consistent with the fact that these coefficients are not yet properly summed in the NLL approximation. We then find that the large scale dependence is caused by the $\pi^2$-enhanced terms shown in (4.2), which in turn stem from the imaginary parts of the one-loop anomalous dimensions. The variation of these single-logarithmic terms under scale variation is larger than the scale-independent part of $\Gamma^{\text{NLL}}_{(00),(00)}$, which causes an $\mathcal{O}(1)$ scale dependence at small masses.11 Adding the one-loop non-logarithmic terms to the hard, jet and soft functions at NLL′ removes the scale-dependent terms shown in (4.2), which explains the dramatic reduction of the theoretical uncertainty when going from NLL to NLL′.
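The scan itself is straightforward to script. The sketch below is our illustration only: sigma_v_toy is a stand-in with schematic single-log scale dependence, not the paper's resummed rate, and the scale values are examples. It shows how an envelope over simultaneous factor-of-two variations of the matching scales can be assembled:

```python
import itertools

import numpy as np

# Toy stand-in for the resummed rate: its residual scale dependence mimics
# unsummed single logarithms.  The real evaluation would use the
# NLL'-resummed Gamma_IJ; everything here is illustrative only.
def sigma_v_toy(m_chi, mu_h, mu_s):
    alpha2, m_w = 0.0344, 80.4
    l_h = np.log(mu_h**2 / (4.0 * m_chi**2))
    l_s = np.log(mu_s**2 / m_w**2)
    return 1.0 + alpha2 / np.pi * (l_h**2 - l_s**2) / 4.0

def scale_band(m_chi, mu_h0, mu_s0, factors=(0.5, 1.0, 2.0)):
    """Envelope of the rate under simultaneous factor-of-two variations
    of all matching scales, as done for the bands in figure 2."""
    vals = [sigma_v_toy(m_chi, fh * mu_h0, fs * mu_s0)
            for fh, fs in itertools.product(factors, repeat=2)]
    return min(vals), max(vals)

lo, hi = scale_band(m_chi=1000.0, mu_h0=2000.0, mu_s0=80.4)
print(f"toy scale band: [{lo:.3f}, {hi:.3f}]")
```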
The same large scale-dependent terms are also present in the LL approximation. However, in this case there is an accidental cancellation of large scale dependence between the $(00),(00)$ and $(00),(+-)$ contributions to the sum in (2.8), resulting in an accidentally small, and highly asymmetric, scale uncertainty, as seen in figure 2. None of these observations is relevant to the high-mass regime, where the sum is by far dominated by the $(+-),(+-)$ component, which has a very small scale dependence already at NLL. Neither are they relevant for the wino model, where the large scale-dependent terms do not appear. Furthermore, the high-mass regime sets in earlier for the wino, because the Sommerfeld potential is stronger than for the Higgsino.
As for the case of wino DM, it is instructive to investigate whether the narrow ($E_\gamma^{\text{res}} \sim m_W^2/m_\chi$) and the intermediate ($E_\gamma^{\text{res}} \sim m_W$) resolution results can be matched to provide an accurate prediction for the entire range from $E_\gamma^{\text{res}} \sim 0$ to $E_\gamma^{\text{res}} \approx 4 m_W$. In figure 3, we show the annihilation cross sections for the narrow (blue-dotted) and the intermediate resolution calculations. We observe that there is a wide range of $E_\gamma^{\text{res}}$ where the two resolutions match very closely. This implies an accurate theoretical prediction for the photon energy spectrum in DM annihilation in the entire range from $E_\gamma^{\text{res}} \sim 0$ to $E_\gamma^{\text{res}} \approx 4 m_W$. A similar matching was performed in the wino DM case, and we found the same degree of matching for the two resolution regimes. An in-depth investigation of the structure of the large logarithms was performed for the case of wino DM in [23], which explained the high degree of matching of the two resolution cases. A similar structure holds for the Higgsino.
There is a steep rise in the narrow-resolution cross section, which occurs at $E_\gamma^{\text{res}} \approx m_Z^2/(4m_\chi)$. Above this value, the $\gamma Z$ contribution cannot be resolved, resulting in the sharp increase of the semi-inclusive rate. Since the unobserved jet function for the intermediate resolution regime is computed assuming massless particles, it misses this effect. The invariant mass of the narrow-resolution unobserved jet function also passes through the $W^+W^-$, $ZH$ and $t\bar{t}$ thresholds, which are however invisible on the scale of the plot.
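The location of this rise follows from elementary two-body kinematics (our derivation of the standard result): for annihilation at rest, $\sqrt{s} = 2m_\chi$, the photon in $\chi_1^0\chi_1^0 \to \gamma Z$ carries
$$E_\gamma = \frac{s - m_Z^2}{2\sqrt{s}} = m_\chi - \frac{m_Z^2}{4 m_\chi},$$
i.e. the $\gamma Z$ line sits $m_Z^2/(4m_\chi) \approx 2$ GeV (for $m_\chi = 1$ TeV) below the endpoint $E_\gamma = m_\chi$, and enters the signal bin once $E_\gamma^{\text{res}}$ exceeds this offset.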
Due to the high-precision agreement between the two resolution regimes over a wide range of $E_\gamma^{\text{res}}$, we can conclude that in figure 2 the integrated photon energy spectrum of the narrow resolution case would look indistinguishable from the intermediate resolution results, provided the chosen value of $E_\gamma^{\text{res}}$ lies somewhat above $m_Z^2/(4m_\chi)$ and below $4m_W^2/m_\chi$. After performing a similar parameter scan as in the intermediate resolution case to determine the theoretical uncertainty, we find that the reduction of the scale uncertainty with increasing accuracy is comparable for the narrow resolution case.
Conclusions
The differential annihilation cross section for Higgsino DM annihilation into $\gamma + X$ near the endpoint was calculated in this paper with the greatest accuracy to date ($\mathcal{O}(2\%)$ at $m_\chi = 1$ TeV and less at higher masses). This computation systematically includes the resummation of the Sommerfeld effect and electroweak Sudakov logarithms at the NLL′ order and is valid for intermediate and narrow energy resolutions. To achieve the above-mentioned accuracy, it is necessary to perform resummation at NLL′ accuracy. Our predictions are appropriate for dedicated searches for Higgsino DM (within the thermal production hypothesis) using next-generation gamma-ray telescopes. Electroweak Sudakov resummation decreases the rate of photons by approximately 3% for a Higgsino mass of 1 TeV. For larger and smaller masses resummation leads to suppression (45% at 10 TeV) or enhancement, respectively (the numbers refer to $E_\gamma^{\text{res}} = m_W$). We also discussed, for the first time in the context of indirect DM detection, the impact of power-suppressed mass corrections on the endpoint spectrum. In the case of wino and Higgsino DM we find these corrections to the one-loop amplitude to be of $\mathcal{O}(m_W^2/m_\chi^2)$ relative to the leading Sommerfeld effect and thus compatible with the accuracy achieved by NLL′ computations. The largest theoretical uncertainty for the spectrum is expected to be associated with NLO corrections to the Sommerfeld potential, which have recently been computed for the wino [38] but are not yet known for Higgsino DM.
The Higgsino annihilation cross sections $\langle\sigma v\rangle(E_\gamma^{\text{res}}) \sim 2\times 10^{-28}\,\text{cm}^3/\text{s}$ (for $m_\chi = 1$ TeV) predicted here are below the limits obtained by existing gamma-ray observatories but will be probed by the CTA experiment. Concretely, the H.E.S.S. experiment quotes an upper limit (2σ) on $\langle\sigma v\rangle_{\gamma\gamma}$ of $4\times 10^{-28}\,\text{cm}^3/\text{s}$ [10] for their most aggressive assumptions on the DM mass distribution of the Milky Way, which translates to about $\langle\sigma v\rangle^{\text{H.E.S.S. limit}}_{\gamma+X} \approx 8\times 10^{-28}\,\text{cm}^3/\text{s}$ for the semi-inclusive rate with energy resolution $E_\gamma^{\text{res}} \approx 100$ GeV. With the help of our results, future measurements of CTA can be converted into precise constraints on the Higgsino mass, given the dark matter profile of the galaxy.
Acknowledgments
We thank Clara Peset for very useful discussions. This work was supported in part by the DFG Collaborative Research Centre "Neutrinos and Dark Matter in Astro-and Particle Physics" (SFB 1258).
A Sommerfeld potential
In our numerical evaluations, the method-2 Sommerfeld matrix $S_{IJ}$ (see [33]) is obtained by solving the Schrödinger equation with the spin-singlet potential quoted in this appendix, to which we add the contribution of the mass-splitting matrix $\delta m_{IJ}$. The two-particle state indices are ordered as $(11)$, $(22)$ and $(+-)$. As mentioned in the main text, there is no interaction between the above three two-particle states and the mixed $(12)$ state.
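As a minimal sketch of the method (single attractive Yukawa channel only; the actual computation solves the coupled-channel matrix Schrödinger equation with the potential above and the mass splittings $\delta m_{IJ}$), the S-wave Sommerfeld factor can be extracted from the asymptotics of the regular radial solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-channel toy: V(r) = -alpha * exp(-m_v r) / r, all quantities in GeV.
# The couplings and masses below are illustrative choices.
def sommerfeld_factor(m_chi, v, alpha=0.0344, m_v=80.4):
    mu = m_chi / 2.0          # reduced mass of the two-particle state
    p = mu * v                # relative momentum
    def rhs(r, y):
        u, du = y
        V = -alpha * np.exp(-m_v * r) / r
        return [du, (2.0 * mu * V - p**2) * u]
    r0, rmax = 1e-6 / m_v, 50.0 / m_v      # start near origin, end where V ~ 0
    sol = solve_ivp(rhs, (r0, rmax), [r0, 1.0], rtol=1e-9, atol=1e-12)
    u, du = sol.y[0, -1], sol.y[1, -1]
    c2 = u**2 + (du / p)**2   # |C|^2 from the asymptotics u -> C sin(pr + d)
    return 1.0 / (p**2 * c2)  # S = 1 for a free particle

print(sommerfeld_factor(m_chi=1000.0, v=1e-3))
```

The factor is read off from $u(r) \to C\sin(pr+\delta)$ at large $r$ via $S = 1/(p^2|C|^2)$, which reduces to 1 when the potential is switched off.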
B.1 Intermediate resolution regime
The derivation of the factorization theorem is independent of whether the DM particle is charged under hypercharge or not. Using the result from [23], we can thus write the general form of the Sudakov-resummed annihilation rate for the intermediate resolution case. As already discussed in section 2, we now augmented the operator basis to include hard annihilation into the hypercharge gauge boson. The anti-collinear function $Z^{YW}_\gamma$ (photon jet function), the hard-collinear function $J^{XV}_{\text{int}}$ (unobserved-jet collinear function) and the soft function $W^{ij}_{IJ,VWXY}$ are generalizations of the corresponding jet and soft functions defined in [23]. In particular, the SU(2)$\otimes$U(1)$_Y$ indices $W$ and $Y$ can now adopt the values 3 and 4, where 3 refers to the SU(2) gauge boson $W^3$ and 4 to the U(1)$_Y$ gauge boson $B$. It is convenient to split the unobserved jet function into an SU(2) and a U(1)$_Y$ component, as in (B.2). The SU(2) part $J^{\text{SU(2)}}_{\text{int}}$ was already obtained in [23], while $J^{\text{U(1)}}_{\text{int}}$ is new to the case of Higgsino DM. The results are presented in appendix C.3. With (B.2) we can reduce the number of relevant indices of the soft function by introducing a suitably contracted soft function, where $X, V$ are summed from 1 to 4. For the pure Higgsino with isospin $j = 1/2$, we can also make use of the degeneracy of operators $O_1$ and $O_2$ in (2.6), from which the factorized Sudakov annihilation rate (2.9) immediately follows.
B.2 Narrow resolution regime
Assuming the energy resolution $E_\gamma^{\text{res}} \sim m_W^2/m_\chi$ puts us into the narrow resolution regime. The hierarchy of scales then changes to $E_\gamma^{\text{res}} \ll m_W$, $m_X \sim m_W$, which means that the unobserved jet now has collinear rather than hard-collinear virtuality. Furthermore, real soft gauge-boson radiation is power-suppressed, which makes it convenient to write the soft function in terms of soft Wilson coefficients $D$. A detailed discussion of the factorization theorem for wino DM in the narrow resolution case is provided in [22] and [23]. Since there are no conceptual differences for Higgsino DM, we will not repeat it here. The Sudakov-resummed annihilation rate $\Gamma_{IJ}$ for the narrow resolution case is given by (B.4) (the Sommerfeld factor $S_{IJ}$ remains the same), where the SU(2)$\otimes$U(1)$_Y$ indices summed over are $V, W, X, Y = 3, 4$.
C NLO expressions
In this appendix we provide all expressions that are required for the evaluation of the factorization formulas (2.9) and (B.4) at the one-loop order, as is needed for NLL′ resummation. We also provide the necessary results for resumming the various functions.
Since there are no conceptual differences with respect to the resummation in the wino DM case, we refer to [23] for an in-depth discussion of the systematics of resummation and will keep the present discussion rather brief. Also, all function definitions have already been presented in [22] and [23] and will not be repeated here.
C.1 Operator basis and Wilson coefficients
In the general case where the field $\chi$ in (2.1) is a $(2j+1)$-plet of SU(2) with non-vanishing hypercharge, the operators allowed by the symmetries of the SM are given in (C.1)-(C.6). The derivation of this operator basis is as in [23]. As argued there, operator $O_3$ is irrelevant for the $\chi_1^0\chi_1^0 \to \gamma + X$ process. We also find that $O_5$ is irrelevant, as it is a spin-triplet operator similar to $O_3$. The spin-triplet operators do not contribute, since there is no spin-triplet $\chi_1^0\chi_1^0$ initial state, and the Sommerfeld-enhanced scattering prior to annihilation does not change the spin.
The one-loop short-distance coefficients of operators $O_1$ and $O_2$ have already been obtained in [23]. For completeness we provide the Wilson coefficients for the entire operator basis (C.1)-(C.6). The Wilson coefficients are computed by matching the full-theory amplitude to the effective-theory amplitude. The Feynman diagrams are calculated in the unbroken SU(2)$\otimes$U(1)$_Y$ gauge theory. For general $j$ and $Y$, the coefficients are given by (C.7).
Here $c_2(j) = j(j+1)$ is the SU(2) Casimir of the isospin-$j$ representation and $n_G = 3$ is the number of fermion generations.12 Note that $C_3(\mu)|_{\text{Dirac}}$ in (C.7) is specific to a Dirac SU(2)$\otimes$U(1)$_Y$ multiplet. If, on the other hand, we assume a Majorana multiplet, we find a different result. Let us comment on a subtlety in the computation of $C_3$. Naively, one would expect that there should be no counterterm contributions, as the tree-level contribution cancels. However, as also observed for the corresponding quarkonium calculation in QCD [40], the vanishing tree-level result stems from a cancellation between an s-channel diagram and the t/u-channel diagrams. Since DM mass counterterm insertions exist only for the t- and u-channel diagrams, a mass renormalization contribution survives, which is required to obtain a finite result for $C_3$. One might wonder how this result can be obtained in bare perturbation theory, as the tree-level result vanishes, and hence there is apparently no bare mass to substitute by the renormalized one. In bare perturbation theory, the appearance of the counterterm is related to a subtlety in the matching procedure. When performing one-loop on-shell matching we must set the mass of the external particles to their renormalized on-shell values at the corresponding loop order. The tree-level diagrams then contain the bare mass from the explicit mass in the propagator and the renormalized mass from momenta after applying on-shell kinematics. The above-mentioned cancellation then leaves the one-loop difference between bare and renormalized mass in the t- and u-channel tree diagrams, and the same result as before is recovered in bare perturbation theory.
The operator $O_5$ is irrelevant for $\chi_1^0\chi_1^0$ annihilation, but we also find its coefficient to vanish at the one-loop order. This is a consequence of the Landau-Yang theorem. Although the theorem does not hold in non-abelian gauge theories (see, for instance, [40,41]), the violation arises because the final-state bosons carry an internal quantum number and structures can be constructed involving the group structure constant. At least at the one-loop order, there are no such Feynman-diagram structures for $O_5$, as one of the final-state gauge bosons is abelian.
Next we discuss the renormalization-group (RG) evolution of the short-distance coefficients. Recall that we are considering the annihilation process $\chi_1^0\chi_1^0 \to \gamma + X$ for Higgsino DM, for which the operators $O_3$ and $O_5$ are irrelevant. Hence, we will disregard $C_3(\mu)$ and $C_5(\mu)$ from now on. The evolution of the vector $\widetilde{C} = (\widetilde{C}_1, \widetilde{C}_4, \widetilde{C}_6)$ defined in (2.6) and (2.7) is given by the RG equation (C.10).

12 The matching coefficient for $O_3$ was previously given for $j = 1$ and $Y = 0$, i.e. the pure wino, in [20]. The result given there differs from ours. We attribute this difference to an opposite sign in one of the diagrams in [20], namely $T_{5b}$, and missing mass renormalization (counterterm) diagrams. However, as mentioned before, $O_3$ does not contribute to the annihilation rate into photons.
where $n_{i,\text{SU(2)}}$ and $n_{i,\text{U(1)}}$ give the number of SU(2) and U(1) gauge fields in operator $O_i$. Eq. (C.10) is solved numerically to obtain the resummed short-distance coefficients (C.9). Most anomalous dimensions in (C.11) have already been given in [22] and will not be repeated here. The new ones relevant for the evolution of $\widetilde{C}_4(\mu)$ and $\widetilde{C}_6(\mu)$ involve $\gamma_{\text{U(1)}}(\alpha_1)$.

C.2 Photon jet function

The result for the photon jet function $Z^{33}_\gamma$ was already given in [22,23] and will not be repeated here. For the case of Higgsino DM, which has non-vanishing hypercharge, we also need to take into account the index combinations $Z^{34}_\gamma$, $Z^{43}_\gamma$ and $Z^{44}_\gamma$, which can be obtained by adapting $Z^{33}_\gamma$ accordingly. To do so, it is helpful to remark that only diagram (a) ((b)) contributes to $Z^{34}_\gamma$ ($Z^{43}_\gamma$), while $Z^{44}_\gamma$ does not receive contributions from (a) and (b). Hence, the Wilson-line part of $Z^{34}_\gamma$ and $Z^{43}_\gamma$ is multiplied by a factor of 1/2 with respect to $Z^{33}_\gamma$, while $Z^{44}_\gamma$ does not have a Wilson-line contribution at all. To make the computation more transparent, it is convenient to write down the rotation (C.14) from the weak basis to the mass basis of the gauge fields. One can now use (C.14) to compute the remaining photon jet functions $Z^{34}_\gamma$, $Z^{43}_\gamma$ and $Z^{44}_\gamma$ by carefully analyzing the position of the cut in diagrams (a), (b) and (c) and by keeping track of whether the $\gamma$ or the $Z$ boson originated from a $W^3$ or a $B$ boson in the unbroken theory. The discussion of the RG and rapidity RG evolution will be limited to $Z^{34}_\gamma = Z^{43}_\gamma$ and $Z^{44}_\gamma$. The resummation of $Z^{33}_\gamma$ was discussed in [23] in great detail, which allows us to keep the following analysis rather brief. The RG equations are given in (C.18), with the corresponding anomalous dimensions.
Since $Z^{44}_\gamma$ is independent of the rapidity scale $\nu$, we only need to consider the rapidity RG equation for $Z^{34}_\gamma$, with the rapidity anomalous dimension $\gamma^\nu_{Z^{34}_\gamma}$. Solving (C.18) and (C.22), we can compute the resummed photon jet functions, where the integrals in the exponents are computed numerically.
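Numerically, solving such RG equations amounts to integrating a coupled ODE system in $\ln\mu$. The toy sketch below is our illustration only: a single coefficient with a hypothetical anomalous-dimension coefficient GAMMA0, whereas (C.10) is a matrix equation with cusp (double-log) pieces that this toy omits:

```python
import numpy as np
from scipy.integrate import solve_ivp

B0 = 19.0 / 6.0      # one-loop SU(2) beta coefficient with SM field content
GAMMA0 = 8.0         # hypothetical anomalous-dimension coefficient

def rhs(ln_mu, y):
    c, a = y         # Wilson coefficient and running alpha_2(mu)
    dc = GAMMA0 * a / (4.0 * np.pi) * c        # d C / d ln mu
    da = -B0 * a**2 / (2.0 * np.pi)            # d alpha / d ln mu at one loop
    return [dc, da]

# Evolve from the hard scale mu_h ~ 2 m_chi down to m_W (backward in ln mu)
mu_h, mu_low = 2000.0, 80.4
sol = solve_ivp(rhs, (np.log(mu_h), np.log(mu_low)), [1.0, 0.0344], rtol=1e-10)
print("C(m_W)/C(mu_h) =", sol.y[0, -1])
```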
C.3 Unobserved jet function

C.3.1 Intermediate resolution
In addition to the SU(2) gauge-boson jet function $J^{\text{SU(2)}}_{\text{int}}(p^2,\mu)$ obtained at NLO in [23], the factorization formula (2.9) also requires its U(1)$_Y$ counterpart. It is given by (C.24), where $l = 1/(e^{\gamma_E}\tau_2)$. The corresponding RG equation is the ordinary differential equation (C.27), whose Laplace-space anomalous dimension is needed at the one-loop order for NLL′ resummation. The RG equation (C.27) is solved by (C.30), where $\mu_j \sim \sqrt{2 m_\chi m_W}$ is the natural scale of the hard-collinear jet function. To return to momentum space, we perform a standard inverse Laplace transform (remembering that $\tau_2 = 1/(e^{\gamma_E} l)$). We find for the resummed unobserved jet function $J^{\text{U(1)}}_{\text{int}}$ the expression (C.31), where $U(\mu_j,\mu)$ is taken from (C.30) and $J^{\text{U(1)}}_{\text{int}}(p^2,\mu_j)$ on the right-hand side of (C.31) is given by (C.24). The integral in the evolution factor $U(\mu_j,\mu)$ is evaluated numerically.
C.3.2 Narrow resolution
As for the photon jet function discussed in section C.2, the unobserved jet function in the narrow resolution case receives contributions from the index combinations $J^{33}(p^2)$, $J^{34}(p^2)$, $J^{43}(p^2)$ and $J^{44}(p^2)$. The function $J^{33}(p^2)$ was already computed in [22], and a detailed discussion of its derivation is presented in [23]. The result is a complicated function of the masses of the SM particles, the invariant mass of the jet, and of the virtuality and rapidity scales $\mu$ and $\nu$, respectively. The computation of $J^{34}(p^2)$, $J^{43}(p^2)$ and $J^{44}(p^2)$ is similar to the case of the photon jet function. One has to be careful about which functions receive contributions from Wilson-line diagrams and whether the gauge boson crossing the cut originates from a $W^3$ or a $B$ boson, in order to determine the correct prefactor using (C.14). Since $J^{34}(p^2) = J^{43}(p^2)$, we will content ourselves with presenting the results for $J^{34}(p^2)$ and $J^{44}(p^2)$.

C.4 Soft functions

C.4.1 Intermediate resolution

At lowest order the soft function is proportional to $\delta(\omega)$ (C.46), and we introduced $n^{ij}_{IJ} = (-1)^{\delta_{I(00)}\delta_{i4}}\,(-1)^{\delta_{J(00)}\delta_{j4}}$, with $(00) = (11)$ or $(22)$ (C.47), to allow for a more compact notation. The resummation of the soft function is conceptually analogous to the resummation of the soft function in the case of wino DM [23], to which we refer for a detailed discussion. The Laplace transform of the soft function and its inverse are defined in (C.64). As in [23], we did not include the rapidity evolution factor in (C.64), since we evolve the photon jet function in $\nu$ from $\nu_h$ to $\nu_s$, which makes the soft-function rapidity evolution factors unity (see (C.58)).
C.4.2 Narrow resolution
In this appendix, we collect the soft functions appearing in the factorization theorem for the narrow resolution case (B.4). The definition and computation of these terms is analogous to the wino DM case, and we refer the reader to [22] for more details.
"Physics"
] |
Increased CCL2, CCL3, CCL5, and IL-1β cytokine concentration in piriform cortex, hippocampus, and neocortex after pilocarpine-induced seizures
Cytokines and chemokines play an important role in the neuroinflammatory response to an initial precipitating injury such as status epilepticus (SE). These signaling molecules participate in recruitment of immune cells, including brain macrophages (microglia), as well as neuroplastic changes, deterioration of damaged tissue, and epileptogenesis. This study describes the temporal and brain region pattern expression of numerous cytokines, including chemokines, after pilocarpine-induced seizures and discusses them in the larger context of their potential involvement in the changes that precede the development of epilepsy. Adult rats received pilocarpine to induce SE and 90 min after seizure onset were treated with diazepam to mitigate seizures. Rats were subsequently deeply anesthetized and brain regions (hippocampus, piriform cortex, neocortex, and cerebellum) were freshly dissected at 2, 6, and 24 h or 5 days after seizures. Using methodology identical to our previous studies, simultaneous assay of multiple cytokines (CCL2, CCL3, CCL5, interleukin IL-1β, tumor necrosis factor (TNF-α)), and vascular endothelial growth factor (VEGF) was performed and compared to control rats. These proteins were selected based on existing evidence implicating them in the epileptogenic progression. A robust increase in CCL2 and CCL3 concentrations in the hippocampus, piriform cortex, and neocortex was observed at all time-points. The concentrations peaked with a ~200-fold increase 24 h after seizures and were two orders of magnitude greater than the significant increases observed for CCL5 and IL-1β in the same brain structures. TNF-α levels were altered in the piriform cortex and neocortex (24 h) and in the hippocampus (5 days) after SE. Pilocarpine-induced status epilepticus causes a rapid increase of multiple cytokines in limbic and neocortical regions. Understanding the precise spatial and temporal pattern of cytokines and chemokine changes could provide more viable therapeutic targets to reduce, reverse, or prevent the development of epilepsy following a precipitating injury.
Findings

Introduction
Secretion of cytokines is a fundamental component of the inflammatory response in the nervous system [1][2][3][4]. These proteins are important immunomodulators, and chemokines, a special class of cytokines, also possess chemoattractant properties. Following an initial precipitating injury such as status epilepticus (SE), alterations occur to the milieu of these neuroinflammatory proteins. Of the numerous functional outcomes attributed to these alterations, neuroplasticity has been implicated to play a prominent role in the epileptogenic progression. Other changes that have also been implicated in pathogenesis include vascular permeability, angiogenesis, and immune responses that contribute to tissue damage [2]. Considering that SE increases subsequent seizure susceptibility and the likelihood of developing epilepsy, immunomodulatory signaling proteins are garnering great interest as therapeutic targets to treat neuropathology and the epileptogenic progression [1,3]. Further evidence in support of the potential for these proteins as therapeutic targets is found in patients with epilepsy. In these patients, studies have observed increased levels of chemokines CCL2, CCL3, and CCL5 (among others) in the hippocampus, other temporal lobe structures, and neocortex [4]. Moreover, elevated cytokines such as interleukin 6 (IL-6) and tumor necrosis factor (TNF) can be detected in serum [1], and previous experimental studies in rodents have yielded promising results regarding the potential therapeutic efficacy of targeting these particular inflammatory molecules. Considering these data, a greater understanding of the temporal and spatial changes of these proteins can yield more efficacious treatment options, as well as more precise targeting.
After the acute and early phases of SE, the brain typically enters into a latent period that precedes the development of spontaneous seizures. During this period, it has been suggested that neuroplastic and neuroinflammatory changes provide a substrate for hyperexcitable neural networks [5,6]. Indeed, our previous studies have implicated the neuroinflammatory response in the hippocampus to provide a substrate for the aberrant sprouting of basal dendrites from immature and mature granule cells [7]. Considering the complex interactions that occur among the cytokines both in immune and nervous systems, the most efficacious method for examining their presence in epileptogenesis is through simultaneous multiplex immunoassay [8]. We previously demonstrated elevated hippocampal concentrations of chemokine C-C motif ligand 2 (CCL2) during the acute phase after SE [7]. This report extends these previous findings to further identify acute and early alterations to cytokine concentrations in the hippocampus, cerebellum, and piriform cortex, brain structures that are known to be involved in seizures and epilepsy.
Animals and seizure induction
Male Sprague-Dawley rats, weighing 230-235 g at the beginning of experiments were used in this study. The experimental procedure was approved by the IACUC of the Texas A&M University Health Science Center (protocol #2008-002-R). SE was induced as previously described [7]. Animals were treated with pilocarpine hydrochloride in a single dose of 320 mg/kg i.p. or saline. The onset of SE was considered when animals exhibited a class 3 motor seizure in Racine's scale of limbic seizures [9]. Only animals that experienced stage 5 seizures were used for analysis (n = 48). Ninety minutes after SE onset, rats were treated with diazepam (10 mg/kg) to mitigate seizures; control animals also received diazepam.
Tissue harvest
Animals were deeply anesthetized using Euthasol (pentobarbital sodium and phenytoin, 1 ml/kg) and decapitated 2, 6, and 24 h or 5 days (n = 12 per timepoint) after diazepam treatment. The brain was rapidly removed from the skull, and the hippocampus, cerebellum, piriform cortex, and a large slice of neocortex close to the midline containing the retrosplenial, cingulate, and primary motor cortex were dissected and immediately frozen.
Cytokine analyses
Simultaneous measurement of different cytokines in brain regions was performed as previously described [8]. The following cytokines were assayed: CCL2, CCL3, CCL5, IL-1β, TNF-α, and vascular endothelial growth factor (VEGF) (Table 1), in tissue homogenates using a MILLIPLEX MAP kit (Millipore, Billerica, MA, USA) on a BIOPLEX 200 analyzer (BioRad, Hercules, CA, USA). The tissue concentration of each analyte was normalized to the total protein concentration measured with a Bradford assay and presented as a proportion of cytokines in picograms (pg) per microgram (μg) of total protein ± SEM. Student's t test was employed to detect statistical differences between values of control and experimental groups.
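For illustration, the normalization and group comparison described above can be reproduced with a few lines of Python (the numbers below are made-up placeholder readouts, not data from this study):

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder readouts (pg analyte and ug total protein per
# homogenate); real values come from the multiplex analyzer and Bradford assay.
ccl2_se = np.array([310.0, 285.0, 402.0, 367.0])          # pg, SE group
protein_se = np.array([1450.0, 1390.0, 1600.0, 1510.0])   # ug total protein
ccl2_ctrl = np.array([2.1, 1.8, 2.5, 2.2])
protein_ctrl = np.array([1480.0, 1420.0, 1550.0, 1500.0])

# Normalize each analyte to total protein (pg cytokine per ug protein)
se_norm = ccl2_se / protein_se
ctrl_norm = ccl2_ctrl / protein_ctrl

# Two-sample Student's t test, as used in the study
t_stat, p_val = stats.ttest_ind(se_norm, ctrl_norm)
print(f"SE: {se_norm.mean():.4f} +/- {stats.sem(se_norm):.4f} pg/ug")
print(f"control: {ctrl_norm.mean():.4f} +/- {stats.sem(ctrl_norm):.4f} pg/ug")
print(f"t = {t_stat:.2f}, p = {p_val:.2g}")
```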
Results
The results revealed significant increases in the concentration of numerous cytokines at different time-points in several of the brain regions examined (Tables 1 and 2).
Tissue concentration of CCL2 was significantly increased in seizure animals at all time-points and in all brain regions, except in the cerebellum. Concentrations peaked at 24 h after SE in the piriform cortex, hippocampus, and midline neocortex. Five days after seizures, CCL2 levels were still significantly elevated in the piriform cortex, hippocampus, and neocortex but had declined from the 24-h peak levels (Fig. 1). The chemokine CCL3 presented an even larger increase than CCL2 at the earliest time-point, observed 2 h after SE in the piriform cortex, hippocampus, and midline neocortex (Fig. 2). At 6 h after SE, the increase in the same areas persisted and was followed by a peak concentration 24 h after SE in all regions except the cerebellum. Five days after SE, concentrations of CCL3 remained significantly elevated in the hippocampus, piriform cortex, and midline neocortex, but to a lesser extent than at the 24-h peak (Fig. 2).
Concentration of CCL5 was significantly elevated at 24 h after SE in the piriform cortex and neocortex. No significant alterations were observed in either the hippocampus or the cerebellum. Five days after SE, the CCL5 concentration was still significantly elevated in the piriform cortex but returned close to control levels in other regions (Fig. 3).
The concentration of IL-1β was significantly elevated 2 h after SE in the neocortex, piriform cortex, and hippocampus, but not in the cerebellum (Fig. 4). Concentrations of IL-1β were significantly elevated 6 h after SE in the neocortex and hippocampus, but not in the piriform cortex or cerebellum (Fig. 4). The peak concentration was observed 24 h after SE in the piriform cortex and neocortex. Five days after seizures, IL-1β levels returned to levels similar to control rats in all brain regions (Fig. 4).
Analysis of TNF-α levels revealed a significant decrease in concentration in the piriform cortex and hippocampus 24 h after SE and a significant increase in the hippocampus 5 days after SE (Table 1). No significant alterations in VEGF levels were observed at any of the time-points, nor in any of the regions analyzed (Tables 1 and 2).
Discussion
Simultaneous analysis of multiple cytokines in different brain regions, at different time-points after SE, revealed significant alterations in the concentration of CCL2, CCL3, CCL5, IL-1β, and TNF-α. CCL2 and CCL3 showed the largest concentration increases in the piriform cortex, hippocampus, and neocortex, which peaked at 24 h after SE. CCL5 and IL-1β were also significantly elevated in these brain regions, although the increases were orders of magnitude lower relative to CCL2 and CCL3. These novel data in the rat pilocarpine SE model are consistent with findings in human epileptic patients, where microarray analyses showed upregulation of genes ccl3 and ccl4 in surgically removed hippocampus [10]. Similarly, animal studies showed increased CCL2 and CCL3 mRNA that peak 1 day after electrically induced SE [11], and following kainic acid-induced seizures in rats, the gene for ccl2 was upregulated in entorhinal, perirhinal, and posterior piriform cortices and hippocampus, where its expression level was the highest relative to the other genes examined [12]. Therefore, future translational studies can selectively target one or more of these inflammatory proteins in the pilocarpine SE model.

Fig. 2 CCL3 concentrations presented an increase of 60 times the basal levels 2 h after SE, 25 times the control levels 6 h after SE, and reached a peak 24 h after SE with a ~250-fold increase in the hippocampus, piriform cortex, and neocortex. Five days after seizures, CCL3 concentration was still significantly elevated in the piriform cortex (~50-fold), hippocampus (~250-fold), and neocortex (~100-fold; *p < 0.001 in all cases). No significant alterations were observed in the cerebellum
Further highlighting the clinical relevance of these chemokines, elevated CCL2 protein was observed in the temporal lobe and hippocampus from patients with intractable epilepsy [13], and both CCL2 and CCL3 proteins were elevated in temporal lobe epilepsy patients measured with multiplex assay [4]. We have also previously shown that CCL2 and its receptor CCR2 were elevated in the rat hippocampus 5 days after seizures [8], whereas other studies demonstrated increased CCL3 protein within the first 2 h after pilocarpine injection in mice [14]. Results consistent with these findings were also found in other experimental models of temporal lobe epilepsy, such as tetanic stimulation of the hippocampus [11] and following kainic acid-induced seizures [5]. Thus, there is substantial accumulating evidence that changes in these cytokines may play an important role in epileptogenesis and thus may represent viable therapeutic targets.
From a functional perspective, it is possible that CCL2 is involved in direct impairment of neuronal survival in epileptogenic structures. For example, CCL2 activates caspase signaling pathways [15], which can contribute directly to cell death. CCL2 can also influence glia activation and the ensuing inflammation [16], which results in uncoupling of gap junctions between astrocytes and thus a reduced calcium buffering potential by the astrocytic network observed in temporal lobe epilepsy patients [17]. Further evidence in support of direct functional impairment attributed to CCL2 elevation was reported in a study in which CCL2 expression by astrocytes in transgenic mice impairs synaptic plasticity in the hippocampus [18]. Indeed, a study that inhibited CCL2 secretion with pyrrolidine dithiocarbamate prior to pilocarpine-induced SE demonstrated beneficial outcomes with less microglial activation and decreased neuronal damage [19]. While such experimentation may not be clinically viable, it does provide evidence for the therapeutic potential of targeting CCL2 to improve epileptogenic outcomes.
Targeting the seizure-induced increase in CCR5 may be another putative strategy to ameliorate the epileptogenic progression. The chemokine CCL5 is chemotactic to T cells, eosinophils, and basophils. Consistent with the findings in the present study, there is an upregulation of CCR5 expression after kainic acid-induced seizures, especially in hippocampal formation neurons and glial cells [20], and after pilocarpine-induced seizures in hippocampal astrocytes [21]. Interestingly, the use of RNA interference to lower CCR5 expression in rats protected against kainic acid-induced seizures by decreasing hippocampal neuronal loss and macrophage/microglial infiltration [22]. The fact that CCR5-knockout mice did not show the same protection against inflammatory hippocampal injury suggests that compensatory mechanisms may render it necessary to target multiple inflammatory molecules to achieve optimal benefit [23].

Fig. 4 A significant increase in IL-1β concentration was observed in the piriform cortex, hippocampus, and neocortex 2 h after SE (*p < 0.05) and in the hippocampus and neocortex 6 h after SE (*p < 0.05), but not in the cerebellum. The peak concentration was observed 24 h after SE with a ~2-fold increase in the piriform cortex (*p < 0.05) and a ~2.5-fold in the neocortex (*p < 0.001) with no significant alteration in the hippocampus. Five days after seizures, IL-1β levels returned to control levels in all brain regions
The result of elevated IL-1β following SE is consistent with previous studies showing that increased concentration of IL-1β after seizures exerts a pro-epileptogenic action, and blocking of IL-1β presented anticonvulsive effects while exogenous application of the cytokine worsened electrographic seizure development [2,4]. In contrast, the results from the current study were inconsistent with previous reports of increased VEGF after pilocarpine-induced seizures [24]. One possible explanation for this discrepancy is that the current study examined total VEGF protein in brain homogenates, whereas the latter study observed elevated cellular VEGF in neurons and astrocytes [24]. Thus, it is possible that the altered VEGF expression observed is a qualitative rather than a quantitative one.
Conclusions
The data from this short report show, for the first time, simultaneously elevated CCL2 and CCL3, as well as increased CCL5 and IL-1β, and differentially altered TNF-α. It is becoming increasingly clear that the neuroinflammatory response after an initial precipitating injury such as SE plays an important role in nervous tissue pathology. Understanding the temporal and spatial expression patterns of the neuroinflammatory milieu is essential to develop improved therapeutic interventions following these types of injury. Therefore, simultaneous measurement of multiple cytokines from different brain regions is especially important to elucidate the extensive, intertwined, and complex interactions among the inflammatory proteins. Such studies will enable subsequent work that targets multiple cytokines/chemokines at different time-points, and possibly different brain regions, to achieve optimal therapeutic efficacy.
"Biology"
] |
Multi-Objective Decision Aid Application for RFID Network Planning
The Radio Frequency Identification system (RFID) is a generic term for technologies of a type of automatic identification system that uses radio-frequency waves to automatically identify people or objects. Multiple RFID readers are required to detect and monitor the tags in the network area due to their limited interrogation range. Therefore, the cost of the RFID system, its efficiency in the communication process, and the number of tags covered depend directly on the number of deployed readers. In this work, we present a new many/multi-objective model with the aim of achieving optimal solutions of the NP-hard problem of RFID Network Planning (RNP). The model involves several objectives and constraints: determining the minimum number of readers, finding their optimal 3D deployment and localization, guaranteeing full coverage of all the tags distributed in the area, and minimizing the energy consumption and the interference caused by overlap of the readers' signals and the tags' responses at the same time. Moreover, we provide a new decision aid application with the aim of accessing, managing, solving, and optimizing the RNP problem according to the needs of researchers and decision makers.
Introduction
The term RFID refers to the Radio Frequency Identification system, an automated data collection technology that uses radio-frequency waves to identify and track objects. The RFID system consists mainly of three components: the tag, attached to the object to be uniquely identified; the reader, which transmits and receives data through radio waves; and the RFID middleware [1], which manages the network and filters and formats data. Because of the limitations of the reader's interrogation range, many readers are required to cover all the tags in an area. The efficiency of the RFID network therefore depends mainly on the number of deployed readers and the coverage of the tags. Achieving an optimal deployment for the RFID Network Planning problem (RNP) [2,3] requires satisfying several objectives at once: determining the minimum number of deployed readers, guaranteeing full coverage of all the tags distributed in the area, and minimizing the energy consumption and the interference caused by overlaps in the readers' signals and the tags' responses. Several attempts have thus been made to achieve optimal RFID network planning, mostly using multi-objective optimization [4] based on dedicated operators and algorithms. A large volume of published studies describes the role of RNP and presents several keys to achieving an optimal deployment of RFID readers. In [5], the authors presented a tentative reader elimination (TRE) operator based on the PSO algorithm, with the aim of deleting and recovering deployed readers during the search process. Detailed multi-objective optimization of RNP is presented in [6,7], in order to find the globally best Pareto-optimal solutions of the problem. In other major studies, and with the aim of proposing new ways to solve the RNP problem, [8-10] describe new optimization algorithms called, respectively, the hierarchical artificial bee colony (HABC), the multi-colony bacteria foraging optimization (MC-BFO), and the cooperative multi-objective artificial bee colony (CMOABC). The studies [11-13] provide in-depth analyses and results for the RNP problem, using new models to obtain an optimal deployment. Moreover, in [14] the authors reached robust optimal solutions of the reader deployment problem that are insensitive to uncertainties and uncontrollable variations of the optimization parameters, using robust multi-objective optimization; and in [15] the research provides a probabilistic coverage model for solving RFID network planning. In this work, we describe a new robust many-/multi-objective model for the 3D deployment and localization of the RNP problem involving several objectives and constraints. Moreover, we address the needs of the decision maker by providing a full decision aid platform to access, manage, solve, and optimize the RNP problem for academic and real-world applications of the RFID system.
Multi-objective Optimization
Optimization deals with problems in which one or more objectives, expressed as functions of real or integer variables, must be minimized or maximized simultaneously while respecting some constraints. The process of modeling, solving, and optimizing problems with two or more conflicting and not directly comparable goals is called multi-objective optimization. Problems with multiple objectives arise in many real-world applications and various fields [20,21], and are known as Multi-objective Optimization Problems (MOPs). They generally have more than one solution in the feasible region satisfying the trade-offs among the objective functions; the set of those solutions is called the Pareto set [4]. A basic formulation of a multi-objective optimization problem can be defined as

min_x F(x, p) = (f_1(x, p), ..., f_M(x, p))
subject to g_j(x, p) ≤ 0, j = 1, ..., J,
h_k(x, p) = 0, k = 1, ..., K,
x^L ≤ x ≤ x^U,

where (f_i), i = 1, ..., M, are the objective functions, x is the design variable vector, x^L and x^U are respectively the lower and upper bounds of x, p is the parameter vector, and the functions (g_j), j = 1, ..., J, and (h_k), k = 1, ..., K, are the constraints. In optimization studies, including multi-objective optimization, the main focus is placed on finding the global optimum or global Pareto-optimal solutions (named after the economist Vilfredo Pareto), representing the best possible objective values, where the quality of each solution is incomparable to the others without prior knowledge of preferences. In single-objective optimization, the superiority of one solution over another is easily determined by comparing their objective function values. In multi-objective optimization problems, on the other hand, the goodness of a solution is determined by dominance. Multi-objective optimization techniques ideally generate a set of non-dominated solutions, so the concept of a single global optimum no longer applies. The set of Pareto-optimal solutions is called the Pareto-optimal set, and its image in the objective space is known as the Pareto front. Thus, in multi-objective optimization it is important to find solutions as close as possible to the Pareto front. Figure 1 shows an example of the Pareto-optimal solutions of a multi-objective problem in which the two objectives f_1 and f_2 are to be minimized. A solution is dominated when it can be improved in one objective only by worsening another; all Pareto-optimal solutions are non-dominated, which means that no other solutions in the feasible region are superior to them when all objectives are taken into consideration.
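To make the dominance relation concrete, the following minimal Python sketch (illustrative only, not part of any cited implementation) checks Pareto dominance between two objective vectors and filters a candidate set down to its non-dominated front, assuming all objectives are to be minimized.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` Pareto-dominates `b` (all objectives minimized):
    a is no worse than b in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the non-dominated objective vectors (the Pareto front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: bi-objective points (f1, f2); the first three are mutually non-dominated.
print(pareto_front([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]))
# -> [(1, 5), (2, 3), (4, 1)]
```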
OptRFID Solution for an optimal RNP Problem
The OptRFID solution is a valuable tool for decision makers, researchers, and industry, designed to solve and optimize the deployment of readers in the RFID network.
It provides an interface to access, manage, solve, and optimize the RNP problem.
Our experience using and maintaining many academic and real-world RFID systems has helped us develop and improve the OptRFID design and implementation, with features such as simplicity, ease of use, flexibility, extensibility, and portability.
Traditional approaches to managing such optimizations suffer from several shortcomings, notably the absence of an easy-to-use, flexible, extensible, and portable software framework for solving the RNP problem. Indeed, the decision maker may be interested in consulting and identifying, in real time, all probable sets of efficient solutions for different scenarios during research or decision making. It is very difficult to find a tool that gathers and provides all of the above, owing to the complexity of the problem and to the large number and diversity of scenarios. Moreover, there is a lack of solutions that provide ready-to-use algorithm implementations requiring minimal optimization and programming effort. Our aim was to develop a leading tool for optimizing the deployment of RFID readers: all functions of the deployment problem, the algorithms used, and the optimization methods can be tracked and managed in real time through a cost-effective and innovative solution, OptRFID.
OptRFID's Mathematical Modeling
In this section, we outline the different objectives and constraints considered and included in the OptRFID platform for reaching the optimal deployment of RFID readers. The goal is to find a set of reader coordinates covering all the tags deployed in the whole area while, at the same time, minimizing the total number of readers and their interference. Figure 2 shows an example of an RFID deployment: readers are represented by red crosses and their interrogation ranges by red circles; tags are represented by blue diamond nodes. Throughout this paper, we use the following notations: • N_R: Number of available RFID readers.
• N_T: Number of tags deployed in the area.
• IR_j: Interrogation range of the j-th reader.
• d_ji: Distance between the j-th reader and the i-th tag.
Number of deployed readers
The first objective function in the OptRFID tool represents the number of deployed RFID readers, an important index for evaluating the performance of the deployment. Minimizing the total number of readers reduces the cost, and the optimization algorithm searches for the minimum value of this objective. It can be defined as

f_1 = Σ_{j=1..N_R} r_j,

where r_j = 1 if the j-th reader is deployed, and r_j = 0 otherwise.
Moreover, OptRFID finds the optimal position of each deployed reader and ensures that each reader can only be placed within a certain area. For a rectangular area of width W, height H, and depth D, the following constraint must be satisfied for every deployed reader j with coordinates (x_j, y_j, z_j):

0 ≤ x_j ≤ W, 0 ≤ y_j ≤ H, 0 ≤ z_j ≤ D.
Full coverage
The main objective for the deployment of RFID readers in OptRFID is to cover all the tags in the whole space: each tag must be within the interrogation range of at least one deployed reader. Therefore, we should minimize the number of non-covered tags, which can be formulated as

f_2 = Σ_{i=1..N_T} (1 − c_i),

where c_i = 1 if there exists j ∈ {1, ..., N_R} such that d_ji ≤ IR_j and r_j = 1, and c_i = 0 otherwise.
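As an illustration of how the coverage objective can be evaluated, here is a short Python sketch following the notation reconstructed above (the data layout is an assumption, not the OptRFID implementation): it counts, for each tag, the deployed readers whose interrogation range reaches it, from which the number of uncovered tags follows.

```python
import math

def coverage_counts(readers, tags):
    """For each tag, count the deployed readers whose interrogation range reaches it.
    readers: list of (x, y, z, interrogation_range, deployed_flag) tuples.
    tags: list of (x, y, z) tuples."""
    counts = []
    for tag in tags:
        n = 0
        for rx, ry, rz, ir, deployed in readers:
            if deployed and math.dist((rx, ry, rz), tag) <= ir:
                n += 1
        counts.append(n)
    return counts

def f2_uncovered(readers, tags):
    """Objective f2: number of tags not covered by any deployed reader."""
    return sum(1 for n in coverage_counts(readers, tags) if n == 0)
```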
Tags Interference
Tag interference occurs when a tag is interrogated by several readers at the same time, as shown in figure 3; it decreases the efficiency of the RFID system. It mainly occurs when at least two deployed readers in the area cover the same tag, and it can be reduced by minimizing

f_3 = Σ_{i=1..N_T} max(0, k_i − 1),

where k_i is the number of deployed readers j (with r_j = 1) such that d_ji ≤ IR_j.
Readers Interference
Reader interference arises when several readers interrogate at the same time, as shown in figure 3; it also decreases the efficiency of the RFID system. It mainly occurs when the interrogation ranges of at least two deployed readers in the area overlap, and it can be reduced by minimizing

f_4 = Σ_{j=1..N_R} Σ_{k=j+1..N_R} o_jk,

where o_jk = 1 if d_jk ≤ (IR_j + IR_k) and r_j · r_k = 1, and o_jk = 0 otherwise, with d_jk the distance between the j-th and k-th readers.
Power of readers
In an RFID system, the identification of objects relies on the communication process between the reader and the tag. This process depends on correlated factors and parameters; the ideal power received at an antenna is given by the Friis transmission equation

P_r = P_t G_t G_r (λ / (4πR))^2,

where G_t and G_r are the gains of the transmitting and receiving antennas, respectively, P_r is the power at the receiving antenna, P_t is the output power of the transmitting antenna, λ is the wavelength, and R is the distance between the antennas. Therefore, instead of using the power of the readers directly in the objective functions, we work with the distance between the antennas, denoted in this manuscript by d_ji (the distance between the j-th reader and the i-th tag), together with IR_j (the interrogation range of the j-th reader); optimizing the RFID system in this way is related to minimizing the power of the readers (objective f_5). Our OptRFID deployment optimization for the RNP problem is thus formulated as a multi-objective problem:

Min F(x, y, z, ir, r) = (f_1(x, y, z, ir, r), f_2(x, y, z, ir, r), f_3(x, y, z, ir, r), f_4(x, y, z, ir, r), f_5(x, y, z, ir, r)),

where N is the maximum number of RFID readers that can be deployed in the working area, (x_k, y_k, z_k) are the coordinates of the k-th reader, and r_k is its availability, which equals 1 if it is deployed and 0 otherwise.
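Putting the pieces together, the following Python sketch evaluates the five objective values for one candidate deployment. It is a minimal illustration of the model as reconstructed above (the exact OptRFID formulas and data structures are assumptions); the power term f5 is approximated here by summing reader-to-covered-tag distances, consistent with the distance-based proxy described in the text.

```python
import math

def evaluate_rnp(readers, tags):
    """Return (f1..f5) for one candidate deployment.
    readers: list of dicts {x, y, z, ir, r} with r in {0, 1}; tags: list of (x, y, z)."""
    f1 = sum(rd["r"] for rd in readers)                      # deployed readers
    covers = []                                              # per tag: deployed readers in range
    for t in tags:
        in_range = [rd for rd in readers
                    if rd["r"] and math.dist((rd["x"], rd["y"], rd["z"]), t) <= rd["ir"]]
        covers.append(in_range)
    f2 = sum(1 for c in covers if not c)                     # uncovered tags
    f3 = sum(max(0, len(c) - 1) for c in covers)             # tag interference
    deployed = [rd for rd in readers if rd["r"]]
    f4 = sum(1 for j, a in enumerate(deployed) for b in deployed[j + 1:]
             if math.dist((a["x"], a["y"], a["z"]), (b["x"], b["y"], b["z"]))
             <= a["ir"] + b["ir"])                           # reader overlaps
    f5 = sum(min(math.dist((rd["x"], rd["y"], rd["z"]), t) for rd in c)
             for t, c in zip(tags, covers) if c)             # distance proxy for power
    return f1, f2, f3, f4, f5
```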
OptRFID's Architecture, Design and Implementation
OptRFID is easy to understand and use, and responds to the specific needs of decision makers for RFID deployment. It automates the organization of the problem's optimization and facilitates traceability. Furthermore, the framework includes implementations of many algorithms, which can be used as drafts for developing and covering larger scenarios. It is also flexible with respect to decision needs, requiring minimal updates to solve the problem under different configurations, including algorithm parameters as well as those related to the problem and to variable variation. OptRFID is natively interfaced, but it can also be deployed separately to support the needs of any user. The solution therefore allows designing new problems, integrating new elements, or even combining different optimization methods and algorithms so that different scenarios can be evaluated and compared fairly.
Moreover, OptRFID provides a single platform for managing and optimizing the RFID network planning problem, i.e., a single place for entering, managing, and maintaining parameters, algorithms, and so on. This naturally brings multiple benefits: greater efficiency in serving decision makers; each user can have a configuration matching his own needs; a change of parameters is automatically propagated across the whole platform and all optimization processes; and there is only one system to install, maintain, upgrade, and get trained on.
OptRFID is divided into a set of modules that cover the entire spectrum of optimal RNP deployment: • RNP module, for defining the RNP problem parameters such as tag deployments (tag coordinates), reader coverage (reader interrogation range and its variation), and problem dimension (2D or 3D).
• Optimization module, for manipulating and choosing the optimization settings, such as robustness (the acceptable variation for each of the five objectives described in section 3.1), the solutions, the desired objectives (choosing which objectives to include in the optimization), and the constraints.
• Algorithm module, which includes all provided algorithms (SMPSO, NSGA-II), operators (mutation, crossover, etc.), and quality indicators (hypervolume, GD, IGD, Spread, Epsilon, etc.). Through this option, OptRFID allows the decision maker to choose specific settings for the algorithm used to solve the RNP problem.
Table 1 provides an example of the parameters exposed to the decision maker when using the SMPSO or NSGA-II algorithms. OptRFID is a parametrizable system, which means that the details of the RNP problem, tags, algorithms, and optimizations are defined through a specific set of parameters that can be changed at any time and from anywhere. Changing a parameter does not require any lines of code; on the contrary, it can be done by any user, and the new optimization can be started immediately.
OptRFID provides an easy-to-use interface: within two or three screens, a decision maker enters all the data required to define the parameters of all deployed components in the RFID network. Moreover, the OptRFID platform is provided with a database and an appropriate GUI. We opted for Oracle for the database, and for PL/SQL, PHP, JavaScript, JSON, Angular, etc. for the development. It is designed for traceability, secure management, and adaptability to the needs of decision makers. Figures 5 and 6 show some OptRFID screens for managing the algorithm and reader parameters.
Figure 4 shows the Conceptual Data Model (CDM), presenting the structure of the information system from the data point of view (the dependencies and relations between the various data of the information system).
Our platform provides pre-formatted data that allows external systems to reprocess and build proper statements out of the data provided by OptRFID. Moreover, to deliver the expected data for specific needs, OptRFID lets the user choose the result type from a set of outputs available in multiple formats (CSV, MATLAB, Maple, R, LaTeX tables, JSON, Oracle database, Excel, matplotlib, Scilab, etc.), giving clear and fast visibility into the optimal deployment for a specific problem.
Experiments were conducted to evaluate and validate the performance of the OptRFID platform in solving a benchmark with the provided optimization and design solutions. Table 2 presents some of the parameters used for solving the benchmark with OptRFID. The results are shown graphically in Figures 7 and 8 as the Pareto-optimal solutions of the problem for the desired objectives and the optimal deployment of RFID readers. The OptRFID tool provides the best overall performance by guaranteeing the quality of the non-dominated solutions and yields higher accuracy and efficiency in a faster and cheaper way, which facilitates decision aiding.
As perspectives, we plan to open the solution to the external world by providing standard interfaces (mainly batch or web services) that can be used by any user, offering great flexibility to decision makers and letting them deploy the OptRFID solution at the pace they wish. We also plan to integrate more algorithms and optimization types to give the decision maker more choices and visibility.
Conclusion
The main goal of the current study was to develop a global decision aid solution for a complete RFID system application based on practical and theoretical propositions. We proposed and developed a new mathematical model with the aim of achieving an optimal deployment for the 3D NP-hard problem of RFID Network Planning (RNP). As perspectives, we plan to present a new approach to deal with uncontrollable parameters and variables. We also aim to apply game theory and hierarchical strategies [16-19] in the context of RFID network planning, in order to achieve a robust optimal deployment of RFID readers with lower computation times. Moreover, we plan to adapt our global solution to many problems in various fields, such as logistics, aviation, military, security, and personal services.
Figure 1. Illustrative example of the Pareto front of a set of solutions in a bi-objective space.
Figure 3. Example of the interference of six tags covered by three RFID readers.
Table 1. Allowed parameters when using the SMPSO or NSGA-II algorithms.
Table 2. Properties of the multi-objective optimization of the RNP problem using OptRFID.
"Computer Science"
] |
Markov property of Lagrangian turbulence
Based on direct numerical simulations with point-like inertial particles, with Stokes numbers $\textrm{St}=0, 0.5$, $3$, and $6$, transported by homogeneous and isotropic turbulent flows, we present in this letter, for the first time, evidence for the existence of the Markov property in Lagrangian turbulence. We show that the Markov property is valid for a finite step size larger than a Stokes number-dependent Einstein-Markov coherence time scale. This enables the description of multi-scale statistics of Lagrangian particles by Fokker-Planck equations, which can be embedded in an interdisciplinary approach linking the statistical description of turbulence with fluctuation theorems of non-equilibrium stochastic thermodynamics and local flow structures. The formalism allows estimation of the stochastic-thermodynamics entropy exchange associated with the particles' Lagrangian trajectories. Entropy-consuming trajectories of the particles are related to specific evolutions of velocity increments through scales and may be seen as intermittent structures. Statistical features of Lagrangian paths and entropy values are thus fixed by the fluctuation theorems.
Introduction.-The physics of particles submerged in fluids has played a central role in the development of statistical mechanics, and in our current understanding of out-of-equilibrium systems.
Inertial particles are inclusions in the flow that are denser or lighter than the fluid, with sizes smaller or larger than the smallest relevant flow scale (the scale of the smallest eddies, also called the dissipative scale). Such particles are carried by the fluid, but they also have their own inertia, so the fluid and the particles have different dynamics. In the limit of point-wise particles with negligible inertia, the particles become Lagrangian tracers, which perfectly follow the fluid elements. In all these two-phase systems, the mechanisms by which turbulence affects the motion of the particles are not completely clear. As an example, turbulence can both enhance and hinder the settling velocity of inertial particles [1]. For heavy particles, an initially homogeneous distribution of particles may, after interacting with a turbulent flow, regroup into clusters forming dense areas and voids, a phenomenon called preferential concentration in which turbulence somehow unmixes the particles [2]. The velocity statistics of tracers and inertial particles are thus also different: while tracers sample the flow homogeneously, inertial particles only see specific regions of the flow, with either low acceleration or low vorticity, and their velocity can differ significantly from the fluid velocity. Only in recent decades have experiments and numerical studies achieved sufficient temporal and spatial resolution to allow the analysis of these phenomena. New numerical and experimental data have frequently contradicted theoretical models, and previous knowledge of fluid-particle interactions has had to be reconsidered even in simplified cases [3]. Furthermore, many open questions remain concerning inhomogeneous flows [4], finite-size [5,6] and non-spherical particles [7], among others. More recently, colloidal particles were used in the first experiments to verify fluctuation relations such as the Jarzynski equality [8], which links the statistics of fluctuating quantities in a non-equilibrium process with equilibrium quantities. Colloidal particles were also used to verify the thermodynamic cost of information processing [9] proposed by Landauer. Fluctuation theorems for more general problems, in which a physical system is coupled with (and driven by) another out-of-equilibrium system, would open potential applications in other complex coupled systems, as in soft and active matter. But what fundamental statistical relations are satisfied by particles that interact with a complex, out-of-equilibrium turbulent flow? Brownian motion set the path for the study of diffusion and random processes; the problem becomes more complicated when the particle motion is driven by turbulence, where even a simple point-wise passive particle poses a challenge to the modeling of many aspects of its dynamics [10,11].
Friedrich-Peinke approach to turbulence.-In this letter we show that the Markov property is valid for the dynamics of dense sub-Kolmogorov particles coupled to a turbulent velocity field of three-dimensional homogeneous and isotropic turbulence (HIT) at inertial-range time scales. To this end we use pseudo-spectral direct numerical simulations (DNSs) solving the incompressible Navier-Stokes equation with a simple point-particle model [12]. To characterize HIT it is customary to study the statistics of Eulerian velocity increments u_r = [u(x + r) − u(x)] · r/|r| for scales r = |r|. However, two-point statistics of velocity increments do not fully characterize small-scale turbulence [13], and many attempts at dealing with multi-point statistics have been considered. One way to do this is to use the Friedrich-Peinke approach [14], in which the stochastic dynamics of the velocity increments u_r are considered as they go through the cascade from large to small scales r. A central assumption is that the evolution of the stochastic variable u_r possesses a Markov property, "evolving" in r. Previous studies showed that u_r can be considered Markovian to a reasonable approximation [14-16], at least down to a scale close to the Taylor scale [16,17]. Furthermore, u_r satisfies a diffusion process [18],

−∂_r u_r = D^(1)(u_r, r) + [D^(2)(u_r, r)]^(1/2) Γ(r),

where a linear dependence on the value of the increment was found for the drift coefficient D^(1)(u_r, r), and a quadratic dependence for the diffusion coefficient D^(2)(u_r, r) [13,16,19-21]. Further details on the noise term Γ(r) and the empirical estimation of the drift and diffusion coefficients D^(1,2)(u_r, r) are given in [22] (Appendix A).
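For readers unfamiliar with how D^(1) and D^(2) are obtained empirically, the following Python sketch estimates them from data as the first and second conditional moments of the increment differences over a small scale step (a standard Kramers-Moyal estimate; the binning and step size here are illustrative choices, not those of the paper).

```python
import numpy as np

def kramers_moyal(u_r, u_r_minus, dr, nbins=30):
    """Estimate drift D1(u) and diffusion D2(u) at scale r from samples of the
    increment at scale r (u_r) and at the smaller scale r - dr (u_r_minus).
    Uses conditional moments of du = u_{r-dr} - u_r (cascade runs to smaller r)."""
    du = u_r_minus - u_r
    edges = np.linspace(u_r.min(), u_r.max(), nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    d1 = np.full(nbins, np.nan)
    d2 = np.full(nbins, np.nan)
    for k in range(nbins):
        sel = (u_r >= edges[k]) & (u_r < edges[k + 1])
        if sel.sum() > 10:                            # enough samples per bin
            d1[k] = du[sel].mean() / dr               # 1st conditional moment / dr
            d2[k] = (du[sel] ** 2).mean() / (2 * dr)  # 2nd moment / (2 dr)
    return centers, d1, d2
```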
Up to now, this approach has been applied to the Eulerian description of turbulent flows. Given the general interest in the relation between Eulerian and Lagrangian properties of turbulence, the question arises of how to apply this approach, and in particular how to verify the necessary Markov property, for Lagrangian turbulence and inertial particles.
DNS of inertial particles in turbulence.-To this end we use DNSs of forced HIT. A large-scale external mechanical forcing is given by a superposition of modes with slowly evolving random phases, following standard practices for the temporal integration and de-aliasing procedures. An adequate spatial resolution of the smallest scales, i.e., κη ≳ 1, is chosen [23]. Here, η is the Kolmogorov or dissipation length scale, η = (ν³/ε)^(1/4) (where ε is the kinetic energy dissipation rate and ν the kinematic viscosity of the fluid), and κ = N_DNS/3 is the maximum resolved wavenumber in Fourier space (with N_DNS = 512 the linear spatial resolution in each direction). The DNSs are characterized by a Reynolds number based on the Taylor microscale of Re_λ ≈ 240 (see [2] for details). Inertial particles were modeled using the Maxey-Riley-Gatignol equation in the limit of heavy point particles, which for a particle with velocity v at position x_p submerged in a flow with velocity u reads

ẋ_p = v,  v̇ = [u(x_p, t) − v]/τ_p,

with τ_p the particle Stokes time. For tracers, τ_p = 0.
There is no particle-particle or particle-fluid interaction (i.e., we use a one-way coupling approximation). The distinction between particles is quantified by the Stokes number St = τ_p/τ_η, the ratio of the particle relaxation time to the Kolmogorov time, defined as τ_η = (ν/ε)^(1/2).
Cascade trajectories.-To study the Markov property of particle trajectories, we study v(t) (the Lagrangian velocity of the particle at time t) for St > 0; for Lagrangian tracers with St = 0 the analysis is performed using v(t) = u(x_p, t) (the velocity of the fluid element where the tracer is located). Building velocity increments in time, u_τ = [v(t + τ) − v(t)] · e_i (where e_i is the unit vector in the direction x, y, or z specified in the figures), instead of the commonly used spatial scales r, we define for every particle trajectory a "cascade trajectory" (or "sequence" of velocity increments at decreasing scales) [u(·)] = {u_T, ..., u_τf} for different time separations or time scales τ, from the initial scale T to the final time scale τ_f with T > τ_f. The notation [u(·)] indicates the entire path through the hierarchy of time scales from large to small scales, as opposed to a distinct value u_τ; T ≈ 120τ_η and τ_f ≈ 0.2τ_η were chosen as references.
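As a concrete illustration, this Python sketch builds such cascade trajectories from a sampled Lagrangian velocity component; the scale grid and the increment convention u_τ = v(t + τ) − v(t) follow the reconstruction above and are assumptions as far as the paper's exact implementation details are concerned.

```python
import numpy as np

def cascade_trajectories(v, dt, scales):
    """Build cascade trajectories [u(.)] = {u_T, ..., u_tau_f} for one velocity
    component sampled at interval dt.
    v: array (n_particles, n_times); scales: decreasing time scales in seconds.
    Returns (n_particles, n_scales) increments u_tau = v(t0 + tau) - v(t0)."""
    lags = [int(round(tau / dt)) for tau in scales]   # scales in sample counts
    t0 = 0                                            # common trajectory start
    return np.stack([v[:, t0 + lag] - v[:, t0] for lag in lags], axis=1)

# Usage sketch with synthetic data (stands in for DNS output):
rng = np.random.default_rng(0)
v = np.cumsum(rng.normal(size=(1000, 2048)), axis=1) * 0.01  # toy velocity signals
u = cascade_trajectories(v, dt=0.01, scales=[10.0, 5.0, 2.5, 1.25])
print(u.shape)  # (1000, 4): one increment per scale, largest scale first
```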
The advantage of a Lagrangian perspective compared to a spatial (Eulerian) perspective is that Lagrangian particle trajectories have a clear physical meaning: each particle follows a path according to the fluid dynamics and its own inertia. In other words, the time increments have a direct interpretation as values corresponding to changes in the velocity along trajectories as the system evolves. In the Eulerian case, the spatial distances (or the "evolution" in r) have an indirect interpretation in the context of a Fokker-Planck formulation of the dynamics.
Results.-To verify the existence of the Markov property for Lagrangian turbulence, we integrate four sets of 1.3 × 10^5 particles, with St = 0, 0.5, 3, and 6, respectively.
In Fig. 1 a qualitative validation of the Markov property is shown as contour plots, based on the alignment of the single-conditioned p(u_τ2 | u_τ1) and double-conditioned p(u_τ2 | u_τ1, u_τ0) probability density functions (PDFs) of the increment datasets for a chosen set of three different time scales τ_0 > τ_1 > τ_2. Each scale is separated by ∆τ ≈ 13τ_η, which is equal to the Einstein-Markov coherence time scale ∆_EM for St = 3. As described below and in the discussion accompanying Fig. 2, ∆_EM is the smallest time scale for which the Markovian assumption can be considered valid. For Markovian processes the relation p(u_τ2 | u_τ1, ..., u_τ0) = p(u_τ2 | u_τ1) holds; for finite datasets, the condition p(u_τ2 | u_τ1, u_τ0) = p(u_τ2 | u_τ1) is commonly assumed to be sufficient. The close alignment between the PDFs (black and red solid lines in Fig. 1) confirms the validity of the Markov property. Note that while Fig. 1 shows results for St = 3, all our datasets give similar results. In Fig. 2 the Markovian approximation is checked systematically for different values of ∆τ, using the Wilcoxon test, a quantitative and parameter-free test that determines the Einstein-Markov coherence time scale ∆_EM (see [16,17] and [22], Appendix G, for details and a description of the essential notions). This test is a reliable procedure to validate the condition above [16,17]. Based on this analysis, the Markovian approximation is valid for time scales larger than or equal to a St-dependent critical time separation, ∆_EM ≈ 7, 10, 13, and 16τ_η for St = 0, 0.5, 3, and 6, respectively (see Fig. 2(a)). Accordingly, the complexity of the dynamics of inertial particles in turbulent flows, expressed by the evolution of the stochastic variables u_τ at decreasing time scales, can be treated as a Markov process, with an Einstein-Markov coherence time scale that increases with St. Our analysis presented in [22] (Appendices C and D) demonstrates that at the scale at which the Markov property holds, the non-Gaussian behavior is still present, and that our results are well linked to the range containing Lagrangian small-scale intermittency. Furthermore, we see that ∆_EM is the same (at fixed St) for all velocity components (see Fig. 2(b)). We performed a similar analysis for the fluid velocity at the particle position, u(x_p, t), shown in [22] (Appendix F). Interestingly, that Einstein-Markov time scale decreases with increasing St, unlike the behavior observed for v(t). While this remarkable behavior, and its effect on the fluid velocity observed by the particles, requires further study, it may be related to the fact that a larger St implies a larger drag, and therefore a tendency of the system to dissipate energy more efficiently and faster; this would result in a reduction of the associated Einstein-Markov time of u(x_p, t). Note that the normalized expectation value t(τ, ∆τ) used in the Wilcoxon test is expected to be close to 1 if the Markovian assumption is valid (see [16,17] and [22], Appendix G, for details).
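A minimal version of such a Markov check can be coded directly; the sketch below compares the single- and double-conditioned samples of u_τ2 within a bin of u_τ1 using a Wilcoxon rank-sum test from SciPy (bin widths and thresholds are illustrative; the paper's actual normalized statistic t(τ, ∆τ) is not reproduced here).

```python
import numpy as np
from scipy.stats import ranksums

def markov_check(u0, u1, u2, u1_bin=(-0.5, 0.5), u0_bin=(-0.5, 0.5)):
    """Compare p(u2 | u1 in bin) against p(u2 | u1 in bin, u0 in bin).
    u0, u1, u2: increment samples at scales tau0 > tau1 > tau2, one per trajectory.
    Returns the rank-sum p-value; a large p-value is consistent with Markovianity."""
    single = (u1 >= u1_bin[0]) & (u1 < u1_bin[1])
    double = single & (u0 >= u0_bin[0]) & (u0 < u0_bin[1])
    stat, pval = ranksums(u2[single], u2[double])
    return pval

# Toy usage: a scale-wise Markov chain should pass the test on average.
rng = np.random.default_rng(1)
u0 = rng.normal(size=100_000)
u1 = 0.8 * u0 + rng.normal(scale=0.6, size=u0.size)  # depends on u0 only
u2 = 0.8 * u1 + rng.normal(scale=0.6, size=u0.size)  # depends on u1 only -> Markov
print(markov_check(u0, u1, u2))
```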
The results shown above can be linked to stochastic thermodynamics [20,24] (also called stochastic energetics [25]), developed for many different physical systems [26][27][28][29], and to integral and detailed fluctuation theorems. Experimental studies of stochastic thermodynamics mainly focus on nanoscale or quantum systems, or when dealing with classical systems, on biological systems [30,31], which are assumed to be well off the thermodynamic limit so that the probabilistic nature of balance relations becomes clearer. We test these theorems starting with the integral case as it should be more robust.
For the case of passive particles, it is the turbulent surroundings that impose the non-equilibrium condition. Turbulence, and especially its energy cascade, can be understood as a process that takes a fluid, under specific conditions and parameters (the Reynolds number), from one non-equilibrium state into another through the combination of energy injection and dissipation. In the spirit of non-equilibrium stochastic thermodynamics [32], by using the Fokker-Planck equation it is possible to associate with every individual trajectory [u(·)] a total entropy variation [20,24,25,29,32] given by the sum of two terms,

∆S_tot = ∆S_sys + ∆S_med.

A thermodynamic interpretation of this quantity, based on the relation between heat, work, and internal energy, can be given [25,29,32]. ∆S_sys is the change in entropy associated with the change in the state of the system for a single realization of a process that starts from an initial probability p(u_T, T) and ends at a different final probability p(u_τf, τ_f). It is simply the logarithmic ratio of the probabilities of the stochastic process, so that its variation is

∆S_sys = ln [p(u_T, T) / p(u_τf, τ_f)].

The other term, the entropy exchanged with the surrounding medium from the initial to the final time scale, ∆S_med, measures the irreversibility of the trajectories; it is a functional of the whole cascade trajectory [u(·)] and of the drift and diffusion coefficients. The dynamics of inertial particles coupled to a turbulent flow are thus characterized by the empirically estimated drift and diffusion coefficients D^(1,2) (see [22] for a presentation of D^(1,2) for all Stokes numbers considered). Knowledge of the drift and diffusion coefficients allows an entropy value to be determined for each [u(·)], for which the thermodynamic fluctuations can be investigated. The integral fluctuation theorem (IFT), ⟨e^(−∆S_tot)⟩ = 1, is marked by the dashed line in Fig. 3(a). We find that for the four Stokes numbers the results are in good agreement with the IFT, which is a fundamental entropy law of non-equilibrium stochastic thermodynamics [29,32]. This is not trivial, as the IFT is extremely sensitive to errors in the empirical estimation of the D^(1,2) defining the Fokker-Planck equation. For this reason, our empirical observation of the validity of the IFT for the inertial particles provides another verification of our approach. Furthermore, inertial particles (i.e., for sufficiently large St) preferentially sample specific regions of the flow [2]; the remarkable observation here is that they always do so in agreement with this theorem. Figure 3(b) shows that, in addition to the IFT, the detailed fluctuation theorem (DFT) also holds for our data coming from a turbulent flow at finite Reynolds number; it is expressed as

p(∆S_tot) / p(−∆S_tot) = e^(∆S_tot).

Thus, beyond the IFT, the DFT expresses the balance (an explicit exponential symmetry constraint) between entropy-consuming (∆S_tot < 0) and entropy-producing (∆S_tot > 0) trajectories. See [22] for an illustration of p(∆S_tot) for St = 0.5, 3, and 6. Finally, we ask whether these findings on the entropy fluctuations are purely statistical or whether they bear some relation to flow structures; in particular, it is of interest whether the negative-entropy events have special features. We therefore study the mean absolute velocity increment trajectories conditioned on a specific total entropy variation, ⟨|u_τ|⟩_∆Stot. Figures 4(a) and (c) show that the total entropy variation is linked to distinct trajectories.
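Given a sample of total entropy values, checking the two theorems numerically is straightforward; the sketch below does so for synthetic Gaussian entropy data (for a Gaussian whose variance equals twice its mean, both the IFT and the DFT hold exactly, which makes it a convenient self-test; none of this uses the paper's actual data).

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 1.5                                     # mean entropy production
ds = rng.normal(loc=mu, scale=np.sqrt(2 * mu), size=1_000_000)

# Integral fluctuation theorem: <exp(-dS_tot)> should equal 1.
print("IFT:", np.mean(np.exp(-ds)))          # close to 1 (sampling noise remains)

# Detailed fluctuation theorem: ln[p(dS)/p(-dS)] should equal dS (slope 1).
hist, edges = np.histogram(ds, bins=200, range=(-6.0, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mirror = hist[::-1]                          # density at -centers, same grid
ok = (hist > 0) & (mirror > 0) & (centers > 0)
slope = np.polyfit(centers[ok], np.log(hist[ok] / mirror[ok]), 1)[0]
print("DFT slope:", slope)                   # close to 1 if the DFT holds
```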
Entropy-consuming trajectories are characterized by an increase in the averaged absolute values of the increments, ⟨|u_τ|⟩_(∆Stot=−2), with decreasing time scale τ, while trajectories marked by entropy production, ⟨|u_τ|⟩_(∆Stot=3), smoothly decrease their absolute increments with decreasing τ. This is further highlighted in Figs. 4(b) and (d): in these three-dimensional representations, the absolute velocity increments at the initial and final scales, T and ∆_EM, conditioned on the entropy, are shown for all trajectories over which the conditional averaging was performed. In [22] (Appendix E) we show that the majority of large and intermittent small-scale increments (the heavy-tailed statistics of the small-scale increment PDF) are contained in the negative-entropy trajectories.
Conclusion.-In this letter we showed the existence of the Markov property for Lagrangian turbulence. The dynamics of inertial particles coupled to a turbulent flow can be treated as a Markov process for a finite step size larger than a Stokes-number-dependent Einstein-Markov coherence time scale. This opens an interesting interpretation of the St number in terms of the Markov memory of the particles' trajectories, and is compatible with the picture that particles with more inertia filter the fast, small-scale fluctuations of the carrying flow: for particles with larger St (and thus larger particle response times), the Markovianization of trajectories by the turbulence takes place at longer times. This can be used to quantify effective parameters in cases where the particles' inertia is not clearly defined, such as finite-size or non-spherical particles.
Moreover, based on the theory of Markov processes it is possible to derive an explicit Fokker-Planck equation for the description of multi-scale statistics of Lagrangian particles. The Markov property corresponds to a three-point (two-scale) closure of the general joint multi-scale statistics [33], representing a step towards a more comprehensive understanding of turbulence, which is characterized by a challenging complexity with many high-order correlations. This results in a tremendous reduction of the degrees of freedom, and shows that the long-standing problem of closing the hierarchy of equations in turbulence can be approached in a Lagrangian framework.
The interpretation of the stochastic process as an analogue of a non-equilibrium thermodynamic process [20,24,33], when applied to Lagrangian turbulence, allowed us to show that inertial particles satisfy both the integral and the detailed fluctuation theorems in a strict sense. The verification of these theorems, which have contributed significantly to the understanding of open and out-of-equilibrium systems and to the development of statistical physics, adds strength to the validity of the Markov property and the description by a Fokker-Planck equation. Previous observations of the irreversibility of particles in turbulence considered a Jarzynski-like equality for the particles' energetics [34], but the slope was not one as expected for the DFT. Reproductions of such exact results are found very rarely in turbulence research, so the result provides a novel constraint for the particles' evolution. The fluctuation theorems may also be taken as a kind of new conservation law.
These fluctuation theorems also shed light on asymmetries in particle trajectories, as reported before in [34], and as shown here for individual trajectories depending on their entropy evolution. A connection can be made between entropy-producing and -consuming trajectories with specific evolution of velocity increments with decreasing time scale. Interestingly, our study also shows that particle trajectories are out-of-equilibrium while keeping ergodicity. These abnormal behaviors (or structures) in the Lagrangian paths are statistically embedded in the fulfillment of the two fluctuation theorems. Thus, the presented results may shed new light on the long-lasting debate of the statistical approach to turbulence versus the study of flow structures.
The case of out-of-equilibrium systems such as turbulence coupled to dense particles has applications in geophysics, astrophysics, and aerosol dispersion. In the cases studied here we have non-trivially coupled systems, whose particles do not always randomly sample the flow topology [2]. Note that preferential concentration for St ≠ 0 implies that particles do not sample the flow homogeneously, and a non-random sub-sampling of the Eulerian flow velocity, plus the particles' own inertia, could break down the IFT and DFT theorems. In spite of this, the theorems are satisfied for particles with different response times (albeit, in the limit St → ∞, the relations should not hold, as particles decouple from the flow). Also, the formalism proposed here only requires measuring the trajectory and velocity of the particles. Such a study is accessible with state-of-the-art experimental techniques, even at large Reynolds numbers, while most models describing the dynamics of these systems require much more detail, such as measurements of the particles' acceleration as well. Finally, we point out that the entropy analysis may also serve to compare different turbulent situations, as was successfully done via the statistics p(∆S_tot) for turbulent wave situations [35]. * * * This work has been partially supported by the ECOS project A18ST04, by the Volkswagen Foundation (96528), and by the Laboratoire d'Excellence LANEF in Grenoble (ANR-10-LABX-51-01).
"Physics"
] |
Using double-stranded RNA to prevent in vitro and in vivo viral infections by recombinant baculovirus.
Introduction of double-stranded RNA (dsRNA) into a wide variety of cells and organisms results in post-transcriptional depletion of the homologous endogenous mRNA. This well-conserved phenomenon, known as RNA interference (RNAi), is present in evolutionarily diverse organisms such as plants, fungi, insects, metazoans, and mammals. Because identification of the targeted mRNA by the RNAi machinery depends on Watson-Crick base-pairing interactions, RNAi can be exquisitely specific. We took advantage of this powerful and flexible technique to demonstrate that selective silencing of genes essential for viral propagation prevents viral infection in vitro and in vivo. Using the baculovirus Autographa californica, a rapidly replicating and highly cytolytic double-stranded DNA virus that infects many different insect species, we show for the first time that introduction of dsRNA from gp64 and ie1, two genes essential for baculovirus propagation, prevents viral infection in vitro and in vivo. This is the first report demonstrating the use of RNAi to inhibit a viral infection in animals. The inhibition was specific, because dsRNA from the polyhedrin promoter (used as a control) or unrelated dsRNAs did not affect the time course of viral infection. The most relevant conclusions of the present study are: 1) RNAi offers a rapid and efficient way to interfere with viral genes to assess the role of specific proteins in viral function, and 2) using RNAi to interfere with viral genes essential for cell infection may provide a powerful therapeutic tool for the treatment of viral infections.
Introduction of double-stranded RNA (dsRNA) into a wide variety of cells and organisms results in post-transcriptional depletion of the homologous endogenous mRNA (1). This well-conserved phenomenon, known as RNA interference (RNAi), is present in evolutionarily diverse organisms such as plants, fungi, insects, metazoans, and mammals (for review see Ref. 2). It is generally accepted that in plants RNAi may serve as a mechanism of defense against viral infections (3). The physiological role of RNAi in animals remains to be established, although possible roles in organism development, germ line fate, and host defenses against transposable elements and viruses have been suggested (4). In this regard, recent experimental evidence shows modulation of HIV-1 replication in human cell lines by exogenous dsRNA from several portions of the viral genome (5). In a different study, dsRNA conferred viral immunity against poliovirus in human cells (6).
Although the effect of dsRNA in preventing viral infections has been tested in cell lines, it remains to be established whether RNAi also works in whole organisms. To test this hypothesis we used a well-studied insect virus, the Autographa californica nucleopolyhedrovirus (AcNPV). This virus is a member of a group (family Baculoviridae) of large double-stranded DNA viruses that infect many different insect species (7). The AcNPV GP64 glycoprotein is a major component of the nucleocapsid of budded viruses (BV) and is required for BV entry into host cells by endocytosis (8); thus, GP64 is a key element for cell-to-cell infection and virus propagation (8). Another gene essential for baculovirus proliferation encodes the immediate-early protein (IE1). This phosphoprotein regulates the transcription of early viral genes (9). Deletion of the N-terminal domain of IE1 results in the loss of transcriptional activation, and the resulting viruses cannot replicate (9). Thus, these two proteins are excellent candidates for testing whether RNAi can prevent viral infection in vitro and in vivo.
Recombinant AcNPV, in conjunction with Spodoptera frugiperda (Sf9 and Sf21) insect cell lines, has been extensively used for the heterologous expression of a wide variety of genes (7). We have produced a recombinant baculovirus containing the genes for the green fluorescent protein (GFP) and β-galactosidase (AcNPV-GFP), which we have used as reporters of cell infection in Sf21 insect cells and larvae.
Because introduction of dsRNA results in powerful and selective gene silencing in different cells (including insect cells), one would expect that transfecting insect cells with dsRNA from gp64 prior to virus exposure could prevent virus propagation, because the BV produced by dsRNA-treated cells would lack GP64 protein.
In agreement with this prediction, cells infected with AcNPV-GFP and previously transfected with dsRNA from a portion of gp64 (dsRNA-gp64) do not show GFP fluorescence, and the GP64 protein could not be detected in the membrane of these cells. Because the very late polyhedrin promoter drives GFP production, the lack of GFP expression indicates that dsRNA treatment prevents viral infection in culture. Similar results were obtained with cells transfected with dsRNA from ie1 (dsRNA-ie1); in this case the inhibition was even stronger, as expected when interfering with the synthesis of a transcriptional activator essential for viral replication.
To test whether silencing the gp64 and ie1 genes may also interfere with viral infection in vivo, we used larvae of the insect Tenebrio molitor (mealworm beetle). Injection of AcNPV-GFP into the hemolymph of the larvae of this insect results in death of 100% of the larvae within the following 7 days. Dead insects show high levels of GFP fluorescence and β-galactosidase activity, the two reporter genes present in the recombinant AcNPV-GFP baculovirus. Injection of dsRNA-gp64 or dsRNA-ie1 one day prior to injection of AcNPV-GFP prevents larval death by over 90%, and the surviving insects do not show GFP fluorescence or β-galactosidase activity.
All these results show for the first time that the introduction of dsRNA into cultured cells and living animals, to interfere with the synthesis of viral proteins essential for baculovirus infection, results in potent inhibition of viral infection in vitro and in vivo. The most relevant conclusions of the present study are: 1) RNAi offers a rapid and efficient way to interfere with viral genes to assess the role of specific proteins in viral function (reverse genetics in baculovirus), and 2) using RNAi to interfere with viral genes essential for cell infection may provide a powerful therapeutic tool for the treatment of viral infections.
Production of Recombinant AcNPV-GFP Baculovirus-The cDNA for GFP (Clontech, Palo Alto, CA) was cloned into the pBB4 vector (Invitrogen, Carlsbad, CA) next to the polyhedrin promoter. The pBB4-GFP vector was recombined with linear AcNPV DNA following the manufacturer's instructions (Invitrogen). After two plaque assays, the recombinant baculovirus containing GFP (AcNPV-GFP) was amplified twice, and the titer was determined by plaque assay. This recombinant baculovirus was used in the studies described in this work.
dsRNA Synthesis-Double-stranded RNA was synthesized in vitro using the Megascript kit (Ambion, Austin, TX), following the manufacturer's instructions. Briefly, nucleotides 109330-109949 were amplified by PCR from the entire virus genome. This sequence corresponds to the first 619 nucleotides of gp64, as verified by sequence analysis. The sequence was cloned into the pPD129.36 vector multiple cloning site, which is flanked by two opposing T7 promoters (10). The T7 promoter sequence, including the fragment of gp64, was amplified by PCR using universal T7 primers, and this sequence was used for in vitro transcription with T7 polymerase to obtain the dsRNA. For the synthesis of dsRNA from GFP the entire gene sequence was used, and for polyhedrin the entire promoter sequence (GenBank X06637). The ie1 gene is formed from a promoter sequence and the sequence encoding the transcriptional activator protein; the dsRNA from ie1 was synthesized from nucleotides 19-470 (the ATG is at position 1), corresponding to the N-terminal domain of the IE1 protein. For single-stranded RNA production, the vector was linearized with HindIII and in vitro transcription was performed (positive strand); the antisense strand was synthesized after linearizing the vector with NotI. In all cases, 5 µg of each dsRNA was introduced into the cells using Cellfectin (Invitrogen, Carlsbad, CA).
Confocal Microscopy-Infected Sf21 cells were fixed 48 h post-infection with a buffer containing 3% paraformaldehyde (Sigma, St. Louis, MO) as previously described (11) and incubated with the specific GP64 antibody (AcV5; eBioscience, San Diego, CA). A secondary anti-mouse antibody conjugated with rhodamine (Zymed, San Francisco, CA) was used to detect GP64-specific fluorescence. Cells were analyzed using a Bio-Rad confocal system (Bio-Rad, Hercules, CA) attached to a Nikon Diaphot inverted microscope equipped with a 60× oil-immersion objective. GFP fluorescence was obtained by exciting the cells at 395 nm and reading fluorescence at 540 nm (green channel). For GP64 detection, the excitation wavelength was 570 nm and emission was collected at 590 nm (red channel). Double labeling is shown in yellow, the combination of the green (GFP) and red (GP64) channels.
β-Galactosidase Activity Assay-dsRNA-treated cells were pelleted and resuspended in 0.05 ml of 1× Reporter lysis buffer (Promega, Madison, WI). β-Galactosidase activity was quantified using the enzymatic hydrolysis of the synthetic substrate o-nitrophenyl-β-D-galactoside (ONPG; Research Organics Inc., Cleveland, OH). Reactions were run in triplicate on 96-well plates, started by the addition of 0.2 ml of 4 mg/ml ONPG at 37 °C for 30 min, and stopped by the addition of 0.05 ml of 1 M Na2CO3. The absorbance at 405 nm was measured in an enzyme-linked immunosorbent assay reader (Dynex MRX II), and β-galactosidase activity was determined using the equation: β-galactosidase activity = (Abs420 × 1.7)/(0.0045 × time × mg/ml protein in the lysate × ml of lysate used). The blank (uninfected cells exposed to ONPG) was subtracted in all cases.
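As a convenience, the activity formula above can be wrapped in a small function; this Python sketch simply transcribes the stated equation (the variable names and the example numbers are illustrative, not data from the study).

```python
def beta_gal_activity(abs_reading, time_min, protein_mg_per_ml, lysate_ml):
    """beta-galactosidase activity per the stated equation:
    activity = (Abs * 1.7) / (0.0045 * time * [protein] * lysate volume)."""
    return (abs_reading * 1.7) / (0.0045 * time_min * protein_mg_per_ml * lysate_ml)

# Illustrative numbers only: a blank-corrected absorbance of 0.45 after a
# 30-min reaction, 2.0 mg/ml protein in the lysate, 0.05 ml of lysate used.
print(beta_gal_activity(0.45, 30.0, 2.0, 0.05))  # activity in the assay's units
```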
Plaque Assay-Plaque assays were performed in six-well plates according to the Bac-N-Blue instruction manual (Invitrogen, San Diego, CA). In short, Sf21 cells were plated at 2.0 × 10^6 cells/well, allowed to attach for 30 min, and infected for 1 h at 27 °C with serial dilutions (in 1 ml of Grace's medium, 10% FBS) of the supernatant obtained from transfected and infected cells, as indicated in the figure legend (see Fig. 2). After the incubation period the medium was aspirated, and 2 ml of Grace's medium with 10% FBS and 1.5% low-melting-point agarose (Invitrogen) was added; finally, 1 ml of Grace's medium with 10% FBS was added over the agarose. Cells were monitored for GFP-positive plaque formation after 72 h. GFP-positive plaques were counted, and the plaque-forming units per ml (pfu/ml) were calculated by multiplying the number of GFP-positive plaques by the reciprocal of the dilution. All dilutions were repeated at least twice.
RNA Purification and RT-PCR Analysis-Total RNA was isolated from dsRNA-transfected or untreated cells using TRIzol reagent (Invitrogen, Carlsbad, CA) following the manufacturer's recommended protocol. All RNA samples were quantified by spectrophotometry and run on agarose gels to check the quality and integrity of the total RNA. To avoid DNA contamination, 5 µg of total RNA was treated with 15 units of RNase-free DNase I (Ambion, Austin, TX) for 20 min at 37 °C. After a phenol-chloroform extraction, reverse transcription reactions were done with the SuperScript One-Step RT-PCR system (Invitrogen) using gene-specific primers (0.2 µM). β-Actin: 5′-GATATGGAGAAGATCTGGCACCAC-3′ and 5′-TGGGGCAGGGCGTAGCC-3′; gp64: 5′-GAAAACAGTCGTCGCTGTCA-3′ and 5′-TATAGTCGACGAGCACTGCAACGCGCAAATG-3′; ie1: 5′-GCGCCGTATTTAATGCGTTT-3′ and 5′-GAGGAATTTCTATGCCGGTTTC-3′; gfp: 5′-ATATCCCGGGATGGTGAGCAAGGGCGAG-3′ and 5′-GCTCGTCGACCTTGTACAGCTCGTCCATGC-3′. For each RT-PCR reaction, 200 ng of RNA was used in a 50-µl reaction. A reverse transcription step (50 °C for 30 min and 94 °C for 2 min) was followed by 21 cycles of denaturation at 94 °C for 15 s, annealing at 55 °C for 30 s, and extension at 72 °C for 50 s. To ensure that there was no DNA contamination in the RT-PCR experiments, controls without reverse transcriptase were performed in all cases. In all presented data, RT-PCR products were resolved by 1.5% agarose gel electrophoresis and analyzed after ethidium bromide staining in a Typhoon 8600 variable-mode imager (Amersham Biosciences, Piscataway, NJ).
Electron Microscopy-Sf21 control (uninfected) or infected cells were fixed with 3% glutaraldehyde in saline buffer for 2 h at room temperature as previously described (12). For negative staining, viruses were purified from the supernatant of infected cells using the polyethylene glycol method (13). Briefly, the supernatant of infected cells was centrifuged to eliminate cell debris, and 20% polyethylene glycol in 1 M NaCl was added. The supernatant was centrifuged again, and the pellet was rinsed twice with distilled water, resuspended, and used for negative-stain studies using a scanning electron microscope (JEOL JSM-5410LV) at low vacuum (12).
Insect Infection-T. molitor (mealworm beetle) larvae were obtained at a local pet store and maintained in the laboratory in a plastic container. Larvae were injected in the hemolymph with 0.1 µg of the different naked dsRNAs or with distilled water (control). 24 h later, larvae were injected again in the hemolymph with 2 µl of AcNPV-GFP viral stock with a titer of 1 × 10^-7. Dead insects were assayed for GFP fluorescence and β-galactosidase activity. To assess GFP fluorescence, injected insects were examined with the confocal microscope using a low-magnification objective (10×, Nikon). For β-galactosidase activity, the insects were injected in the hemolymph with 2 µl of 0.05% X-gal in Me2SO.
These results show that introduction of dsRNA from the gp64 gene prevents the GFP fluorescence induced by the recombinant baculovirus. Because the very late polyhedrin promoter (PhP) drives GFP production, the lack of GFP expression indicates that dsRNA-gp64 treatment prevents viral infection in culture. Cells transfected with dsRNA-gp64 also did not show the β-galactosidase activity of the second reporter gene present in the recombinant AcNPV-GFP (Fig. 2B). The early-to-late (P_ETL) promoter drives β-galactosidase production; thus, using these two reporter genes, one can monitor early and very late viral gene expression. Transfecting insect cells with dsRNA-gfp prevents GFP fluorescence by promoting GFP mRNA degradation but, as expected, this does not appear to affect viral infection: cells treated with dsRNA-gfp showed β-galactosidase activity, indicating the progression of a typical viral infection (Fig. 2B). Furthermore, recombinant AcNPV-GFP is a lytic baculovirus, so cells infected with AcNPV-GFP will eventually lyse; even at a low m.o.i. of 0.05, all insect cells infected with AcNPV-GFP and treated with dsRNA-gfp were lysed 7 days after infection. In contrast, cells transfected with dsRNA-gp64 showed neither GFP fluorescence (Fig. 2A) nor β-galactosidase activity (Fig. 2B). These results are consistent with previous studies indicating that gp64 gene deletion results in noninfective baculoviruses (8). Transfecting the cells with dsRNA-ie1 inhibited GFP fluorescence and β-galactosidase activity even at m.o.i. values of 1 and 5 (Fig. 2). The IE1 transcription factor is essential for initiating the transcription of several early baculovirus genes; in fact, deletion of the ie1 gene results in viruses that cannot replicate (9). Therefore, the results presented here are consistent with the idea that interfering with the production of the initial transcriptional activator of viral genes (ie1) prevents the production of viral proteins and interferes with viral replication.
In agreement with all these results, the baculoviruses isolated from the supernatant of dsRNA-gp64- and dsRNA-ie1-treated cells were less infective when compared with untreated cells or cells transfected with dsRNA-gfp prior to baculovirus infection (Fig. 2C). As expected for cells exposed to dsRNA-gp64, only baculoviruses collected from cells infected at low m.o.i. values (0.05 and 0.5) showed reduced plaque-forming units (pfu). At m.o.i. 1 and 5 the baculoviruses produced by dsRNA-gp64-treated cells showed pfu indistinguishable from cells not exposed to dsRNA or cells treated with dsRNA-gfp (Fig. 2C). Notice that the effect of dsRNA is specific for the gene targeted, because cells exposed to dsRNA-gfp showed a significant reduction of GFP fluorescence but not of β-galactosidase activity. As expected from these experiments, baculoviruses isolated from dsRNA-gfp-treated cells showed pfu similar to cells not exposed to any dsRNA. The RNAi effects observed are specific for the double-stranded RNA, because transfecting the cells with single-stranded RNA prior to baculovirus infection did not alter the time course of viral infection and did not prevent GFP fluorescence or β-galactosidase activity (data not shown). A small reduction in GFP fluorescence of 20 ± 5% was obtained with the negative single RNA strand of gp64, compared with the large reduction of 92 ± 7% obtained with dsRNA-gp64 (n = 6). This result demonstrates that the inhibition induced is specific for the double RNA strand, as expected for an RNAi phenomenon.
dsRNA-gp64-treated Cells Do Not Express GP64 Glycoprotein-Confocal microscopy studies using a specific GP64 antibody showed protein localization at the cell plasma membrane. The pattern of localization of GP64 contrasted with the generalized fluorescence of GFP, as expected for a soluble protein. Fig. 3 shows representative images obtained with a rhodamine-labeled antibody for GP64 (red channel) and GFP fluorescence (green channel). As illustrated in the figure, transfection with dsRNA-gfp prevented GFP fluorescence but did not affect GP64 expression in insect cells infected with AcNPV-GFP at m.o.i. 0.05. Transfecting the cells with dsRNA-gp64 eliminated the fluorescence of both GFP and GP64, leaving only a few cells showing both signals. These cells may reflect cells infected by the initial application of recombinant AcNPV-GFP. Interestingly, when AcNPV-GFP was used at an m.o.i. of 1, dsRNA-gp64 could not prevent GFP fluorescence; however, the transfection prevented GP64 protein production, as illustrated by the lack of red fluorescence in Fig. 3. This result is consistent with the data presented in Fig. 2, suggesting that the initial viral application can infect most of the cells (therefore the GFP production is intact), yet the cells do not produce GP64 protein.
The transfection of cells with dsRNA-ie1 prevented GFP and GP64 production even at an m.o.i. of 1 (Fig. 3). Only a few cells per field showed both green and red fluorescence, because not all the cells were efficiently transfected (Fig. 3, bottom panel to the left). This result is also consistent with the findings using FACS. Transfecting the cells with dsRNA-PhP did not alter the GP64 and GFP fluorescence, showing values similar to those obtained from cells exposed to recombinant AcNPV-GFP alone (Fig. 3). The upper panel in Fig. 4A shows the GP64 and GFP protein content from cells infected at different m.o.i. values (1, 5, and 10) in the absence of dsRNA. For GP64, two bands were observed reproducibly in all assays. These bands may represent different glycosylation forms of the protein previously described (14).
The second panel in Fig. 4A shows the GP64 and GFP protein contents from cells transfected with dsRNA-gfp 48 h prior to infection. Notice the reduction of GFP protein content at m.o.i. 1 and 5, whereas the GP64 protein content was unaltered. The third panel in Fig. 4A shows the effect of transfecting the cells with dsRNA-gp64 prior to viral infection. In this case, GP64 protein content was significantly reduced at an m.o.i. of 1. At m.o.i. 5 and 10 substantial GP64 protein was detected. These results are consistent with the data previously shown using FACS and confocal microscopy.
Finally, the lower panel of Fig. 4A illustrates the effect of transfecting cells with dsRNA-ie1. Under these conditions, both GFP and GP64 proteins were dramatically reduced at all m.o.i. values explored. Consistent with these findings, cells transfected with dsRNA-ie1 and later infected with AcNPV-GFP could be maintained in culture for several months, even at m.o.i. 10. Insect cells divided normally, like uninfected cells. In contrast, cells transfected with dsRNA-gp64 or dsRNA-gfp and infected at an m.o.i. of 5 or higher could not be maintained in culture for more than a week. After 7 days in culture only cellular debris was observed, indicative of cell lysis as a result of the baculovirus infection.
To determine if the reduction in protein content in cells transfected with dsRNAs was the result of depletion of the specific mRNA, we performed RT-PCR experiments using primers specific for gp64, ie1, gfp, and β-actin from Sf21 cells (see "Experimental Procedures"). Fig. 4B illustrates a representative experiment obtained at an m.o.i. of 1. Uninfected cells (NI) showed the presence of β-actin mRNA, and this was confirmed by sequence analysis of the fragment amplified by the RT-PCR experiment. As expected, in uninfected cells gfp, gp64, and ie1 were not detected. Interestingly, infection with recombinant baculovirus resulted in reduction of the mRNA for β-actin. This result can be explained by the fact that, at the late phase of baculovirus infection, the majority of mRNAs are produced by the baculovirus α-amanitin-resistant RNA polymerase (15). The β-actin mRNA is observed again in cells transfected with dsRNA from gp64 or ie1. In these experiments we observed again the specificity of the RNAi effect; notice that interfering with gp64 (dsRNA-gp64 panel) significantly reduced the mRNA from gp64 but did not affect the ie1 mRNA. As expected, interfering with ie1 (dsRNA-ie1) significantly reduced the amount of gfp, gp64, and ie1 mRNAs, and β-actin levels were restored to normal due to the inhibition of viral infection. These results demonstrate that the reduction in protein content originates from the depletion of the respective mRNA.
Depleting the ie1 mRNA turned out to be a very efficient way to interfere with baculovirus infection. In fact, cells transfected with dsRNA-ie1 and later infected with AcNPV-GFP showed morphological characteristics of uninfected cells, as illustrated in Fig. 5. Panel A shows the typical morphology of an Sf21 insect cell infected with recombinant AcNPV-GFP. Notice the large amount of viral particles inside the cell nucleus. In contrast, no viral particles could be detected in dsRNA-ie1-treated cells, as illustrated in Fig. 5B. Purification of virions from supernatants of infected cells (see "Experimental Procedures") allowed the observation of multiple viral particles (Fig. 5C). In contrast, viral particles were extremely difficult to find in supernatants obtained from dsRNA-ie1-treated cells (Fig. 5D). These viral particles did not show any reproducible characteristic that could distinguish them from particles isolated from dsRNA-ie1-untreated cells. These viral particles may come from the initial viral infection or could be produced by cells that were not efficiently transfected with dsRNA-ie1. These results are consistent with the low pfu obtained with the supernatant of dsRNA-ie1-treated cells (Fig. 2C). The very low titer found in the supernatant of dsRNA-ie1- and dsRNA-gp64-treated cells confirms the poor viral content observed with the electron microscope (Fig. 5D). Interestingly, it has been previously shown that GP64 glycoprotein is required for virus budding from the cell (16). In fact, this glycoprotein is acquired by virions during budding through the plasma membrane of the infected insect cell, representing the final step in virus assembly (14).
Injecting Insects with dsRNA from gp64 or ie1 Prevents Baculovirus Infection in Vivo-To test if dsRNA from gp64 or ie1 could protect against AcNPV-GFP infection in vivo, we used larvae from the insect T. mollitor as a bioassay. Injection in the hemolymph with 2 μl of AcNPV-GFP resulted in the death of over 97% of the larvae within the following 7 days (68 dead of 70 injected). Dead insects showed high GFP fluorescence assessed by confocal microscopy (Fig. 6, inset box in the upper left corner of each panel) and high β-galactosidase activity, as illustrated in Fig. 6 (AcNPV-GFP, upper panel to the left). Uninjected larvae or larvae injected with distilled water did not show GFP fluorescence or β-galactosidase activity (H₂O, upper panel to the right), and over 95% of the insects survived the injection (4 dead of 60 injected). Larvae injected with 0.1 μg of nude dsRNA-ie1 24 h prior to the injection with AcNPV-GFP survived the baculovirus infection (6 dead of 80 injected). These insects did not show GFP fluorescence or β-galactosidase activity. Similar results were obtained with larvae pretreated with 0.1 μg of nude dsRNA-gp64, where 10 insects died of the 61 injected with AcNPV-GFP (data not shown). In contrast, injection with 0.1 μg of nude dsRNA-PhP did not protect against the subsequent injection with AcNPV-GFP. As expected, over 97% of the larvae died within the subsequent 7 days (34 dead of 40 injected, data not shown). Similarly, insects pretreated with dsRNA-gfp and later challenged with AcNPV-GFP died within the next 7 days (58 dead of 62 injected). Interestingly, these insects did not show GFP fluorescence.

Fig. 6 (legend): Lower panel to the right, larvae injected with dsRNA-ie1 and 24 h later injected with 2 μl of AcNPV-GFP. The bars in each panel represent the cumulative percentage of dead insects over the 7-day period. The numbers inside each panel indicate the total number of dead insects at day 7 divided by the total number of larvae injected. For insects injected with AcNPV-GFP alone, 68 of 70 larvae died by day 7 postinjection; therefore, the bar for day 7 indicates 97.14% dead insects (upper panel to the left). Only 6.6% of insects injected with water died by day 7 (4/60). Although 93.5% (58/62) of insects previously injected with dsRNA-gfp died, only 7.5% (6/80) of larvae previously injected with dsRNA-ie1 died at day 7 after baculovirus injection, illustrating the protective effect of dsRNA-ie1 against baculovirus infection. These insects did not show GFP fluorescence or β-galactosidase activity, as illustrated in the lower panel to the right. The inset in the left corner of each panel shows representative confocal microscopy green-fluorescence images of the head of the insect (the insect used is indicated by the arrow), illustrating the expression of GFP in infected larvae. Notice the low green fluorescence observed in insects injected with distilled water (background fluorescence). The larvae shown inside each panel are representative examples of insects injected with 2 μl of 0.05% X-gal in Me₂SO. Notice that only insects infected with AcNPV-GFP turned blue after X-gal injection, indicating the expression of the second reporter gene (β-galactosidase).
DISCUSSION
RNAi is a recently discovered phenomenon that is rapidly becoming a powerful tool for selectively depleting mRNA, thus resulting in reduction of the corresponding protein. This phenomenon has been found in worms, flies, several insects, plants, and mammals (1). The RNAi molecular mechanism involves several protein complexes, including the RNA-induced silencing complex responsible for the degradation of the homologous mRNA and RNA-dependent RNA polymerases that synthesize new dsRNAs and amplify the RNAi signal (1). Proteins essential for the RNAi phenomenon, such as Dicer and Argonaute, are highly conserved in fungi, plants, and animals. Although the exact physiological role of RNAi in animals remains unknown, recent evidence suggests that RNAi may be involved in organism development, germ line fate, and host defenses against transposable elements and viruses (17).
Although it is not clear whether RNAi functions as a defense mechanism in mammals, recent experimental evidence shows that modulation of HIV-1 replication in human cell lines by dsRNA is possible (5). In a different study, dsRNA conferred viral immunity in human cells against poliovirus (6). Whether or not this is a physiological function of RNAi, these recent discoveries open the possibility of using dsRNA to treat or prevent viral infections.
One important point that needs to be explored is whether the introduction of dsRNAs in mammals provides effective defense against systemic infections. The adequate distribution of the dsRNAs in the organism and the possible secondary effects of introducing dsRNAs in humans are two of several issues that would have to be solved before this technique is employed efficiently to prevent or treat viral infections. Interestingly, it has been recently shown that intravenous introduction of small interfering dsRNA in adult mice silences luciferase activity and endogenous Fas expression in vivo (18), which demonstrates that systemic application of dsRNA is feasible and effective for gene silencing in organisms.
In the present study we have used the rapidly replicating and highly cytolytic DNA virus A. californica in combination with the insect cell line Sf21 from S. frugiperda to assess the protective role of dsRNA from genes essential for baculovirus propagation. We show here for the first time that dsRNAs from segments of two genes essential for baculovirus propagation prevent virus infection in vitro and in vivo. We have demonstrated that this effect is specific for the dsRNA sequence used, because we can eliminate GFP fluorescence (one of the reporter genes present in the recombinant baculovirus) without affecting the time course of viral infection or the activity of β-galactosidase (the second reporter gene). Transfecting the insect cells with dsRNA from gp64 prevents viral infection when using low m.o.i. values. On the other hand, dsRNA from the ie1 gene prevented viral infection even when high m.o.i. values were utilized. Using larvae from the insect T. mollitor, we have shown that injecting dsRNAs from gp64 or ie1 confers resistance to recombinant baculoviruses. This protective effect was not obtained with dsRNA from gfp or from the polyhedrin promoter, the two dsRNA sequences used as controls. Even though injecting dsRNA from gfp prevented GFP fluorescence in insects, it did not stop viral infection by the recombinant baculovirus. As far as we know, this is the first report showing the protective effects of RNAi in vivo against a viral infection in animals. The protective effects of dsRNA against viral infections have been previously observed in plants (3).
An interesting result from our studies is the high gene-silencing efficacy obtained with long dsRNAs. It has been previously recognized that in nematodes and Drosophila, long dsRNA sequences are as effective as short ones, in contrast to mammals, where only dsRNAs about 21 nucleotides long are functional. The highly efficient RNAi observed in the present study was obtained with a single transfection. This was possible after optimizing the type of lipids used for transfection and the amount of dsRNA used in the mixture (see "Experimental Procedures"). Previous studies using null mutants of the gp64 gene have shown that the mutant baculovirus cannot replicate, providing evidence of the important role played by GP64 protein in cell-to-cell infection (8). Our results are in agreement with these original reports, because interfering with GP64 protein synthesis results in baculoviruses that cannot infect insect cells. Only when high m.o.i. values were used, where virus infection does not depend on the generation of new progeny, did we observe normal GFP production and β-galactosidase activity in infected cells. However, even when these cells produced GFP and β-galactosidase, no GP64 protein was detected by confocal microscopy or Western blotting analysis. These results suggest that the new viral particles produced by cells treated with dsRNA-gp64 would lack GP64 protein.
In agreement with this prediction, viruses collected from the supernatant of dsRNA-gp64-treated cells infected at low m.o.i. values showed significantly reduced pfu as compared with control. The possibility of producing viral particles missing specific proteins provides a powerful tool for reverse genetics with viruses. Using this technique one could rapidly explore the role of single or multiple genes in virus morphogenesis, assembly, and transport. A rapid method for screening the role of specific proteins in virulence may also be possible. Several groups have used genome-wide RNAi to deplete proteins encoded across entire chromosomes in the nematode Caenorhabditis elegans (19). Such genome-wide RNAi could easily be implemented for entire virus genomes, especially large complex genomes such as the 134-kb genome of A. californica.
Finally, in the genomic era, where scientists are gathering large amounts of information about the sequences of many genomes (20), we are learning that the next big challenge will be finding rapid and efficient tools to determine the roles played by thousands of genes with unknown function. RNAi might be one such tool (21).
"Biology"
] |
Genetic Hybrid Optimization of a Real Bike Sharing System
In recent years there has been a growing interest in resource sharing systems as one of the possible ways to support sustainability. The use of resource pools, where people can drop a resource to be used by others in a local context, is highly dependent on the distribution of those resources on a map or graph. The optimization of these systems is an NP-hard problem, given their combinatorial nature and the inherent computational load required to simulate the use of a system. Furthermore, it is difficult to determine system overhead or unused resources without building the real system and testing it in real conditions. Nevertheless, algorithms based on candidate solutions allow measuring hypothetical situations without the inconvenience of a physical implementation. In particular, this work focuses on using the past usage of bike loan network infrastructures to optimize the capacity distribution of the stations. Bike sharing systems are a good model for resource sharing systems, since they contain common characteristics, such as capacity, distance, and temporal restrictions, which are present in most geographically distributed resource systems. To achieve this target, we propose a new approach based on evolutionary algorithms whose evaluation function considers the cost of unused bike places as well as the additional kilometers users would have to travel in the new distribution. To estimate its value, we consider the geographical proximity and the trend in the areas to infer the behavior of users. This approach, which improves user satisfaction considering the past usage of the former infrastructure, has, as far as we know, not been applied to this type of problem and can be generalized to other resource sharing problems with usage data.
Introduction
Bike usage in cities has seen a dramatic increase in popularity, in both big and medium-size cities, with more than 2000 cities running a bike share program [1] and more than 3.5 million shared bikes [2]. Some of the main concerns in big cities are pollution and traffic jams; in 2016, bike sharing in Shanghai saved 8358 tonnes of petrol and decreased CO₂ and NOx emissions by 25,240 and 64 tonnes, respectively [3]. Both governments and citizens are pushing a new paradigm of mobility based on bike transportation. Nevertheless, crowded cities lack space for parking in central areas and, as reported by [4], dock-less bikes exist in a limbo between disposable and valuable, which makes them a target for abuse and abandonment. To bring some ease, a new model of bike sharing has been taking hold in the last few years [5]: Bike Sharing Systems (BSS). Based on the principle of a vehicle as a service, a wave of city institutions has implemented networks of docks in which shared bikes are parked. This strategy boosts the fluidity of traffic and keeps the public space under control [6], sparing users the need to ride their own bikes.
Capacity Distribution Problem
The other aspect to consider is the fixed part of the network: the docks. These elements are urban furniture with elevated costs of planning, execution and maintenance [12]. Even an oversized network may not be optimal because of the inner distribution of docking places in the city. Due to the nature of urban infrastructure, most of the effort on network capacity is spent at design and implementation time, based on general assumptions about user interest in the different city zones.
The uncertainty over the behavior of such a complex system will only be cleared up once the system is live. Network reliability is usually measured as the percentage of time that bikes or docks are unavailable, because those dead periods ultimately translate into lost requests, either by a user who cannot find a place to dock a bike or by a user who cannot find a bike to pick up at a station.
There are many proposals that optimize the network capacity distribution. The authors of [13] consider the number of bikes, and the authors of [14] consider the dynamic reallocation of bikes. The allocation of the layout is handled in [15], which relies on the idea of clusters and a greedy heuristic.
There have been a number of studies examining BSS functionality and human biking behavior. The authors of [16] cite two types of data used in bike sharing research: data based on automatic stocks (stations) and data based on trips. That paper states that stock-based analysis hardly reflects movement patterns across a city, but it does reflect fluctuations in demand and availability. Stock-based data have been used for availability prediction: in [17], the authors studied stock behavior through clustering to predict availability in the Barcelona BSS, and in [18] they obtained a similar outcome in Paris.
Capacity and User Satisfaction Optimization over a Real Usage Simulation
Reference [19] uses an optimization that considers network reliability by means of two concepts, zero-vehicle time and full-port time, which reflect the duration of vehicle shortage and of parking-stall unavailability at the stations, respectively. Moreover, this proposal introduces the notion of lost users of the system, bringing in the concept of user satisfaction, i.e., not forcing the user to travel to other stations or shift to other travel modes. The most accepted measures in the literature are the ones based on the unsatisfied demands of users.
Other authors measure the same concept of loss of opportunity with different naming but the same indirect calculations based on empty/saturated stations and a probability of requests over time. References [20,21] identify problematic stations where users are unable to obtain a bike or a place over time.
The capacity distribution over existing stations, considering user satisfaction and budget as the number of docks needed at each station, is a combinatorial problem with an exponential number of possible solutions.
The concept of user satisfaction in [22] measures the probability of a request being served over a period of time by means of a randomized simulation of requests. Even though it is based on usage, from our point of view this approach lacks granularity. In big cities, it is common to have several stations in the same place or very close to each other, and the user may use one of them after not finding an available place nearby.
This work proposes the combination of two metrics of BSS performance:
• User satisfaction, based on the number of kilometers that a user has to walk or ride in order to pick up a bike, or to leave the bike at a closer station, when the originally selected station is, respectively, empty or full.
• Total spare capacity, defined as the percentage of empty docks relative to the total number of bicycles in the network, used to optimize the costs of the network without impacting functionality.
The main goal of this work is to maximize user satisfaction, taking into account the cost of the system and the limitations, in order to find the best distribution of the station capacities based on real usage.
Materials and Methods
This paper shows an optimization methodology based on candidate solution algorithms, such as local searches and genetic algorithms. In optimization, a candidate solution is a member of a set of possible solutions to a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem; it is simply a member of the set that satisfies all constraints. Within this section, we describe the problem, the candidate solution definition, the origin of our data, and the details of the algorithms that realize the optimization of the BSS capacity distribution.
The Origin of Data
The data used in this paper are distributed under the Open Licence from Etalab and were collected using the connector provided by JCDecaux [23]. This connector's API provides, for a given city, an extensive set of attributes in real time. We selected and retrieved the number of stations and their capacity and occupation every 5 min, in order to capture the dynamics of the users. Within this interval, we transform the occupation into requests for bikes or docks by subtracting consecutive occupation values.
The dataset is composed of the following fields: City, the name of the city; Station, a unique numeric identifier for the station in the city network; Capacity, the number of docks at the station; Docks and Bikes, the number of docks and bikes, respectively, available at the station; Latitude and Longitude, the geographical location of the station; and, finally, Date, the instant of time at which the measure was taken. Table 1 shows a real sample of several tuples extracted for the city of Santander in April 2019. As mentioned previously, these data are processed to produce the requests and also to calculate several non-time-dependent constants, such as the distance between stations, the number of bikes in the network, the capacity of each station and the total capacity.
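As an illustration of this preprocessing step, the sketch below loads the snapshots and builds the occupation matrix and the time-independent constants. The file name and variable names are ours, not the paper's; the columns follow the fields just described.

```python
import pandas as pd

# Hypothetical file name; columns match the fields described in the text.
df = pd.read_csv("santander_2019_04.csv", parse_dates=["Date"])

# Occupation matrix O*: one row per 5-min snapshot, one column per station,
# holding the number of bikes docked at that instant.
occupation = (df.pivot_table(index="Date", columns="Station",
                             values="Bikes", aggfunc="first")
                .sort_index())

# Non-time-dependent constants: per-station capacity and coordinates.
stations = df.groupby("Station")[["Capacity", "Latitude", "Longitude"]].first()
```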
Defining a Candidate BSS
The candidate solution for the problem of capacity optimization of a BSS in a time frame can be defined over a city with a series of S stations, each with an exact and constant capacity for the selected period. For this, we use C = (c_1, …, c_S) for the capacity vector of the city, with c_j being the capacity of station j in this configuration.
We must note that the study covers T periods, so we run the simulation over a time window with discrete increments from 0 to T. The capacity vector of a distribution is constant over time.
The occupation matrix (O) keeps the occupation number of each station at every stage of the considered time window.
Note that o_i^j, the occupation of station i at time j, is restricted by o_i^j ≤ c_i for all j ∈ {0, …, T}; that is, a station will not accept more bikes than its number of docks. The remaining dock requests will have to be redirected to other stations. Based on the premise that the number of available docks in the BSS has to be greater than or equal to zero, we define the spare capacity, sp_i^j, as the number of available docks at station i when all the bikes are inside the stations at time j; it must always be positive (or zero).
The occupation matrix does not show the system's dynamics needed to perform a simulation, just the status of each station at each time inside the time window; for this purpose, we are interested in capturing the movements of bicycles, that is, the number of bicycles deposited or picked up at a station between two consecutive instants of time.
In order to measure these movements, we used real data extracted from the source, as explained in Section 2.1.
We define C* and O* as the capacity and occupation captured from the real data, which will be transformed into the dock/bike requests made over this time window. We call this the request matrix (Δ), obtained by subtracting consecutive occupations:

Δ^j = (O*)^j − (O*)^{j−1}, for j ≥ 1.   (3)

In order to apply this Δ to an alternative candidate solution, we assume that the initial state of the stations is empty; therefore, the first request for every station is the occupation of the base case at time 0, i.e., Δ^0 = (O*)^0.
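A one-function transcription of this transformation (our naming; the paper defines Δ only mathematically):

```python
import numpy as np

def request_matrix(occupation):
    """Build the request matrix Δ from the real occupation matrix O*.

    occupation: array of shape (T + 1, S), one row per 5-min snapshot.
    Δ^0 = (O*)^0, since the stations are assumed initially empty, and
    Δ^j = (O*)^j − (O*)^{j−1} for j ≥ 1: positive entries are bikes
    arriving (dock requests), negative entries bikes leaving (bike requests).
    """
    O = np.asarray(occupation, dtype=float)
    return np.diff(O, axis=0, prepend=np.zeros((1, O.shape[1])))
```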
From now on, Δ will reflect the behavior inherent in the real data record, in the form of requests applied over a new capacity distribution (candidate solution). Consequently, this chain of requests may generate a different occupation sequence and lead to a redistribution of requests to other stations due to empty/full stations in the different capacity distribution.
Measuring the Performance of a Candidate BSS
The simulation of movements over a specific BSS capacity distribution reproduces the system state at a given time. To our knowledge, existing simulations of BSS mostly use hourly demands averaged over weekdays, which may not reflect particular events because of the averaging of the data. Using real trends in an accumulative model may help gain a better understanding of the model limits without having to verify in detail the accuracy of the probabilities of generic simulations against real conditions.
From the point of view of our simulation, each Δ_i^j is interpreted as a request made by a user at a certain time j to obtain or to park a bike at a specific station i. The global process can be summarized as the distribution of the Δ requests (dock or bike demands) over a set of S stations during the time interval {0, …, T}. Figure 1 presents the algorithm for request distribution in a candidate solution, in which each station i at a time j receives Δ_i^j requests, with an outcome measured as the number of kilometers traveled by users when they have to move from the original station to a new one due to a lack of places/bikes. In the case that some requests (p) (either for a bike or for a dock) cannot be attended by the current target, we allocate as many as possible, updating the current occupation and keeping the remainder of Δ in p.
The remaining requests (the updated p) are distributed, following the same process, over a set of distance-ordered stations, from nearest to farthest, updating the target. Every time a request is attended by a station different from the original one where the request was recorded, the distance from this target to the original station is added to a kilometer accumulator. This variable keeps the sum of kilometers traveled by the users due to failed requests (F) in the new distribution (a code sketch of this loop follows below). The formula that measures the number of kilometers produced by the whole set of requests in the time window is

F_km = Σ_{f ∈ F} d(o_f, t_f),   (4)

where o_f and t_f are the origin station and the attending station of failed request f, and d is the distance between them.

Figure 2 exhibits the trips made by users when this process is applied to a candidate solution different from the original one. Red circles stand for the origin station, blue triangles stand for the target station, and the thickness of each line is proportional to the number of users that performed the trip. Sometimes stations serve as origin and destination at different moments of the simulation, revealing the weakness of areas that lack docking stations or bikes, depending on the case. As mentioned in the previous section, the proposed measure of the performance of a distribution of docks over a set of predefined stations in a BSS is based on total capacity (cost) and user satisfaction (amount of extra kilometers).
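The following sketch restates the request-distribution loop just described. Function and variable names, and the haversine distance, are our assumptions; the paper specifies only distance-ordered redirection and a kilometer accumulator.

```python
from math import radians, sin, cos, asin, sqrt

def haversine(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def simulate(capacity, delta, neighbours, dist):
    """Distribute every request over a candidate capacity vector.

    capacity:   candidate solution C, docks per station.
    delta:      request matrix, one row per time step.
    neighbours: neighbours[i] = stations sorted by distance from station i.
    dist:       dist[i][k] = distance in km between stations i and k.
    Returns the final occupation and the extra kilometers (F_km) traveled
    because of failed requests.
    """
    occ = [0] * len(capacity)
    extra_km = 0.0
    for step in delta:
        for i, p in enumerate(step):
            p = int(p)
            # Original station first, then ever-farther neighbours.
            for target in [i] + list(neighbours[i]):
                if p == 0:
                    break
                if p > 0:    # dock requests: bikes being returned
                    served = min(p, capacity[target] - occ[target])
                else:        # bike requests: bikes being picked up
                    served = -min(-p, occ[target])
                occ[target] += served
                p -= served
                if target != i:
                    extra_km += abs(served) * dist[i][target]
            # A request still unserved after visiting every station would be
            # counted as failed; omitted here for brevity.
    return occ, extra_km
```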
Regarding capacity usage, we consider the total spare capacity as the percentage of non-needed docks in a BSS, calculated as the ratio of non-used docks in the case that all bikes are located at stations.
The total capacity, T_c, is the sum of all station capacities in the candidate solution. The total number of bikes is known by the institutions managing the BSS but is not directly provided by the data. This constant is calculated using the number of docks used in the real case O* at the moment of highest occupation. As some users may be moving at the chosen instant, traveling from one station to another, a safety margin ξ is added to prevent the possibility of not having enough docks available for all the bikes present in the real case.
Both parameters are independent of time; they can be calculated once, when the data are extracted, and applied to every candidate solution evaluation.
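A minimal sketch of both constants, under the assumptions stated above (the 5% value of ξ is our placeholder; the paper treats it as a parameter):

```python
import math

def total_bikes(occupation, xi=0.05):
    """T_b: peak simultaneous occupation of the network, padded by the
    safety margin ξ for bikes in transit at the sampled instant."""
    peak = max(sum(row) for row in occupation)
    return math.ceil((1 + xi) * peak)

def spare_fraction(capacity, t_b):
    """Total spare capacity: share of docks left empty when every one of
    the T_b bikes in the system is docked, relative to the bike count."""
    t_c = sum(capacity)          # total capacity T_c
    return (t_c - t_b) / t_b
```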
Measuring the System Performance Based on a Real Usage
As expressed in Equation (3), the real occupation matrix (O*) is transformed into a matrix of requests Δ to be applied over a different capacity distribution. The real case C* is usually also known as the base case C_b, and we will use the two terms interchangeably. The fitness measure (Equation (5)), applied to C_b, will not generate any user-satisfaction penalty (no extra kilometers), because each request was attended in C_b; therefore, no failed-request information was recorded.
Although we might guess that some users were not attended, even if we do not have evidence to prove it, we argue that failed requests are prone to happen when a station is full and a user requests an empty dock or, vice versa, when a station is empty and a user requests a bike.
To capture these effects, we propose the use of trends of Δ, named virtual movements (θ), to model the probability of a failed request happening even in C_b. The average Δ of these stations may predict the future demand on them; e.g., if station i becomes full (o_i = c_i) at a given moment j, the probability of it being full at j + 1 is quite large. Trends are calculated over a time window (V) of station movements at time j as the average of the last V movements when the station becomes full or empty and Δ becomes zero.
The θ_i's are applied in the simulation in the same way as Δ and are called virtual requests. A virtual request is used to calculate extra kilometers (named virtual extra kilometers) in the same way as Δ (see Algorithm 1) but does not change the occupation of the stations, because this would change the total number of bikes in the system. Thus, this concept tracks the inertia of Δ to predict future demands but does not change the occupation when applied.
Based on Equation (10), each θ_i^j is used as a virtual request made at a certain time to obtain or park a bike at a specific station and is evaluated by Algorithm 1.
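A sketch of the trend computation, assuming that V counts 5-min steps and that only non-zero movements enter the history (both our reading of the description above):

```python
def virtual_requests(delta, occupation, capacity, V=6):
    """θ: when station i is pinned (empty or full) and its recorded Δ is
    zero, emit the average of its last V non-zero movements as a virtual
    request; otherwise θ is zero."""
    T, S = len(delta), len(delta[0])
    theta = [[0.0] * S for _ in range(T)]
    for i in range(S):
        history = []
        for j in range(T):
            if delta[j][i] != 0:
                history.append(delta[j][i])
            elif occupation[j][i] in (0, capacity[i]) and history:
                window = history[-V:]
                theta[j][i] = sum(window) / len(window)
    return theta
```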
Combining these pieces, F_cap, F_km and F_VKm, the global measure can be summarized as a fitness function (Equation (11)): a function that expresses the quality of the proposed capacity distribution in terms of capacity cost and user satisfaction.
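Since the text does not reproduce the weighting of Equation (11), the sketch below assumes a linear combination in which the %SpareFactor of Table 2 weights the spare-capacity term against the real and virtual extra kilometers, reusing the simulate and spare_fraction sketches above:

```python
def fitness(candidate, delta, theta, neighbours, dist, t_b, spare_w=0.10):
    """Hedged stand-in for Equation (11): capacity cost plus real and
    virtual extra kilometers. The exact published weighting may differ."""
    _, f_km = simulate(candidate, delta, neighbours, dist)
    # Virtual requests are routed the same way; here their occupation effect
    # is simply discarded, whereas the paper interleaves them with Δ without
    # updating the occupation.
    _, f_vkm = simulate(candidate, theta, neighbours, dist)
    f_cap = spare_fraction(candidate, t_b)
    return spare_w * f_cap + f_km + f_vkm
```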
Behavior of a Candidate Solution BSS
Simulations offer the chance to evaluate a possible scenario (candidate solution) without the inherent costs of a physical implementation or a prototype; while the simulation is being executed, indicators can be recorded that help decision makers understand the differences between candidate solutions.
The simulation of requests over a BSS is evaluated in this paper mainly through the distance the user has to travel (Equations (4) and (10)); however, the number of failed requests is also, per se, a valuable indicator of the performance of a candidate solution. The number of times a station becomes empty or full is also useful information, because this situation indicates a possible point of improvement in the network, either by dynamically moving bikes from some stations to others or as a measure of strength between similar candidate solutions.
Apropos of the evaluation of user satisfaction, the kilometers may be described in more detail by breaking up total kilometers into two separate indicators: those produced by a failed request for a bike, which are traveled on foot, and those produced by a failed request for a dock, which are traveled by bike. These two indicators also inform about differences between candidate solutions in their ability to serve bikes or docks.
The analysis of a simulation over the original distribution may serve as a tool to detect future improvements; for example, a stress factor can be introduced that multiplies the number of requests in order to predict where the capacity has to be reinforced, although this alone does not propose a better distribution. Some examples of the use of the named indicators to compare different solutions are described in the results section.
Optimization Based on Candidate Solutions
Simulating a candidate solution in a BSS means applying some inputs (bike or dock requests) over a distribution of capacities, C. This is a standard process of improvement validation over existing infrastructures in traffic and urban design [24], but it also serves as the building block for some of the most powerful optimization strategies: genetic algorithms and local searches. As long as we are able to assign a numerical value to a candidate solution, we can compare it with other ones to minimize or maximize, depending on our goals. We can guide the changes between different proposals using expert knowledge and improve the details while trying to understand the system behavior under those changes. However, the combinatorial nature of the problem and the non-evident relations between parts of the solution may hinder this discovery process. Optimization algorithms do not rely on such knowledge and can drive the problem through a solution search space where the target is the minimum/maximum value of the candidate solution.
The candidate solution is defined in this problem as C = (c_1, …, c_S); the number of docks per station may differ from the original C_b, but the number of stations S remains the same, that is, we will neither create nor remove any existing station. As the minimum size of a station capacity is not restricted, except for being greater than or equal to zero, the optimization could in fact remove all activity from a station by setting its capacity to zero. Candidate-based optimization algorithms need an evaluation function, named in this work Fitness (Equation (11)), and some way of generating new valid candidates that may improve previously generated candidates once evaluated.
The candidate solution must be valid from the point of view of the network definition; therefore, capacity modifications cannot be made with full degrees of freedom. There is a (minimum) fixed number of bikes (T_b, Equation (8)); therefore, the total capacity (T_c) cannot be smaller than this number, because each bike has to have at least one dock in which to be allocated.
Optimization algorithms based on candidate solutions rely on a proper equilibrium between exploration and exploitation. In the next sections, we consider several related algorithms that will be compared in the results section in order to weigh computational effort against stability and the quality of the results.
Best Improvement Local Search
A local search algorithm [25] treats the optimization as a search over the space defined by the possible variations of the elements of a candidate solution. Local optimality is achieved through local perturbations; this operation is called neighbor generation in most local search varieties. The inner components of the candidate solutions in the BSS problem are the station capacities. As each c_i (station capacity) can take values from 0 to n (not restricted, by definition), neighbors can be generated by adding an amount to, or subtracting it from, a number of the c_i.
These two parameters, the percentage of positions to be modified (pp) and the maximum amount to be added to or subtracted from the selected stations (max_am), define the exploration capacity of a hill-climbing local search (see Algorithm 1). The initial point in our proposal is C_b, but the search could be initialized with any valid solution. Regarding the other parameters, the number of neighbors generated in each iteration expresses the depth of the search through a randomized generation of neighbors, instead of an exhaustive loop over the neighborhood defined by pp and max_am; the stop criterion is a given number of iterations without improvement of the best solution. The combination of these two parameters allows flexible management of exploitation and exploration in the local search.
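A compact sketch of the neighbor operator and the hill-climbing loop with these parameters (the defaults echo Table 2; t_b defaults to the Santander bike count; all are assumptions for illustration):

```python
import random

def neighbour(c, pp=0.10, max_am=10, t_b=198):
    """Perturb a fraction pp of the stations by up to ±max_am docks, keeping
    every capacity non-negative and the total capacity at least T_b."""
    while True:
        n = list(c)
        for i in random.sample(range(len(n)), max(1, int(pp * len(n)))):
            n[i] = max(0, n[i] + random.randint(-max_am, max_am))
        if sum(n) >= t_b:
            return n

def best_improvement(c0, fit, neighbours_per_iter=50, max_epochs=20):
    """Best Improvement Local Search: sample a batch of neighbours per
    iteration, keep the best, stop after max_epochs without improvement."""
    best, best_f, stale = c0, fit(c0), 0
    while stale < max_epochs:
        cand = min((neighbour(best) for _ in range(neighbours_per_iter)),
                   key=fit)
        cand_f = fit(cand)
        if cand_f < best_f:
            best, best_f, stale = cand, cand_f, 0
        else:
            stale += 1
    return best
```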
Variable Neighborhood Local Search
Best Improvement Local Search is based on a neighbor-generation function with a fixed range (number of changes made to the given candidate). Such an algorithm is good enough to test the fitness function, the validity of candidates and the concept itself, but it has poor exploration power, depending on the input parameters pp and max_am. Thus, Variable Neighborhood Search [26] is proposed as an improvement that overcomes the fixed search range, keeping the same neighbor-generation function used in the Best Improvement Local Search. This variation has a function, VNSNeighbor, that increases the number of positions changed in a single neighbor generation every time the Best Improvement Local Search stagnates. This variable range in the neighbor-generation operator provides a way of escaping from local optima, giving more exploration power to the search without losing exploitation, thanks to the dynamic reduction of the range k once a new local optimum is found. The algorithm ends once the maximum range is reached without improvement of the best solution (Algorithm 2).

Genetic Algorithm

Genetic algorithms are population-based algorithms [27] widely used for BSS problem optimization. Some recent examples of related problems solved with genetic algorithms and local searches in vehicle networks can be seen in [28,29]. These algorithms use an evolution paradigm to select the best candidates (named chromosomes) of the population, mix their positions (named genes) in a crossover, and perform neighbor-like operations (named mutations here). The capacity distribution of the stations may be seen as the genes that will be mixed over generations to create solutions that bring not just variations of previous solutions but completely new ones carrying the better genes of their parents. A local search can be seen as a trajectory over a search space with the movement capacity of the neighbor operator; genetic algorithms break this view through a building-block theory in which good combinations remain in the population in the form of genes, statistically guide the selection, and build even better structures. The results of this operation are offspring (candidate solutions) that will remain in the population over time if they are better than other individuals.
Genetic algorithms are able to explore complicated combinatorial problems if the equilibrium between genetic diversity and selection pressure keeps the key genes improving without falling into a local optimum because of a lack of diversity in the population.
Best Improvement Local Search, VNS and Genetic Algorithm share the same codification for mutation (named neighbor generation in the local search) and fitness evaluation.
The genetic algorithm proposed can be outlined as a loop in which, in every iteration, two individuals are selected by tournament selection [30], which picks the best individual from a random sample of size k. The mutation operator makes the system explore outside the initial values of the genes present in the population; this operator is the same one used in the Best Improvement Local Search, named there neighbor generation. The key operation in the generation and improvement of new individuals is crossover [31]. This operator is applied to two elements, which are then mutated, to create new solutions that may have a better value once evaluated. Two-point crossover may generate invalid chromosomes, because the restriction on minimum capacity could leave some bikes without an available dock; to avoid this issue, two random positions are selected and the segment between them is copied onto the offspring, bearing in mind not to fall below the minimal capacity given by the T_b factor.
It may happen that, after copying the fragment, the resulting offspring is invalid because the total capacity of the network is lower than the number of bikes. To fix the chromosome, the following positions are copied until this criterion (capacity greater than or equal to the number of bikes) is fulfilled. No maximum T_c (total capacity) restriction has been set, for the sake of clarity.
Let us see an example with two candidates for a city with six stations and 100 bikes, C_1 = [15,20,30,4,3,30] and C_2 = [15,2,4,40,25,20]. With random positions p1 = 2 and p2 = 3, the crossover produces Offspring1 = [15,2,4,4,3,30]. This offspring is invalid, because its total capacity (58) is lower than the number of bikes. The algorithm then extends the copied positions until the offspring becomes valid at position 5, the result being Offspring1 = [15,2,4,40,25,30] (this repair rule is transcribed in the code sketch below). The number of individuals in the population is fixed; therefore, these two offspring will replace existing individuals following a steady-state population scheme [32]. To increase the selective pressure in a flexible way, the two offspring replace two individuals chosen by tournament replacement [33], where each replaced individual is the worst element of a random sample of size k. This strategy raises the probability of poor elements being deleted and of good elements remaining in the pool.
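A direct transcription of the crossover-repair rule; running it on C_1 and C_2 above reproduces the worked result:

```python
def crossover_repair(c1, c2, p1, p2, total_bikes):
    """Copy positions p1..p2 (1-indexed) of c2 into c1, then keep copying
    further positions while the offspring capacity cannot hold all bikes."""
    child = list(c1)
    child[p1 - 1:p2] = c2[p1 - 1:p2]
    j = p2                      # next position to copy if still invalid
    while sum(child) < total_bikes and j < len(child):
        child[j] = c2[j]
        j += 1
    return child

c1 = [15, 20, 30, 4, 3, 30]
c2 = [15, 2, 4, 40, 25, 20]
print(crossover_repair(c1, c2, 2, 3, 100))   # -> [15, 2, 4, 40, 25, 30]
```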
Finally, we propose a Hybrid Genetic Algorithm (Algorithm 3) that adds a local search to the genetic search as the better approach for this optimization problem, thanks to the good capability of local search algorithms to improve an already good solution and the exploration capacity of the genetic search. These hybrid heuristics, which combine concepts and strategies from other metaheuristics such as population-based search and local search, are similar to Memetic Algorithms [34,35]. The main difference between the proposed Hybrid Genetic Algorithm and Memetic Algorithms is that in Memetic Algorithms [36] the population in any generation consists solely of local optima, whereas in the proposed Hybrid Algorithm the added local search only runs every certain number of iterations, so that solutions close to the optimum are eventually refined into it. This approach gives the algorithm more time to explore the space before stagnation.

Algorithm 3: Hybrid Genetic Algorithm
    Best_solution = C_b (base case)
    Best_fitness = simulation(Best_solution)
    Generate Population (as strong mutations of the base case)
    While not Stop_Criteria:
        S1 = Mutation(Crossover(Selection(Population, k)))
        If fitness(S1) < Best_fitness:
            Best_fitness = fitness(S1)
            Best_solution = S1
        Population = replace(S1, k)
        Every (number_iterations):
            Best_solution = localSearch(Best_solution, neighbor_number)
    return Best_solution
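The selection and replacement operators referenced in Algorithm 3 can be sketched as follows (our naming; fit maps a candidate to its fitness value):

```python
import random

def tournament_select(population, fit, k):
    """Tournament selection: the best of a random sample of size k."""
    return min(random.sample(population, k), key=fit)

def tournament_replace(population, offspring, fit, k):
    """Steady-state tournament replacement: the worst of a random sample
    of size k is overwritten by the offspring."""
    sampled = random.sample(range(len(population)), k)
    worst = max(sampled, key=lambda idx: fit(population[idx]))
    population[worst] = offspring
```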
Use Case: Santander Bike Sharing System
This proposal applies the simulation to a real BSS, the Santander BSS (Spain), as a representative case for validating the measures and techniques proposed in this work. Santander has a small network of 16 stations with clear and differentiated areas: on the left the industrial area, at the bottom center the train station, in the inner part some residential areas and, lastly, the coast and downtown at the bottom and on the right, respectively. It is a city with no rivers and short distances, which eases the extrapolation of the calculations because each direction is equally probable in terms of user eligibility. It has a network of dedicated bike paths, 18 km in total length, with most of the stations situated along the route, making riding safer (Figure 3). The size of the city and the small number of stations are within the preferred criteria for testing: the results can be understood and conclusions extrapolated while the complexity of the problem remains high enough to rule out direct combinatorial enumeration. Concerning the size of the stations, C_b = [15,20,30,15,15,30,20,20,15,25,20,40,25,20,20,25] (Figure 4): the average size is around 20 docks per station, with some stations doubling this quantity, for a total capacity of 355 docks and a total of 198 bikes in the system. Each station is located at a certain position given by its latitude and longitude coordinates and has a potentially different number of docks, with a set of available docks at a certain time of the day. We propose the use of real data in a simulation to measure changes in the capacity distribution. In this setting, Santander has a mild climate with minor variability between seasons. Ideally, several years of data could be used to ensure proper coverage of all the cases but, for the sake of performance, we have evaluated the most demanding months of the year and compared this period across the years from 2017 to 2020. The month with the least variability between years and the most accumulated usage per day was April 2019; this month has the most demanding average usage per hour. Figure 5 shows the accumulated requests in the network by hour. We have paid special attention to critical situations in which some station becomes full or empty while the probability of movements at that station remains high.
Results
The experimentation was executed for the four exposed algorithms; Best Improvement Local Search (BL), Variable Neighborhood Search (VNS), Genetic Algorithm (GA) and Hybrid Genetic Algorithm (HGA). Each algorithm was executed with 10 seeds for the dataset, showing the best results for each set of executions.
Metaheuristic optimization has a considerable number of parameters that may impact the quality of the solution and the execution time. Initial experimentation with BL and VNS using grid hyperparameter tuning [37] was performed to determine the better values of the parameters. The parameters selected for the Best Improvement Local Search are shown in Table 2:

Table 2. Best Improvement Local Search parameters.
    Max changes: max number of changes to add or subtract: 10%
    %SpareFactor: weight of the empty places over user satisfaction in the fitness: 10%
    Max Epochs: number of iterations without improvement before stopping: 20
    Neighbors: number of neighbors to be generated in each iteration: 50

Both BL and VNS show adequate convergence to the optimum. Figure 6 shows the behavior of the two parts of the fitness, total spare capacity (a) and extra virtual kilometers (b), for the best individual within the BL optimization. Building on the local search parameter tuning, the genetic and hybrid algorithms obtained better results with the additional parameters shown in Table 3. The problem chosen for this optimization has been solved in some of the executions by all the proposed optimization algorithms, even though the total number of combinations is larger than 20^16 (average_capacity^stations). The evaluation of a single solution takes around half a second using a one-week interval, which would give more than seven days of computation to arrive at the optimum by visiting all possible combinations, compared with the much lower execution times of the proposed algorithms, shown in Table 4.
BL already obtained the optimum, so VNS, GA and HGA return the same individual in some of the executions. The optimum for this problem in the chosen time frame is 22.16, corresponding to C_opt = [16,16,23,13,13,21,17,14,13,17,16,27,12,15,18,19]. The only differences between the algorithms are stability and execution time. While BL and VNS obtain the optimum 53% and 75% of the time, respectively, the Genetic Algorithm arrives at the optimum 97% and the Hybrid Genetic Algorithm 100% of the time. Execution time is much higher in VNS than in the other algorithms. The Genetic Algorithm shows good performance regarding the quality of the solution and the reliability of the algorithm with the lowest execution time, whereas the Hybrid Genetic Algorithm has the better reliability, finding the optimum for this problem 100% of the time at the expense of an execution time that nearly doubles that of the Genetic Algorithm. Table 4 shows averages and standard deviations for time (minutes), evaluations, percentage of times the optimum was found, and fitness. Table 5 shows the comparison between the base case, the optimized one with capacity optimization (OP), the one without capacity optimization (OP_NC, i.e., only dock reallocation between stations, keeping the total capacity), and the one with 10% more movements than the original, representing the stress of adding new users (OP_ST). The capacity column expresses the total number of docks in the network, F_km represents total real kilometers, F_Vkm the total virtual kilometers, Spare the percentage of empty places relative to the number of bikes, full and empty the number of times a station reaches that state within the simulation, and, finally, fitness (Equation (11)). The comparison between the optimized solution and the initial one shows that, with a much lower capacity, the system is able to obtain better results, increasing the real kilometers by just 0.46 km but decreasing the virtual kilometers from 28.87 to 17.49. This result denotes a user satisfaction increase of 10.92 km with respect to C_b. Figure 7 shows the decrease in capacity of the majority of the stations and the increase at station #1. With the reallocation of bikes from some stations to others (0.46 km), produced by lowering the capacity of some stations, the algorithm has lowered the number of empty stations from 23 to 11. The optimization cannot improve this number without adding more bikes to the system, which is outside the scope of the algorithm. Notice that, as predicted in the previous section, C_b does not produce real extra kilometers, because what was recorded were successful requests.
From the point of view of the behavior indicators stated in Table 6, failed requests are the exact number of requests that were not attended, whether virtual or real. Input kilometers are those generated by a failed request for a bike at an empty station, and output kilometers are those created by a dock request at a full station that could not be served. The base case (C_b) shows a large number of failed requests, all due to requests for bikes at empty stations. Even if the reasonable response would be to increase the number of bikes at those stations, by dynamic reallocation or by just increasing the total number of bikes, a look at OP and OP_NC shows that, with some minor extra kilometers (3.68 km by bike) due to unattended dock requests, the number of failed requests is much lower (13) and the output kilometers are just 13.80. OP_ST shows that, even when the number of movements is increased by 10%, a smaller capacity distribution is able to achieve fewer failed requests and fewer extra kilometers. Regarding the behavior under modified conditions, some simulations have been performed on the optimized solution with changes in the number of Δ. We stressed the number of requests by 10%, obtaining the results in Figure 8. This is not the same as the result shown in Table 5 for OP_ST; this one exhibits the evaluation of the capacity-optimized solution (OP) measured with the stressed Δ. As can be observed in the figure, the capacity optimization over-fits the current usage, with a great impact on real kilometers when extra movements are applied. The solution for predicting future behavior is to optimize with the stressed movements (OP_ST), with an increase of just 5.16 km. An interesting result is that the number of full stations decreases, so more movement is not always a cause of place shortage, because the interactions between negative and positive movements are not easy to predict when users are reallocated to nearby stations.
Conclusions
The simulation based on recorded requests of real usage over an alternative capacity distribution offers a useful tool to understand BSS behavior. Nevertheless, a set of variables and measures is needed to compare performance and behavior between several capacity configurations.
From the point of view of behavior interpretation, a set of markers has been defined, such as the number of times stations become full/empty (giving an overview of the peaks in the BSS), the number of failed requests, and input and output kilometers, which make the results easy to understand. Furthermore, Figures 2 and 8, with the specific routes performed by users with failed requests, help to predict future behavior and assist experts with specific improvements to the network.
On the standardized performance of a BSS, two facets are considered: on the one hand, user satisfaction, defined as the number of extra kilometers incurred in the alternative distribution, and, on the other hand, the cost of the infrastructure, as the total capacity of the BSS.
This simulation process can also serve as a fitness function for optimization algorithms based on candidate solutions. This work has proposed an effective methodology for applying a set of optimization algorithms over existing BSS capacity records to create a better capacity distribution that increases user satisfaction and minimizes the total cost by reducing the capacity of the BSS. The Hybrid Genetic Algorithm has proven to be the most reliable algorithm with a reasonable execution time.
The case of the Santander BSS was selected for its intrinsic characteristics: no geographical barriers, a distribution of inner-city stations, and a mild climate, which may reflect average behavior in the usage of any of the stations at any time of the year. The results show a clear user satisfaction increase from lowering the number of kilometers, as well as a lower number of docks needed in the BSS. This dataset was chosen as a first study case for the main optimization criteria of BSS networks. In future works, we will use bigger datasets and compare results considering other variables such as physical barriers or station distribution.
Informed Consent Statement: Not applicable.
Data Availability Statement: Source data is available at https://zenodo.org/record/5485072, accessed on 31 July 2021.
Charged Black Holes with Scalar Hair
We consider a class of Einstein-Maxwell-Dilaton theories, in which the dilaton coupling to the Maxwell field is not the usual single exponential function, but one with a stationary point. The theories admit two charged black holes: one is the Reissner-Nordstr{\o}m (RN) black hole and the other has a varying dilaton. For a given charge, the new black hole in the extremal limit has the same AdS$_2\times$Sphere near-horizon geometry as the RN black hole, but it carries larger mass. We then introduce some scalar potentials and obtain exact charged AdS black holes. We also generalize the results to black $p$-branes with scalar hair.
Introduction
The low-energy effective theories of superstrings, namely supergravities, provide a variety of fundamental matter fields whose interactions with gravity we can study. These include dilatonic scalars and various form fields. Under gravity, these fields can give rise to charged black holes or p-branes, which are among the fundamental constituents of the perturbative or non-perturbative string spectrum. (See, e.g., [1].) Typically in supergravities, the dilaton φ couples to an n-form field strength $F_n = dA_{n-1}$ through an exponential function. The relevant Lagrangian is given by
$$\mathcal{L} = e\Big(R - \tfrac12 (\partial\phi)^2 - \tfrac{1}{2\,n!}\, e^{a\phi} F_n^2\Big), \qquad (1.1)$$
where $e = \sqrt{-\det(g_{\mu\nu})}$ and $a$ is the dilaton coupling constant. The exponential coupling function is determined by the tree-level interaction of the string. The higher-order corrections to supergravity can be extremely complicated, involving both the worldsheet $\alpha'$ and the string coupling constant $g_s = e^{\phi}$. The general structure was discussed in [2].
A concrete example is provided by N = 1, D = 6 supergravity. In the canonical metric, the dilaton dependence of the kinetic energy of the Yang-Mills gauge field is [3]
$$\mathcal{L}_{\rm YM} \sim \big(v_\alpha\, e^{-\phi} + \tilde v_\alpha\, e^{\phi}\big)\, {\rm tr}\, F^2, \qquad (1.2)$$
where $(v_\alpha, \tilde v_\alpha)$ are constants, with the former being essentially the Kac-Moody level [4] and the latter determined by the supersymmetry. The string one-loop correction $\tilde v_\alpha$-terms play a crucial role [5] in some heterotic phase transitions [6][7][8].
Inspired by (1.2), we consider a general class of Lagrangians
$$\mathcal{L} = e\Big(R - \tfrac12 (\partial\phi)^2 - \tfrac{1}{2\,n!}\, Z(\phi)\, F_n^2\Big). \qquad (1.3)$$
We require that the function Z(φ) become an exponential function in either the φ → +∞ or φ → −∞ limit. Thus we can expand Z as "$e^{a\phi} + \cdots$", which mimics the string loop expansion. Furthermore, we assume that Z(φ) has a stationary point, namely dZ/dφ = 0 at some φ_0. There are two motivations for considering such a Lagrangian. The first is that the black p-branes in (1.1) suffer from a curvature singularity on the horizon in the extremal limit, which is typically BPS or supersymmetric. In this case, the supergravity solution cannot be trusted to yield non-perturbative information about strings. By contrast, non-dilatonic p-branes have AdS×sphere as their near-horizon geometries in such a limit. For an appropriate decoupling limit, the smooth classical supergravity background can yield information about string theory. This leads to the famous AdS/CFT correspondence [9]. The dilaton coupling function Z(φ) for (1.2) has a stationary point, and thus the dilaton can be decoupled. This implies that the associated p-brane can have a smooth horizon even in the extremal limit. It is natural to assume that the singularity associated with the extremal limit of the p-branes in (1.1) is an artifact of the lowest-order low-energy effective action. Loop corrections are expected to smooth out the singularity, and extremal p-branes in strings may also have non-vanishing entropy [10]. The general loop-corrected actions of strings are very complicated to construct and few examples are known. The Lagrangian (1.3) can be viewed as a toy model for such string loop effects.
The second motivation has to do with the uniqueness of black holes. The standard no-hair theorem states that a black hole is completely specified by its mass, charge, and angular momenta. Recently, many examples of scalar hairy black holes have been constructed [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26], and their general first law of thermodynamics was derived [27,28]. In supergravities, charged black holes also generally involve scalar fields, and yet the uniqueness theorem appears to continue to hold for the large number of such solutions constructed in the literature. It is of interest to construct a simple counterexample. It is clear that if Z(φ) in (1.3) has a stationary point as in (1.2), then the usual Reissner-Nordström (RN) black hole with the dilaton decoupled must be a solution. If we can construct a further, different black hole with the same mass and charge but a non-vanishing dilaton, the uniqueness theorem is then broken. The Lagrangian (1.3) can thus stand alone as an example for studying the non-uniqueness of charged black holes. Numerically it can be established that charged black holes for a generic Z(φ) may indeed exist. In this paper, we find non-trivial exact solutions when Z is given by
$$Z^{-1} = \cos^2\omega\, e^{a_1\phi} + \sin^2\omega\, e^{a_2\phi}. \qquad (1.4)$$
Here ω is some coupling constant. For ω = 0 or ½π, the function Z becomes an exponential function of φ. In general Z has a stationary point at which dZ/dφ vanishes. This dilaton coupling to the field strength resembles those in the STU-model truncation [29] of D = 4 ω-deformed gauged supergravity [30,31].
The paper is organized as follows. In section 2, we present new charged black holes in four dimensions. We focus on examples that are associated with supergravities. We discuss their global properties and obtain the first law of thermodynamics. We compare the new black holes with the RN black hole. We also obtain the multi-center extremal solutions. We then introduce some scalar potentials and obtain charged AdS black holes. In section 3, we give a construction of the new black holes in general dimensions. In section 4, we give an alternative construction that allows us to construct p-branes with scalar hair in general dimensions. In section 5, we construct further new charged AdS planar black holes. We conclude the paper in section 6.
2 Charged black holes with scalar hair in D = 4

In this section, we consider Einstein-Maxwell-Dilaton theory in four dimensions, corresponding to D = 4 and n = 2. The condition for $(a_1, a_2)$ in (1.4) becomes $a_1 a_2 = -1$. In supergravities with an exponential dilaton coupling, $a_1$ may take the values $\pm(\sqrt3, 1, 1/\sqrt3, 0)$. We thus first consider the two special cases, namely $(1, -1)$ and $(\sqrt3, -1/\sqrt3)$, in greater detail and summarize the general results later in this section. We start the discussion by considering an example of charged black holes with scalar hair in four dimensions. The theory consists of the metric, a dilaton φ and a Maxwell field A. The Lagrangian is given by
$$\mathcal{L} = e\Big(R - \tfrac12(\partial\phi)^2 - \tfrac14\, Z(\phi)\, F^2\Big), \qquad Z^{-1} = \cos^2\omega\, e^{\phi} + \sin^2\omega\, e^{-\phi}, \qquad (2.1)$$
where F = dA. Naively, the parameter ω has a range of ½π; however, owing to the symmetry ω → ½π − ω together with φ → −φ, ω has a range of ¼π instead. For later purposes we choose ω to lie in [¼π, ½π]. When ω = ½π, Z^{-1} involves only one exponential term and the Lagrangian is invariant under a constant shift of the dilaton, provided that it is compensated by an appropriate scaling of the gauge potential A. For generic fixed ω, however, such freedom is lost. The ω = ½π Lagrangian can be embedded in D = 4, N = 4 supergravity. To construct black hole solutions of (2.1), we first note that Z(φ) has a stationary point for ω ≠ ½π, namely
$$\frac{dZ}{d\phi}\Big|_{\phi=\phi_0} = 0, \qquad \text{for}\quad e^{\phi_0} = \tan\omega. \qquad (2.2)$$
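As a quick consistency check, with Z as in (2.1) the stationary point (2.2) follows from elementary minimization:
$$\frac{d}{d\phi}\,Z^{-1} = \cos^2\omega\, e^{\phi} - \sin^2\omega\, e^{-\phi} = 0 \quad\Longrightarrow\quad e^{2\phi_0} = \tan^2\omega, \qquad e^{\phi_0} = \tan\omega,$$
and at this point
$$Z^{-1}(\phi_0) = \cos^2\omega\,\tan\omega + \sin^2\omega\,\cot\omega = 2\sin\omega\cos\omega = \sin 2\omega,$$
so $Z(\phi_0) = 1/\sin(2\omega)$, which matches the combination $\sin(2\omega)$ that appears in the reduced Einstein-Maxwell theory for the general case below.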
Thus the dilaton can be decoupled, giving rise to the Einstein-Maxwell theory. This Lagrangian admits the well-known RN black hole, in which the two integration constants M and Q_e are the mass and electric charge, respectively. For the solution to be a black hole, the mass and charge must satisfy an inequality; the solution becomes extremal when the inequality is saturated, and the black hole then has zero temperature with the AdS_2 × S^2 near-horizon geometry.
In addition to the RN black hole, we find that the theory (2.1) admits a different charged black hole in which the dilaton is not a constant. We shall give the detailed construction later; for now, we simply present the solution. The solution involves two integration constants q and Q, which, as we shall see presently, parameterize the mass and electric charge of the solution. We assume that q > 0. With this choice, the spacetime has no curvature singularity in the region r > 0. For the solution to describe a black hole, the curvature singularity at r = 0 has to be shielded by an event horizon at some r = r_0 > 0; the curvature singularity at r = −q < 0 is irrelevant for our discussion.
In the large-r expansion, where $R = \sqrt{r(r+q)}$ is the radius of the foliating 2-sphere, we can read off the ADM mass. We now see the reason that we have chosen ω ∈ [¼π, ½π]. (Of course, one can perfectly well choose ω ∈ [0, ¼π], in which case we can simply let q be negative, and the curvature singularity at r = −q > 0 becomes the relevant one to discuss.) The electric charge of the solution can be read off similarly.
To determine the relation between the mass M and Q_e required for the solution to describe a black hole, we note that the function f approaches +∞ at r = 0 and tends to 1 as r → +∞. It follows from (2.7) that, for positive M, there must be a minimum of f at some r = r_min > 0. It is straightforward to determine this minimum; the solution then describes a black hole provided that f_min ≤ 0.
When the inequality is satisfied, f has two real positive roots. Using the standard technique, we obtain the temperature and the entropy, where r_0 = r_+. The electric potential difference between the horizon and asymptotic infinity follows in (2.14). It is then straightforward to verify that the first law of black hole thermodynamics is satisfied. These quantities also satisfy the Smarr relation (2.16). Thus we see that for given mass M and electric charge Q_e, the theory (2.1) admits two black holes: one is the RN black hole with constant φ, whilst the other is the scalar hairy one involving a varying φ.
The new scalar hairy solution also has an extremal limit in which T vanishes. In this limit, φ|_{r=r_0} = φ_0, where φ_0 was defined in (2.2). The mass of the extremal black hole with scalar hair then follows, and it is easy to verify that, for the same electric charge, the mass of the extremal hairy black hole is in general larger than that of the extremal RN one.
In particular, this implies that in the extremal limit, both the new and the RN black holes with the same electric charge have the same AdS_2 × S^2 structure as their near-horizon geometry, but the asymptotic behaviors are different, with the new solution having a larger mass. The inequality (2.19) is saturated for ω = ¼π, for which the scalar hairy black hole degenerates to the RN black hole.
Finally, we note that both the new and the RN black holes have two horizons, an inner and an outer one. The product of the entropies of the two horizons is quantized in the same way for both. The universality of this property across two very different solutions of the theory suggests that the same conformal field theory may be responsible for the interpretation of the black hole entropy. (See, e.g., [34,35].) This entropy product rule also implies that although, for a given electric charge Q_e, the masses of the RN and the new scalar hairy black holes differ, their entropies coincide in the extremal limit.
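For orientation, such an entropy product rule can be checked explicitly in the pure RN case (a standard computation, in units G = c = 1 and with $f = 1 - 2M/r + Q_e^2/r^2$):
$$r_\pm = M \pm \sqrt{M^2 - Q_e^2}\,, \qquad r_+ r_- = Q_e^2\,,$$
$$S_\pm = \pi r_\pm^2 \quad\Longrightarrow\quad S_+ S_- = \pi^2 Q_e^4\,,$$
which depends only on the charge and not on the mass; this mass-independence is the sense in which the horizon-entropy product is "quantized".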
The second case
The Lagrangian for the second example takes the same form as (2.1), but now with Z given by the analogous two-exponential expression. The range of ω lies in the interval [0, ½π]. We pick this example to discuss in some detail because the theory can be embedded in supergravity when ω = 0 or ½π. The function Z has a stationary point φ = φ_0, and thus the dilaton can be decoupled, giving rise to Einstein-Maxwell theory. The theory therefore admits the RN black hole, whose mass and charge satisfy a bound that is saturated in the extremal limit. We find that the theory also admits a charged scalar hairy black hole.
Following the same technique as in the earlier discussion, we find the mass and the electric charge of the black hole. It is easy to establish that when the parameters satisfy (2.28), the function f(r) has two positive roots r_± > 0. The horizon is located at r_0 ≡ r_+ ≥ r_-. The temperature, the entropy, and the electric potential follow, and we verify that both the first law (2.15) and the Smarr relation (2.16) are satisfied. In the extremal limit, where the temperature vanishes, the mass is proportional to the electric charge. The condition (2.28) for the existence of an event horizon is equivalent to M ≥ M^ext_SH. In general M^ext_SH ≥ M^ext_RN, with equality achieved at ω = ⅓π, in which case the hairy black hole degenerates to the RN black hole.
The general case
We now consider the general case with $a_1 a_2 = -1$. We reparameterize the constants $a_1$ and $a_2$; the dilaton coupling function Z is then given by the corresponding two-exponential expression and has a stationary point. The reduced Einstein-Maxwell theory (2.23) has
$$\Theta = \sin(2\omega). \qquad (2.34)$$
We find the scalar hairy black hole in closed form (2.35), with the mass and electric charge given accordingly. For appropriate parameter regions, the event horizon exists, and the first law (2.15) and the Smarr relation (2.16) can be shown to be satisfied. For a given electric charge Q_e, the black hole mass must satisfy an inequality; when the equality is achieved, the black hole becomes extremal with zero temperature, and the corresponding near-horizon geometry is AdS_2 × S^2. For the extremal solution, the dilaton runs from φ = φ_0 on the horizon to φ = 0 at asymptotically flat infinity. For a given electric charge Q_e, the mass of the extremal scalar hairy black hole is in general larger than that of the RN black hole. The equality occurs when tan ω = √(1 − µ)/√(1 + µ), in which case the scalar hairy black hole degenerates to the RN black hole. The general black holes have two horizons, one outer and the other inner. The product of the two horizon entropies is quantized, as in (2.39).
The extremal limit and multi-centered black holes
As in the case of RN black holes, in the extremal limit, the electric force and gravity balance out and one can have multi-centered solutions. We find that the multi-centered solutions for the general Z (2.32) are given by
where $H_0$ is a harmonic function of the form $H_0 = 1 + \sum_i q_i/|x - x_i|$. The solution reduces to AdS_2 × S^2 near the location of each charge, namely as x → x_i.
Adding a scalar potential
The charged black hole solutions considered so far are all asymptotic to flat Minkowski spacetime. We would now like to add a scalar potential to the Lagrangian and construct charged black holes that are asymptotic to AdS. For ω = 0 or ω = ½π, charged AdS black holes were previously constructed, with an appropriate scalar potential. We find that the Lagrangian admits a charged black hole solution for general ω, given in (2.45). Here k takes the values 1, 0, or −1, corresponding to the unit round sphere, torus, or hyperbolic space for $d\Omega^2_{2,k}$. It is worth commenting that although both V and Z have stationary points, they do not coincide. Thus the dilaton cannot be decoupled, and the RN-AdS black hole is not a solution of the theory.
The construction in general dimensions
In the previous section, we presented a class of charged black holes with scalar hair in four dimensions. In this section, we show the method of construction. We give the construction in general dimensions, but continue to focus on black holes. We shall present an alternative construction in the next section where we obtain a class of black p-branes with scalar hair.
Asymptotically-flat black holes
We now consider Einstein gravity coupled to a Maxwell field with a generic dilaton coupling in general D dimensions. The equations of motion follow from the Lagrangian, and we consider a static ansatz in which k is a constant taking the values 1, 0, or −1; in other words, the (D − 2)-dimensional level surfaces are the round sphere, torus, or hyperbolic space, respectively. For asymptotically flat black holes we must have k = 1, but we keep k generic for discussing local solutions.
The equation of motion for A admits a first integral, where Q is an integration constant related to the electric charge and $\Sigma_{D-2}$ is the volume of the space associated with $d\Omega^2_{D-2,k}$. This is the most general ansatz that respects the isometry, and it was first considered in [18]. A nice property of this ansatz is that the combination of the Einstein equations $E^t{}_t - (D-3)E^i{}_i$ yields an integrable equation, so that we obtain a solution with two integration constants $c_1$, $c_2$. The combination $E^t{}_t - E^r{}_r$ gives a further equation, and a further ansatz is thus necessary for obtaining exact solutions. We follow [18] and assume a specific form for the dilaton.
Then the function H can be solved for exactly [18], where c is an integration constant. (Another integration constant is the overall scale of H, which we fix so that H becomes unity as r → ∞.) We can then read off the function Z as a function of r from the remainder of the Einstein equations. Using (3.9) we can rewrite Z in terms of φ. We insist that Z(φ) be independent of the parameters q and Q; this implies that we must have c = 0 or c = 1 for generic k. (A new possibility arises for k = 0, which we shall discuss in section 5.) The choice between c = 0 and c = 1 simply interchanges q_1 and q_2, and hence we choose, without loss of generality, c = 0, giving (3.11). The function Z(φ) is then given by (3.12). Note that we have chosen the constants c_1 and c_2 in (3.7) such that the parameters q_i and Q do not appear in Z(φ) explicitly. The metric function f is now given by (3.14); this quantity turns out to be independent of the dimension. At large r, where q = q_1 − q_2 and R = rH, we find the asymptotic expansion from which the ADM mass is read off. Here it is understood that k = 1. The Maxwell field follows, and the electric charge and potential are given accordingly,
where r_0 is the location of the horizon. With the standard technique for calculating the temperature T and entropy S, we can easily verify the first law of thermodynamics and the Smarr relation (3.19). The general solution also has two horizons, with the product of the two corresponding entropies being quantized, as in (3.20).
Asymptotic AdS solutions
We now consider adding a scalar potential V to the Lagrangian. For the metric ansatz (3.3), the results were already obtained in [18]. The scalar potential takes the form (3.21), and the solution follows. Since the solutions are now asymptotically AdS, we can let k = 1, 0, −1, instead of only k = 1 as in the asymptotically flat case.
Alternative construction and scalar hairy black p-branes
In the previous section, we constructed a class of charged scalar hairy black holes that are asymptotic to flat spacetimes. The Z function is given by (3.12) with (4.1).
This is precisely the condition under which one can construct analytic two-charge black holes for the Lagrangian of [32]. This suggests that a relation exists between our solution and the two-charge solutions constructed in [32].
To establish an even more general relation, we consider a Lagrangian involving two $\tilde n = (D - n)$-form field strengths $F^i_{\tilde n} = dA^i_{\tilde n - 1}$ in D dimensions (4.3). The dilaton coupling constants $(a_1, a_2)$ satisfy a corresponding condition, and we can parameterize $a_1$ and $a_2$ so that two useful identities hold. The Lagrangian admits the (n − 2)-brane with magnetic charges, whose metric and dilaton take the standard form (see, e.g., [33,36,37]), with the gauge fields given by their field strengths. The solutions have three integration constants $(\mu, \delta_1, \delta_2)$, giving rise to the mass and two magnetic charges. We now redefine the parameters to be
$$P_1 = \cos^2\omega\, Q, \qquad P_2 = \sin^2\omega\, Q. \qquad (4.9)$$
The corresponding solution can also be obtained from the Lagrangian with coupling
$$\big(\cos^2\omega\, e^{a_1\phi} + \sin^2\omega\, e^{a_2\phi}\big)\,(F_{\tilde n})^2. \qquad (4.10)$$
The corresponding field strength follows. We then perform electric-magnetic duality on the $\tilde n$-form field strength to obtain an n-form $F_n = dA_{n-1}$. The corresponding theory becomes precisely of the form (1.3) with Z given by (1.4). The electrically charged black p-brane solution is then given by (4.7), but with the gauge potential replaced accordingly. The solution now involves two integration constants (µ, Q), giving rise to the mass and electric charge. The parameter ω is now a coupling constant of the theory. It should be pointed out again that the theory also admits a non-dilatonic p-brane carrying the same mass and charge, in which the dilaton is a constant, as in the black hole examples discussed earlier.
The multi-center (n − 2)-brane solution in the extremal limit can also easily be obtained, with harmonic functions as in (4.15). The geometry around each center of charge is AdS_n × S^{D−n}.
New charged AdS planar black holes
As discussed under (3.10) in section 3, we can construct more possibilities for the Z function when the spatial section is flat, i.e., k = 0. Although a black hole must have k = 1 to be asymptotically flat, the k = 0 case is allowed for asymptotically AdS solutions.
In this section, we consider again the EMD theory with general coupling function Z and scalar potential V .
Class 1
The class-1 solutions are specified by a planar metric ansatz. Large classes of neutral scalar hairy AdS planar black holes were obtained in [17,26]. The scalar potential takes a form involving $\lambda^2 = 2/((D-2)(1-\mu^2))$. We find that an analytic charged solution also exists, provided that Z(φ) is given by (5.2). The dilaton and the metric function f then follow, and from the expression for the field strength we obtain the gauge potential A = a(r) dt, with
$$a = \frac{2\gamma Q\,\big(2r + (1+\mu)q\big)}{r^{D-2}\left(1 + \frac{q}{r}\right)}. \qquad (5.5)$$
The neutral solution with µ = 0 was obtained in [17] in a different coordinate system, and in [26] using the same coordinates as here. It is easy to calculate the mass of the solution, and it is straightforward to verify that the first law of thermodynamics $dM = T\,dS + \Phi_e\,dQ_e$ holds, where T and S can be calculated with the standard technique, with r_0 the location of the horizon. These thermodynamic quantities satisfy the generalized Smarr relation. Note that this is not the usual Smarr relation (3.19), which breaks down for all AdS black holes unless one treats the cosmological constant as a thermodynamic variable. This generalized Smarr relation is a consequence of the scaling symmetry associated with planar AdS black holes [38]. It was shown in [38] that this generalized Smarr formula is directly related to the universal bound [39,40] on the viscosity-to-entropy ratio in the AdS/CFT correspondence.
6 Conclusions
In this paper, we considered a class of Einstein-Maxwell-Dilaton theories in which the dilaton coupling Z(φ) to the Maxwell field is not the usual exponential function of a typical supergravity theory. There were three criteria for determining the function Z. The first is that it should provide a toy model for string loop effects, so that the extremal solution has a smooth AdS×sphere near-horizon geometry. The second is that the theory should admit two different black holes for given mass and charges, providing a counterexample to the no-hair theorem. Both of these criteria can be satisfied when Z(φ) has a stationary point. The third is that we can construct exact solutions for these black holes. The theories we constructed indeed admit, in general, two types of black holes. One is the usual RN black hole, for which the dilaton is a fixed constant. For the same mass and charge, there can also exist another charged black hole, in which the dilaton varies. We analysed the global properties, obtained the first law of black hole thermodynamics, and compared the new black holes with the RN solution. We found that suitable scalar potentials could be introduced, yielding exact solutions of charged AdS black holes. We also generalized the black holes to black p-branes.
We further obtained a class of new charged AdS planar black holes and showed that they all satisfy the generalized Smarr relation, which can be viewed as the bulk dual of the viscosity/entropy ratio of the boundary field theory.
"Physics"
] |
System Analysis of Counter-Unmanned Aerial Systems Kill Chain in an Operational Environment
The proliferation of Unmanned Aerial System (UAS) capabilities in the commercial sector poses potentially significant threats to the traditional perimeter defense of civilian and military facilities. Commercial Off-The-Shelf (COTS) UAS are small and cheap, come with multiple types of functions, and attract growing interest among hobbyists. This has prompted the need for facility commanders to have a methodology for quickly evaluating and analyzing the facility and the existing Counter-Unmanned Aerial System (CUAS)'s effectiveness. This research proposes a methodology that follows a systems engineering perspective to provide a step-by-step process for conducting evaluation and analysis, employing Model-Based Systems Engineering (MBSE) tools to understand the CUAS's effectiveness and limitations. The methodology analyzes the CUAS's operating environment, the effects of the dominant factors, and the impacts that the CUAS may pose to other stakeholders (e.g., adjacent allied forces, civilians, etc.) within the area of operation. We then identify configuration candidates for optimizing the CUAS's performance to meet the requirements of the stakeholders. A case study of a hypothetical airport with an existing CUAS is presented to demonstrate the usability of the methodology, explore the candidates, and justify the implementation of a candidate that fits the facility and the stakeholders' requirements.
Introduction
The fast growth in commercial Unmanned Aerial System (UAS) capabilities is posing significant threats to perimeter defense involving safety, security, and privacy [1]. The defense industry sector has had to quickly implement methods to safeguard critical infrastructure against UAS, from both adversaries and civilians. However, methods found to be effective against UAS can also disrupt other authorized operations, which makes the operation of a Counter-Unmanned Aerial System (CUAS) extremely complex. For instance, at an airport, an actively operating CUAS can disrupt communication signals (e.g., mobile phones, control tower, etc.) and radar signals, which can limit ground crew communications and disrupt control tower coordination operations such as debris reports or runway clearances for landing or taking off [2]. However, the consequences of not deploying a CUAS can be catastrophic if a facility's perimeter is breached by a UAS. For instance, the UAS could collide with an airplane, causing a shutdown of the runway. Someone with ill intent could use a UAS to conduct reconnaissance and surveillance, conduct strikes on targets with weaponized capabilities, or deliver a payload that contains explosives or chemicals [3]. These are the security dilemmas that facility commanders face today in dealing with UAS.
Existing research and implementation of CUAS has focused on the technical capabilities of CUAS and UAS, such as detection, identification, and classification of UAS; the study of CUAS kill chains with or without a human in the loop; and the limitations of passive or active countermeasures. However, this research has not taken a broader systems perspective. There is research on the adoption of UAS and CUAS in military and perimeter security operations (e.g., airports, camps, and bases), but there is currently limited work that includes the full impacts on adjacent stakeholders and civilians within the perimeter vicinity during the activation of CUAS interceptor systems to counter UAS threats or intrusions.
Developing a systems perspective of CUAS effectiveness against current and emerging UAS threats may help facility commanders better identify potential weak points in their defenses. A systems perspective on mitigation measures that reduce the risk of a successful UAS attack through a variety of means (e.g., new CUAS systems, progressive levels of defense approaching sensitive assets, etc.) is also needed to help facility commanders make decisions on potential CUAS upgrades. At the same time, a systems perspective on how CUAS operations may affect adjacent stakeholders (e.g., allied forces' operations and civilians) is needed to aid in balancing CUAS capabilities with adjacent stakeholders' needs.
Specific Contributions
This research presents a systems engineering evaluation and analysis process for reviewing the effectiveness of a currently deployed CUAS in supporting facility operations, which facility commanders can follow to understand CUAS vulnerabilities and consider the possible effects on adjacent allied forces and civilians (e.g., jamming, spoofing, etc.). A trade-off study can be completed through the proposed evaluation and analysis. This can then be used to develop upgrade or implementation plans to better defend facilities against the evolving UAS threat while reducing impacts on adjacent stakeholders' operations. The Singapore Armed Forces (SAF), the Department of Defense (DOD), civilian airports, and other facilities that may be targets of UAS will benefit from this research.
Background and Related Research
This section provides the required background knowledge and related research of CUAS and UAS technical capabilities to assist in understanding the discussion and the step-by-step process proposed in Section 3. Additionally, a review of existing literature on evaluation and analysis of CUAS is presented.
Specific Threats: UAS
The concept of the UAS was first adopted by the military in 1849, when Austria used unmanned balloons stuffed with explosives to attack Venice [4]. However, the balloons blew off course and were unable to reach their target. This failure motivated further UAS technological development. Today, the U.S. military and many other defense forces have successfully adopted UAS to conduct operations such as precision strikes; electronic attacks; and intelligence, surveillance, and reconnaissance (ISR). UAS proved effective during military operations such as Operation Enduring Freedom and Operation Iraqi Freedom [5,6].
Beyond the military, the commercial sector is investing in UAS development, seeing potential economic growth as UAS are deployed in the next few years to realize the Fourth Industrial Revolution [7,8]. The rise in UAS-related patent submissions over the last 30 years (from about 20 to about 12,000) demonstrates the explosive growth in this sector. Developments over this period include sophisticated capabilities such as motion tracking, visual projection, thermal scanning, light detection and ranging, 3D environment mapping, facial recognition, and obstacle avoidance. UAS support many uses, such as operations in agriculture, mining, manufacturing, logistics, security firms, marketing, construction, and infrastructure. These capabilities have also attracted hobbyists, who have formed a community adopting UAS for recreational uses [9] such as scenery photography, video, or racing. However, unapproved media recordings may raise security concerns for the respective stakeholders. Hence, the growth in both capabilities and the adoption rate of small Commercial Off-The-Shelf (COTS) UAS poses a significant threat to both security facilities and civilian facilities [10,11].
UAS Group Classification
UAS are often classified by top speed, Max Gross Take-Off Weight (MGTOW), and maximum operating altitude. The grouping of basic UAS capabilities is given in Table 1.
Table 1. UAS groupings based on weight, operating altitude, and top speed. Source: [10]. The classification of UAS provides a quick guide to what a particular group of UAS can be used for and the types of capabilities it can employ. In this research, the UAS capabilities of concern are payload-enabled capabilities, self-modification capabilities, and swarm capabilities.
Payload-Enabled Capabilities
The MGTOW limits the payload weight that the UAS can carry, which narrows down the type of capabilities the UAS can have. The most common low-cost payloads are non-sensing payloads that do not gather or transmit any type of information to the operator. Such payloads can be anything from homemade explosives to biological or radiological payloads (e.g., Chemical, Biological, Radiological and Explosives (CBRE)); however, their use requires the target to be within the operator's line of sight. COTS payloads with capabilities such as live video feeds enable the conduct of Intelligence, Surveillance, and Reconnaissance (ISR) operations and precision strikes; these capabilities are limited by the energy consumption of the UAS. Finally, a countermeasures payload can disrupt a wide span of operations using Radio Frequency (RF) jamming or an electromagnetic system; like sensing payloads, countermeasures payloads are limited by energy consumption. The list of payload-enabled capabilities is shown in Table 2.
Table 2. Payload-enabled capabilities.

| Type | Capability | Description |
| Non-Sensing Payload | Kamikaze | Both the payload and the UAS crash into the target. |
| Non-Sensing Payload | Payload Release | The payload is carried to a certain altitude and is released while hovering above the target. |
| Sensing Payload | Electro-Optic | Imagery and video recording functions to support ISR operations. |
| Sensing Payload | Light Detection and Ranging (LIDAR) | The pulsing of a laser in a given time that enables distance measurements. |
| Countermeasures Payload | Spoofers | The spoofing payload disrupts navigational or command-and-control receiver systems, such as those that rely on a Global Navigation Satellite System (GNSS). |
| Countermeasures Payload | Jammers | The payload overloads sensor inputs, causing disruption to operations. |
Self-Modification Capabilities
Another unique feature of some COTS UAS is that they come in kit boxes requiring self-assembly of premade components [12]. This feature enables hobbyists with some knowledge of model aircraft and basic electrical engineering, or with online learning resources, to mix and match components to achieve desired capabilities.
With the proliferation of low-cost materials and design templates available online, additive manufacturing (3D printing) can create parts, which makes the design and assembly of UAS customizable and enables rapid experimentation [13]. Such one-off designs are harder for authorities to track, as they lack a paper trail, which allows users with malicious intent to adapt devices for nefarious purposes.
Swarm Capabilities
As the complexity of military missions increases, the need to complete more numerous and more complex high-risk tasks drives the effort to study the deployment of UAS. Within the defense community, there are ongoing research efforts to design UAS that operate in swarms, which can improve the efficiency of missions such as ISR, Maritime Interdiction Operations (MIO), Humanitarian Assistance and Disaster Relief (HADR), and Search and Rescue (SAR) [14].
The commercial sector has found use for swarm technology as a means of entertainment at large-scale events. Commercial swarm technology can be seen in recent publicized events, where a swarm of UAS operates in coordination for lighting displays by switching between formations [15][16][17][18]. For instance, in China, swarm UAS were recently used for marketing purposes, with QR codes illuminated in the sky. Displaying QR codes raises other cyber-security concerns, which are not covered in this research [19,20].
UAS swarm capabilities are achieved by adopting an automation architecture or by using Artificial Intelligence (AI) and Machine Learning (ML) to support UAS in basic self-maneuvers and to assist an individual operator in controlling multiple UAS for a common mission. These decentralized and self-organized approaches are commonly known as Swarm Intelligence (SI), producing flocking, herding, and schooling that resemble collective behavior found in animals. SI supports UAS in solving complex problems through cooperation while operating within a set of rules embedded in the system [21]. The consistency of the history of flocking coordination decisions may be supported through the application of blockchain [22][23][24][25]. As operations are decentralized to swarm UAS, the possibility of cyber-vulnerabilities increases.
Despite the possibility of a swarm of UAS intruding into a facility, the likelihood of such an attack may be low, as the cost and space required to mount a coordinated attack are likely affordable only by a state actor or a large organization with its own fleet of UAS, not by an individual. This reduces the probability of a large-scale swarm UAS occurrence. Moreover, with currently available CUAS, the successful launch of multiple UAS at the same time without early detection has a very low likelihood, as the signature created at the launch site will inevitably be high. However, the risk should not be completely ruled out, as the fast growth in swarm technology could support future grey-zone warfare in the event of rising tensions between nations.
The direct threat of concern to many facilities is small COTS UAS (Groups 1 and 2) and modified UAS, as they are inexpensive, easily acquired, and difficult to detect and intercept [26]. As public adoption of UAS increases, there are increasing reports of UAS intrusions and sightings that disrupt facility operations and cause monetary losses, flight delays, and unnecessary risks (e.g., the 2018 Gatwick drone incident and the 2019 Changi Airport drone incident [27,28]). A 2018 study by the CUAS capabilities analysis working group established that the probable nefarious uses of UAS by non-state actors are ISR, conveyance of contraband, kamikaze explosive attacks, and CBRE attacks.
Counter-Unmanned Aerial System
There is currently a wide span of CUAS solutions with a range of different configurations, which can be purchased as separate systems or as suites of systems that can be adopted directly [29][30][31]. The system concept is similar between different companies, namely the ability to detect and intercept UAS.
A variety of considerations influence the CUAS configuration and performance, such as the level of facility security required, the expected time from initial detection of a UAS intrusion until interdiction occurs, and the type of deployed interceptor system (e.g., kinetic, RF jamming, energy pulse, etc.). Given the needed flexibility in system configurations, the cost of deploying a sophisticated CUAS solution is relatively high, as it is a complex system of systems that requires detailed study of the environment in which it operates (e.g., nearby tall buildings, terrain, weather, other environmental factors, etc.) to mitigate possible interference that may degrade system performance.
System Kill Chain
The kill chain model of CUAS is similar to the generic military application of Find, Fix, Track, Target, Engage, Assess (F2T2EA); the definition of each process is given in Table 3.
Table 3. F2T2EA kill chain processes.

| Process | Definition |
| Find | Identify a target. Find a target within surveillance or reconnaissance data or via intelligence means. |
| Fix | Fix the target's location. Obtain specific coordinates for the target either from existing data or by collecting additional data. |
| Track | Monitor the target's movement. Keep track of the target until either a decision is made not to engage the target, or the target is successfully engaged. |
| Target | Select an appropriate weapon or asset to use on the target to create desired effects. Apply command and control capabilities to assess the value of the target and the availability of appropriate weapons to engage it. |
| Engage | Apply the weapon to the target. |
| Assess | Evaluate the effects of the attack, including any intelligence gathered at the location. |
With an understanding of the F2T2EA kill chain, we suggest that the CUAS kill chain can be simplified, for the purposes of this research, from F2T2EA to only detection and interception. Detection consists of find, fix, and track; the remaining processes fall under interception. This simplification helps associate the kill chain process with physical subsystem capabilities and key performance metrics, as discussed in Sections 2.2.2 and 3.
Subsystem Capabilities
CUAS have three key subsystems that are required to effectively counter incoming UAS.
The first subsystem is the sensor system, which fulfills the initial detection part of the kill chain process: it purely conducts sensing and investigates the cause of a potential UAS detection. The sensor system consists of multiple sensors that can sense, maintain a track on, and identify an incoming UAS. A typical list of CUAS sensors is given in Table 4.
Table 4. Typical CUAS sensor systems.

| System | Capabilities |
| Radar | Some UAS may be identified via radar signature. Radar may also be applied to device tracking. Unlike RF, radar signatures for UAS must be distinguished from radar signatures for birds, etc. |
| Radio-frequency (RF) | RF scanning supports the detection and geolocation of UAS based on communication link frequencies. UAS may also be identified by RF behavior in some cases. |
| Acoustic | UAS can be identified by acoustic signatures generated by operating motors. |
| Electro-optical (EO) | Identifies and tracks UAS based on their visual signature. |
| Infrared (IR) | IR uses heat signatures to identify and track UAS. |
The second subsystem is the Command and Control (C2) system, which fulfills the second part of the detection kill chain process. The C2 system classifies and assesses the situation and then provides decision-making assistance to the CUAS operator. The C2 system can fuse data from multiple sensors, map the situation, confirm the threat, and disseminate information to the on-site response team. As there are many different types of software that can fulfill these requirements, this research stays at a generic level with regard to software.
The third subsystem is the interceptor system, which fulfills the interception part of the kill chain process. The interceptor system may consist of soft kill [33] and/or hard kill [34][35][36][37] systems (see Table 5 for details) that can temporarily or permanently disable a UAS or disrupt it from continuing its mission.
Table 5. CUAS interceptor systems.

| System | Capabilities |
| Soft kill: RF Jamming | Radio frequencies are susceptible to frequency jamming, where RF interference is generated to effectively block the RF connection between the UAS and operator or between UAS. |
| Soft kill: GNSS Jamming | As with RF jamming, GNSS jamming blocks a connection to the device. In the case of GNSS, jamming targets the satellite link to the UAS, e.g., GPS or GLONASS, which provides essential navigation information. |
| Soft kill: Spoofing | Spoofing often implies a break in the cryptographic entity authentication between the device and the operator/GNSS satellite. Spoofing an operator allows the attacker to impersonate the operator to the UAS, taking control of the device or redirecting it. If spoofing the GNSS link, the attacker may feed false navigation information to the device. Spoofing may also be used by some researchers to imply a break in the channel (data confidentiality or authenticity) providing information on the UAS and operator link, or the GNSS link. |
| Soft kill: Dazzling | High-intensity lasers or light beams can be used to render UAS camera use ineffective. |
| Hard kill: Laser | High-intensity lasers can be used for directed energy against UAS, melting or weakening key components. |
| Hard kill: Microwave | Like lasers, directed high-intensity microwaves can be used to disable the UAS's electronic systems. |
| Hard kill: Nets | Physical nets can entangle and trap a UAS. |
| Hard kill: Projectile | Physical projectiles, including ammunition, can be used to kinetically take down a UAS. |
| Hard kill: Collision UAS | A custom UAS may be flown against the target UAS with the intent to collide with it and produce a kinetic take-down. |
Challenges
Although multiple subsystems within the CUAS can have overlapping capabilities to ensure a high rate of successful UAS intercepts, several external factors and challenges remain outside the system's control and will diminish the overall effectiveness of dealing with a UAS intrusion.
The first challenge is that the detection effectiveness of the sensors can be greatly affected by some UAS capabilities [38]. For instance, a highly maneuverable UAS can fly close to the ground or sea, which dampens a radar's ability to detect it. Similarly, a UAS entering a facility from a direction backlit by strong light sources, such as the sun, or from beyond tall buildings with a limited Line Of Sight (LOS), will reduce the detection effectiveness of Electro-Optical (EO) and Infrared (IR) sensors. Adverse weather can attenuate RF signals, which reduces RF sensor effectiveness. Therefore, detection effectiveness depends on all sensor types' ability to investigate an intrusion through the environment and on the available LOS between the CUAS sensors and the UAS. In addition, the UAS MGTOW may also contribute to detection interference, as it allows the UAS to carry payloads such as electromagnetic devices that can interfere with or damage the sensors' capabilities. In the near future, many operating environments can be foreseen to include friendly UAS within the area of operation [39][40][41]. There is a strong need to ensure that the CUAS possesses the capabilities to identify, classify, and differentiate friend from foe among detected UAS, to avoid missed or unintentional engagements. These limitations pose a crucial requirement for a sufficient buffer to compensate for errors during the short and complex response window for dealing with both authorized and unauthorized UAS, as the effectiveness of the CUAS interceptor system can decrease rapidly as time passes and an unauthorized UAS approaches its intended target.
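To picture how overlapping sensors can partially compensate for these environmental factors, the following sketch fuses independent per-sensor detection probabilities with a noisy-OR rule; both the independence assumption and the attenuation factors are illustrative simplifications, not a model proposed in the cited works.

```python
# Illustrative noisy-OR fusion of independent sensors; the independence
# assumption and the attenuation factors are ours, not the paper's model.
def p_sense(base_probs, attenuation):
    """base_probs: per-sensor detection probability, e.g. {'radar': 0.85}.
    attenuation: same keys, factor in [0, 1] for LOS/weather/clutter effects."""
    p_miss_all = 1.0
    for sensor, p in base_probs.items():
        p_eff = p * attenuation.get(sensor, 1.0)  # degraded by environment
        p_miss_all *= (1.0 - p_eff)               # all sensors miss together
    return 1.0 - p_miss_all

# Example: rain attenuates RF, low-altitude flight degrades radar.
print(p_sense({"radar": 0.85, "rf": 0.80, "eo": 0.60},
              {"radar": 0.5, "rf": 0.7, "eo": 1.0}))
```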
The second challenge concerns the facility's legal rights and the cost to public perception when applying passive or active countermeasures during a UAS intrusion [42]. There are rising concerns about the application of CUAS interceptors in operating environments, including the possibility of collateral damage, such as a disrupted UAS falling out of the sky, which could injure bystanders and damage infrastructure [26]. There are ongoing debates over whether the application of an interceptor should be limited and used only as a last resort, which can greatly reduce the effectiveness of the CUAS [42,43].
The third challenge is the emerging requirement of forensic study of intercepted UAS. Forensic study processes are known to be labor intensive and require much effort to find evidence of an operator's intention through examination of media storage [44]. However, the benefit of forensic study is high, as it assists in the vulnerability assessment of the facility and sets improvement requirements for the CUAS. The extracted data can include flight path coordinates, pictures, and videos that aid in the enhancement of base security, such as extending fenceline boundaries, increasing foot patrols in an area, etc. [44][45][46][47]. However, this requires more resources and technical expertise among responding personnel so as not to unintentionally tamper with or destroy evidence while extracting the data from the media storage. The debate is, therefore, whether there is a need to employ a team of technical experts to conduct the forensic study of detained UAS.
The last challenge is the consequences of engaging a misidentified target. There are few studies on the consequences of misidentifying a UAS; however, there have been multiple broadly publicized incidents of passenger aircraft being misidentified and subsequently shot down by air defense systems [48][49][50][51]. For instance, the incident of Flight PS752, which was shot down due to misidentification by an air defense unit in the suburbs of Tehran [51], provides an example of what a CUAS could do if it were to misidentify an incoming object.
Existing Methods of Developing and Deploying CUAS
Several methods exist for developing and deploying CUAS and related physical protection systems, such as Garcia's physical protection design and evaluation method. This method begins with the determination of physical protection system objectives, which characterizes the physical facility, defines the threat, and develops target identification. Next, the design of the system is split into three broad categories: detection, delay, and response. An analysis is then carried out on the system design to determine whether it has met the requirements; lastly, the result is output as a final design or iterated back through the redesign loop.
Another related method is the National Institute of Standards and Technology (NIST) risk management framework, which provides a generic process that can be applied to any type of system [52]. The processes are prepare, categorize, select, implement, assess, authorize, and monitor. Each process is required to meet a certain objective, which guides an organization in adopting a more comprehensive risk management plan. The prepare step identifies the key risk management roles and establishes a strategy that can be adopted organization-wide. The categorize step ranks the risks based on impact level and appoints an appropriate approving authority. Then, control measures are selected and allocated to specific system components. The implement step applies the controls to the system and organization. The next step, assess, evaluates whether the controls have met the intended outcome. The approving authority then reviews the assessment to determine whether the risk management plan is acceptable. Lastly, the risks and control implementation are continuously monitored to allow timely review or intervention.
In general, the concepts and principles of the two methods discussed above are relevant and applicable to conducting analysis of a CUAS. Although they do not directly address the need for a CUAS design method, they do provide inspiration for this research.
Methodology
Rapidly developing UAS technology has disrupted the design and implementation of CUAS. Because of this, scope creep and/or new requirements are often introduced late in the CUAS design cycle, which can lead to higher overall project cost or a failed CUAS deployment. To successfully deliver CUAS solutions to facilities, there is a need for a cost-effective and comprehensive system analysis method that ensures the CUAS stays relevant against newly emerging UAS technology and threats. This section proposes a systems-perspective analysis method that can guide a facility commander in developing new CUAS capabilities and augmenting an existing CUAS. The proposed methodology provides insights into how the CUAS can be optimized to achieve the required system effectiveness and allows timely intervention to propose CUAS redesign solutions. The proposed methodology, illustrated in Figure 1, is not dependent on any specific CUAS technology and can be adopted by any facility to conduct analysis and evaluation of potential and existing CUAS.
Pre-Step: Collect System Information
Two events can trigger the pre-step process. The first is the identification of emerging UAS capabilities that pose a new threat to the CUAS currently deployed at a facility, creating a technological gap between the CUAS and the threat. It is assumed that the pre-step has not previously been completed for the facility and thus must now be done. The second is newly developed CUAS capabilities becoming available that can address the outstanding threats a facility faces. These two events are iterated upon throughout the design process, and even after the system is delivered, to ensure timely intervention to maintain the intended CUAS requirements for the facility. Without either of these events occurring, there is likely no need to use the proposed methodology.
The importance of the pre-step is to gather current and historical data on the CUAS. The gathered data establish the current state of the CUAS for further investigation and analysis by facility commanders. It is recommended to use Model-Based Systems Engineering (MBSE) tools such as Core, Innoslate, etc., to assist in the analysis and evaluation work [53]. However, this is not mandatory, and the analysis can still be carried out without MBSE tools.
Step 0.1: Establish Stakeholders' Requirements
The initial data required from the stakeholders are (1) the daily activities within the facility, (2) the expected environmental conditions, and (3) the facility security requirements.
The daily activities data can cover human or asset (e.g., planes, cargo vessels, etc.) traffic flow, routine operations, and possible ad hoc operations that need to be carried out by the respective stakeholders. This provides a baseline of the types of daily activities that will operate alongside the CUAS at the facility.
Next, to understand the expected environmental conditions that the system needs to operate in, there is a need to gather facility blueprints and the maximum and minimum boundaries of the facility (some facilities have zones of protection that extend beyond the site boundary), conduct a direct line-of-sight study, identify possible wildlife activity, and collect historical weather data. This effort identifies the environmental "noise" that could interfere with CUAS performance.
Lastly, the stakeholders are to determine their expectations of the CUAS security requirements. This requires the stakeholders to list their critical assets with the expected level of protection and the security classification of the assets.
The objective in this step is to determine the expected CUAS operating conditions and acceptable security level so that the system can be configured to meet all stakeholder requirements.
Step 0.2: Review Existing Risk Management Plan
In this step, there is a need to understand the current risk management plan in place to address the inherent vulnerabilities of the existing CUAS. The inherent vulnerabilities can be associated with three parts: the first is physical system failures such as electrical faults, inclement weather, fires, and accidents; the second is potential infrastructure damage and personnel injury caused by falling interdicted UAS; and the last is the risks posed by cyber-security, which may have catastrophic outcomes, such as an attacker successfully gaining control over either the CUAS's detection or interception systems.
The review of the existing risk management plan determines whether a new CUAS design solution has any impact on existing vulnerabilities. Ideally, a new CUAS design solution should cut down the number of risks that the existing risk management plan must address. Otherwise, there is a need to identify new mitigation measures, which extend and possibly overlap with the existing risk management plan.
The objective in this step is to allow the facility commander to make an informed consideration of the impact of CUAS activities on daily operations. This is achieved by garnering collective agreement from the various stakeholders on the mitigation plans that will be in place to manage the identified inherent risks [54].
Step 0.3: Gather System Operator Feedback
The last step of the pre-step is to gather feedback from the personnel who interact with the CUAS. To achieve high overall CUAS effectiveness, it is good practice to include current operators or personnel who interact with the CUAS during design reviews, as they may uncover limiting constraints or beneficial concepts that would not otherwise be recognized.
These data can then support Human Factors Engineering (HFE) design efforts that improve the interaction between the operator and the system [55]. This improvement yields better response times and a reduction in human error, as the interaction is more intuitive for the operator, which also increases overall system effectiveness.
Step 1: Threat Definitions
In Step 1, the objective is to assemble the current technical specifications of Group 1 and 2 UAS. A static list of technical specifications cannot be used because UAS development is constantly advancing, which means new capabilities are frequently emerging [56].
Step 1.1: Establish New Threat Information
As discussed in Section 2.1, the facility commander must understand the UAS capabilities that are of concern to the facility. Such concerns can be payload-enabled capabilities, self-modification capabilities, and swarm capabilities, among others. For instance, if any of the above capabilities have become available in COTS UAS, the facility commander should review the existing CUAS and determine whether the current CUAS design solution is still adequate to manage the new threat. Hence, the facility commander requires both UAS and CUAS capability parameters to be collected to conduct the analysis.
Step 1.2: Determine Potential CUAS System Vulnerabilities
To determine potential CUAS system vulnerabilities, an understanding of baseline system performance must be developed. We recommend that the CUAS's baseline performance be established by listing the expected daily operations and by considering possible UAS intrusion scenarios that may occur at the facility. Furthermore, potential future and as yet unobserved scenarios should also be considered. The subsequent CUAS evaluation can then draw on these insights into the vulnerabilities of the CUAS.
Given the identified vulnerabilities and the data developed in the previous steps, it is now possible to provide a tangible measurement of CUAS effectiveness. We derive a generic mathematical representation of the probability of CUAS effectiveness from Kouhestani [57]. CUAS effectiveness is determined by Equation (1) as the product of the probability of detection and the probability of interception:

P(Effectiveness) = P(Detection) × P(Interception) (1)

The breakdown of the two sub-functions' equations is explained in the subsequent sections.
The probability of detection refers to the probability that the sensors detect the presence of a UAS. This includes accurate identification and classification of the UAS, which is reflected in the false alarm rate. The three sub-performance metrics of detection effectiveness, expressed as probabilities, are sense (P(Sense)), track (P(Track)), and data transmission (P(Transmission)). The relationship between the sub-performance metrics is given in Equation (2), and their definitions are given in Table 6:

P(Detection) = P(Sense) × P(Track) × P(Transmission) (2)
The detection effectiveness metrics (Table 6) are defined as follows:

Probability of Sense — the probability that the sensors detect the presence of a UAS and initiate an alarm. A higher probability of sense increases the CUAS success rate in accurately detecting a UAS.

Probability of Track — the probability of accurately tracking the UAS's geolocation. Fewer track drops increase the CUAS accuracy in acquiring the UAS position.

Probability of Transmission — the probability of successful data transfer, over a period of time, to a response team and/or interceptor subsystem. The data may contain information such as the UAS model and the coordinates necessary for action against the UAS.
The probability of interception (P(Interception)) refers to the probability that the CUAS is able to deny or disable a UAS from continuing its intended mission. The sub-performance metrics that make up interception effectiveness, expressed as probabilities, are hit (P(Hit)), kill/deny (P(Kill/Deny)), and risk (P(Risk)). The relationship between the sub-performance metrics is given in Equation (3), and their definitions are given in Table 7. Because risk reduces interception effectiveness, P(Risk) enters the product as its complement:

P(Interception) = P(Hit) × P(Kill/Deny) × (1 − P(Risk)) (3)
The interception effectiveness metrics (Table 7) are defined as follows:

Probability of Hit — the probability of successful contact made by the interceptor with the UAS, by either a hard (e.g., projectiles) or soft (e.g., electromagnetic waves) kill method. A higher probability indicates better effectiveness.

Probability of Kill/Deny — the probability of successful denial or destruction of the UAS after being contacted by the interceptor. A higher probability indicates better effectiveness.

Probability of Risk — the probability of injury to persons or damage to infrastructure due to a UAS falling or accidentally colliding with an obstacle following interceptor action. Based on a study conducted in [26], the probability that a Group 1 or 2 UAS can cause such an impact is about 2-6%, and it can remain at this value throughout the evaluation.
In addition to the above mathematical relationships, it is also important to consider any existing mitigation measures that will be applied to reduce collateral damage. These mitigation measures are then evaluated as part of system vulnerability reduction in a subsequent step.
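As a minimal sketch of how Equations (1)-(3) compose, the following Java fragment encodes the three relationships. The class and method names are our own illustration, not part of any CUAS product or standard; only the formulas themselves come from the text above.

```java
// Minimal sketch of the effectiveness model in Equations (1)-(3).
// Class and method names are illustrative only.
public final class CuasEffectiveness {

    // Equation (2): probability of detection.
    static double detection(double pSense, double pTrack, double pTransmission) {
        return pSense * pTrack * pTransmission;
    }

    // Equation (3): probability of interception; P(Risk) is inverted to the
    // non-risk probability, as noted in the case study below.
    static double interception(double pHit, double pKillDeny, double pRisk) {
        return pHit * pKillDeny * (1.0 - pRisk);
    }

    // Equation (1): overall CUAS effectiveness.
    static double effectiveness(double pDetection, double pInterception) {
        return pDetection * pInterception;
    }
}
```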
Step 1.3: Define Threat
With the vulnerabilities identified, the facility commander should next associate specific threats with specific risk levels and then link the risk levels to the respective system vulnerabilities. A threat is defined by measuring its severity and how it could exploit a system vulnerability.
Step 2: Re-Evaluation of the Current System
In Step 1, the process focuses on the CUAS's technical performance. In Step 2, the process focuses on human factors that may affect CUAS performance from the perspectives of adaptability and usability. The aim is to select only the necessary changes that reduce the CUAS's impact on the people and operations within the facility.
To better understand the current system, further data such as building blueprints, existing surveillance systems, perimeter environmental behavior, and standard operating procedure documents are now reviewed in detail. These documents should provide a good starting point, from the system's perspective, on the overall current CUAS effectiveness that the facility commander can analyze.
Step 2.1: Re-Evaluate Facility Security Requirement with Stakeholders
With an understanding of the current CUAS system capabilities, it is important to conduct a meeting with the respective stakeholders to verify whether the current CUAS performance meets facility requirements. This includes identifying the stakeholders' critical assets and their locations within the facility to determine the level of protection required within each zone. This requirement provides the criteria for the baseline of needed CUAS performance. The stakeholders must collectively agree and commit to the operational requirements and safety constraints identified in this and previous steps. If there is a dispute over requirements, the facility commander will decide and direct the stakeholders to accept compromises between their operational and security needs.
Step 2.2: Develop Operational Concept View
At this point, a variety of Operational Viewpoint-1 (OV-1) diagrams and accompanying system architectural products should be developed to demonstrate multiple potential CUAS configurations. These configurations are used in subsequent steps to compare against requirements and eventually determine either that the current CUAS is sufficient or that it should be upgraded or replaced.
We suggest that the OV-1 view is particularly important to develop because it gives the facility commander and stakeholders a visualization of the CUAS system perspective. This helps clarify interactions, or identify missing interactions, between the CUAS and portions of the base and the surrounding community. The case study in a subsequent section provides an example OV-1.
Step 2.3: Analyze Candidate CUAS Configurations
Now that several candidate CUAS configurations have been developed, a trade-off analysis against the requirements can be conducted. We recommend the Pugh Matrix approach because it provides a method for scoring candidates in terms of positive or negative impact on specific requirements, based on the facility commander's best intuition given the available data [58].
The Pugh Matrix results identify which candidate CUAS configuration is most preferred. The configurations are ranked against the stakeholders' requirements, and the ranking is translated into a decision to improve an existing CUAS (or build a new CUAS if none exists) or to remain with the existing CUAS.
If a candidate CUAS configuration's benefit is lower than or merely matches that of the current CUAS, the recommendation is to make no change to the current CUAS. In this case, return to Step 1 for continued monitoring of future emerging UAS threats. Otherwise, if technological gaps are found in the capabilities of the existing CUAS against the UAS, proceed to Step 3.
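A toy Pugh-style scoring pass is sketched below; the candidate names, criteria order, and scores are illustrative assumptions, not case-study data, and the +/S/− scheme follows the convention described in Table 13.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy Pugh Matrix scoring pass: each candidate is scored +1 (better than the
// datum), 0 (same as), or -1 (worse than) per criterion, and the totals are
// compared. Candidates and scores below are illustrative assumptions only.
public final class PughMatrix {
    public static void main(String[] args) {
        // Criteria order: detection rate, total cost, layer flexibility, interception rate.
        Map<String, int[]> candidates = new LinkedHashMap<>();
        candidates.put("Medium detection range + soft/hard interceptors", new int[]{1, -1, 1, 1});
        candidates.put("Long detection range + soft interceptors only", new int[]{1, -1, 0, -1});

        for (Map.Entry<String, int[]> e : candidates.entrySet()) {
            int plus = 0, minus = 0;
            for (int s : e.getValue()) {
                if (s > 0) plus++;
                else if (s < 0) minus++;
            }
            System.out.printf("%s: sum of + = %d, sum of - = %d%n", e.getKey(), plus, minus);
        }
    }
}
```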
Step 3: Evaluation and Analysis
In Step 3, a more in-depth evaluation and analysis is conducted to assess the effectiveness of the proposed CUAS solution from Step 2.3. This includes generating the workflow of the proposed solution and using MBSE software to conduct simulation work.
Step 3.1: Simulate the Proposed CUAS Configuration
In this step, MBSE software is used to conduct detailed evaluations of the proposed CUAS configuration. In the case study presented in this research, CORE software is used to simulate the CUAS's functions, which allows the facility commander to assess the feasibility of the functions' interactions. However, many other MBSE software packages are available and have similar functionality.
Step 3.2: Determine Dominant Factors
The objective of this step is to identify the dominant factors that have the most impact on overall CUAS performance by simulating the down-selected CUAS configuration identified in Step 3.1. This approach reduces the cost and time required to assess a new CUAS configuration that may not otherwise have sufficient real-world data points for detailed analysis. Another benefit of simulation is that it can explore high-risk scenarios involving personnel, hazardous payloads, and other threats that would be dangerous to conduct physically. We suggest conducting a Design of Experiments (DOE) as part of this step, using software such as Minitab, ExtendSim, or CORE, to generate time-based simulations that can produce outcome probability distributions, identify delays and bottlenecks in the CUAS response, and implement probabilistic decision-making to improve realism [10,59]. A minimal sketch of such a time-based simulation follows.
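The sketch below is a deliberately simplified Monte Carlo version of the time-based simulations described above, not a reproduction of the CORE models used in the case study. All parameter values (range, timing means, and jitter) are our own assumptions; only the UAS speed echoes the COTS specifications given later.

```java
import java.util.Random;

// Minimal Monte Carlo sketch of a time-based engagement simulation: a UAS flies
// in from the detection boundary while the CUAS spends time sensing, deciding,
// and intercepting. All parameter values are illustrative assumptions.
public final class EngagementSimulation {
    public static void main(String[] args) {
        Random rng = new Random(42);
        final double detectionRangeM = 1500.0;   // assumed detection boundary
        final double uasSpeedMps = 68.0 / 3.6;   // 68 km/h, per the COTS UAS specs
        final int trials = 100_000;

        int intercepts = 0;
        for (int i = 0; i < trials; i++) {
            double timeToTarget = detectionRangeM / uasSpeedMps;
            // Assumed CUAS timeline: sense + decide + neutralize, each with jitter.
            double responseTime = 20 + rng.nextGaussian() * 5    // sensing/classification
                                + 10 + rng.nextGaussian() * 3    // decision
                                + 30 + rng.nextGaussian() * 10;  // interceptor action
            if (responseTime < timeToTarget) intercepts++;
        }
        System.out.printf("P(intercept before arrival) ~ %.3f%n",
                intercepts / (double) trials);
    }
}
```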
The interaction between UAS and CUAS capabilities generates certain important interaction effects; these relationships are summarized in Table 8 and outlined below.
UAS Maximum Range vs. Detection Range — to achieve the earliest detection possible, the detection coverage should extend outward from the facility's outer perimeter by at least the UAS maximum range.

UAS Maximum Speed vs. Response Time — the CUAS must intercept before the UAS achieves its objective.

CUAS Identification and Classification Time vs. Reaction Time — this duration must be much shorter than, and fall within, the response time. It can also be used to justify whether this process should be managed by a human-in-the-loop or by full automation.

CUAS Tracking Capacity vs. Software Algorithm and Sensor Capabilities — this determines the CUAS's ability to deal with a swarm of UAS.

CUAS Maximum Interceptor Range vs. Engagement Window — ideally, the interceptor should be able to engage as early as possible.
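To make the range/speed/time coupling in Table 8 concrete, the following back-of-the-envelope sketch estimates the engagement window. The formula and all values are our own simplification; only the UAS speed comes from the COTS specifications cited in the case study.

```java
// Back-of-the-envelope engagement window estimate from the Table 8 couplings:
// the time budget is the detection range divided by the UAS speed, minus the
// CUAS identification/classification time. Values are illustrative assumptions.
public final class EngagementWindow {
    public static void main(String[] args) {
        double detectionRangeM = 1500.0;     // assumed detection boundary
        double uasSpeedMps = 68.0 / 3.6;     // 68 km/h from the COTS specs
        double classificationTimeS = 15.0;   // assumed reaction time

        double windowS = detectionRangeM / uasSpeedMps - classificationTimeS;
        System.out.printf("Engagement window: %.1f s%n", windowS);
    }
}
```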
Step 3.3: Identify Risks to the CUAS
In this step, risks to the CUAS are identified so that mitigation strategies can be developed in the subsequent step. In this context, we are interested in the risk of the CUAS not performing as intended.
CUAS system risks can be caused by limitations inherent to some CUAS subsystems, such as sensors and interceptors requiring line of sight with the UAS, absence of sun glare, and other related interference. The CUAS is designed as a system of systems that aims to address the inherent vulnerabilities of individual sensors and interceptors, eliminating such single points of failure. Additional risks may be present, such as bypassing sensors (e.g., through blind spots, or by saturating or overloading a sensor) or spoofing them. Interceptors can be vulnerable if they cannot activate in a timely manner, which can lead to missed UAS engagement opportunities. Additionally, the risks posed by the CUAS itself to facility personnel and nearby communities must be considered. For example, an adversary's UAS that loses control due to CUAS interception may fall or crash into nearby communities.
We suggest identifying and categorizing the above-mentioned risks into high, moderate, and low risk categories based on the severity of each risk outcome. We define high risk as the potential to cause significant damage or severe degradation of performance to the CUAS, facility, and/or surrounding community; moderate risk as the potential to cause moderate damage or degradation of performance; and low risk as the potential to cause minor damage or degradation.
Step 3.4: Develop Risk Mitigation
Next, risk mitigation plans for each identified risk can be developed. The mitigation plans aim to increase the probability of CUAS success and to reduce potential collateral damage. We suggest this can be achieved through a review of the policies, practices, and procedures associated with the CUAS. To reduce CUAS system failure, the emphasis here is to develop and implement controls over the risk. This can be done by strengthening the identified weak points of the system, or by accepting the risk and creating layers of processes to neutralize it if the CUAS cannot be strengthened. Although a variety of mitigation strategies are discussed below, we assert that the main consideration for risk mitigation should be providing ample time for the CUAS to react safely to a UAS intrusion, ensuring the safety of people operating within the vicinity. We assert that this is a more conservative and safer approach than the alternatives.
The first mitigation strategy is to achieve early detection to increase the duration of the CUAS engagement window. Detection coverage should extend as far out from the facility as can reasonably be accommodated. The probability of successful detection can be further increased by eliminating detection blind spots and reducing false alarm rates. If there is line-of-sight blockage by tall buildings or trees, this can be mitigated by employing patrols to cover blind spots, pruning vegetation, or establishing satellite sensor locations atop the tall buildings that would otherwise block CUAS sensors. This also involves adding layers of procedures and revamping policy to allow installation of sensors on buildings that may not be controlled by the facility. To reduce the false alarm rate, we recommend that mitigation efforts include the environmental studies discussed in previous steps, to understand visibility conditions throughout the day, such as possible sun glare during sunrise and sunset, reduced visibility during inclement weather, and blockage by wildlife activity.
The second mitigation strategy is to have interceptors that are effective and will not disrupt facility operations or neighboring communities. The reaction to and neutralization of a UAS intrusion is expected to be swift, and the result is expected to be decisive. This mitigation strategy aims to intercept the UAS before it enters the facility, or at least before it reaches sensitive areas of the facility. We suggest employing multiple layers of interceptors to allow CUAS engagement at different distances and with different interception methods, eliminating the possibility of a single point of failure.
Step 4: Design Recommendations
In the final step, the stakeholders collectively make a decision on CUAS upgrades and implementation.
Step 4.1: Verify TRL Level
In this step, checks must be performed to ensure that the selected CUAS technologies and configurations meet an appropriate Technology Readiness Level (TRL). We advocate using Kouhestani's CUAS TRL levels, where Level 1 is scenario-based testing (e.g., "red-teaming" [60][61][62]), Level 2 is exploratory testing, Level 3 is baseline characterization testing, Level 4 is performance testing (statistical confidence levels), Level 5 is degradation/vulnerability testing, and Post-Install is certification and periodic performance testing [57]. Please note that Kouhestani's TRL differs significantly from other TRL scales. We prefer it over other approaches because it focuses on elements appropriate to the rapid development of technologies associated with CUAS and UAS: it is specifically designed for UAS and CUAS, whereas other scales, such as the National Aeronautics and Space Administration (NASA) TRL, are better suited to aerospace purposes. However, a practitioner using this method may substitute their own preferred TRL scale.

For the purposes of this research, we suggest that TRL Level 1 is the minimum level that any specific system or subsystem within the CUAS must reach for consideration. We recommend not evaluating TRL earlier in the methodology, beyond ensuring that CUAS technologies are on track to reach TRL Level 1 by the time Step 4 is reached, because CUAS technology develops rapidly to keep pace with UAS technology. If a facility were constrained to consider only TRL Level 5 technologies, then in our opinion the facility would have little hope of ever countering the latest UAS threats and would instead be protected only against old and outmoded UAS threats.
After confirming the TRL levels, the results can be bundled into a package for stakeholder review.
Step 4.2: Recommended CUAS Design for Implementation
In this step, the stakeholders review all the CUAS information compiled and developed over the previous steps to ensure that all requirements are met satisfactorily. If any requirements are not met, the stakeholders must either accept that some requirements will not be met or iterate the CUAS design until it meets them. If the CUAS does not meet requirements, the method returns to Step 2 and iterates until a suitable design solution is found or the stakeholders accept the CUAS limitations.
If the CUAS meets all requirements, the facility commander can proceed with implementing CUAS capability upgrades or deploying an entirely new CUAS, depending on the situation. In parallel, the method iterates back to Step 2 to continuously monitor emerging UAS and CUAS technologies and capabilities that may pose threats or offer solutions to the facility. The proposed method never truly ends, because the threat posed by new UAS innovations is ever evolving.
Case Study
This section demonstrates the usability of the proposed methodology applied to a hypothetical airport and CUAS in a dense urban area in a tropical environment. Although the case study's hypothetical airport and environment may bear a passing resemblance to some airports, the details have been intentionally changed to retain realism while ensuring that no unintended disclosure of sensitive information could result from this article. The case study assumes that a CUAS is already in operation and that the expected threat is from hobby UAS that are COTS systems. The analysis provides evidence to determine whether emerging UAS capabilities require an upgrade to the existing CUAS capability. In this case study, we use the MBSE software CORE to illustrate the proposed methodology.

Step 0.1: Establish Stakeholders' Requirements

The identified key stakeholders are the facility commander, personnel working within the facility, and the CUAS operator. In a real facility, there could be multiple entities among the personnel working within the facility, and it is ideal to list them separately, as they may have different levels of authority over the design solution.
The list of each key stakeholder's concerns and their influence over the system design is given in Table 9. From the stakeholders' concerns, the list of daily activities is also consolidated to give a sense of the baseline operations within the vicinity. A sample list of daily activities is provided in Table 10. The next step is to gather the expected environmental conditions using historical weather data from recent years. Average year-round historical weather data are accessible from Weather Spark's website [65] and many other data repositories. The parameters of interest to the designer are the temperature and weather conditions; these determine the expected operating temperature range that the CUAS must withstand and the expected inclement weather, such as haze, snow, and rain, which affects the sensors' sensitivity and visibility. The temperature in Singapore throughout the year ranges from above 24 °C at the lowest to below 33 °C at the highest. The daily chance of precipitation is lowest at 25% in February and highest at 65% in November.
Lastly, the stakeholders are to determine the boundaries of their respective assets and the level of security required to guard them. The assets identified in this case study are the aircraft, the "critical repelling zone" where the airplanes park, and the runway where airplane takeoffs and landings occur.
Step 0.2: Review Existing Risk Management Plan
The existing risk management plan adopts an "isolation approach" for the physical system and operating procedures, segregating the potential risks into layers for better management. The physical system of the CUAS, as with many other military systems, must have redundancy to ensure the mission can still be carried out. The existing risk management plan is illustrated in Table 11.
Table 11 lists the risk types, mitigation actions, and simulation model approaches:

Full System Failure — soldiers are activated and deployed as sentries until the CUAS is back online; modeled by increasing the processing time and reducing the layered defense to the last-mile layer only.

Detection System Failure — a stand-alone portable aeroscope is used to manage detection until the system is back online; modeled by increasing the system processing time.

Interceptor System Failure — a physical interceptor gun is adopted by the patrol until the system is back online.

Step 0.3: Gather System Operator Feedback

The current CUAS operates in human-delegated mode, where the system gathers and generates a situation map that displays all the relevant information to the operator. The system's decision-making tools analyze the information from the sensors and offer pre-programmed actions to the operator. However, the operator remains responsible for interpreting the data and deciding which action to execute [66].
The primary use of this HFE feedback is to ensure that the virtual environment (situational map) generated by the CUAS matches the actual environment as closely as possible, so as not to cause misjudgment in operator communication or decision-making [67]. Other factors to consider in the design review include the alarm sound level and Heads-Up Display (HUD) elements such as threat colors, text size, legends labeling the icons, and confirmation messages. However, these factors can be subjective, as they may vary across individuals. Best practice is to use common display colors, such as red for threats and blue for own forces [68].
Step 1: Threat Definitions

Step 1.1: Establish New Threat Information

As the anticipated intrusion will be hobby UAS, which are COTS and modified systems, the specifications mostly fall within a maximum endurance of one hour, a payload of 5-15 kg, a speed of 68 km/h, and a size of around 50 cm to 2 m wide, with the ability to withstand strong winds of 39 to 61 km/h [69,70]. These UAS fall into Groups 1 and 2 as outlined in Table 1. The technologies found on COTS UAS are usually electric propulsion, Vertical Take-Off and Landing (VTOL), and radio-controlled navigation systems that require LOS to maintain the link. The above specifications may not apply exactly to modified UAS, but for now it can safely be assumed that the differences are not large. These capabilities are recorded and set as the new baseline threat capabilities. Notably, most of the presented COTS UAS have the option to be configured into a swarm.
Step 1.2: Determine Potential CUAS System Vulnerabilities
The overall potential system vulnerabilities are investigated and segregated into two parts: the detection system and the interception system.
The detection system vulnerabilities are defined by their subsystem weaknesses. However, each subsystem, given its unique capabilities, addresses the others' weaknesses, as shown in Table 12. Given this understanding of the detection subsystems' strengths and weaknesses, this analysis assumes that the system has a land link that is frequently maintained; therefore, for the purposes of this case study, the probability of transmission can be considered 100%. Parameters such as the probabilities of sense and track can be gathered through a review of Original Equipment Manufacturer (OEM) data or by conducting conditional testing. In this case, assuming the probabilities of sense and track are 75% and 90% respectively, the overall probability of detection follows from Equation (2): 0.75 × 0.90 × 1.00 = 0.675, i.e., a detection effectiveness of 67.5%.
Next, the interception system vulnerabilities depend on the composite parameters of the individual probabilities of hit, kill/deny, and risk. For the case study, we adopt commonly used countermeasures, RF jamming and GNSS jamming, as they have the least collateral impact and work against most current COTS UAS. However, they may have trouble dealing with modified UAS operating in an unknown RF band or with a modified navigation system, such as an EO payload that uses a live feed for maneuvering. In this case, assuming the probabilities of hit, kill/deny, and risk are 80%, 85%, and 6% respectively, the overall probability of interception follows from Equation (3): 0.80 × 0.85 × (1 − 0.06) = 0.639, i.e., an interception effectiveness of 63.9%. Note that the probability of risk is inverted to obtain the non-risk probability in this calculation.
The overall CUAS effectiveness is then computed using Equation (1): 0.675 × 0.639 = 0.4313, or 43.13%, which may not meet the stakeholders' requirements.
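As a quick check, feeding the case-study values into the CuasEffectiveness sketch given after Step 1.2 reproduces these figures; the class name below is again our own illustration.

```java
// Reproduces the case-study numbers with the CuasEffectiveness sketch above.
public final class CaseStudyNumbers {
    public static void main(String[] args) {
        double pDet = CuasEffectiveness.detection(0.75, 0.90, 1.00);     // 0.675
        double pInt = CuasEffectiveness.interception(0.80, 0.85, 0.06);  // 0.6392
        // Prints ~43.15%; the text's 43.13% rounds 63.92% to 63.9% first.
        System.out.printf("Overall effectiveness: %.2f%%%n",
                100.0 * CuasEffectiveness.effectiveness(pDet, pInt));
    }
}
```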
Step 1.3: Define Threat
Based on the previous computation of CUAS effectiveness, there is a strong need to improve both detection and interception effectiveness to raise the overall CUAS effectiveness against the UAS. Assuming the adversary UAS's objective is to complete a path to a target with the least chance of being detected or intercepted, the biggest threat the CUAS will face is UAS speed: a faster UAS shrinks the engagement window, leaving the CUAS little time to process and intercept.

Step 2.1: Re-Evaluate Facility Security Requirement with Stakeholders

With the threat defined, the stakeholders now need to decide what type of security is needed to protect their assets. The first task is to determine the maximum line of exploitation within the facility, i.e., how far a UAS could free-roam after intruding into the facility; anything beyond the maximum line of exploitation is deemed the danger zone. With these boundaries mapped out, the stakeholders next identify locations where their detection or interception systems can be deployed, as the deployed systems must be compatible with the existing facility operations and procedures. Lastly, the stakeholders review the procedure for activating the response team, which stands in during the CUAS's downtime, to identify the changes needed to support the system mitigation measures. These details include the response team's expected readiness time, rest area, mobility means, issued equipment, and rules of engagement.
The outcome of the discussion is the stakeholders accepting and approving the security level presented in the proposed candidate CUASs, kick-starting the detailed exploration of the candidate systems. This includes clarification of possible interference with the stakeholders' operations that the candidate systems may cause.
Step 2.2: Develop Operational Concept View
A graphical operational concept view of the hypothetical airport is shown in Figure 2. The hypothetical airport is a relatively flat area consisting of three runways and generally low buildings, except for one significantly tall control tower. Daily, ground operating crews are onsite to direct aircraft traffic, transport luggage, and clear debris off the runway, and there is wildlife activity such as bird flocks around the trees and fence line. The general assumption about the facility conditions is that there are environmental effects with intermediate levels of interference that must be accounted for in the analysis, as they may degrade subsystem performance.
Step 2.3: Analyze Candidate CUAS Configurations
The evaluation measures for the CUAS are based on the key design drivers above. Numerous evaluation measures could be adopted, but to allow a reasonable evaluation of the candidate configurations, four evaluation measures are created:
• High Detection Rate: A high detection rate will increase the mission success rate. This measure can be derived from the detection range, the types of overlapping sensors, and the false alarm rate.
• Total System Cost: The cost of the overall system must be prudent. This measure can be derived from the per-system cost, maintenance cost, testing cost, etc.
• Flexibility for Layer Deployment: The system's ability to deploy in layers for defense in depth.
• High Interception Rate: A high interception rate will increase the mission success rate. This measure can be derived from the probabilities of hit and kill.
The Pugh Matrix is an effective way to evaluate candidates, as several design concepts can be compared against the existing system (the datum). It uses qualitative techniques, and each criterion listed in Tables 13 and 14 has a quantifiable comparison. The Pugh Matrix may provide enough evidence for the designer to gather consensus from the stakeholders to move forward and explore a candidate configuration more deeply.
The results illustrated in Table 13 suggest that the CUAS candidate providing medium detection range and a combination of soft and hard interceptors is preferable, as its sum of positive scores is the highest. With that, the designer can proceed to Section 4.4 for further study of the candidate CUAS configuration.

Step 3.1: Simulate the Proposed CUAS Configuration

Conducting simulations can reduce project cost by exploring innovative ideas and feasibility checks prior to live testing on the actual system. The feasibility check includes testing the system functions and their interactions through simulation, as illustrated in Figure 3, which shows an example simulation output from the case study produced using CORE. This exploration can streamline the number of physical tests required and generally reduces testing costs. The baseline simulation was conducted using CORE, and the result is shown in the supplementary material to this article. Based on Figure 3, it is feasible to implement the chosen candidate CUAS consisting of a medium detection range and a combination of soft and hard interceptors. The next step is to determine the dominant factors in order to optimize the improved design to meet the requirements.
Step 3.2: Determine Dominant Factors
The DOE results show the dominant factors that have a significant impact on the respective probabilities: the CUAS's target initial range (maximum detection range) exponentially increases the probability of kill as it increases; the target speed (the UAS's maximum speed) significantly decreases the probability of kill as it increases; and any process, response, or neutralization time requiring more than 10 s causes an exponential decline in the probability of kill. From these results, the key dominant factors impacting system effectiveness are the target initial range (maximum detection range) and the system response time.
Step 3.3: Identify Risks to the CUAS
With the dominant factors identified, the system designer then creates a list of risks ordered by their severity and likelihood of occurrence, as illustrated in Table 14.
Step 3.4: Develop Risk Mitigation
Next, the risks identified as high severity through the DOE are addressed. The mitigation actions aim to reduce the likelihood of occurrence of the identified risks; the list of mitigation recommendations and the new likelihoods of occurrence are shown in Table 15. Based on the findings in Section 4.4, the proposed CUAS candidate possesses feasible functionality and meets most of the stakeholders' requirements. The next proposal is to apply the mitigation measures that address the identified risks. The candidate design increases the CUAS's detection distance and adopts a combination of soft and hard interceptors, which effectively improves the probability of sense and the probabilities of hit and kill/deny, respectively. The results are illustrated in Table 16.
With the applied mitigation measures, the expected CUAS effectiveness increases from 43.13% to 77.19%. Based on the current subsystems' TRL against the UAS's TRL, this level of effectiveness is acceptable for implementation. Hence, the case study proceeds to Section 4.5.2.

Step 4.2: Recommended CUAS Design for Implementation

To proceed with the recommendation, the stakeholders now make an informed decision to accept the unaddressed requirements, agree to revisit them when future opportunities arise, and make the necessary reviews of their Tactics, Techniques, and Procedures (TTP) to maintain system cohesiveness in support of the implementation effort. This includes reassessment of project funding and reallocation of resources to support the new design configuration.
While the implementation process is underway, and even after the new CUAS is deployed, the facility commander continues to monitor emerging UAS capabilities. As new threats emerge, the process iterates to respond to them in a timely manner.
Discussion
With the anticipation of deploying UAS to support daily facility operations, some key insights apply when reviewing the design solution. Fast-emerging UAS capabilities are known to quickly diminish the effectiveness of a CUAS: minor tweaks can enable a UAS to avoid the CUAS's detection and interception. The proposed methodology guides the designer through a structured systems engineering approach that is repeatable and consistent for comparing performance parameters across different subsystems. The design principles are generic: they explore people's interactions with the system and leverage collaboration across the stakeholders to ensure the chosen CUAS is effective.
The use of MBSE and simulation tools assists in the verification and validation of the system, allowing further exploration of innovative ideas and feasibility studies in a cost-saving and safe environment. Furthermore, the DOE determines the dominant factors, which provides better insight for improving the system in a resource-optimized manner.
Although the CUAS and environment data used in the case study are intentionally fictitious, the proposed methodology provides a generic guide for the facility commander to understand the critical requirements of CUAS deployment through the proposed systems engineering approach. The methodology is especially useful for projects with design uncertainty due to unknown parameters and for projects with limited resources.
As a caveat, this methodology has been shown effective in theory; however, it remains uncertain whether more complex issues will arise when actual system data sets are used. The most challenging part of the methodology is the development of consistent parameters for performance comparison. This challenge may stem from limited testing facilities or from capabilities at low TRL. To overcome it, the data set must be generated through live testing across a wide variety of scenarios, which increases the overall project budget. However, the method of acquiring realistic, usable, and consistent parameters is not part of this research's scope. We assert that different types of facilities, different countries, and different stakeholders will find different parameters most useful for their specific situations; thus, we do not recommend a specific parameter here.
The proposed methodology can address anything from a single UAS up to UAS with swarm capabilities. The research did not include UAS capable of traveling through both air and water, which may greatly impact the detection capability of a CUAS, as such a UAS can travel stealthily underwater and strike with agility in the air [71,72].
Although the re-evaluated design principles will be similar, the need to include underwater sensors, such as sonar or proximity sensors, or new types of underwater countermeasures will generate integration issues such as an increased false alarm rate, incompatible outputs, and target hand-over between ground and underwater sensors.
It should be noted that a variety of assumptions are embedded in Equations (1)-(3). For instance, P(Detection) could be 0 if the detection subsystem is unable to sense or track an incoming UAS, or is unable to transmit that information to other CUAS subsystems. This is a valid assumption based on commercially available CUAS currently on the market.
The method presented in this paper is intended to serve as an adaptable and customizable method for specific facility situations and for specific potential CUAS technologies.The case study assumes using fixed CUAS assets.However, the method could be modified to also include ground mobile, floating mobile, and air mobile assets that are both crewed and uncrewed.For instance, there are some proposals and preliminary demonstrations of CUAS subsystems that operate from UAS platforms and from crewed ground vehicles.A practitioner can tailor the method presented in this article to their specific facility and the specific CUAS technologies available at the time.
The recommended expansion of this work is to include methods for generating consistent parameters, through the design of live tests that support evaluation via validation, and to conduct research on new UAS that can travel both in the air and underwater. This expansion would support the analysis by providing realistic results and by including analysis of the new emerging technology.
Conclusions
In conclusion, the results of the case study demonstrate that adopting an iterative learning and data-sharing approach, such as the methodology proposed above, can consistently ensure that the CUAS is reviewed in a timely manner. The battle against the fast-emerging capabilities of UAS that threaten the relevance of deployed CUAS can be won by enabling modular upgrades that strengthen the CUAS and by testing system performance even at low TRL, which reduces the possibility of UAS exploiting inherent weaknesses in facility security.
Figure 1 .
Figure 1. Overview of the proposed methodology. The methodology is intended to be used by facility commanders to analyze existing Counter-Unmanned Aerial System (CUAS) effectiveness, identify CUAS capability gaps, produce CUAS upgrade recommendations, and provide CUAS system design reviews.
Figure 3 .
Figure 3. Simulation of Counter-Unmanned Aerial System (CUAS) Function for the Case Study.
Table 2 .
Types of UAS Payload-Enabled Capabilities.
Table 5 .
Soft and Hard Kill Interceptors Typically Used in Counter-Unmanned Aerial Systems (CUASs).
Table 6 .
Performance Metrics and Definitions of Detection for Counter-Unmanned Aerial Systems (CUASs).
Table 7 .
Performance Metrics and Definitions of Interception for Counter-Unmanned Aerial Systems (CUASs).
Table 8 .
Effects of the Unmanned Aerial System (UAS) and Counter-Unmanned Aerial System (CUAS) Capabilities Interactions.
Table 9 .
Case Study Stakeholders List.
Table 10 .
List of Daily Activities at the Facility. Adapted from [63].
Table 11 .
Risk Type, Mitigation Action, and Simulation Model Approach for the Case Study.
Table 12 .
List of Case Study Detection Subsystem Strengths and Weaknesses.
Table 13 .
Pugh Matrix for the Case Study Candidate Design Concept Decision-Making Process. Please note that two of the configurations were eliminated as inferior to the candidates and DATUM configuration shown in the table. The − symbol indicates that a candidate performs worse than the baseline on a specific criterion, the + symbol indicates that it performs better, and an S indicates that it performs the same as the baseline.
Table 14 .
Risk Associated with Severity and Likelihood of Occurrences for the Case Study.
Table 15 .
Case Study Mitigation Recommendations and Updated Likelihood of Occurrences.
Table 16 .
Expected Probability Improvement of the Proposed Candidate Design in the Case Study.
OpenHiCAMM: High-Content Screening Software for Complex Microscope Imaging Workflows
Summary High-content image acquisition is generally limited to cells grown in culture, requiring complex hardware and preset imaging modalities. Here we report an open source software package, OpenHiCAMM (Open Hi Content Acquisition for μManager), that provides a flexible framework for integration of generic microscope-associated robotics and image processing with sequential workflows. As an example, we imaged Drosophila embryos, detecting the embryos at low resolution, followed by re-imaging the detected embryos at high resolution, suitable for computational analysis and screening. The OpenHiCAMM package is easy to use and adapt for automating complex microscope image tasks. It expands our abilities for high-throughput image-based screens to a new range of biological samples, such as organoids, and will provide a foundation for bioimaging systems biology.
INTRODUCTION
Large-scale genome sequencing has rapidly facilitated the investigation and analysis of gene and protein functions and interactions. Efforts to interpret sequence data and to understand how they are used to control cellular, tissue, or organ system development have quickly revealed the limitations in our molecular understanding of multicellular organisms. Spatial gene expression is important for understanding the events that are necessary for the development of metazoans, and large-scale studies are underway for a number of species (e.g., Hammonds et al., 2013; Lein et al., 2007; Pollet et al., 2005; Tabara et al., 1996). Indeed, even the existence of uniform cells or tissues has been questioned, and their variance has been masked by single measurements (Levsky and Singer, 2003).
High-content screening (HCS) is routinely used in cell culture imaging, pharmaceutical drug discovery, genome-wide RNA interference (RNAi), and CRISPR screens (Boutros et al., 2015). Commercial equipment is readily available (Zanella et al., 2010) and able to scan multi-well plates and slides. Commercial automated microscopes are integrated packages of robotics, microscope, and software, limiting possible customizations, especially of otherwise well-suited existing microscope systems. Imaging is done in a single pass, requiring a compromise between resolution and field depth. For single-cell samples, this means a trade-off between resolution and cell density; transient transfections may not yield sufficient transfected-cell density for high-resolution imaging. For larger samples, such as histological sections, organoids, or whole-mount samples of model organisms or tissues, specimens need to be tiled at low resolution or placed at a predefined position.
We describe new software for high-throughput imaging, specifically designed to automate image acquisition that requires multi-step workflows contingent on image analysis and multiple imaging modalities. The software, OpenHiCAMM (Open Hi Content Acquisition for µManager), controls optical microscopes and interfaces with an automated slide loader to perform fully automated HCS.
The core of OpenHiCAMM is a sophisticated workflow manager that executes modules operating the robotic hardware, performing the imaging, and processing the data (Table S1, Figure S3, and Tables S2-S4). In the spirit of Fiji and µManager, we designed a high-quality, open, and extensible Java core. The workflow manager executes microscope-dependent modules sequentially and computational processing modules in parallel. OpenHiCAMM uses an integrated SQL database engine for persistent storage of workflow setups, for module configurations, and for recording completed tasks, thus allowing recovery from operational hardware problems and stopping or resuming interrupted workflows. Both the database files and the image files, in standard µManager file and directory format, are stored in a user-selectable local or remote disk destination (Figure 1).
To automate slide handling for HCS, we developed a slide management module, SlideLoader, which tracks the slides and either interfaces with a hardware slide loader or, for semi-automatic imaging without the slide loader, prompts the user to place the slide on the stage. SlideLoader can be used multiple times in the same workflow, allowing for loading and imaging slides repeatedly. This design allows for changing image modalities and manual adjustments of the microscope between each imaging pass. Loading slides multiple times may result in possible offsets to the slide position on the stage. To correct for these offsets, we developed modules for calibrating the stage position using the location-invariant edge between the frosted end and the adjacent transparent glass in commercially available slides ( Figures S4 and S5).
Imaging of the slide is performed by the SlideImager module, which is a wrapper for µManager's Multi-D acquisition and thus able to use all capabilities of µManager, including selectable light filters and stacks along the z axis. If an area exceeds the size of a single camera snapshot, multiple tiled images are acquired and post-processed with Fiji image stitching to assemble a composite image (Preibisch et al., 2009) (Figure 1D). To increase the acquisition rate, we developed a fast custom autofocus plugin (Figure S6).
In our initial implementation, we developed a workflow and modules optimized for imaging slides containing whole-mount Drosophila embryos with gene expression patterns detected by in situ mRNA hybridization (Weiszmann et al., 2009). We assembled an HCS microscope platform with a Prior PL-200 slide loader, a Zeiss Axioplan 2 motorized microscope, a Nikon DSLR D5100 camera, and a Prior motorized stage (Figure 1B). We selected differential interference contrast (DIC) microscopy for capturing both probe staining and sample morphology in a single image, making it particularly well suited to high-throughput analysis of tissues or organisms and to studying ontogeny, such as the developing Drosophila embryo (Hartenstein and Campos-Ortega, 1997). We designed the workflow to image Drosophila embryos with two imaging modalities (Figure S9). For the first pass (Phase 1), we calibrated the microscope for a 5× objective. We ran the workflow to first image manually selected areas of the slide occupied by the coverslip as tiled images, followed by an image analysis module to capture the coordinates of the detected embryos (Figures S7 and S8). After completion of the first pass, we manually adjusted the microscope, moving to a 20× objective, and resumed the workflow at the second entry point (Phase 2). The workflow re-loads the slides and re-images the detected embryos at higher magnification.
To assess performance, we measured the precision and speed of the imaging process. We manually created imaging objects on slides with a permanent marker, performed the imaging workflow, and manually superimposed the objects. The average displacement was 0.059 mm, about 1.5% of the image at 20× magnification (Figure S5). On average, tiling at 5× took 2.3 s/image, and the detection of twenty-five 20× images, including a focus step, took 12 s/image. At 5× magnification, the slides can be covered with 200-300 tiles, resulting in a rate of about 12 min/slide. The second-phase high-resolution imaging at 20× magnification takes about 20 min for each 100 detected objects. Imaging the entire slide at high resolution at 20× would require over 3,000-6,500 images. Thus, with approximately 300 embryos per slide, our imaging strategy achieved a more than 10-fold speedup.
To demonstrate OpenHiCAMM's ability to autonomously complete an HCS experiment, we used the workflow to image 95 slides made from a 96-well plate experiment (Figure S1). For the low-resolution pass, we selected a slide area with 180 tiles. Low-resolution imaging was completed in about 12 hr, a rate of 8 min/slide, and yielded 26-751 objects (continuous areas containing one or multiple embryos) per slide. In the second pass, we obtained high-resolution images of the embryos, with imaging times ranging from 39 min (61 objects with 119 images) to 113 min (334 objects with 573 images) for the 90% of slides excluding those at the tails of the distribution (too few or too many embryos per slide).
For cases that rely only on high-resolution imaging, we developed an additional module, SlideSurveyor, which takes advantage of the camera video feed to rapidly image the slide from live view. We detect objects and re-image them with SlideImager. All steps use the same imaging modality, thus limiting alignment problems and the user intervention required by repeated slide loading. Using SlideSurveyor for Phase 1 at 20× magnification resulted in 20 min/slide, while avoiding slide reloading and objective changes.
We imaged six additional slides to compare the expression of the Drosophila embryonic wild-type gene mirror (mirr) (McNeill et al., 1997) with two intragenic and three intergenic cis-regulatory module (CRM) reporter constructs (Pfeiffer et al., 2008) (Figures 2 and S2). For high-quality slides, our workflow acquired between 75% and 85% of the objects on the slide. One slide (GMR33E04) exhibited age-related degradation (low contrast), and only 55% of the total objects on that slide were detected. For each slide, we obtained high-quality images representing six stage ranges and two standard orientations, as previously described for manual imaging (Hammonds et al., 2013). The images obtained from slides containing embryos with CRM constructs were compared with the images obtained from slides containing wild-type embryos. The collected images were of sufficiently high resolution to allow identification of distinct elements of the wild-type pattern driven by different CRMs (Figures 2B and S2). These results were comparable with those from similar experiments performed using manual imaging (Pfeiffer et al., 2008).
DISCUSSION
Imaging multicellular samples has become increasingly essential for understanding the complexity of medically relevant biology. To our knowledge, no existing tool is capable of multiple-slide autonomous imaging or of adapting studies focused on multicellular samples to HCS. OpenHiCAMM adds the abilities to (1) accommodate hardware beyond the microscope itself, as demonstrated with the slide loader, (2) handle hardware and software workflows, and (3) track slide contents beyond a single plate/slide setup. To accomplish these goals, we developed a sophisticated task algorithm, capable of resolving hardware and software task dependencies and providing persistence across workflows or interrupted experiments with an SQL database back end. These advances provide the flexibility to adapt workflows currently used for HCS, including to material science samples, as well as to integrate experimental devices, such as flow cells activated by changes computationally recognized in an acquired image.
Imaging cells in 96-well plates is well supported, both commercially and by open source products (Singh et al., 2014). OpenHiCAMM expands HCS to sample types for which there is no current solution, while providing a robust, self-contained package and a transparent user interface, similar to the Fiji bioimaging package and to commercial imaging and microscopy software (e.g., Opera/Operetta by Perkin Elmer). Previously existing tools are implemented as macro packages with only a subset of our software's capabilities (Edelstein et al., 2010), or depend on external software such as LabView (Conrad et al., 2011). As an important advance over a previous workflow design (Eberle et al., 2017) implemented in KNIME (Berthold et al., 2009), our software enables high-content screening experiments on multiple slides or multi-well plates. The commercial Metafer Slide Scanning platform is closest in capabilities but is tied to a single hardware setup and is aimed solely at histology content at lower resolutions. Magellan utilizes µManager and serves as a complementary utility for exploration of single-slide 3D samples, or could conceivably be adapted as a powerful plugin to our software. Object recognition has been the primary focus of previous works (Conrad et al., 2011; Eberle et al., 2017; Tischer et al., 2014) but is only a relatively small part of our software. We tried to avoid the domain specificity of machine learning by creating an intentionally simple but more generalizable object recognition algorithm, while allowing users to add their own recognition pipeline developed in ImageJ/Fiji and pasted as a macro. More sophisticated machine learning algorithms are easily added with a simple programming interface. In fact, the well-supported KNIME software (Berthold et al., 2009) would make a highly complementary plugin to our software for complex image processing workflows to detect specific features in biological samples.
Our workflow framework is robust, flexible, easily adapted, and extendable to other high-throughput robotics tasks that are increasingly common in modern biology, making the OpenHiCAMM package uniquely positioned to provide a foundation for sophisticated high-throughput screens.
METHODS
All methods can be found in the accompanying Transparent Methods supplemental file.

Figure 2. Expression of the mirr gene in embryonic stages 4-6 (blastoderm), 9-10 (gastrulation), and 13-16 (terminal differentiation), visualized by whole-mount in situ hybridization with a probe to mirr mRNA, shown adjacent to the expression produced by the fragments GMR34C02, GMR34C05, GMR33E04, GMR34C02, and GMR34C05. Transgene expression is visualized by whole-mount in situ hybridization with a probe to GAL4 mRNA. Lateral views are shown, anterior to the left. Expression in the foregut anlage in statu nascendi (AISN) is indicated by arrowheads, and expression in the proventriculus is indicated by arrows. Segmental expression apparent at stages 9-10 in the wild-type mirr embryos is detectable at stages 4-6 in GMR33C10. Eight images (marked with an asterisk) are composite stitched images from overlapping tiled ROIs. Scale bar, 50 µm. See also Figure S2.
ACKNOWLEDGMENTS
We are indebted to the members of BDGP for their support and advice. We thank Richard Weiszmann for conducting the whole-mount in situ hybridization experiments. We thank Stephan Preibisch for providing the implementation for the autofocus function and Abdelrahman Elbashandy for prototype testing. We thank Korneel Hens and Bart Deplancke at the Laboratory of Systems Biology and Genetics (LSBG) of École Polytechnique Fédérale de Lausanne for the full-length cDNA clone of mirror. The work was supported by NIH grants R01 GM076655 to S.E.C. and R01 GM097231 to E.F. Work at LBNL was conducted under Department of Energy contract DE-AC02-05CH11231.
DECLARATION OF INTERESTS
The authors declare no competing interests.
OpenHiCAMM workflow manager, data model and module management
OpenHiCAMM is written in the Java 1.8 programming language as a plugin for µManager 1.4.x. While compatible with the basic ImageJ and µManager distribution, for image processing we require the extended and standardized plugin selection provided by Fiji. At its core is a custom workflow manager providing the following functionality: 1) A common interface for modules implementing either hardware control or image processing tasks; 2) Configuration of a modularized workflow and module-specific parameters; 3) Resolving dependencies and ordered execution of the modules; 4) Metadata and image storage management; 5) Response to events from the hardware; and 6) A graphical user interface for designing workflows, configuring storage and modules and starting, stopping and resuming workflows.
OpenHiCAMM itself is platform agnostic, but µManager hardware support depends on the operating system. We developed and tested the C++ code for the PL-200 slide loader hardware adapter on Macintosh OS X 10.10 (or newer) and also tested it under Linux.
Data model
The OpenHiCAMM workflow engine manages two types of data, a set of execution tasks and the module and task configurations (Figure S3). Execution tasks: Each task represents a single unit of work. The purpose of a task is 1) to allow the system to schedule module executions and 2) to provide a record of successful or unsuccessful module completion. Tasks are connected to each other using a directed acyclic graph. A workflow task may be started once all of its parent tasks have completed successfully. Workflow modules may tag tasks as serial or parallel. Serial tasks are completed in sequential order and are suitable for hardware-dependent tasks such as slide loading and imaging. Parallel tasks can be run simultaneously and are suitable for image processing or analysis. Each task consists of a unique task identification number, its current status, and its associated module. Tasks are connected together in parent-child relationships. The task workflow is similar to the concept of a data dependency graph that is common in many workflow execution systems (Zaharia, Chowdhury et al. 2010). Workflow module and task configurations: Metadata for configurations are stored as key-value pairs. Workflow module configurations are parameters set by the user and can be accessed by all tasks in the module. Module configurations are used for configuring parameters that apply to all tasks within the module. Task configurations apply to each task individually. Task configurations are not manually entered by the user but are generated by the module. The task configuration is used to store information about which item of work each task is to perform. For example, the SlideLoader task configuration stores the identification and location of the slide to load, and the SlideImager task configuration stores the identification of the image to be acquired by that task.
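As a minimal sketch (Python pseudocode, not the actual Java implementation) of the dispatch rule just described, a task becomes runnable once all of its parent tasks have completed successfully; SERIAL tasks are suitable for hardware steps and PARALLEL tasks for analysis:

from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    module: str
    mode: str                      # "SERIAL" or "PARALLEL"
    parents: list = field(default_factory=list)
    status: str = "PENDING"        # PENDING -> SUCCESS/FAIL

def runnable(tasks):
    """Yield tasks whose parent tasks have all completed successfully."""
    for t in tasks:
        if t.status == "PENDING" and all(p.status == "SUCCESS" for p in t.parents):
            yield t

# Toy workflow: load slide -> image slide -> find ROIs.
load = Task(1, "SlideLoader", "SERIAL")
image = Task(2, "SlideImager", "SERIAL", parents=[load])
rois = Task(3, "ROIFinder", "PARALLEL", parents=[image])

tasks = [load, image, rois]
while any(t.status == "PENDING" for t in tasks):
    for t in runnable(tasks):
        t.status = "SUCCESS"       # stand-in for actually running the module
        print(f"ran task {t.task_id} ({t.module}, {t.mode})")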
Images are part of the task data and are not handled explicitly. OpenHiCAMM directs the µManager acquisition engine to store images in its native format to a workflow-dependent location and tracks imaging through the task sets.
System architecture
The OpenHiCAMM workflow engine is designed to store its data in a SQL database backend; the current version uses HSQLDB (http://hsqldb.org). We chose HSQLDB because it is implemented in pure Java and can be distributed as a cross-platform JAR file, making installation using the Fiji update manager much easier.
Each workflow module can create a configuration entry and one or more tasks. Upon invocation of a configured workflow, a new workflow instance with a task list is generated; tasks are linked as an acyclic graph, and task information is stored in the database. The workflow manager iterates through all pending tasks and executes every task that has not completed, has no unmet dependencies on preceding tasks, and is not competing for the microscope hardware.
Each workflow can have multiple distinct phases. Each phase is named by its initial module, the topmost module in the graph. The user selects which phase to execute in the workflow dialog. Providing multiple phases and allowing the user to select between them gives the user the opportunity to split a workflow into sections and perform any required manual adjustments between phases. For example, in our workflow designs, we expose two phases: the first performs a low-resolution scan and searches for regions of interest; the second performs high-resolution tiled imaging of each region of interest found in the first phase.
For tasks tagged as "Serial", child tasks will inherit task configuration from parent tasks. This allows a form of communication between parent and child tasks in the same workflow phase. For workflows split into separate phases, it may be desirable to pass information from the first phase tasks to the second phase tasks. This cannot be done using task configuration since the second phase tasks are not directly related to the first phase tasks. For these cases, the user can create custom database tables and add identifying information to each record so that the second phase modules can find the associated data. We use this approach in our workflow to pass the position list produced by the region of interest finder in phase 1 of our workflow to the phase 2 imaging module.
Modules have been optimized for robustness and rapid processing and acquisition. For modules interfacing with hardware, we added abstractions and modifications to the call sequence to catch aberrant hardware behavior and communication errors. The module will attempt to correct problems or skip the current step before stopping and reporting errors to the user. Our improvements vastly increased the robustness of the stock µManager software and resulted in requiring user interaction only for hardware problems.
Module API
OpenHiCAMM is designed to be easily extensible using modules. Custom modules need to implement OpenHiCAMM's module interface. Our Module interface allows user-created modules to customize their behavior at several key entry points (Table S2).
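The real interface is defined in Java; purely as an illustrative analog (rendered in Python for consistency with the other sketches in this document), the entry points listed in Table S2 can be outlined as an abstract class:

from abc import ABC, abstractmethod

class WorkflowModule(ABC):
    # Illustrative analog only; names mirror the entry points in Table S2.
    def initialize(self, runner):
        """Called once during workflow initialization, before configuration."""

    @abstractmethod
    def configure(self):
        """Return a Configuration object handling UI display and storage."""

    def create_task_records(self):
        """Create one database record per expected task, before the run."""
        return []

    def run_initialize(self):
        """Called before each workflow run (initialize runs once per module)."""

    @abstractmethod
    def run(self, task):
        """Execute one unit of work; called once per task."""

    def cleanup(self, task):
        """Called after run() completes, even if the task raised an exception."""

    def set_task_status_on_resume(self, task):
        """Customize the task status when the workflow resumes."""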
Each module can provide a custom configuration user interface (UI). For the configuration UI, a module developer needs to implement a configure method and return a Configuration object (Table S3). If the module is part of the workflow, the UI will be automatically displayed in the tabbed configuration dialog (Figure S10). The configuration object will be managed and stored in the database by the workflow manager.
To add a new module, a module designer needs to place a Java JAR file in the openhicamm_modules/ directory. On startup, OpenHiCAMM will automatically detect and load all JAR files in the openhicamm_modules/ directory, and the module will be available in the Workflow Design interface. Module designers are free to design their own database tables or use the provided configuration object for storing configuration metadata. A thin, easy-to-use wrapper around the popular ORMlite SQL object relational mapping library (http://ormlite.com) is provided in the Connection and Dao classes.
Reporting interface
OpenHiCAMM includes a reporting interface, which the module designer can extend to provide custom workflow reports (Table S4). We have used this reporting interface to successfully detect and correct implementation bugs and hardware issues in our workflow designs. By clicking the "View Reports" button in the Workflow Dialog, the Report Dialog is displayed with a drop-down list of the reports available for viewing. Module designers can build custom reports by implementing the Report interface and including the report in their plugin JAR file. The report will automatically be detected and added to the list of available reports.
Reports are created by interfacing with the OpenHiCAMM workflow manager to query the state of the workflow and producing HTML output for display. To produce HTML with a convenient Java-based interface, we included an HTML templating library named "Tag". Our Report UI uses the JavaFX WebView component to display the HTML document. JavaScript code can be added to the HTML document; we used JavaScript to interface the report with the microscope hardware, allowing a slide to be loaded and the stage positioned to a previously imaged object. Once the document is created, it is stored in the workflow directory and can be re-generated at any time by the user.
Example workflow module implementation for detecting objects of interest
We implemented an ROIFinder module that executes an image processing pipeline to return an ImageJ ROI list with objects of interest (see section "ROIFinder and Drosophila embryo detection"). We used our workflow, µManager and Fiji data structures to integrate the output of the ROIFinder in a second phase workflow: the ROIFinder module creates a position list of ROIs and stores it in a custom SlidePosList database table. In the first phase, the SlideImager module processes a user-defined SlidePosList to determine which areas of the slide will be imaged. In the subsequent phases, SlideImager processes the most recently added SlidePosList records. SlideImager computes whether a region of interest is larger than the size of a single image and creates a set of partially overlapping tile positions.
The position list JSON schema defined by µManager allows custom properties to be added to each position in the position list. The ROIFinder module adds the "stitchGroup" custom property to each position in the ROI tileset to group the individual tiles together. The SlideImager module looks for custom properties in the position list and converts them into task configuration records. Because task configuration is inherited from parent to child for serial tasks, the downstream ImageStitcher module will use the "stitchGroup" task configuration to determine which images need to be stitched together.
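A hypothetical position-list entry illustrating the idea; apart from the "stitchGroup" property described above, the field names here are placeholders, not the exact µManager JSON schema:

import json

position = {
    "label": "ROI_12_tile_0",        # illustrative name
    "x": 10234.5, "y": 4321.0,       # illustrative stage coordinates
    "stitchGroup": "ROI_12",         # custom property used by ImageStitcher
}
print(json.dumps(position, indent=2))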
Core modules
SlideLoader module

SlideLoader is responsible for initializing the slide loader hardware, keeping track of which slides are loaded, and loading and unloading slides to and from the stage. SlideLoader defines a high-level programming interface with hardware-dependent libraries, similar to the current µManager model. We developed a sample hardware library for a Prior PL-200 slide loader, which supports most microscopes and is able to hold and handle up to 200 slides.

SlideImager module

SlideImager is a wrapper for the µManager Multi-D dialog and acquisition engine, primarily providing configuration storage in the SQL database and invoking the acquisition engine with the previously configured parameters. The position list can be either manually predefined or the result of a previous workflow module, such as ROIFinder. To manually create a position list, we use the µManager position list user interface to define a region on the slide. SlideImager will pass configured parameters from the Multi-D dialog and the position list to the Multi-D acquisition engine. All functions provided by the Multi-D dialog are available, making all abilities of µManager usable, including selectable light filters and stacks along the Z axis. We added a custom hook to the Multi-D acquisition engine, linking back to SlideImager, to track the acquisition of each image and schedule post-acquisition image processing tasks.
Calibration of the slide position on the stage
We developed the PosCalibrator module to process images from a dedicated instance of SlideImager (Figure S4). Both modules can optionally be inserted into the workflow after the slide is loaded onto the stage. SlideImager records images at an invariant position, and PosCalibrator matches the slide position to a previous reference image and adjusts the position list for the detected shift.
After a slide is loaded the first time, using the standard SlideImager module (called refImager in the example workflow in Figure S9), the user creates a position list with one entry containing the approximate position of the area where the slide frosting and the slide corner intersect. For each slide, when the workflow reaches SlideImager, the user-defined area is imaged and stored as a reference (Figure S4A). In workflows that load the slide again, optionally at a different magnification, the user can create an instance of a modified SlideImager module with additional functionality for re-imaging the area in case of an interrupted workflow (called compareImager in Figure S9), followed by PosCalibrator. SlideImager images the same area again, and the images are matched to the previously acquired images in the PosCalibrator module (Figure S4B).
While sophisticated scale- and rotation-invariant methods have been described, they are generally slow or hard to implement. For PosCalibrator, we created a fast and robust pipeline based on Generalized Hough Transform (GHT) template matching. GHT matches the edge contours between a template and a target image by counting matching pixels in an accumulator (Ballard 1981). For each boundary point of the template, we pre-computed an R-table with the coordinate differences x_c - x_ij and y_c - y_ij between a fixed reference point (x_c, y_c) in the middle and each boundary point (x_ij, y_ij). To account for rotation and scaling, we also pre-computed scaled and rotated x/y boundary pairs and stored them in the R-table. For each entry in the R-table, we calculated the gradient angle φ. For each pixel in the search image, we matched the pixel gradient to φ and increased the corresponding entry in an accumulator array. The accumulator entry with the maximum value corresponds to the position of the template in the search image.
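A minimal sketch of this voting scheme (translation only; the production pipeline also pre-computes rotated and scaled R-table entries, and the edge/gradient computation is done in ImageJ rather than with numpy gradients):

import numpy as np

def gradient_angles(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def build_rtable(template, n_bins=32, edge_thresh=0.5):
    """R-table: per quantized gradient angle, offsets from boundary
    points to a fixed reference at the template centre."""
    mag, ang = gradient_angles(template)
    yc, xc = np.array(template.shape) / 2.0
    rtable = [[] for _ in range(n_bins)]
    for y, x in zip(*np.nonzero(mag > edge_thresh * mag.max())):
        b = int((ang[y, x] + np.pi) / (2 * np.pi) * n_bins) % n_bins
        rtable[b].append((yc - y, xc - x))
    return rtable

def ght_vote(search, rtable, n_bins=32, edge_thresh=0.5):
    """Accumulate votes; the accumulator maximum is the matched position."""
    mag, ang = gradient_angles(search)
    acc = np.zeros(search.shape)
    for y, x in zip(*np.nonzero(mag > edge_thresh * mag.max())):
        b = int((ang[y, x] + np.pi) / (2 * np.pi) * n_bins) % n_bins
        for dy, dx in rtable[b]:
            yy, xx = int(y + dy), int(x + dx)
            if 0 <= yy < acc.shape[0] and 0 <= xx < acc.shape[1]:
                acc[yy, xx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)

# Toy example: find a bright square shifted inside a larger image.
template = np.zeros((20, 20)); template[5:15, 5:15] = 1.0
search = np.zeros((60, 60)); search[30:40, 22:32] = 1.0
print(ght_vote(search, build_rtable(template)))  # approximately the square's centre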
We customized an existing ImageJ plugin, GHT (http://rsb.info.nih.gov/ij/plugins/ght/index.html), and used the code in a new plugin that encapsulates the pipeline to process and return matched image coordinates as an ImageJ ROI. The pipeline converts images to grayscale, resizes them and uses the ImageJ IsoData algorithm to find two thresholding values for the reference and all combined calibration images. The threshold values are used to generate binary images, and the images undergo morphological processing to eliminate holes and errant pixels. From the set of calibration images, the pipeline selects an image containing both the slide frosting and the slide border at roughly equal proportions. The reference image and the selected calibration image are converted to edge contours (Figure S4C) for matching with the GHT algorithm (Figure S4D). The workflow module PosCalibrator calls the plugin and transforms the matched image coordinates to stage coordinates. GHT is scale and rotation invariant, but to further reduce running time, we adjust the image sizes to the same dimensions with user-configurable parameters (e.g., a factor of 4 for matching 5x and 20x magnifications). Image-pixel translations to stage movements were measured with the µManager Pixel Calibration plugin, and the resulting values were entered as configuration parameters and used to adjust the offset. We tested the modules with both DIC and fluorescent illumination. If no matches are found, a warning message is added to the log and imaging continues without calibration. We tested our system against manual markings and found negligible displacements (Figure S5). Our slide calibration works with both light and fluorescent microscope systems.
Slide Surveyor
The SlideSurveyor module is similar to SlideImager in that it takes a position list as input and acquires images, but it does not use µManager's Multi-D acquisition module. Instead, SlideSurveyor sets the camera to "Live Mode", moves to each position in the list, and acquires a live-mode image. The image is then copied into an image buffer representing the entire slide. "Live Mode" captures video, which allows quickly acquiring low-resolution images without triggering the camera's shutter. Running the acquisition in "Live Mode" allows SlideSurveyor to quickly map out an entire slide, even at 20x magnification. SlideSurveyor can image the contents of an entire slide in 20 minutes, as opposed to several hours with the Multi-D module and individual images. The resulting images are low quality but sufficient for ROI detection. The same "Live View" mode is also used to accelerate our custom autofocus engine. The ability to survey an entire slide using the 20x lens enabled us to avoid a two-phase workflow, helping the workflow to complete more quickly and providing more accurate positioning and centering of the ROIs when performing the second phase.
Autofocus
We created a plugin, FastFFT (Figure S6), using the µManager autofocus API. In FastFFT, we perform the Fast Fourier Transform (FFT) using the Mines Java Toolkit (http://inside.mines.edu/~dhale/jtk/) and apply a bandpass filter to the power spectrum. The logarithmic values between the two percentage radii are summed, and the process is repeated for all images in the same stack. The image with the highest value is selected. Empirically, we determined that a bandpass filter between 15 and 80 percent best places the focal point in the middle of the embryo (Figure S6C). The bandpass filter values can be adjusted in the configuration dialog. To accelerate the image acquisition process, our autofocus module switches the camera to live imaging with a reduced image size.
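A sketch of this focus score in Python with numpy; the exact normalization and radius definition inside FastFFT may differ, so treat the details as assumptions:

import numpy as np

def focus_score(img, r_lo=0.15, r_hi=0.80):
    """Sum of log power between two fractional radii of the 2D spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # 0..~1
    band = (r >= r_lo) & (r <= r_hi)
    return np.log(power[band] + 1e-12).sum()

def best_focus(stack, z_positions):
    """Return the z position whose image scores highest."""
    scores = [focus_score(img) for img in stack]
    return z_positions[int(np.argmax(scores))]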
We compared FastFFT to two µManager autofocus implementations, J&M and Oughtafocus, by measuring the plugin-reported level of detail at multiple stage positions for 10 embryos. The FastFFT module performed as designed and returned the highest-scoring focal planes at the middle of the embryos (Figures S6A and S6C). The µManager functions returned optimal focal planes with more variability, depending on the stage of the embryo (Figure S6A). For early embryos, the focal planes were visually indistinguishable and within 6 µm of each other, less than the thickness of the cell layer surrounding the blastoderm. At later stages, the µManager functions tended to focus on the surface of the embryo (Figure S6C). FastFFT performed an order of magnitude faster than the µManager functions (Figure S6B), averaging 40 seconds for scanning 100 positions, compared to 206 seconds (Oughtafocus) and 246 seconds (J&M). Moreover, FastFFT was more consistent in its speed.
To further speed up processing and cover a wide range of focal planes, we first perform coarse focusing with large interval steps over a broad range, and then finer-granularity focusing with small interval steps around the previous best focal plane. Both intervals are user-configurable in the graphical configuration dialog. We configured it to take 41 images at 10 µm intervals and 7 images at 3.3 µm intervals. To prevent hardware damage caused by a potential slow drift in measurements, we also added a configuration for an absolute maximum/minimum Z position that cannot be exceeded. The plugin has a configurable counter to skip focusing on a selected number of images. The counter is reset whenever two consecutive images match the same Z position after completing the autofocus function. This prevents erroneous focusing on occasionally mis-detected objects such as air bubbles.
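A compact sketch of the two-pass search with the configuration quoted above (41 coarse steps at 10 µm, then 7 fine steps at 3.3 µm); acquire(z) is a hypothetical callback returning an image at stage height z, and focus_score() is the function sketched earlier:

import numpy as np

def autofocus(acquire, z_center, z_min, z_max):
    coarse = z_center + 10.0 * (np.arange(41) - 20)         # 41 steps x 10 µm
    coarse = coarse[(coarse >= z_min) & (coarse <= z_max)]  # hard Z limits
    z_best = max(coarse, key=lambda z: focus_score(acquire(z)))
    fine = z_best + 3.3 * (np.arange(7) - 3)                # 7 steps x 3.3 µm
    fine = fine[(fine >= z_min) & (fine <= z_max)]
    return max(fine, key=lambda z: focus_score(acquire(z)))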
ROIFinder and Drosophila embryo detection
This module processes single images and returns regions of interest (ROIs). It provides a high-level interface that returns an object's bounding box and can be easily adjusted to custom image processing pipelines. We developed two computationally efficient custom ROIFinder implementations and a macro-based implementation, CustomMacroROIFinder, which allows the user to simply paste a previously developed ImageJ macro into a dialog and execute it as part of the workflow. CustomMacroROIFinder is extremely flexible and, along with the powerful Fiji UI and macro recording abilities, allows the development of complex segmentation workflows by users less experienced with image processing.
The custom implementations segment the image, use Fiji's "Analyze Particles" function to detect and measure segmented areas and store the bounding boxes for areas exceeding a selectable minimum size. We set the Analyze Particles function to discard objects at the image boundaries. Virtually all objects missed in our Drosophila embryo imaging experiment were discarded for crossing image boundaries rather than due to a failure to detect the object.
The simpler ROIFinder variant segments the object by automatic thresholding with IsoData and is suitable for uniform samples distinct from background such as fluorescent labeled samples.
For our work with Drosophila embryos, we developed a variant that uses texture, which is frequently inherent in biological samples (Figure S7). Our pipeline calls Fiji edge detection, applies a threshold to the edge image with a fixed, empirically determined value (13, out of 255 gray values) and performs morphological image operations (close, fill holes) to smooth the areas. This pipeline should be suitable for histological samples and other whole-mount specimens.
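A rough scikit-image analog of this texture-based pipeline (not the Fiji implementation itself; the mapping of the empirical threshold of 13/255 onto a normalized edge image is an assumption):

import numpy as np
from scipy import ndimage
from skimage import filters, measure, morphology

def find_rois(img, edge_thresh=13 / 255, min_area=500):
    # Edge detection, fixed threshold, then closing and hole filling.
    edges = filters.sobel(img.astype(float) / img.max())
    mask = morphology.binary_closing(edges > edge_thresh, morphology.disk(3))
    mask = ndimage.binary_fill_holes(mask)
    labels = measure.label(mask)
    h, w = img.shape
    rois = []
    for region in measure.regionprops(labels):
        minr, minc, maxr, maxc = region.bbox
        touches = minr == 0 or minc == 0 or maxr == h or maxc == w
        if region.area >= min_area and not touches:   # mimic "exclude"
            rois.append(region.bbox)
    return rois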
For the SlideSurveyor workflow, we used the CustomMacroROIFinder (Figure S8). We developed a macro that 1) removes the horizontal and vertical lines resulting from SlideSurveyor's image stitching using Fiji's FFT Bandpass filter plugin, 2) resizes the image, 3) converts it to a mask, 4) thresholds the image, 5) performs morphological operations, and 6) applies the "Analyze Particles" function as described above.
ROIFinder provides a standard API for an image processing pipeline as well as the ability to execute a macro; using Fiji's image processing plugins, programming and macro recording abilities, a customized pipeline can be easily implemented. More complex samples, such as mutant embryos with altered developmental phenotypes, can be detected and rapidly imaged at high resolution for computational analysis. We envision users building sample-specific object detection pipelines, which can significantly increase throughput if uninteresting objects can be discarded at low resolution.
Drosophila embryo imaging workflow
Our two-phase workflow is shown in Figure S9. After loading the slides, our SlideImager module captures images from a given list of positions on the slide. The initial position list is configured by the user as the maximum area of interest, and slides are imaged in overlapping tiles at 5x magnification. Captured images are processed with the ROIFinder module to detect objects, and the object coordinates are stored in the database. Detected objects are stored as bounding boxes, transformed to the imaging area, and the content of the bounding boxes is imaged again at higher resolution (Figures S1 and S2). If a bounding box exceeds the size of a single image at 20x, multiple tiled images are acquired and post-processed with the ImageStitcher module, using a Fiji stitching algorithm (Preibisch, Saalfeld et al. 2009) to assemble a composite image (Figure 1D, Figure S2).
The Fiji-based stitching worked reliably for composites of 2-3 images but failed frequently for larger composites. We found that these large composites contained dense clusters of overlapping embryos and discarded the results.
User interface
The main window is available from the µManager plugin menu (Figure 1). The user can select a local or remotely mounted (NFS or CIFS) directory as the location for the workflow database, settings and images.
Using the workflow editor, the user can add and arrange modules from an existing set to create a workflow (Figure S9). For our low/high-magnification implementation, we created a combined workflow with two imaging phases. The first imaging phase arranges the SlideLoader, refImager, SlideImager and ROIFinder modules; the second imaging phase arranges compareImager, PosCalibrator, another instance of SlideImager, and ImageStitcher. Modules in the second imaging phase automatically detect and use results from the first imaging phase. When the workflow reaches a point with no child tasks, it terminates.
The "Start Task" setting in the main window defines the entry point: in our implementation, either the first or the second imaging phase.
Each workflow module can be configured with persistent settings in a configuration dialog called from the main window (Figures S10-S12). The SlideLoader module creates a pool of slides from the current slides in the robotic slide loader and allows scanning the slides and adding metadata information (Figure S10). The SlideImager module calls the µManager Multi-D dialog for configuration of the slide imaging task and saves the settings in the workflow database (Figure S13). SlideImager's derived modules, refImager and compareImager, are configured similarly. refImager takes a reference image for calibration at a user-defined position. compareImager captures an overlapping set of images centered around the user-defined position of refImager. Upon successful completion, both the refImager image and the set of compareImager images are used by PosCalibrator to identify the matching position (see "Calibration of the slide position on the stage") and adjust the position coordinates. The ROIFinder module can be configured to record objects of a user-defined minimum size and tested on images in the configuration dialog (Figure S12). Testing the ROI finder opens a window with images for each intermediate processing step and the resulting bounding boxes, as illustrated in Figures S7 and S8.
For each defined workflow, new instances are used to process a pool of slides. Thus, the same basic configuration can be used for multiple slide pools. Image files are stored in µManager format in a folder set by the initial workflow configuration, with sub-folders matching the workflow instance and workflow tasks, respectively.
The workflow progress is stored in a SQL database. For debugging or correcting unexpected problems, the Workflow Dialog includes a "Show Database Manager" button that opens a graphical database querying interface with the workflow database loaded. The OpenHiCAMM source also includes a "sqltool" command-line script which, when run in the workflow directory, opens a command-line SQL query tool useful for debugging.
Reviewing the results
During workflow processing, progress can be monitored with a live imaging view and a log window (Figure S13). Workflows can be stopped and resumed at defined checkpoints. Upon completion, results can be visualized in an HTML report.
At the conclusion of the pipeline, OpenHiCAMM generates a summary report displaying the calibration images and the results for all imaging passes. The report is generated as an HTML file (Figure S14) and rendered in a report window in OpenHiCAMM. The report viewer allows loading slides and positioning the stage to stored positions. The report file can also be opened and viewed independently in a standard web browser.
In situ hybridization experiments
We performed in situ hybridizations as previously described (Weiszmann, Hammonds et al. 2009). We collected and hybridized wild-type embryos with an RNA probe made from a cDNA clone of mirror and used a probe that detects Gal4 reporter RNA to hybridize embryos of five Drosophila strains, GMR33C02, GMR33C05, GMR33E04, GMR33C10 and GMR33B03, from the Janelia Farm Research Campus (JFRC) FlyLight Gal4 collection (Pfeiffer, Jenett et al. 2008), each carrying a putative mirror enhancer with a Gal4 reporter.
Embryos were mounted on slides and imaged with OpenHiCAMM. The resulting images were cut out and rotated with Adobe Photoshop for publication.

Supplemental figure legend (SlideSurveyor post-processing and custom ROI-finding macro):

// SlideSurveyor PostProcessing
run("Bandpass Filter...", "filter_large=25 filter_small=15 suppress=Vertical tolerance=5 autoscale saturate");
run("Bandpass Filter...", "filter_large=25 filter_small=15 suppress=Horizontal tolerance=5 autoscale saturate");
// resize 0.5
run("Scale...", "x=0.5 y=0.5 width=4329 height=3629 interpolation=Bilinear average create");

// CustomMacroROIFinder Custom ROI Finding Macro
run("8-bit");
setAutoThreshold("IsoData");
setOption("BlackBackground", false);
run("Convert to Mask");
run("Gray Morphology", "radius=7 type=circle operator=dilate");
run("Analyze Particles...", "size=500-5000 exclude clear add in_situ");

Settings can be configured and saved with the µManager Multi-D dialog (button "Show acquisition dialog"), settings from the Multi-D dialog can be assigned, and a user-defined position list or a previously computed position list can be selected. We found that some cameras needed time to adjust to the light; we therefore added a configuration option to take a selectable number of "dummy" images before starting the imaging. The pixel size needs to be calibrated with µManager and entered as a configuration value to map image pixels to stage positions. We also added options to invert the stage coordinates relative to the captured image, if necessary.

Initially, the workflow manager computes all tasks needed to complete the currently configured workflow and adds them to the persistent database. The workflow manager displays the currently configured workflow and the tasks that must be completed sequentially because of hardware dependencies ("SERIAL") or can be executed in parallel ("PARALLEL"), usually only those modules that depend on computations. Progress for each task is printed as log messages, and the overall progress and remaining time are shown in a progress bar (bottom).

Supplemental Tables

Table S2. Module API
API call Functionality
initialize Called during the workflow initialization phase, before module configuration. Allows the module to perform any required initialization.
configure Called when the user performs the module configuration step. This method returns a Configuration object which handles the configuration UI display, and serialization of configuration options to the database.
createTaskRecords Creates a database record for each task that is expected to be run during the workflow. Called before the start of the workflow run.
runInitialize A second initialization entry point. This method is called before each workflow run, whereas initialize is called only once per module.
run The actual workflow module execution code. Called once per task.
cleanup Task cleanup code. Called even if the task throws an exception. Called immediately after the run method completes.
setTaskStatusOnResume Allows customization of the task status when the workflow is run in "Resume" mode.
Table S3. Configuration API

API call Functionality
retrieve Returns the list of configuration key-value pairs retrieved from the configuration UI.
display When given a list of configuration key-value pairs, returns a Swing Component that contains the configuration UI, with the configuration applied.
validate Scans the configuration UI for any validation errors and returns a list of ValidationError objects, which is displayed to the user.
Table S4. Report API

API call Functionality
initialize Pass in the WorkflowRunner context object and perform any required initialization steps.
runReport Run the report and return the contents of the report as an HTML string.
pytreegrav: A fast Python gravity solver
Summary
Gravity is important in a wide variety of science problems. In particular, questions in astrophysics nearly all involve gravity, and can have large (≫ 10^4) numbers of gravitating masses, such as the stars in a cluster or galaxy, or the discrete fluid elements in a hydrodynamics simulation. Often the gravitational field of such a large number of masses can be too computationally expensive to compute by directly summing the contribution of every single element at every point of interest.
pytreegrav is a multi-method Python package for computing gravitational fields and potentials. It includes an exact direct-summation ("brute force") solver and a fast, approximate tree-based method that can be orders of magnitude faster than the naïve method. It can compute fields and potentials from arbitrary particle distributions at arbitrary points, with arbitrary softening/smoothing lengths, and is parallelized with OpenMP.
Statement of need
The problem addressed by pytreegrav is the following: given an arbitrary set of "source" masses m_i with 3D coordinates x_i, and optionally each having a finite spatial extent h_i (the softening radius), one would like to compute the gravitational potential Φ and/or the gravitational field g at an arbitrary set of "target" points in space y_i. A common application for this is N-body simulations (wherein y_i = x_i). It is also often useful for analyzing simulation results after the fact: Φ and g are sometimes not saved in simulation outputs, and even when they are, it is often useful to analyze the gravitational interactions between specific subsets of the mass elements in the simulation. Computing g is also important for generating equilibrium initial conditions for N-body simulations (Springel & White, 1999; Yurin & Springel, 2014), and for identifying interesting gravitationally-bound structures such as halos, star clusters, and giant molecular clouds (Behroozi et al., 2013; Grudić et al., 2018; Guszejnov et al., 2020).
Many gravity simulation codes (or multi-physics simulation codes including gravity) have been written that address the problem of gravity computation in a variety of ways for their own internal purposes (Aarseth, 2003; Dehnen & Read, 2011). However, pykdgrav (the precursor of pytreegrav) was the first Python package to offer a generic, modular, trivially-installable gravity solver that could be easily integrated into any other Python code, using the fast, approximate tree-based Barnes & Hut (1986) method to be practical for large particle numbers. pykdgrav used a KD-tree implementation accelerated with numba (Lam et al., 2015) to achieve high performance in the potential/field evaluation; however, the prerequisite tree-building step had relatively high overhead and a very large memory footprint, because the entire dataset was redundantly stored at every level of the tree hierarchy. This made it difficult to scale to various practical research problems, such as analyzing high-resolution galaxy simulations (Gurvich et al., 2020). pytreegrav is a full refactor of pykdgrav that addresses these shortcomings with a new octree implementation, with drastically reduced tree-build time and memory footprint, and a more efficient non-recursive tree traversal for field summation. This makes it suitable for post-processing datasets from state-of-the-art astrophysics simulations, with upwards of 10^8 particles in the region of interest.
Methods
pytreegrav can compute Φ and g using one of two methods: by "brute force" (explicitly summing the field of every particle, which is exact to machine precision), or using the fast, approximate Barnes & Hut (1986) tree-based method (which is approximate, but much faster for large particle numbers). In N-body problems where the fields at all particle positions must be known, the cost of the brute-force method scales as ∝ N^2, while the cost of the tree-based method scales less steeply, as ∝ N log N. The brute-force methods are often fastest for small (< 10^3 particle) point sets because they lack the overheads of tree construction and traversal, while the tree-based methods will typically be faster for larger datasets because they reduce the number of floating-point operations required. Both methods are optimized with the numba LLVM JIT compiler (Lam et al., 2015), and the basic Accel and Potential front-end functions will automatically choose the method that is likely to be faster, based on this heuristic crossover point of 10^3 particles. Both methods can also optionally be parallelized with OpenMP, via the numba @njit(parallel=True) interface.
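A minimal usage sketch of the Accel and Potential front-end functions named above; the keyword arguments (e.g., method='bruteforce') are assumptions to verify against the pytreegrav documentation:

import numpy as np
from pytreegrav import Accel, Potential

N = 10**5
pos = np.random.rand(N, 3)          # source/target positions
m = np.repeat(1.0 / N, N)           # masses
h = np.repeat(0.01, N)              # softening radii

acc = Accel(pos, m, h)              # method chosen automatically
phi = Potential(pos, m, h)
acc_exact = Accel(pos, m, h, method='bruteforce')  # force exact summation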
The implementation of the tree build and the tree-based field summation largely follows that of GADGET-2 (Springel, 2005). Starting with an initial cube enclosing all particles, particles are inserted into the tree one at a time, and nodes are divided into 8 subnodes until each subnode contains at most one particle. The indices of the 8 subnodes of each node are stored for an initial recursive traversal of the completed tree. An optimized traversal, however, only needs to know the first subnode (if the node is to be refined) and the index of the next branch of the tree (if the field due to the node is summed directly), so these indices are recorded during the initial recursive traversal, and the 8 explicit subnode indices are then deleted, saving memory and removing any empty nodes from consideration. Once these "next branch" and "first subnode" indices are known, the tree field summations can be done in a single while loop with no recursive function calls, which generally improves performance and memory usage.
The field summation itself uses the Barnes & Hut (1986) geometric opening criterion, with improvements suggested by Dubinski (1996): for a node of side length L with centre of mass located at distance r from the target point, its contribution is summed using the monopole approximation (treating the whole node as a point mass) only if r > L/Θ + δ, where Θ = 0.7 by default (giving ~1% RMS error in g) and δ is the distance from the node's geometric centre to its centre of mass. If the conditions for approximation are not satisfied, the node's subnodes are considered in turn, until the field contribution of all mass within the node is summed. pytreegrav supports gravitational softening by assuming the mass distribution of each particle takes the form of a standard M4 cubic spline kernel, which is zero beyond the softening radius h (outside which the field reduces to that of a point mass). Explicit expressions for this form of the softened gravitational potential and field are given in Hopkins (2015). h is allowed to vary from particle to particle, and when summing the field, the larger of the source and target softenings is used (symmetrizing the force between overlapping particles). When softenings are nonzero, the largest softening h_max of all particles in a node is stored, and a node is always opened in the field summation if r < 0.6L + max(h_target, h_max) + δ, where h_target is the softening of the target particle where the field is being summed. This ensures that any interactions between physically-overlapping particles are summed directly with the softening kernel.
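A schematic restatement of these two opening conditions (plain Python, not the package's numba-compiled internals):

def node_opened(r, L, delta, h_target, h_max, theta=0.7):
    """Return True if the node must be opened (subnodes examined),
    False if its monopole approximation may be used."""
    if r <= L / theta + delta:                      # geometric criterion
        return True
    if r < 0.6 * L + max(h_target, h_max) + delta:  # overlapping softening
        return True
    return False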
Unsupervised explainable AI for the collective analysis of a massive number of genome sequences: various examples from the small genome of pandemic SARS-CoV-2 to the human genome
In genetics and related fields, huge amounts of data, such as genome sequences, are accumulating, and the use of artificial intelligence (AI) suitable for big data analysis has become increasingly important. Unsupervised AI that can reveal novel knowledge from big data without prior knowledge or particular models is highly desirable for analyses of genome sequences, particularly for obtaining unexpected insights. We have developed a batch-learning self-organizing map (BLSOM) for oligonucleotide compositions that can reveal various novel genome characteristics. Here, we explain data mining by the BLSOM: an unsupervised and explainable AI. As a specific target, we first selected SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) because a large number of the viral genome sequences have been accumulated via worldwide efforts. We analyzed more than 0.6 million sequences collected primarily in the first year of the pandemic. BLSOMs for short oligonucleotides (e.g., 4-6-mers) allowed separation into known clades, but longer oligonucleotides further increased the separation ability and revealed subgrouping within known clades. In the case of 15-mers, there is mostly one copy per genome; thus, 15-mers that appeared after the start of the epidemic could be connected to mutations. Because the BLSOM is an explainable AI, the BLSOM for 15-mers revealed the mutations that contributed to separation into known clades and their subgroups. After introducing the detailed methodological strategies, we explain BLSOMs for various topics: a tetranucleotide BLSOM for over 5 million 5-kb fragment sequences derived from almost all currently available microorganisms and its use in metagenome studies. We also explain BLSOMs for various eukaryotes, such as fishes, frogs and Drosophila species, and found a high separation ability among closely related species. In analyzing the human genome, we found evident enrichments of transcription factor-binding sequences (TFBSs) in centromeric and pericentromeric heterochromatin regions, and tDNAs (tRNA genes) were separated by the corresponding amino acid.
INTRODUCTION
The Kohonen self-organizing map (SOM), an unsupervised neural network algorithm, is a powerful tool for the clustering and visualization of high-dimensional complex data on a two-dimensional map (Kohonen et al., 1996). We modified the conventional SOM for genome informatics based on batch learning to make the learning process and the resulting map independent of the order of data input (Kanaya et al., 2001). The batch-learning SOM (BLSOM) is suitable for high-performance parallel computing and thus for big data analysis (Abe et al., 2003, 2006a). It is also worth mentioning that we can obtain unexpected insights by daring to leave knowledge discovery to AI. By analyzing the compositions of short oligonucleotides (e.g., 4- and 5-mers) in a large number of genomic fragments (e.g., 10 kb) derived from a wide variety of species, the BLSOM enables the separation (self-organization) of genomic sequences by species and phylogeny and identifies oligonucleotides that significantly contribute to the separation (Abe et al., 2003, 2006c; Iwasaki et al., 2013b; Bai et al., 2014; Kikuchi et al., 2015). In an analysis of genomic fragments from a wide range of microbial genomes, over 5 million sequences were successfully separated by phylogenetic group with high accuracy. A BLSOM program suitable for PC cluster systems is available on the following website: http://bioinfo.ie.niigata-u.ac.jp/?BLSOM.
In our previous analysis of all influenza A strains, the viral genomes were separated (self-organized) by host animal based only on the similarity of the oligonucleotide composition, even though no host information was provided during the machine learning process (Iwasaki et al., 2011a, b): unsupervised AI. Notably, the BLSOM is also an explainable AI that can identify diagnostic oligonucleotides, which contribute to host-dependent clustering (self-organization). When studying the 2009 swine-derived flu pandemic (H1N1/2009), we detected directional time-series changes in the oligonucleotide composition due to possible adaptations to the new host, namely, humans (Iwasaki et al., 2011a), and these findings showed that near-future prediction was possible, albeit partially (Iwasaki et al., 2013a). We found similar time-series changes in the oligonucleotide composition for ebolavirus, MERS coronavirus (Wada et al., 2016) and SARS-CoV-2 (Wada et al., 2020b; Iwasaki et al., 2021; Abe et al., 2021). In the present paper, SARS-CoV-2 genomes, which are of great interest to society, were used as an operative example to illustrate how this unsupervised explainable AI supports efficient knowledge discovery from big sequence data.
Our previous time-series analyses identified oligonucleotides that were expanding rapidly in the virus population, which allowed us to predict candidate advantageous mutations for growth in human cells (Wada et al., 2020b; Ikemura et al., 2020). Furthermore, the oligonucleotide BLSOM can classify the virus sequences not only into the known clades but also into their subgroups. Since the above-mentioned publications, a large number of sequences of this virus have continued to accumulate and will soon exceed one million. These big data, which have been acquired through worldwide efforts, are valuable not only for dealing with the current and future epidemics of infectious viruses but also for basic research in genetics and related fields. In this paper, we first use SARS-CoV-2 sequences, including the new sequences, to explain how unsupervised explainable AI enables knowledge discovery from a massive number of genome sequences.
BLSOM of short oligonucleotides
In this study, approximately 0.7 million SARS-CoV-2 genomes, which were isolated from December 2019 to February 2021, were analyzed; the polyA tail was removed prior to the analysis. Figure 1 shows BLSOMs for di- to hexanucleotide compositions in all viral genomes; importantly, no information other than the oligonucleotide composition was provided during the machine learning step (unsupervised AI). The total number of grid points (i.e., nodes) was set to 1/100 of the total number of viral genomes. After learning, to determine whether the separation achieved with the BLSOM was related to the known clades published by GISAID, grid points containing genomes of a single clade were colored to indicate each clade, and grid points containing genomes of multiple clades were displayed in black. As shown in Fig. 1A, a major portion of the grid points on the dinucleotide BLSOM are black, but a major portion of the grid points on the penta- and hexanucleotide BLSOMs are colored, showing the good classification power of these BLSOMs (Figs. 1A and 1Bi). Notably, a grid point is displayed in black even if only one sequence from another clade is mixed in, despite the condition that an average of 100 sequences per grid point was set. This condition may be very strict; thus, in the hexanucleotide BLSOM shown in Fig. 1Bii, each grid point was colored to show the clade with the highest value, and the separation (self-organization) by clade is shown more clearly.
Explainable AI
The BLSOM is an explainable AI that can identify diagnostic oligonucleotides responsible for the observed clustering through the use of heatmaps (Abe et al., 2003, 2006c). Eight examples of heatmaps for a hexanucleotide BLSOM are shown in Fig. 1Biii; here, the representative vector for each grid point is composed of 4,096 (= 4^6) variables, and the contribution level of each variable at each grid point can be visualized as high (red), moderate (white) or low (blue). For example, CAGUAA has a high occurrence (red) primarily in GR territories (yellow green in Fig. 1B), but CAGUAG, which differs from the former by one base (underlined), has a low occurrence (blue). When considering all 6-mer patterns (refer to Supplementary Fig. S1), we observed many cases in which the red/blue patterns were reversed for a pair of 6-mers with a one-base difference, as observed in Fig. 1Biii.
It should be mentioned that most 6-mers exist in multiple copies in the viral genome; thus, a one-base difference yielding the reversed red/blue pattern cannot be connected uniquely to a mutation in the viral genome. In other words, extending the oligonucleotide length until most k-mers are present as one copy per genome should connect a one-base difference to one mutation. We therefore calculated the occurrences of long oligonucleotides in the viral genomes and found that when the length was extended to 15-mers, most showed one copy per genome; i.e., if a 15-mer BLSOM yields a clade-specific separation (self-organization), this explainable AI will identify the mutations involved in the separation (Wada et al., 2020b; Ikemura et al., 2020). However, there are over one billion (4^15) possible 15-mers, and over 0.3 million types were found among those that appeared in the viral genomes. A strategy for dimension reduction is therefore essential for efficient data mining by a BLSOM.
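A minimal sketch of the copy-number check described above: count how many distinct k-mers occur exactly once in a genome string, for increasing k:

from collections import Counter

def kmer_copy_profile(genome, ks=(6, 10, 15)):
    for k in ks:
        counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
        single = sum(1 for c in counts.values() if c == 1)
        print(f"k={k}: {single}/{len(counts)} distinct k-mers are single-copy")

# With a ~30-kb SARS-CoV-2 genome string, k=15 should make almost all
# k-mers single-copy, as stated in the text.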
Time-series analysis for dimension reduction

A major internal structure, such as a clade, arises as a result of mutations that occurred after the start of the epidemic and rapidly increased in population frequency. To study the formation of such internal structures using a 15-mer BLSOM, 15-mers in which no mutations have been found, or in which mutations have occurred but did not significantly expand in the virus population, can be excluded from the analysis. Therefore, we first tabulated viral strains by month of collection and calculated the occurrence of each 15-mer in the viral population. For each month, we calculated the increase in occurrence since December 2019 and selected 15-mers whose increase reached at least 10% in the population of that month; the reason for using the frequency of each month was to obtain information at intermediate stages of the pandemic (Wada et al., 2020b). After combining the results for each month and excluding duplicates, 891 different 15-mers were obtained. The BLSOM can analyze far more variables with no difficulty. Because various European-originated variants have spread worldwide, the above time-series analysis was also performed for European strains, and 118 new 15-mers were obtained. When these are added, we may perceive the near-future trends of the global epidemic more acutely, and the following BLSOM was made using the 1009 15-mers (Fig. 2Ai); notably, a BLSOM can be constructed using only oligonucleotides of interest.

BLSOM for 15-mers

In Fig. 2Ai, each grid point of the BLSOM for the 1009 15-mers has 100 genomes on average, as mentioned above for Fig. 1. Grid points that contain even one genome from another clade are shown in black, but most are colored, which shows that the BLSOM achieves good separation by clade even though the number of variables has been reduced to approximately a quarter of that in the hexanucleotide BLSOM (4096 variables), demonstrating the usefulness of dimension reduction. Nevertheless, it is also clear that a significant portion is marked in black. Investigation of these black points revealed that they frequently contained sequences of the O (Other) clade, which GISAID did not classify into known specific clades. We believe that significant numbers of O-clade sequences can be classified into known clades. In Fig. 2Aii, each grid point was colored to indicate the clade with the highest value.
Based on the U-matrix (Fig. 2B) used in conventional SOM analysis (Ultsch, 1993), the Euclidean distance between the representative vectors of neighboring grid points can be visualized by the degree of blackness; a larger distance is reflected by a higher degree of blackness. Notably, the boundary of each clade territory is visualized by a clear black line, and clear black lines are also observed inside the GR clade territory, indicating the existence of subgrouping; the GR territory is very large, reflecting that GR sequences account for more than 60% of the total sequences. To clarify whether these GR subgroups have biological significance, we next investigated time-series changes as follows. In Fig. 2C, using the BLSOM shown in Fig. 2A, sequences from different collection months are displayed. A comparison of March 2020, when almost all clades had first appeared, with January 2021 showed that their sequences occupied different locations, and August 2020 showed an intermediate pattern. The existence of such time-series changes shows that the separation on the BLSOM has biological significance, as previously reported by Abe et al. (2021).
Mutations responsible for separation into clades and their subgroups
We subsequently examined the 15-mers that contributed to the separation on the BLSOM shown in Fig. 2 using the heatmap method explained in Fig. 1Biii. Importantly, we could identify the mutations responsible for the separation into clades and their subgroups because all 15-mers that rapidly expanded have one copy in the genome. Because the 1009 heatmaps include many similar patterns and are very complex, as shown in Supplementary Fig. S2, efficient data mining appears difficult. Therefore, we introduce a BLSOM for the 1009 heatmaps. As in typical AI image processing, each heatmap can be converted to vectorial data with a dimension corresponding to the number of pixels, which was 6570 (approximately 1% of the total number of genomes) in the present analysis. When we created a BLSOM for the vectorial data of the heatmaps (abbreviated heatmap BLSOM), most grid points were black (Fig. 3A), which indicates that many heatmap patterns are very similar to each other. An examination of the black grid points revealed that one grid point was often composed of heatmaps of 15 different 15-mers, which were shifted by one base and added up to a 29-mer in total. A homology search (blastn) of the 29-mer against the standard SARS-CoV-2 sequence (NC_045512) showed a mutation at the center of the 29-mer; i.e., we could identify the involved mutations. Across all grid points on the heatmap BLSOM, the number of attributed sequences was mainly a multiple of 15 or close to it. We next explain actual cases in more detail.
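A sketch of the merging step described above: fifteen 15-mers that are successive one-base shifts over the same mutation collapse into one 29-mer with the mutated base at its centre:

def merge_shifted_kmers(kmers):
    """kmers: equal-length strings, each shifted one base right relative
    to the previous one; returns the merged sequence."""
    k = len(kmers[0])
    assert all(len(km) == k for km in kmers)
    merged = kmers[0]
    for prev, cur in zip(kmers, kmers[1:]):
        assert prev[1:] == cur[:-1], "k-mers are not one-base shifts"
        merged += cur[-1]
    return merged  # length k + len(kmers) - 1 = 15 + 14 = 29

seq = "ACGTACGTACGTACGTACGTACGTACGTA"   # toy 29-mer
kmers = [seq[i:i + 15] for i in range(15)]
assert merge_shifted_kmers(kmers) == seq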
For the No. 1 grid point in Fig. 3A, heatmaps of 60 (= 15 × 4) 15-mers were found; thus, four 29-mers were obtained from these 60 15-mers, as mentioned above. The blastn analysis of these four 29-mers against the standard viral sequence identified four independent mutations. Table 1 shows the 29-mer sequences before and after each mutation and details of the mutation, as well as the four 15-mer sequences with the mutation at the center, for which actual heatmaps are presented in Fig. 3Ci. Although minor differences were found at the periphery of the GV territory, the 15-mers with these four different mutations show a highly similar pattern (No. 1 in Fig. 3Ci). Fig. 3B shows the aforementioned U-matrix again. In addition to the GV territory, there are small red spots, which show differences among the four mutations. Importantly, over 0.6 million genomes were separated on a single BLSOM; thus, if a region of interest is found (e.g., red spots outside the GV territory), features of the corresponding genomes (e.g., the date and region of strain collection) can be immediately displayed because this AI is equipped with various tools for efficient data mining.
For the No. 2 grid point in Fig. 3A, 45 (= 15 × 3) 15-mers were found, and their heatmap patterns (No. 2 in Fig. 3Ci) are very similar to those of No. 1, although small areas are missing in the GV territory. The resulting three 29-mers and the corresponding mutations identified by blastn are also shown in Table 1. Figure 3Cii shows a monthly time-series analysis of the seven 15-mers presented in Fig. 3Ci, where 0 on the horizontal axis corresponds to December 2019 (the start of the epidemic) and the other numbers show the elapsed months since the start. Their occurrences increased worldwide from August 2020, reached a peak in November and decreased thereafter.
Since these seven 15-mers show very similar time-series changes, the seven mutations of interest have increased and decreased in linkage with each other. These mutations should include both advantageous mutations, which increase infectivity and/or proliferation, and neutral mutations, which hitchhike on the former. Exploration of the seven mutations in Table 1 identified two nonsynonymous mutations and five synonymous mutations. The three mutations of No. 2 in Fig. 3Cii, for which a small red region is missing, are synonymous mutations; thus, these may be neutral mutations. An AI that can visualize all genomes on a single map can support this type of prediction.
In the case of four grid points (No. 3-6 in Fig. 3A), 15 15-mers were attributed to each point, resulting in four 29-mers in total (Table 1). For the four respective 15-mers in Table 1, most areas of GRc (a subgroup of GR) are red, but there are some differences at their periphery and in spots in other clade territories (Fig. 3Di). Their time-series changes (Fig. 3Dii) are very similar to each other but different from those shown in Fig. 3Cii. In Figs. 3Ci and 3Di, we have introduced examples of heatmaps with relatively simple patterns among those in Supplementary Fig. S2 and identified mutations that expanded rapidly in the population. Further analyses of the mutations have already been published (Wada et al., 2020b; Ikemura et al., 2020). Undoubtedly, analyses aiming to distinguish between advantageous and neutral mutations will become important, and the present AI should be useful for this purpose. Extensive data on mutations observed in SARS-CoV-2 have been published by various sources, e.g., Alam et al. (2021) and the COG-UK Mutation Explorer (http://sars2.cvr.gla.ac.uk/cog-uk/).
Use of preformed BLSOMs for analyses of newly obtained sequences
One distinctive characteristic of SARS-CoV-2 is the rapid increase in sequence data, and GISAID has very recently added a new clade, GRY, which corresponds to variants originating in the UK and rapidly expanding worldwide. We have examined how the sequences belonging to the new clade are related to those analyzed in Fig. 2A. In Fig. 3E, the GRY sequences newly downloaded at the end of March 2021 (over 30,000 sequences), most of which were not included in the BLSOM in Fig. 2A, were mapped onto the BLSOM; i.e., for each GRY sequence, we looked for the grid point with the closest Euclidean distance. The number of GRY sequences attributed to each grid point is indicated by the height of the vertical bar (3D display), and almost all GRY sequences were mapped to the small area arrowed in Fig. 3B (arrowed also in Fig. 3E). The sequences downloaded at the beginning of February 2021 contained some GRY sequences, and these formed this small area in Fig. 2A. A BLSOM can thus provide information on sequences that were not included in its formation, and this mapping method is useful in the metagenome analyses described below. So far, we have explained in detail how the present AI can analyze big sequence data. Next, in connection with these methodological strategies, we introduce examples of data mining from various prokaryotic and eukaryotic sequences.
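Mapping a new sequence onto a preformed BLSOM reduces to a nearest-neighbor search: its oligonucleotide vector is assigned to the grid point whose weight vector lies at the smallest Euclidean distance. A minimal numpy sketch, assuming a trained weight matrix of shape (rows, cols, dim); the function name is illustrative.

```python
import numpy as np

def map_to_blsom(weights, vectors):
    """weights: (rows, cols, dim) trained BLSOM weight vectors.
    vectors: (n, dim) composition vectors of new sequences.
    Returns (n, 2) winning grid coordinates and a (rows, cols)
    per-node count matrix for a 3D bar display as in Fig. 3E."""
    rows, cols, dim = weights.shape
    flat = weights.reshape(-1, dim)
    # squared Euclidean distance from every vector to every grid point
    d2 = ((vectors[:, None, :] - flat[None, :, :]) ** 2).sum(axis=2)
    best = d2.argmin(axis=1)
    coords = np.stack(np.unravel_index(best, (rows, cols)), axis=1)
    counts = np.zeros((rows, cols), dtype=int)
    np.add.at(counts, (coords[:, 0], coords[:, 1]), 1)
    return coords, counts
```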
Large-scale BLSOMs and their use in metagenome studies
The BLSOM algorithm is suitable for high-performance parallel computing, and large-scale analyses, such as those covering most of the genome sequences currently available, are possible when using high-performance computers (Abe et al., 2006a). In Fig. 4A, the tetranucleotide composition of over 5 million 5-kb fragments of almost all microorganisms (including viral sequences) available in INSDC (DDBJ, ENA/EBI and NCBI) was analyzed by a BLSOM. In INSDC, only one strand of a pair of complementary sequences is registered, and the strand is chosen rather arbitrarily in the registration of fragment sequences. When investigating general characteristics of genome sequences, differences between the two complementary strands are usually not important. In Fig. 4A, we constructed a BLSOM after summing the frequencies of each pair of complementary oligonucleotides (e.g., AAAA and TTTT) in each fragment; the BLSOM for this degenerate set of complementary tetranucleotide pairs is abbreviated as DegeTetra BLSOM. Most of the sequences were separated (self-organized) with high accuracy according to their phylogeny, even though no information concerning phylogeny was given during the machine learning process (unsupervised AI). When such a large-scale BLSOM is created, the phylogeny of metagenomic sequences can be predicted by mapping large numbers of them onto the preformed BLSOM.
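The degenerate ("DegeTetra") vector for one fragment can be sketched as follows: every tetranucleotide is pooled with its reverse complement, which collapses the 256 tetranucleotides into 136 strand-independent classes. A minimal Python sketch; the function names are illustrative.

```python
from itertools import product

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

def dege_tetra(seq, k=4):
    """Strand-independent k-mer frequencies of one sequence fragment."""
    counts = {}
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        key = min(w, revcomp(w))   # one representative per complementary pair
        counts[key] = counts.get(key, 0) + 1
    # fixed-order vector over the 136 degenerate tetranucleotide classes
    keys = sorted({min("".join(p), revcomp("".join(p)))
                   for p in product("ACGT", repeat=k)})
    total = max(sum(counts.values()), 1)
    return [counts.get(x, 0) / total for x in keys]
```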
We previously used this strategy to classify sequence fragments derived from environmental samples of the Sargasso Sea reported by Venter et al. (2004). Since a certain portion of Sargasso sequences may be derived from eukaryotic genomes, we constructed a DegeTetra BLSOM with 5-kb sequences from prokaryotes, unicellular eukaryotes and fishes (Fig. 4Bi); for details, see Abe et al. (2005). The power of BLSOM to separate prokaryotic from eukaryotic sequences was very high; only 0.1% of prokaryotic sequences were classified into eukaryotic territories (■). Next, we mapped Sargasso sequences longer than 1 kb on this BLSOM (Fig. 4Bii), as explained for the 3D display in Fig. 3E. While a major portion of the Sargasso sequences (■) were classified into prokaryote territories, 9.9% were classified into eukaryotic territories that corresponded mainly to territories of unicellular eukaryotes. This mapping method has the disadvantage that even sequences derived from very novel phylotypes are attributed to one of the known phylotypes. To solve this issue, we next constructed a DegeTetra BLSOM with the species-known prokaryotic sequences plus Sargasso sequences (Fig. 4Ci). Grid points that contained only Sargasso sequences are indicated by (■), and the number of Sargasso sequences classified into these grid points is indicated by the height of the vertical bar (Fig. 4Cii). It is clear that a large portion of Sargasso sequences belongs to novel phylogenetic groups. In Fig. 4Ciii, the number of Sargasso sequences that were classified into grid points containing species-known sequences is indicated by the height of the bar, distinctively colored to show the species-known phylotype.
The strategies explained in Figs
Analyses of various eukaryotic genomes
It is also interesting to see to what extent BLSOM can separate genome sequences of closely related species. Figure 4D shows a DegeTetra BLSOM of 100-kb fragments from 15 vertebrates including nine fishes, and most sequences were separated by species with high accuracy. Using heatmaps, such as those explained in Figs. 1 and 3, we found that coelacanth has a distinctly lower frequency of CG-containing oligonucleotides than other fishes: an evident CG deficiency, as typically found for tetrapods. We also reported BLSOMs for 38 eukaryotes (Fig. 4E) (Abe et al. 2006c), 12 Drosophila species (Fig. 4F), eight frogs (Katsura et al., 2021) and 13 plant species, and found high separation ability even for closely related species.
Analyses of the human genome
Unsupervised AI can be used without prior knowledge or models, and we dare to leave the main data mining to the unsupervised explainable AI, which allows for unexpected knowledge discovery. Using pentanucleotide compositions of 50- and 100-kb and 1-Mb fragment sequences from the human genome, we explored the possibility of chromosome-dependent separation (Wada et al., 2015). Although we first thought it was impossible, we obtained interesting patterns for all three lengths. Figure 5Ai shows the 50-kb case; grid points that contained only sequences from a single chromosome are colored, and those that contain sequences from plural chromosomes are shown in black. Most grid points were black, but four relatively large areas, as well as many small areas, contained characteristically colored grid points. Interestingly, the four areas were found to be composed of sequences derived from centromeric and pericentromeric heterochromatin regions; in many of the small areas, sequences derived from subtelomeric heterochromatin regions were often found.
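The coloring rule of Fig. 5Ai is simple to state in code: a grid point receives a chromosome-specific color only when every sequence classified to it comes from a single chromosome; mixed grid points are black. A minimal sketch with illustrative data structures:

```python
def color_grid(grid_contents, palette):
    """grid_contents: dict mapping (row, col) -> list of chromosome
    labels of the sequences on that grid point.
    palette: dict mapping chromosome label -> color."""
    colors = {}
    for node, chroms in grid_contents.items():
        labels = set(chroms)
        colors[node] = palette[labels.pop()] if len(labels) == 1 else "black"
    return colors
```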
A heatmap analysis of the pentanucleotide BLSOM showed that the four large characteristic areas were evidently enriched in eight pentanucleotides that are known to be consensus-core sequences of diverse transcription factor-binding sequences (TFBSs). Once the AI provides this information, we can analyze it more directly using a standard distribution map along each chromosome. Figure 5Aii displays the occurrences of the eight TFBS-core pentanucleotides in three chromosomes, and the TFBS-cores are clearly enriched in centromeric and pericentromeric regions, which are marked with a broad brown bar just below the horizontal axis. Importantly, this characteristic distribution pattern was observed in all chromosomes except chrY.
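A heatmap of this kind is just one slice of the trained weight matrix: for a chosen oligonucleotide, the value of that component in each grid point's weight vector is drawn as a color level. A minimal sketch, assuming matplotlib and a weight matrix of shape (rows, cols, dim):

```python
import matplotlib.pyplot as plt

def plot_heatmap(weights, oligo_index, title=""):
    """Color each grid point by one oligonucleotide's weight component
    (red = high, blue = low), as in the heatmaps of Figs. 1, 3 and 5."""
    plt.imshow(weights[:, :, oligo_index], cmap="coolwarm")
    plt.title(title)
    plt.colorbar()
    plt.show()
```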
For more direct analysis, we subsequently analyzed actual TFBSs instead of the consensus-core pentanucleotides. By searching for human TFBSs through the Swiss Regulon Portal (Pachkov et al. 2013), 189 hexanucleotide, 977 heptanucleotide and 3,946 octanucleotide TFBSs were found. When analyzing hexanucleotide TFBSs, their clear enrichment in centromeric and pericentromeric regions was observed as in Fig. 5Aii, whereas the combination of the enriched TFBSs varied among chromosomes; for details, see . Because the size of these enriched regions was at the Mb level, we named the regions "Mb-level TFBS islands". In centromeric and pericentromeric regions, the Mb-level enrichment in a group of CG-containing oligonucleotides (e.g., CG surrounded by A/T-rich sequences) was also observed (Fig. 4Dii); thus, these were named "Mb-level CpG islands"; the actual sequences of the enriched oligonucleotides also varied among chromosomes (Wada et al., 2015).
BLSOMs with TFBSs
In the case of the 977 hepta- and 3,946 octanucleotide TFBSs, the number of variables is so large that it is practically impossible to perform a distribution map analysis of all TFBSs along all chromosomes. BLSOM, however, can analyze the compositions of even 3,946 octanucleotide TFBSs in all chromosomes at once. Figure 5Bi shows a BLSOM that analyzes the octanucleotide TFBS occurrences in 1-Mb sliding sequences with a 50-kb step. By restricting the oligonucleotides to TFBSs alone, the characteristics of centromeric and pericentromeric regions became more prominent, and the sequences in these regions formed a large special area on the left side. Due to the 50-kb sliding step, the total number of 1-Mb sequences was almost the same as that in Fig. 5Ai, where 50-kb fragments were analyzed, but the patterns were very different from each other for the following reason. The TFBS occurrences in neighboring 1-Mb sequences offset by 50 kb are very similar to each other, and therefore, sequences belonging to one chromosome can be visualized as a single stroke-like trajectory. Actually, in the large specific area, 1-Mb sequences derived from centromeric and pericentromeric regions of one chromosome appear as colored dotted lines. Eight examples of heatmaps show chromosome-dependent enrichment in individual TFBSs (Fig. 5Bii), which is related in part to the chromosome-dependent difference in alpha-satellite monomer sequences (Hayden et al., 2013; Aldrup-MacDonald et al., 2016; Sullivan et al., 2017). Centromeric and pericentromeric regions are poor in protein-coding genes; thus, the evident enrichment in TFBSs was unexpected. In a previous paper, we proposed a model in which their enrichment in the Mb-level structures (Mb-level TFBS islands) supports interchromosomal interactions in interphase nuclei (Lieberman-Aiden et al., 2009; Dekker & Misteli, 2015).
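The input vectors behind Fig. 5Bi can be sketched as a sliding-window count: occurrences of each octanucleotide TFBS are tallied in 1-Mb windows advanced in 50-kb steps, so neighboring windows share 95% of their sequence and trace chromosome-specific trajectories on the map. A minimal sketch; a production run over 3,946 motifs would use a multi-pattern matcher (e.g., Aho-Corasick) rather than repeated str.count, which also counts only non-overlapping hits.

```python
def sliding_tfbs_vectors(chrom_seq, tfbs_list, win=1_000_000, step=50_000):
    """One occurrence vector (aligned with tfbs_list) per 1-Mb window."""
    vectors = []
    for start in range(0, len(chrom_seq) - win + 1, step):
        window = chrom_seq[start:start + win]
        vectors.append([window.count(site) for site in tfbs_list])
    return vectors
```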
Analyses of tRNAs, proteins and codon usage
We will next explain what a BLSOM can tell us about short sequences such as tRNAs. Even if the result is unpredictable, we dare to leave knowledge discovery to AI. We have created a database (tRNADB-CE) of tRNA genes (tDNAs) obtained from all available microbial genomes, and we next explain a pentanucleotide BLSOM of approximately 0.4 million tDNA sequences in the database. There are 1024 types of pentanucleotides, and sequences of tDNAs are clearly shorter than 100 nt; thus, most pentanucleotides do not occur in each tDNA. For these particular data consisting mostly of zeros, the result was rather unpredictable. After training only with the pentanucleotide compositions, the grid points that contained tDNAs of one amino acid were colored. Interestingly, most tDNAs were separated by amino acid with high accuracy (Fig. 5Ci). Referring to heatmaps (Fig. 5Ciii), the oligonucleotides that contributed to the separation by amino acid were found to correspond primarily to the recognition sequences of the aminoacyl-tRNA synthetases, which have been designated identifiers or identity elements (Ardell, 2010), as well as the anticodon and surrounding nucleotides; i.e., the identifiers appear to be assignable without experiments. Interestingly, a minor proportion of tDNAs were located outside the main territory of each amino acid on the BLSOM (Fig. 5Cii), and these exceptional tDNAs often belonged to phylogenetic groups that grow in special environments. The BLSOM can provide insight into identifiers even for these poorly characterized species. An analysis of large numbers of tDNAs from a wide range of phylotypes will enable this type of characteristic research.
As introduced in Fig. 3A, the BLSOM can analyze vectorial data other than oligonucleotide compositions, and the peptide composition of proteins is analyzable. In fact, Ferran et al. (1994) investigated di- and tripeptide compositions using the conventional Kohonen SOM. Although the study was conducted at a time when only a limited number of protein sequences (approximately 2000) were known, the proteins were separated by function. In addition, rather than using the 20 amino acids separately, classifying amino acids with similar physicochemical properties into 11 groups achieved better separation. We created BLSOMs of di-, tri- and tetrapeptides for over 80,000 prokaryotic proteins and obtained separation reflecting COGs (Clusters of Orthologous Groups of proteins), which are known to be functional groups. In support of the previous research, a better separation was obtained after grouping into the 11 groups. However, a mixed trend of separation reflecting phylogeny was detected, and the situation becomes further complicated for proteins with multiple functional domains. Although this approach has already been used to study the diversity of enzymes related to secondary metabolic pathways in plants (Ikeda et al., 2013), the scope of its use is currently limited, and further development is needed. BLSOM can also be applied to studies of codon usage in protein genes (Kanaya et al., 2003; Kosaka et al., 2008).
CONCLUSION AND FUTURE PERSPECTIVES
In genetics and related fields, most studies have been conducted by creating models and hypotheses and then testing them. However, in the big data era, this principle may not be sufficient for data mining. Models and hypotheses must rely on existing knowledge, which makes it difficult to obtain unexpected insights. Here, we have explained the strategy of entrusting knowledge discovery to AI, which appears to be becoming increasingly important. In big data analyses, research conducted by large groups using high-performance computers is becoming the norm, and somewhat unfortunately, the contributions of individuals and small groups are decreasing. With the help of AI, individuals and small groups can discover distinctive knowledge from big data. The research presented here, with the exception of that shown in Figs. 4A~C, can be conducted on PC-level computers. A BLSOM program suitable for PC systems is available on the following website: http://bioinfo.ie.niigata-u.ac.jp/?BLSOM.
The main purpose of this paper was to show how an unsupervised explainable AI can enable knowledge discovery from big data. To help readers envision applications to their own topics, various examples and research strategies were explained in detail. The present research strategies can be applied to AI algorithms other than the BLSOM, and similar or better data mining will become possible. The Kohonen SOM, which serves as the basis of the BLSOM, was developed during the second AI boom and is a classic technology. In the current third AI boom, a wide variety of new technologies are being developed and are playing active roles in practical fields. In image processing, for example, image classification has made great strides due to its practical importance, but the reasons why AI achieves such highly accurate classification remain unclear to us (a black box). This may not be a problem for practical use, but in scientific studies, the reason for the classification is important. Third-generation explainable AI will surely be developed. On the machine side, quantum computers, which excel at combinatorial optimization problems, have become practical. The BLSOM algorithm appears to somewhat resemble a combinatorial optimization problem; hence, AI-based genome analyses using quantum computers may be realized sooner than expected. Such advanced technologies should be quickly introduced to studies with social importance, as observed for the rapid technological development of COVID-19 vaccine production. In such an era, interdisciplinary collaborative research and appropriate education will become increasingly important.
(B) The same as the U-matrix shown in Figure 2B, but the small GRY territory is marked with an arrow. (C) 15-mers that contributed to the formation of the GV territory: their heatmaps and monthly time-series changes (i and ii, respectively). Results for the 15-mer sequences after mutation, which are listed in Table 1, are presented. In the time-series analysis, different 15-mers gave very similar patterns; thus, the symbols distinguishing individual 15-mers are not explained. (D) 15-mers that contributed to the formation of the GRc territory. Their heatmaps and time-series changes (i and ii, respectively) are shown as described in (C). (E) Mapping of newly downloaded GRY sequences on the 15-mer BLSOM presented in Fig. 2. The number of sequences mapped to each grid point is indicated by the height of the vertical bar: 3D display.
Figure 4
Oligonucleotide BLSOMs of various prokaryotic and eukaryotic genomes.
(A) DegeTetra BLSOM of microorganisms of 40 phyla/classes; for details, see Abe et al. (2020). Grid points containing sequences of a single phylotype are indicated in phylotype-specific colors. (B) DegeTetra BLSOM of 211,000 and 210,600 5-kb sequences of 1,502 prokaryotes and 12 eukaryotes, respectively. (i) Grid points that contain sequences from a single prokaryotic phylotype are indicated in phylotype-specific colors, and those that contain only eukaryotic sequences are indicated in color (■); for details, see Abe et al. (2005). (ii) Metagenome sequences from the Sargasso Sea (Venter et al., 2004) were mapped on this BLSOM, and the square root of the number of Sargasso sequences mapped on each grid point is indicated by the height of the bar in color (■). (C) DegeTetra BLSOM of species-known prokaryotic sequences and Sargasso sequences. The BLSOM was constructed with 211,000 5-kb prokaryote sequences plus 218,000 Sargasso sequences after normalization of the sequence length; for details, see Abe et al. (2005). (i) Grid points that contain sequences from a single known phylotype are indicated in phylotype-specific colors, those that contain only Sargasso sequences are indicated in color (■), and those that include species-known and Sargasso sequences or those from more than one known phylotype are indicated in black. (ii) The square root of the number of Sargasso sequences classified into each grid point containing no species-known sequences is indicated by the height of the bar in color (■). (iii) The square root of the number of Sargasso sequences that were classified into grid points containing species-known sequences is indicated by the height of the bar, distinctively colored to show the phylotype. (D) DegeTetra BLSOM of 100-kb sequences of 15 vertebrates including nine fishes. For details, see Iwasaki et al. (2014). (E) DegeTetra BLSOM of 100-kb sequences of 38 eukaryotic genomes. For details, see Abe et al. (2006c). (F) DegePenta BLSOM of 100-kb sequences of 12 Drosophila species. After grouping the 12 species into five phylotypes, grid points are colored to show the phylotypes; for details, see .
| 7,970.2 | 2021-05-24T00:00:00.000 | [ "Computer Science", "Biology" ] |
Notch signaling, the segmentation clock, and the patterning of vertebrate somites
The Notch signaling pathway has multifarious functions in the organization of the developing vertebrate embryo. One of its most fundamental roles is in the emergence of the regular pattern of somites that will give rise to the musculoskeletal structures of the trunk. The parts it plays in the early operation of the segmentation clock and the later definition and differentiation of the somites are beginning to be understood.
In one way or another, at one stage or another, almost every tissue in an animal body depends for its patterning on the Notch cell-cell signaling pathway [1]. The evidence from mutants is clear: disrupted Notch signaling entails disrupted pattern. The challenge is to define precisely what it is that Notch signaling does in any given case, and when it does it. This problem is posed in a particularly striking and curious way by the phenomena of somitogenesis -the process by which the vertebrate embryo lays down the regular sequence of tissue blocks that will give rise to the musculoskeletal segments of the neck, trunk, and tail.
These blocks of embryonic tissue, the somites, are arranged symmetrically in a neat, repetitive pattern on either side of the central body axis. Each somite is separated from the next by a cleft -the segment boundary; and each somite has a definite polarity, with an anterior portion and posterior portion expressing different sets of genes [2]. Mutations in components of the Notch signaling pathway play havoc with this whole pattern: although somites may eventually form, the segment boundaries are irregular and randomly positioned, and the regular antero-posterior polarity of individual somites is lost. Genetic screens for mutations that disrupt segmentation in this way chiefly identify Notch pathway components as the critical players. Notch signaling is clearly central to somitogenesis [3][4][5][6]. But precisely how?
Notch pathway components can be wired together in different ways for different outcomes
In general, the function of the canonical Notch pathway is to coordinate gene expression in contiguous cells. It does this in a particularly direct way. The signal-sending cell expresses a Notch ligand (belonging to either the Delta or the Serrate/Jagged subfamily) on its surface; this binds to the receptor, Notch, in the membrane of the signal-receiving cell and thereby triggers cleavage of Notch, releasing an intracellular fragment, the Notch intracellular domain (NICD); NICD translocates to the nucleus, where it acts as a transcriptional regulator [1,7] (Figure 1). The main - or at least, the best-studied - targets of direct regulation by NICD are the members of the Hairy/E(spl) family (Hes genes in mammals, her genes in zebrafish) [8,9]; these code for inhibitory basic helix-loop-helix (bHLH) transcriptional regulators, which can control many different secondary targets, including Notch ligand genes and the Hes/her genes themselves. The role of Notch signaling in pattern formation depends on the ways in which these components - and others that modulate their activity - are functionally connected into regulatory feedback loops [10]. Mathematical modeling highlights several possibilities. Thus, one type of linkage, where Notch activation leads to downregulation of Notch ligand expression in the signal-receiving cell, can lead to lateral inhibition, forcing neighboring cells to become different from one another [11] (Figure 2). An opposite linkage, whereby Notch activation stimulates ligand expression, can have an opposite effect, inducing contiguous cells to be similar [12]. Still other types of circuitry built from the same components can perform yet other tricks, including the production of temporal oscillations of gene expression [13,14]. And this brings us back to somitogenesis, where such oscillations are in fact seen.
Figure 1. Basic principles of Delta-Notch signaling. Notch is a cell-surface receptor whose ligand Delta is also expressed on the cell surface. Binding of Delta to Notch activates cleavage of Notch at the membrane, thereby releasing the Notch intracellular domain (NICD), which migrates to the nucleus where it functions in transcriptional regulation. The detached extracellular fragment of Notch, NECD, along with Delta, is endocytosed into the Delta-expressing cell.
Figure 2. Lateral inhibition in differentiation. Two neighboring cells each express both the Notch receptor and its ligand, Delta, but the cell on the left expresses Delta more strongly, so that the Hes/her gene is activated in the neighboring cell (on the right), and its product, an inhibitory transcriptional regulator, acts in this cell to block expression both of Delta and of genes for differentiation. Consequently, in the left-hand cell Notch is not activated, the Hes/her gene is not transcribed, Delta expression is maintained, and genes specifying differentiation are expressed.
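The lateral-inhibition wiring sketched in Figure 2 can be made concrete with a two-cell simulation in the spirit of the model cited as [11]: Notch activity rises with the neighbor's Delta, and a cell's own Delta falls with its Notch activity. The parameter values and the small initial asymmetry below are illustrative only.

```python
def lateral_inhibition(steps=40000, dt=0.01, v=1.0, a=0.01, b=100.0, k=2, h=2):
    n = [0.0, 0.0]     # Notch activity in cells 0 and 1
    d = [0.5, 0.51]    # Delta levels; cell 1 gets a tiny head start
    for _ in range(steps):
        new_n, new_d = n[:], d[:]
        for i in (0, 1):
            j = 1 - i                          # the neighboring cell
            f = d[j] ** k / (a + d[j] ** k)    # activation by neighbor's Delta
            g = 1.0 / (1.0 + b * n[i] ** h)    # repression of the cell's own Delta
            new_n[i] += dt * (f - n[i])
            new_d[i] += dt * v * (g - d[i])
        n, d = new_n, new_d
    return n, d   # ends with one high-Delta and one low-Delta cell

print(lateral_inhibition())
```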
A gene-expression oscillator marks out the periodic pattern of body segments
Somites derive from the unsegmented presomitic mesoderm (PSM) at the tail end of the embryo. PSM cells are specified by the combined action of Wnt and fibroblast growth factor (FGF) signaling molecules, which are produced at the tail end of the PSM and spread anteriorly to generate a morphogen gradient. At the point where the level of Wnt and FGF falls below a threshold value, somites form. Thus, as the PSM grows caudally, extending the embryo, one pair of somites after another is budded off from the anterior end of the PSM in a regular head-to-tail sequence. Each species generates its characteristic number of somites at its own pace, ranging from one new somite pair approximately every 30 minutes in zebrafish to one pair every 2 hours in mice. This rhythmic process involves coordinated patterns of cell behavior not only in space but also in time: it depends on an underlying gene expression oscillator - the segmentation clock - that ticks in the cells of the PSM and dictates the rhythm of somite formation, with each oscillator cycle corresponding to the production of one additional somite [15]. The genes that were first found to oscillate in the PSM and that show this cyclic expression in all vertebrates belong to the Notch signaling pathway; these oscillatory genes include, specifically, certain members of the Hairy/E(spl) gene family of bHLH transcriptional regulators - in particular Hes1 and Hes7 in mice, her1 and her7 in zebrafish, and hairy1 and hairy2 in chick [15-22] - and (in zebrafish) the Notch ligand DeltaC, whose expression is controlled by them. These, and certain other oscillatory genes, display a characteristic pattern of expression that can be seen in fixed specimens stained by in situ hybridization. In the posterior part of the PSM, the level of expression may be high or low, depending on the phase of the oscillation cycle at the moment when the embryo was fixed. In the anterior part of the PSM, meanwhile, one sees a stripy pattern, in which bands of cells that express the oscillatory gene strongly alternate with bands of cells that do not (Figure 3). This pattern reflects the gradual slowing of the oscillations as cells approach the point of exit from the PSM, beyond which oscillation is halted: cells in more anterior positions are thus delayed in phase relative to more posterior cells.
Since, as we noted earlier, any mutation that blocks Notch signaling leads to disrupted somite segmentation, an obvious suggestion is that the oscillation depends on Notch signaling and fails to occur when Notch signaling fails. However, the detailed consequences of mutations in the Notch pathway do not quite fit this simple explanation. A different interpretation is instead suggested by a closer examination of the behavior of one of the oscillatory genes, coding for the Notch ligand DeltaC, in zebrafish with mutations in the Notch pathway [24]. The individual PSM cells in these mutants still express DeltaC, but in an uncoordinated way: tissue fixed for analysis by in situ hybridization shows a pepper-and-salt mixture of cells expressing DeltaC at different levels, as though the cells are still oscillating individually, but no longer in synchrony with their neighbors (Figure 4). Moreover, both in zebrafish and in mice, the first few somites of embryos with Notch pathway mutations develop almost normally [25-27], implying that Notch signaling is not absolutely necessary for somite segmentation and that the consequences of failure of Notch signaling make themselves felt only gradually, after the onset of somitogenesis. These findings led to the suggestion that the primary function of Notch signaling is not to drive the oscillations of individual cells, but only to coordinate them and keep them synchronized; and that the cells begin oscillation in synchrony at the start of somitogenesis, and take several cycles to drift out of synchrony when Notch signaling is defective [24]. This proposal - that Notch signaling from cell to cell in the PSM serves to maintain synchrony but is not necessary for oscillation of individual cells - has been supported by several subsequent experiments. For example, zebrafish embryos can be treated at different stages of somitogenesis
with the inhibitor DAPT, which inhibits the enzyme that releases NICD from the membrane (Figure 1) and thus blocks Notch signal transmission. When Notch signaling is prevented in this way, somite defects ensue, but always with a delay that corresponds to a gradual disordering of the pattern of oscillator gene expression [28,29]. Other evidence comes from experiments where PSM cells are transplanted into a wild-type zebrafish embryo from an embryo in which the expression of the oscillatory her genes is defective. The transplanted cells then cause abnormal segmentation behavior in their neighbors; but they fail to exert this effect if they are prevented from expressing the Notch ligand DeltaC [30]. The oscillatory behavior of individual PSM cells and the influence of Notch signaling can also be demonstrated through study of cells from the PSM of a transgenic mouse embryo containing a luminescent Hes1 reporter. These cells show oscillating expression of the reporter gene even when they are dissociated and thus unable to communicate via Notch [31], but in that condition the oscillations are much less regular than in the intact tissue.
What is the ultimate pacemaker of the segmentation clock?
All these findings support the view that Notch is needed to maintain synchrony between the oscillations of the individual cells, which are somewhat noisy and imperfect timekeepers when left to their own devices. But what is generating the cell-intrinsic oscillations? According to one view, the core oscillator - the pacemaker of the whole process - is a delayed negative feedback loop in the autoregulation of the oscillatory Hes/her genes - Hes7 in mammals, her1 and her7 in the zebrafish [13,32] (Figure 5). Loss of Hes7 in the mouse, or of her1 and her7 in the zebrafish, disrupts segmentation all along the body axis; and it has been shown experimentally that these genes are indeed subject to negative regulation by their own products [22,23,32,33]. The idea that this Hes/her negative feedback loop is the core oscillator has been articulated in quantitative mathematical terms and is supported by many pieces of evidence, but it still lacks firm proof [34]. In mouse and chick, the PSM cells also show oscillating expression of various other genes, including (in the mouse) genes in the Wnt and Fgf pathways [35-37], some of which appear to continue their oscillation even when the Hes7 oscillations fail [35]. Thus, the nature of the ultimate generator and pacemaker of the oscillations is still under debate, especially for mouse and chick [38-40]. Failure of synchronization is sufficient to explain the disruption of segmentation in Notch pathway mutants. But that is not necessarily the end of the story. To acknowledge that Notch signaling has this critical function, and that that is enough to explain the mutant phenotypes, is not the same as saying that synchronization is the only function of Notch signaling in somitogenesis. At least two additional functions have been proposed. One is in the final step at which a segment boundary is created by physical separation of one nascent somite from the next; the other is in creating or maintaining the difference between anterior and posterior halves of each somite. Each of these possible further roles for Notch signaling - in boundary formation and in segment polarity - seems attractive on the basis of analogies with other systems. Thus, in the Drosophila wing disc, Notch signaling plays a critical part in organizing the dorso-ventral compartment boundary [41]; and in the vertebrate hindbrain, likewise, it is involved in organizing the boundaries between rhombomeres [42]. As for segment polarity, the creation of a difference between the cells of the anterior and posterior parts of each somite could be seen as similar to the creation of differences between adjacent cells through lateral inhibition - a well-known function of Notch signaling in many different systems [1].
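The delayed negative feedback loop described above can be written as two equations: protein is translated from mRNA, and the protein represses its own transcription only after a delay covering transcription and translation. Below is a minimal Python sketch of this delay oscillator; the parameter values are illustrative, in the spirit of the zebrafish her1 modeling cited as [13], not measurements from the work discussed here.

```python
def her_oscillator(t_end=600.0, dt=0.01,
                   a=4.5,     # protein synthesis per mRNA per minute
                   b=0.23,    # protein decay rate per minute
                   c=0.23,    # mRNA decay rate per minute
                   k=33.0,    # maximal transcription rate per minute
                   p0=40.0,   # repression threshold (protein copies)
                   tau=10.0): # transcription + translation delay, minutes
    lag = int(round(tau / dt))
    m = p = 0.0
    p_hist = [0.0] * (lag + 1)   # circular buffer of past protein levels
    out = []
    for i in range(int(t_end / dt)):
        p_delayed = p_hist[i % (lag + 1)]              # approximately p(t - tau)
        dm = k / (1.0 + (p_delayed / p0) ** 2) - c * m # delayed autorepression
        dp = a * m - b * p
        m += dt * dm
        p += dt * dp
        p_hist[i % (lag + 1)] = p                      # overwrite the oldest slot
        out.append((i * dt, m, p))
    return out  # sustained mRNA/protein oscillations, period of tens of minutes
```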
Notch signaling is dispensable for boundary formation in zebrafish
It is in the anterior part of the PSM, where the oscillation of cyclic genes slows down and then halts, that cells are assigned to anterior or posterior somite compartments and clefts form, finally demarcating one somite from the next. Thus, the formation of the segment boundary and the specification of antero-posterior polarity are both processes that occur relatively late in the history of each somite, after its precursor cells have graduated to the anterior part of the PSM from the posterior as the embryo grows and extends. If the early function of Notch signaling in maintaining synchrony in the posterior PSM is disrupted, any failure in these later functions is likely to be imperceptible amid the general chaos. One can, however, test for the later functions by imposing a block of Notch signaling partway through somitogenesis. For example, one can take a zebrafish that has already formed five somites and immerse it in a DAPT solution to block Notch signaling from that time point onwards. The result is striking: the next approximately 12 somites proceed to form in the normal way, with regularly spaced boundaries, and only after that does one begin to see segmentation defects [28,29]. This shows that Notch signaling is not needed, in the zebrafish at least, for the creation of somite boundaries, and it quantitatively matches predictions based on the proposition that the only function of Notch signaling is to maintain synchrony in the posterior PSM [29].
Cleft formation correlates with the appearance of sharp boundaries of gene expression
Findings in the mouse, however, are not so clear, and there are differing schools of thought. In a series of papers [43-49], Saga and colleagues have argued that Notch signaling is indeed needed to create a sharp boundary of gene expression that is necessary to mark the future cleft between one nascent somite and the next [43,44]. Their conclusions emerge from study of a pair of transcriptional regulators - Mesp2, and the less well characterized Mesp1 - that are expressed in the anterior PSM. They seem to operate as orchestrators of the process by which the output of the somite oscillator is translated into the spatially repeating pattern of the somites [45] - a process that is disrupted in Mesp2 mutants [46]. Mesp2 is expressed dynamically in each forming somite, beginning as a one-somite-wide stripe, rapidly narrowing to a half-somite-wide stripe (which marks the future anterior compartment of the somite), then disappearing completely as the somite buds off from the PSM. In the brief window during which it is expressed, Mesp2 seems to be responsible for allocating anterior or posterior identity to the cells of the somite through activation or repression of various targets that distinguish the anterior from the posterior cells, and for regulating some of the genes required for border formation [47,48]. In particular, somite boundaries form at interfaces where cells with high expression of Mesp2 but low Notch activation confront cells in an opposite state, with high Notch activation but no expression of Mesp2. These observations strongly suggest that some sort of feedback loop involving Mesp2 and Notch signaling organizes the formation of an interface between cells with high Notch activation and cells with low Notch activation, and that this interface is necessary to define the segment boundary. Moreover, the same studies suggest that Notch signaling is involved in the restriction of the Mesp2 expression domain from the whole presumptive somite to just its anterior half [48,49], and thus essential for the establishment of the anterior-posterior polarity of each new somite. However, these observations do not amount to firm proof: correlation need not imply causation, and Mesp2, acting independently of Notch activity, could be the critical factor. The pattern of Mesp2 expression is indeed altered in Notch pathway mutants [43], but it is hard to be sure whether this reflects a function of Notch signaling in the anterior PSM where Mesp2 is expressed, or merely the aftermath of the disorder created by prior failure of Notch signaling in the posterior PSM.
Notch signaling is required to give each somite its antero-posterior polarity
Feller et al. [50] tested the role of Notch signaling in the mouse PSM in a different way and came to a somewhat different view. When they artificially expressed NICD, the intracellular transcriptional regulator domain of Notch, throughout the entire PSM, they found that many somite boundaries still formed, despite the absence of any interface between cells with differing levels of Notch activation; these boundaries, however, were irregularly spaced, and the resulting irregular blocks of somite tissue lacked the normal antero-posterior polarity. The same was seen when Notch signaling, instead of being artificially activated, was inactivated by mutations in Notch1, or Dll1 (Delta1), or Pofut1 (coding for an enzyme that fucosylates Notch and is required for Notch function). In fact, a similar outcome is seen in zebrafish Notch pathway mutants - clefts eventually appear in the mesoderm, dividing it up into somites, but these clefts form later than normal and are crooked and irregularly spaced. The somitic mesoderm, it seems, has a propensity to split up into tissue blocks and will do so even if the segmentation clock is broken and Notch signaling defective. The role of the clock is to control the pattern of this splitting, ensuring that the clefts are regularly spaced, and to confer on each somite a regular antero-posterior polarity. For this last step, it seems that Notch signaling is required directly and not merely to keep the segmentation clocks of the individual cells ticking synchronously in the run-up to overt segmentation; for in the mice where NICD is expressed throughout the tissue, each somite has a double-posterior character, whereas when Notch fails each somite has a double-anterior character [50].
Notch signaling is used repeatedly in the somite cell lineage
The formation of the somites is not the end of the involvement of Notch signaling in the development of the somitic cell lineage. For example, skeletal muscle tissue, which arises from the somites, also depends on this pathway to control the differentiation of myoblasts and satellite cells and their incorporation into multinucleate muscle fibers [51-54]. Like that other ubiquitous communication device, the mobile phone network, the Notch signaling pathway has been recruited for many different purposes - for the simple delivery of instructions from one individual to another, for competitions and collaborations, for the synchronization of individual actions, and for the playing of the tunes to which cells dance.
20. ...and the notch pathway function within the oscillator mechanism that regulates zebrafish somitogenesis. Development 2002, 129:1175-1183.
21. Jouve C, Palmeirim I, Henrique D, Beckers J, Gossler A, Ish-Horowicz D, Pourquie O: Notch signalling is required for cyclic expression of the hairy-like gene HES1 in the presomitic mesoderm. Development 2000, 127:1421-1429.
22. Oates AC, Ho RK: Hairy/E(spl)-related (Her) genes are central components of the segmentation oscillator and display redundancy with the Delta/Notch signaling pathway in the formation of anterior segmental boundaries in the zebrafish.
| 5,665.8 | 2009-05-22T00:00:00.000 | [ "Biology" ] |
TWIK-2, a New Weak Inward Rectifying Member of the Tandem Pore Domain Potassium Channel Family*
Potassium channels are found in all mammalian cell types, and they perform many distinct functions in both excitable and non-excitable cells. These functions are subserved by several different families of potassium channels distinguishable by primary sequence features as well as by physiological characteristics. Of these families, the tandem pore domain potassium channels are a new and distinct class, primarily distinguished by the presence of two pore-forming domains within a single polypeptide chain. We have cloned a new member of this family, TWIK-2, from a human brain cDNA library. Primary sequence analysis of TWIK-2 shows that it is most closely related to TWIK-1, especially in the pore-forming domains. Northern blot analysis reveals the expression of TWIK-2 in all human tissues assayed except skeletal muscle. Human TWIK-2 expressed heterologously in Xenopus oocytes is a non-inactivating weak inward rectifier with channel properties similar to TWIK-1. Pharmacologically, TWIK-2 channels are distinct from TWIK-1 channels in their response to quinidine, quinine, and barium. TWIK-2 is inhibited by intracellular, but not extracellular, acidification. This new clone reveals the existence of a subfamily in the tandem pore domain potassium channel family with weak inward rectification properties.
Potassium (K+) channels are the most diverse class of ion channels discovered. In humans over 50 distinct channels have been identified in both excitable and non-excitable cell types. These channels are involved in the control of a variety of cellular functions, including neuronal firing, neurotransmitter and hormone secretion, and cellular proliferation. All of these channels contain a pore-forming (P) domain with a sequence motif common to all K+ channels, the tripeptide sequence G(Y/F)G, found in this domain. The residues immediately adjacent to either side of this motif are also well conserved. It is believed that four of these P domains contribute to the formation of a functional K+-conductive pore (1-3).
Analysis of the K+ channels in the Caenorhabditis elegans genome suggests eight distinct families of K+ channels (4). These channels contain 2, 4, or 6 transmembrane domains and 1 or 2 P domains, with both termini oriented toward the cytoplasm. The presence of other features in the primary sequence differentiates the families into distinctive functional subtypes. For example, among the six transmembrane domain channels, members of the Kv channel family have a charged S4 transmembrane domain characteristic of voltage-gated channels. K+ channels with only two transmembrane domains surrounding a single P domain are all inward rectifiers (Kir). Mammalian homologues of each of these classes have been identified.
The most recently discovered class of potassium channels is the tandem pore domain potassium (Kt) channel family. Kt channels have four putative transmembrane domains and two P domains. The P domains are separated by the second and third transmembrane domains. Although all these Kt channels have a conserved core region between M1 and M4, the amino- and carboxyl-terminal domains are quite diverse. Kt channels represent the most abundant class of K+ channels in C. elegans, with at least 39 distinct members (5). Four mammalian Kt channels have been cloned to date. All the mammalian channels in this family exhibit nearly instantaneous, non-inactivating K+ currents. The first mammalian Kt channel discovered was the weak inward rectifier TWIK-1 (6). Although TWIK-1 is sensitive to some classical K+ channel blockers (e.g. Ba2+), it is relatively insensitive to others (e.g. tetraethylammonium and 4-aminopyridine). TWIK-1 activity is indirectly inhibited by acidification and stimulated by the phorbol ester PMA. The second pore domain, P2, has an atypical sequence, GLG, often seen in the P2 domains of Kt channels in C. elegans. Northern blot analysis shows that TWIK-1 mRNA is widely expressed.
We report here the discovery of a new Kt channel cloned from human brain. We have named this channel TWIK-2 on the basis of three criteria. First, TWIK-2 shares significant sequence homology with TWIK-1, especially in the P domains. Second, the tissue distribution of TWIK-2 is similar to that of TWIK-1. Finally, the TWIK-2 channel is a weak inward rectifier whose electrophysiologic properties are similar to those of TWIK-1.
EXPERIMENTAL PROCEDURES
Molecular Cloning of TWIK-2-The basic local alignment search tool (7) was used to identify human expressed sequence tag entries similar to the mammalian Kt channels TWIK-1, TREK-1, and TASK. The translation product of one of the returned ESTs, AA604914, is most closely related to human TWIK-1 (E = 3 × 10^-5) and human TASK (E = 2 × 10^-5) by BLAST-X analysis. No overlapping EST clones were found by BLAST-N analysis.
* This work was supported in part by Grants GM51372 and GM57740 from the National Institutes of Health.
The nucleotide sequence reported in this paper has been submitted to the GenBank/EBI Data Bank with accession number AF117708.
The sequence reported for AA604914 was used to isolate cDNA clones using a modified solution capture method (8). Briefly, three oligonucleotides were designed, with two unmodified oligonucleotides flanking a third, biotinylated 30-mer oligonucleotide (Keystone, Camarillo, CA). These oligonucleotides were incubated with a NaOH-denatured human brain SuperScript cDNA library (Life Technologies, Inc.). The solution was neutralized, and the hybridization proceeded overnight at 40°C. The annealed plasmid-biotinylated oligonucleotide complexes were isolated with streptavidin-coated magnetic beads (Dynal, Lake Success, NY). The plasmids were eluted from the beads and were used to transform bacteria. The resultant bacterial colonies were transferred onto Hybond-N filters (Amersham Pharmacia Biotech) and probed with a radiolabeled oligonucleotide derived from AA604914. After extensive washing in 2× SSPE, 0.5% SDS, the filters were exposed to BioMax film (Amersham Pharmacia Biotech). Individual bacterial clones were picked from the original dish, and the plasmid was recovered from overnight minicultures. Bidirectional sequencing demonstrated an ORF that contained the nucleotide sequence found in EST AA604914. Sequence analysis and alignments were performed with the LaserGene suite (DNAStar, Madison, WI), National Center for Biotechnology Information resources (www.ncbi.nlm.nih.gov) and the Prosite internet resource on the ExPASy server (expasy.hcuge.ch). The TWIK-2 cDNA was subcloned into a Xenopus oocyte expression plasmid, pOX, a generous gift from Dr. Tim Jegla (Stanford University). This vector contains the 5′- and 3′-untranslated regions of the Xenopus β-globin gene flanking the insert (9).
Northern Blot Analysis-A 510-base pair fragment was amplified from the TWIK-2 sequence by the polymerase chain reaction and was used to probe a human multiple tissue Northern blot (CLONTECH, Palo Alto, CA). The cDNA fragment was labeled with α-[32P]dCTP (Amersham Pharmacia Biotech) with the Rediprime labeling kit (Amersham Pharmacia Biotech) and hybridized to the Northern blot membrane with the high efficiency hybridization system (Molecular Research Center, Cincinnati, OH) under conditions specified by the manufacturer. The washed blot was exposed to film (Amersham Pharmacia Biotech).
Transcript Preparation and Oocyte Electrophysiology-Transcripts were synthesized from linearized cDNA templates with T3 RNA polymerase (mMessage mMachine; Ambion, Austin, TX). Defolliculated Xenopus laevis oocytes were injected with 1-15 ng of cRNA. Standard methods for oocyte preparation and maintenance were used (10). One to four days after injection, two-electrode voltage clamp recordings were performed at room temperature. Voltage pulse protocols were applied from a holding potential of -80 mV with 1-s voltage pulse steps ranging from -140 to +40 mV in 20-mV increments, with 1.5-s interpulse intervals. Except where noted, all two-electrode voltage clamp experiments were performed with frog Ringer's solution (FR; composition in mM: 115 NaCl, 2.5 KCl, 1.8 CaCl2, 10 HEPES, pH 7.6), high potassium frog Ringer's solution (HK; composition in mM: 2.5 NaCl, 115 KCl, 1.8 CaCl2, 10 HEPES, pH 7.6) or ND96 (composition in mM: 96 NaCl, 2.0 KCl, 1.8 CaCl2, 1.0 MgCl2, 5 HEPES, pH 7.6) as perfusate. Recordings were obtained in a 25-µl recording chamber at flow rates of 1-4 ml/min. Water-injected oocytes were used as controls, undergoing the same treatments as transcript-injected oocytes. Except where noted, at least three oocytes were assayed for each experimental condition reported.
Signals were filtered with an 8-pole low-pass Bessel filter (Frequency Devices, Haverhill, MA) set at a 20-100-Hz cut-off prior to sampling at 40-1000 Hz. To quantify responses, leakage currents of water-injected oocytes were averaged and subtracted from currents of cRNA-injected oocytes. Relative response is defined as the current measured for the -80 to +40 mV pulse during the treatment condition compared with control.
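A minimal sketch of this leak-subtraction and relative-response bookkeeping, assuming current traces stored as arrays (array names are illustrative):

```python
import numpy as np

def leak_subtract(crna_traces, water_traces):
    """Subtract the averaged water-injected (leak) trace from each
    cRNA-injected trace recorded with the same pulse protocol."""
    leak = np.mean(np.asarray(water_traces), axis=0)
    return np.asarray(crna_traces) - leak

def relative_response(i_treated, i_control):
    """Current at the -80 to +40 mV pulse during treatment, relative
    to the control measurement."""
    return i_treated / i_control
```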
RESULTS
Primary Structure of TWIK-2-We analyzed eight positive clones from our solution hybridization screen. Six of the eight clones contained inserts of 2.5-2.7 kb. Two of these positives were sequenced. They contained an identical ORF of 939 base pairs encoding a 313-amino acid polypeptide with primary sequence features of a novel human Kt channel (Figs. 1 and 2; GenBank accession number AF117708). The gene encoded by this ORF was named TWIK-2 (see below). The predicted molecular mass of TWIK-2 is 33,751 daltons. Analysis of this clone reveals two upstream in-frame termination codons and an adequate translation initiation sequence. Two distinct P domains are located between pairs of transmembrane domains identified by hydropathy analysis (Kyte/Doolittle). This core region is preceded by a 3-amino acid amino-terminal domain and followed by a 57-amino acid carboxyl-terminal domain. The arrangement of P and transmembrane domains, coupled with the lack of an amino-terminal signal sequence and a large domain between transmembrane domain M1 and the first P domain, suggests a transmembrane topology identical to that of other Kt channels.
Of the known mammalian Kt channels, TWIK-2 has structural features most similar to TWIK-1. In TWIK-1, a specific cysteine residue found in the M1-P1 loop (Cys-69) is implicated in the formation of homodimers by extracellular disulfide bond formation (11). A similarly placed cysteine is found in TWIK-2 (Cys-53). TWIK-2 has two N-glycosylation consensus sequences (Asn-79 and Asn-85; Fig. 2), whereas TWIK-1 has a single glycosylation site in the M1-P1 loop. Two potential phosphorylation sites have been identified in TWIK-2 (Fig. 2). Like TWIK-1, TWIK-2 has a single casein kinase II recognition sequence (Ser-304) but lacks a Ca2+-calmodulin kinase phosphorylation site. TWIK-2 has a potential PKC phosphorylation site (Ser-158) that is located in the putative M2-M3 cytoplasmic loop. In TWIK-1 this PKC site is also found in the M2-M3 loop, at residue Thr-161. We have named the new gene TWIK-2 because of its greatest overall sequence similarity with TWIK-1 compared with other Kt channels (53.8 versus 24-33%). In addition, 19 consecutive amino acids in its P2 domain are identical with amino acids within the P2 domain of TWIK-1. The P1 and P2 domains of TWIK-2 are more closely related to their counterparts in TWIK-1 (72 and 88% amino acid identity, respectively) than to those of any other Kt channel (Fig. 3, A and B).
Expression Pattern of TWIK-2-To determine the expression pattern of TWIK-2 in human tissues, a human multiple tissue Northern blot was hybridized with a cDNA probe derived from the TWIK-2 nucleotide sequence. A 2.6-kb species, similar in size to the full-length TWIK-2 cDNA described above, was present in a variety of tissues, primarily in placenta, heart, colon, and spleen (Fig. 4). Lower levels of TWIK-2 mRNA could be detected in peripheral blood leukocytes, lung, liver, kidney, and thymus, with the lowest detectable levels found in brain. Two additional species, 6.8 and 1.35 kb, were also detected. The relative expression levels for these two species in these tissues closely paralleled the level of the 2.6-kb species. Significant TWIK-2 expression was also detected in pancreas (data not shown). TWIK-2 could not be detected in skeletal muscle by Northern blot analysis. Although the hierarchy of expression levels in these tissues is different for TWIK-1 and TWIK-2, TWIK-2 is expressed in the tissues in which TWIK-1 expression has been detected (6).
Pharmacology-Pharmacologic modulators of other cloned tandem pore domain potassium channels were examined, in addition to non-selective potassium channel inhibitors. To test the sensitivity of TWIK-2 currents to changes in extracellular pH, oocytes expressing TWIK-2 were incubated in FR whose pH was titrated with either HCl or NaOH. TWIK-2 currents were not sensitive to extracellular pH over the range of pH 5.6 to 8.4 (n = 4-6 oocytes at each extracellular pH, data not shown). However, pharmacologic treatments (1 mM 2,4-dinitrophenol or 100% carbon dioxide) known to lower intracellular pH (13) reduced TWIK-2 currents substantially (Fig. 6A).
Unlike TREK-1 currents, TWIK-2 currents were not inhibited by N-methyl-D-glucamine substitution for sodium in frog Ringer's solution (n = 5, data not shown). The following pharmacologic treatments had no or only minimal effect on TWIK-2 currents (data not shown): 4-aminopyridine (1 mM, n = 3), cesium chloride (1 mM, n = 3), ethanol (17 mM, n = 3), and n-octanol.
DISCUSSION
We describe the cloning and functional expression of a new member of the mammalian Kt family of channels, TWIK-2. This new channel is the smallest of the Kt channels yet cloned but possesses structural features that define this family of potassium channels, i.e. two P domains, each bounded by a pair of transmembrane domains. For each of the P domains, TWIK-2 is most closely related to TWIK-1. The P2 domain in TWIK-2 is strikingly homologous to the P2 domain in TWIK-1 (Fig. 3B). Only TWIK-1 and TWIK-2 have a GLG sequence motif in the second P domain; all of the other cloned mammalian Kt channels have a GFG sequence in this position.
Besides the high primary sequence similarity of TWIK-2 to TWIK-1, we also found physiologic and pharmacologic similarities between the two channels. Both channels show weak inward rectification and have similar current-voltage relationships. Like TWIK-1 channels, TWIK-2 channels are inhibited by intracellular acidity. The only K⁺ channels inhibited by intracellular acidity are TWIK-1 (6) and TOK1 (14), an outwardly rectifying K⁺ channel from Saccharomyces cerevisiae (14, 15). In contrast to TOK1 and TWIK-1, TWIK-2 is not sensitive to barium. Although our data suggest a role for modulation of TWIK-2 by PKC, the effect we observed in oocytes was small. Treatment of oocytes expressing TWIK-1 with the phorbol ester PMA has been shown to activate TWIK-1 channel activity, presumably by activating PKC (6). The single PKC site in TWIK-1 is located within the M2-M3 loop, but mutation of this site (Thr-161) to an alanine residue did not alter the PMA sensitivity of TWIK-1. The PKC site in TWIK-2 is in the same loop. Therefore, like TWIK-1, the sensitivity of TWIK-2 to PMA may reflect an indirect effect of PKC on the channel. TWIK-2 is distinct from the other members of the K⁺ family. The second cloned mammalian K⁺ channel, TREK-1, is an outward rectifier K⁺ channel (16). Expressed in many brain regions and the lung, as well as in kidney, heart, and skeletal muscle, this channel, unlike TWIK-2, is insensitive to acidification. TASK, the first mammalian K⁺ channel found to satisfy all the characteristics of a background channel, is a voltage-insensitive open rectifier whose activity is inhibited by extracellular acidification (10, 17). Most recently, a fourth mammalian K⁺ channel, TRAAK, was reported (18). This open rectifier also has characteristics of a background K⁺ conductance and is expressed only in the central nervous system, including the retina and spinal cord.
Like TWIK-1, TWIK-2 has a widespread distribution. Of the tissues examined, TWIK-2 expression was absent from skeletal muscle only. TWIK-2 is abundantly expressed in pancreatic tissue, and heterologous TWIK-2 currents resemble physiologic base-line potassium channels that maintain the resting membrane potential of acinar cells (19-21). Like the physiologic base-line potassium channels expressed in pancreatic acinar cells, TWIK-2 is inhibited by intracellular acidity but is not sensitive to a number of potassium channel inhibitors, including tetraethylammonium, Ba²⁺, and 4-aminopyridine (20). In addition, barium-insensitive potassium currents have been described in the kidney (22-24). Weak inward rectifiers have also been reported in hepatocytes (25) as well as in eosinophils (26).
TWIK-2 is a weak inward rectifier whose zero current potential follows the reversal potential for K⁺. For some potassium channels, the primary sequence provides clues to potential mechanisms of inward rectification. Intracellular polyamines and magnesium control the degree of inward rectification of Kir channels by interacting with acidic residues within the second transmembrane domain (27, 28). Although the degree of inward rectification of TWIK-1 is known to depend on intracellular magnesium levels, in general the mechanisms of rectification within the tandem pore domain family remain unknown.
FIG. 6. Acid and barium sensitivity of TWIK-2. A, intracellular pH sensitivity of TWIK-2. Mean current-voltage curves of TWIK-2-injected oocytes in the presence or absence of carbon dioxide (100%, n = 5) or 2,4-dinitrophenol (DNP) (1 mM, n = 6) perfused with high potassium frog Ringer's solution (HK). Currents from control water-injected oocytes were unchanged by these treatments. B, effect of barium on TWIK-2 currents. Mean current-voltage curves of TWIK-2-injected oocytes in the presence or absence of barium (1 mM, n = 5). Barium dose-response data are shown in the bar graph of B. Currents (−140 mV pulse in HK) have been normalized to control currents. 2,4-Dinitrophenol (DNP) was applied for 6-10 min prior to application of the voltage pulse. Other treatments were applied for 2 min prior to the pulse protocol. Error bars represent S.E. (n values shown above).
One area of future study is the formation of K⁺ channel dimers. TWIK-1 forms homodimers via a disulfide bridge between subunits involving identical extracellular cysteine residues (Cys-69) found in the M1-P1 linker region (11). TWIK-2 has a similarly placed cysteine residue, Cys-53. A TWIK-2 mutant in which Cys-53 has been mutated to an alanine residue did not show channel activity after heterologous expression in oocytes (data not shown). TWIK-2 may therefore form functional homodimeric channels via a disulfide bridge. Northern blot analysis of TWIK-2 expression demonstrates significant overlap with the pattern of TWIK-1 expression. It is therefore possible that TWIK-1/TWIK-2 heterodimers may form in those tissues, suggesting a mechanism for further functional diversity within the tandem pore domain K⁺ channel family.
"Biology"
] |
Low-Complexity PAPR Reduction by Coded Data Insertion on DVB-T2 Reserved Carriers
Digital terrestrial video broadcasting systems based on orthogonal frequency division multiplexing (OFDM), such as the second generation terrestrial digital video broadcasting standard (DVB-T2), often include carriers reserved for peak-to-average power ratio (PAPR) reduction. This paper presents a low-complexity PAPR reduction method that inserts coded data onto a subset of the reserved carriers. Using differential binary phase shift keying (DBPSK) with repetition coding, the proposed method generates a time-domain peak-cancellation signal that also carries additional information. The carrier selection and the signal amplitudes of the coded DBPSK data are optimized while keeping the regrowth of other peaks under control. Simulation results show that the proposed algorithm outperforms the tone reservation (TR) algorithm of the DVB-T2 standard in terms of both PAPR reduction performance and computational complexity. The proposed algorithm is also significantly less complex than the clipping-and-filtering (CF) algorithm and the active constellation extension (ACE) algorithm of the DVB-T2 standard, paying a small price in terms of PAPR. Notably, compatibility with existing DVB-T2 receivers is fully guaranteed, since the proposed method modifies only the reserved carriers.
I. INTRODUCTION
Digital terrestrial television systems such as those standardized by the Digital Video Broadcasting (DVB) Consortium, the Advanced Television Systems Committee (ATSC), and the Digital Broadcasting Experts Group (DiBEG) have been designed to deliver high definition video [1], [2], [3]. The physical layers of DVB-terrestrial (DVB-T), second generation DVB-T (DVB-T2) [4], ATSC, ATSC 3.0, ISDB-terrestrial (ISDB-T), and ISDB-T Brazil (ISDB-Tb) are based upon coded orthogonal frequency-division multiplexing (OFDM), which can deal with communication channels impaired by multipath, fading, and non-line-of-sight (NLOS) conditions. However, one of the main problems of OFDM, at the transmission side, is the high peak-to-average power ratio (PAPR) generated by the possibility that several OFDM carriers may add in phase in the time domain [5]. Thus, differently from constant-envelope single-carrier modulations, characterized by a low PAPR, multicarrier modulations like OFDM suffer from high PAPR, especially when there are thousands of carriers. Unfortunately, signals with large PAPR lead to a reduced efficiency of the power amplifier and, ultimately, to a waste of electrical power [6].
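The peak buildup described above is easy to reproduce numerically. The following minimal Python sketch (ours, not from the paper; QPSK symbols on a hypothetical carrier count rather than the DVB-T2 frame structure) estimates the PAPR of a single OFDM symbol:

```python
# Minimal sketch (ours, not from the paper): why OFDM has high PAPR.
# QPSK symbols on N hypothetical carriers; an IFFT gives the time-domain
# samples, whose peak power occasionally far exceeds the average power.
import numpy as np

rng = np.random.default_rng(0)
N = 2048                                           # illustrative carrier count
X = (2 * rng.integers(0, 2, N) - 1) + 1j * (2 * rng.integers(0, 2, N) - 1)
x = np.fft.ifft(X)                                 # time-domain OFDM symbol

papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
print(f"PAPR = {10 * np.log10(papr):.1f} dB")      # typically around 10-12 dB
```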
In the last two decades, several solutions for PAPR reduction have been proposed in the literature. A survey of different PAPR reduction methods can be found in [6] and [7]. For the sake of brevity, in this introduction we briefly mention the main methods, with their advantages and drawbacks, including some recent techniques. Among low-complexity methods, clipping-and-filtering (CF) [8] is an effective approach to reducing the PAPR, at the price of significant in-band distortions that deteriorate the bit error rate (BER) performance [6]. Using optimized CF, PAPR reduction may be improved at the expense of increased complexity [9], [10] or side information [11]. Nonlinear companding [12], [13], [14], [15], [16], [17], [18] has low complexity, but, similarly to clipping, it introduces in-band distortions. Near-complementary sequences can also be used to reduce the PAPR [19], as can optimized waveform design [20], [21], [22]. More complex methods, such as channel coding [6], [23], [24], [25], discrete Fourier transform (DFT) precoding [26], [27], other transform coding [28], rotated constellations [29], index modulation with dithering [30], deep learning [31], and genetic algorithms [32], can be used to reduce the PAPR, but these methods have a detrimental effect on the data rate or on the complexity. Some other methods require the transmission of side information together with the original data, which makes them unsuitable for the broadcasting standards already in use [33], [34]. For instance, the partial transmit sequence (PTS) technique splits the signal into distinct blocks, which are then recombined with optimal phase offsets to reduce the PAPR [35], [36], [37], [38]. Clearly, in this case, both the segmentation strategy and the phase offsets must be known at the receiver to reconstruct the original signal [39]. In contrast, the selective mapping (SLM) technique requires the transmitter to generate several different versions of the data signal, each one multiplied by a random phase offset sequence: the version with the smallest PAPR is selected for transmission [40], [41]. Even in this case, the optimal phase sequence must be communicated to the receiver. In addition, some other techniques optimize the position of null or free carriers [42], [43]: this is not suitable for DVB-T2, where the position of the reserved carriers is predetermined by the standard [4].
The TR approach has been formulated as a convex linear programming (LP) problem, whose closed-form solution, as well as a simplified algorithm based on gradient descent, was proposed by Tellado [44]. Among the different variants of TR PAPR reduction methods that have been proposed, a first set is based on splitting the reserved carriers into different groups. For instance, Mroué et al. [45] control the power of each group, so as to cancel multiple peaks at each iteration. Bulusu et al. [46] also use multiple groups, with minimization of many peaks at each iteration. In addition, Lahbabi et al. [47] employ a group-wise TR to limit also complexity and latency, by simplifying the peak search and optimizing the inserted signal phases.
A second set of variants of TR methods makes use of approximations or iterative methods to reduce complexity. For instance, Li et al. [48] determine the peak canceling signal through a least-squares approximation of the original clipping noise. On the other hand, Wang et al. [49] adopt a fast iterative shrinkage-thresholding algorithm (FISTA) based on proximal maps, which has reduced complexity with respect to the original convex LP problem. Moreover, El Hassan et al. derive novel theoretical expressions for the error vector magnitude (EVM) of a PAPR-reduced OFDM signal, when TR with either gradient descent [50] or quadratic programming [51], [52] is applied. With a similar approach, Liu et al. [53] apply the alternate direction method of multipliers (ADMM) to the convex LP TR method, so as to relieve some of its complexity.
In addition to the variants previously listed, other TR-based PAPR reduction algorithms have been proposed. One example is that of Mounzer et al. [54], which expands the TR method of DVB-T2 by adding more control on the power constraint and on the selected threshold. As another example, Jiang et al. [55] adopt a multiblock TR method to combat PAPR in multicarrier systems based on overlapping and filtered data blocks. Differently, Zayani et al. [56] combine TR optimization and digital predistortion steps in an iterative manner, so as to obtain a synergistic improvement in emitted signal linearity. Instead, Nguyen et al. [57] propose two different algorithms based on TR, where one step is precomputed offline and another step is applied in real time to the OFDM signal in a wireline scenario. Shehata et al. [58] use some TR methods to reduce the PAPR of compressed OFDM signals in cloud-based radio access networks, opening a new application field for TR and CF methods. PAPR reduction by TR optimization has also been proposed for MIMO-OFDM radar processing, for example by Wu et al. in [59].
A limiting factor of the TR methods is the reduced number of available reserved carriers in existing systems: in order to maintain backward compatibility, the PAPR performance of TR methods is somewhat limited by the number of reserved carriers. On the other hand, ACE [60], [61], [62], [63] acts on the quadrature amplitude modulation (QAM) of the data carriers and modifies some selected edge points of the QAM constellation by extending them outwards: this offset does not degrade the BER performance at the receiver, but reduces the peak amplitudes at the transmitter. Both TR and ACE are included in DVB-T2 and ATSC 3.0, but their complexity can be significant [4], [6], [60], [64], [65], especially for ACE methods that modify thousands of data carriers. Hybrid schemes are also possible, such as [66], which uses CF with companding, [67], which combines SLM with ACE, [68], which mixes TR and CF, and [69], which adopts SLM, PTS, and companding. Finally, there are also hardware-based methods, such as those using a bank of amplifiers in parallel [70].
A common feature of most PAPR reduction methods is their focus on performance improvement, with complexity reduction as a secondary goal. A main motivation of our work is to give equal importance to both performance and complexity. We aim at PAPR reduction, with limited complexity as a necessary goal. Specifically, this paper proposes a low-complexity TR-like PAPR reduction method for the DVB-T2 standard [4]. From the complexity viewpoint, our proposal is notably cheaper than the conventional TR and ACE methods of DVB-T2, and than CF. Moreover, ACE cannot be used with other DVB-T2 features, such as rotated constellations or Alamouti's coding [4], and can produce inaccurate calculation of the log-likelihood ratios of the data bits [71].
To be more specific, the proposed method conveniently modulates a selected subset of the reserved carriers to reduce the largest peaks of the time-domain OFDM signal, while avoiding the regrowth of other peaks. Since the proposed method only modifies the reserved carriers, the error performance of the QAM data is almost unchanged, except for a 0.1 dB reduction of the data signal-to-noise ratio (SNR). In addition, the proposed method inserts coded signaling information into the reserved carriers, similarly to the transmission parameter signaling (TPS) used in DVB-T. This additional information could be useful for signaling purposes, such as extra transmission parameters, bits for future extensions, watermarking, and so on. Approaches such as SLM and PTS allow for dramatic improvements in terms of PAPR attenuation; however, this comes at the expense of a reduced spectral efficiency [6]. The proposed method, instead, offers greater flexibility, since not only is the PAPR reduced, but additional data are also included with respect to the DVB-T2 TR method. This additional data stream can be used for signaling and is designed aiming at low BER, even after modification for PAPR reduction purposes. Simulation results show that the proposed method effectively reduces the PAPR of DVB-T2 signals, with better PAPR than the conventional TR technique of the DVB-T2 standard [4].
In summary, the main contributions of this work are:
• A novel PAPR reduction method that, with respect to conventional TR, makes use of additional coded DBPSK data inserted in the reserved carriers while keeping backward compatibility with existing DVB-T2 receivers;
• A low-complexity algorithm for carrier selection and weighting optimization, by projecting the coded DBPSK signal onto the QAM data signal to reduce the peaks;
• The derivation of a closed-form BER expression for the coded DBPSK data, validated by simulations;
• A performance-complexity comparison with some representative PAPR reduction strategies for DVB-T2, showing that the proposed algorithm has reduced complexity with respect to TR, ACE, and CF, mainly because our solution is one-shot and does not require iterations.
The remainder of this paper is organized as follows. Section II briefly introduces the system model, while Section III describes the proposed algorithm in detail. In Section IV, we compare the computational complexity of the proposed method with conventional TR, ACE, and CF methods. The theoretical performance of the additional data stream inserted in the reserved carriers for purposes of PAPR reduction is presented in Section V. Section VI discusses the simulated PAPR performance and the computational complexity, and compares the proposed method with conventional TR, ACE, and CF methods. Section VII concludes the paper.
II. SYSTEM MODEL
Fig. 1 shows the considered DVB-T2 system model. After channel coding (a concatenation of BCH and LDPC codes [4]), data are multiplexed, together with pilot data and reserved carriers, in the time-frequency domain. Each OFDM symbol contains N_a = N_d + N_p + N_s active carriers, where N_d carriers transport the data, N_p carriers are pilot carriers, and N_s reserved carriers are available for the insertion of a correction signal to reduce the signal peaks: we will use these reserved carriers also to convey additional signaling bits.
We now introduce the mathematical model of the DVB-T2 data signal, omitting for simplicity the P1, P2, and FEF parts. The l-th discrete-time baseband OFDM symbol can be expressed as
x[n, l] = (1/√N_a) Σ_{k=0}^{N−1} X[k, l] e^{j2πkn/(LN)},   (1)
where X[k, l] is the frequency-domain symbol on the k-th carrier and L is the oversampling factor. The whole data signal is given by concatenating the N_F blocks, as expressed by x[n] = x[n − lL(N + N_g), l], where n ∈ {−LN_g, …, N_F L(N + N_g) − LN_g − 1} is the discrete time index that spans the whole data part and N_g is the guard interval length in samples. Using an oversampling factor L > 1, the oversampled digital signal better approximates the analog signal, so that the PAPR estimation is more accurate than for L = 1. The choice of L is a trade-off between complexity and accuracy; it was shown in [6] that L = 4 is a good choice. Note that the N_s reserved carriers can be exploited to reduce the peaks of x[n, l]. Inspired by the signaling of DVB-T [72], [73], we insert in the reserved carriers a differential binary phase-shift keying (DBPSK) signal, modulated across successive OFDM symbols, with a repetition factor of N_s in the frequency domain (Fig. 2). Since the positions of the DVB-T2 reserved carriers periodically repeat every D OFDM symbols [4], in a frame with N_F OFDM symbols we use D different bit sequences, each with length N_F/D symbols. We denote with b_{q,d} the q-th bit of the d-th binary sequence (b_{0,d} = 0 is the reference bit of the d-th sequence). The additional DBPSK-modulated data signal is constructed in three steps, denoted (2)-(4): (2) represents the mapping operation, (3) is the frequency-domain repetition coding with scrambling, and (4) stands for the differential encoding. The reserved carrier symbols (2)-(4) are multiplexed into the frequency-domain signal X[k, l] used in (1) by means of the indexing function k = K_s[i, l], as expressed by
X[K_s[i, l], l] = X_s[i, l].   (5)
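A compact sketch of the three construction steps, under our own naming (the exact DVB-T2 scrambling sequence and carrier indexing are not reproduced; the scrambler below is a placeholder):

```python
# Illustrative sketch of steps (2)-(4): BPSK mapping, frequency-domain
# repetition with scrambling, and differential encoding across OFDM symbols.
# The scrambler is a placeholder +/-1 sequence, not the standard's one.
import numpy as np

rng = np.random.default_rng(1)
Ns = 72                                    # reserved carriers per OFDM symbol
scramble = 2 * rng.integers(0, 2, Ns) - 1  # hypothetical scrambling sequence

def dbpsk_symbols(bits):
    """Return one length-Ns vector of +/-1 per OFDM symbol (reference first)."""
    ref = np.ones(Ns)                      # reference symbol, b_{0,d} = 0
    out = [ref * scramble]
    for b in bits:
        ref = ref * (1 - 2 * b)            # differential encoding: bit 1 flips phase
        out.append(ref * scramble)         # one bit repeated over all Ns carriers
    return out

frames = dbpsk_symbols([0, 1, 1, 0])       # toy bit sequence b_{1,d}..b_{4,d}
```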
III. PAPR REDUCTION ALGORITHM
This section explains how the data signal (2)-(4), inserted on the DVB-T2 reserved carriers (5), is modified by the proposed algorithm to reduce the PAPR of the generated DVB-T2 signal (1). The idea of the proposed algorithm is to conveniently select a subset K_H of the index set {K_s[i, l]}_{i=0}^{N_s−1} of reserved carriers, whose amplitude will be amplified, and to simultaneously clip to zero the amplitudes on the complementary set K̄_H of reserved carriers. Our method will select the subset K_H and the associated amplitude weight in such a way as to reduce the PAPR. The proposed method does not destroy the information contained in X_s[i, l], because the binary phase is left unchanged on the carriers belonging to K_H, while repetition coding will help to recover the DBPSK data even though the carriers belonging to K̄_H will be zeroed. Mathematically, the proposed algorithm replaces (5) with
X[K_s[i, l], l] = W[l] H[i, l] X_s[i, l],   (6)
where H[i, l] is a carrier selection function that can be 0 or 1 and W[l] is an amplitude weight common to all the selected carriers. From now on, to simplify the notation, we omit the OFDM symbol index l whenever possible.
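In code, applying (6) amounts to a single masked write onto the reserved-carrier bins; a sketch under our own naming:

```python
# Sketch of (6): the selected reserved carriers keep their DBPSK phase and
# are scaled by the common weight W; the de-selected ones are clipped to zero.
import numpy as np

def apply_osf_weight(X, Ks, Xs, H, W):
    """X: frequency-domain OFDM symbol (complex array); Ks: reserved-carrier
    indices; Xs: DBPSK symbols (+/-1); H: 0/1 selection; W: positive weight."""
    X = X.copy()
    X[Ks] = W * H * Xs        # H[i] = 0 zeroes carrier i, H[i] = 1 amplifies it
    return X
```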
A. WORKING PRINCIPLE OF THE PROPOSED ALGORITHM
The working principle of the proposed algorithm is explained below. For a given information signal associated with the data carriers, the additional DBPSK data transmitted on the reserved carriers may contribute to or prevent possible peaks in the time domain, depending on whether each reserved carrier adds constructively (in phase) or destructively (in counter-phase) to the overall information signal. Thus, the proposed algorithm nullifies those reserved carriers whose signal adds constructively to the data carriers at the time indexes that correspond to the M highest peaks. At the same time, the proposed algorithm maintains the additional data on the reserved carriers whose signal adds destructively to the data carriers: this reduces the M highest peaks. Although the DBPSK signal has been nulled on some reserved carriers, the additional data can still be recovered thanks to repetition coding, because the power scaling on the other reserved carriers does not alter the phase of the DBPSK signal.
Mathematically, the proposed algorithm can be explained as follows. We start by defining the ratio between the m-th largest peak power and the average power, denoted as m-PAPR, expressed by
ρ_m = y[ñ_m] / μ_y,   (7)
where y[n] = |x[n]|² is the instantaneous power, ñ_m is the time index of the m-th largest peak, and μ_y = E{y[n]} is the average power, where E{·} denotes the expected value. Note that the 1-PAPR ρ_1 = ρ is the classic definition of PAPR, while the 2-PAPR ρ_2 would be the PAPR obtained after reducing the first peak below the second peak. Our method aims at reducing not only ρ = ρ_1, but also the m-PAPR ρ_m with m > 1, by reducing the largest peaks of the time-domain signal (1).
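A one-function sketch of the m-PAPR metric (our code, following the definition in (7)):

```python
# Sketch of the m-PAPR in (7): ratio of the m-th largest instantaneous power
# sample to the average power; m = 1 recovers the classic PAPR.
import numpy as np

def m_papr(x, m):
    y = np.abs(x) ** 2                    # instantaneous power y[n]
    peaks = np.sort(y)[::-1]              # y[n~_1] >= y[n~_2] >= ... (descending)
    return peaks[m - 1] / y.mean()        # rho_m = y[n~_m] / mu_y
```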
To keep the PAPR ρ below a given value ρ̄, the algorithm must attenuate the peaks with power above the threshold T = ρ̄ μ_y. To this purpose, we define the set
P_M = {ñ_1, …, ñ_M : y[ñ_m] > T}   (8)
of the M indexes of those peaks with instantaneous power exceeding T. Note that the set in (8) relates the number M of largest peaks to the PAPR threshold ρ̄ = T/μ_y.
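Extracting this set numerically is a one-liner; a sketch under the same naming as above:

```python
# Sketch of the peak set in (8): time indices whose instantaneous power
# exceeds T = rho_bar * mu_y; its cardinality is the parameter M.
import numpy as np

def peak_set(x, rho_bar):
    y = np.abs(x) ** 2
    T = rho_bar * y.mean()
    return np.flatnonzero(y > T)          # indices n~_m of the M offending peaks
```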
Consequently, our algorithm uses the parameter M instead of the threshold ρ̄. The m-th peak sample of (1) is the sum of two contributions: the first, s̄_m, caused by the data-plus-pilot signal, will be kept fixed, while the second, s_m, caused by the additional data on the reserved carriers, will be adjusted to reduce the PAPR. The contribution of the i-th reserved carrier to the m-th peak sample of (1) is
s_{i,m} = (1/√N_a) X_s[i] e^{j2πK_s[i]ñ_m/(LN)},   (9)
where K_s[i] is the frequency index of the i-th additional bit. Therefore, s_m = Σ_{i=0}^{N_s−1} s_{i,m} is the total contribution of all the additional signaling carriers to the m-th peak, while s̄_m is the contribution of the data and pilot carriers (Fig. 3a).
Let us denote by N_h the number of reserved carriers in the set K_H; therefore, the size of the complementary set K̄_H is N_s − N_h. Let H[i] ∈ {0, 1} be a carrier selection function, which will be determined in Section III-C, such that N_h = Σ_{i=0}^{N_s−1} H[i]. We denote by W a common positive weight that multiplies the signal component of the N_h reserved carriers selected to compensate the PAPR, while the remaining N_s − N_h reserved carriers are nulled. Hence, the time-domain power sample
y'[ñ_m] = |s̄_m + W Σ_{i=0}^{N_s−1} H[i] s_{i,m}|²   (10)
represents the numerator of the PAPR in (7) after selection and weighting. Consequently, the PAPR metric becomes
J(W, H) = max_{1≤m≤M} |s̄_m + W Σ_{i=0}^{N_s−1} H[i] s_{i,m}|².   (11)
When the number of carriers N is large, the cancellation of few peaks does not alter significantly the average power μ_y of the OFDM signal, i.e., the denominator of the PAPR in (7): this is also confirmed by the simulation results in Section VI (see Tab. 1). Hence, a PAPR reduction strategy aiming at reducing the m-PAPR (with 1 ≤ m ≤ M) in (7) should minimize J in (11). The minimization of (11) gives the optimal power weighting (OPW) W_O and the optimal selection function (OSF) H_O for m-PAPR reduction. We follow a three-step approach: first, we simplify the PAPR metric J of (11) with Ĵ, which corresponds to the sum of the m-PAPR metrics in (7); second, we find the OSF Ĥ_O using the simplified metric Ĵ; and finally we find the OPW Ŵ_O for the original metric J using the obtained OSF Ĥ_O. The obtained function Ĥ_O and the obtained weight Ŵ_O are then used in (6) to produce a PAPR-reduced signal (1).
B. FIRST STEP: METRIC SIMPLIFICATION
Our aim is to reduce the m-PAPR ρ_m in (7), for m = 1, …, M_P, with 1 ≤ M_P ≤ M. M_P is a design parameter that denotes the number of largest peaks in P_M: its value can be determined, for instance, by selecting those peaks that exceed a given threshold T_P, with T_P > T. The reason for the introduction of another design parameter M_P, with M_P ≤ M, can be explained as follows. While the largest peaks determine the PAPR, the reduction of the largest peaks may produce a regrowth of other peaks. Here we want to reduce the largest M_P peaks and, concurrently, we want to avoid the regrowth of the other M − M_P peaks. Hence we need two parameters: the largest M_P peaks will be used in Section III-C for reserved carrier selection aiming at reducing the largest peaks, while the extended set of M peaks will be used in Section III-D for optimal weighting aiming at avoiding the regrowth of other peaks.
Note that max{y[ñ_m]}_{m=1}^{M} = max{y[ñ_m]}_{m=1}^{M_P}, since the M_P peaks are the largest among the M peaks. In order to simplify the metric in (11), we use the inequality
max_{1≤m≤M_P} y[ñ_m] ≤ Σ_{m=1}^{M_P} y[ñ_m] ≤ M_P max_{1≤m≤M_P} y[ñ_m].
In the above equation, the minimization of the first term or the third term is equivalent; therefore, also the second term can be used as a cost function to minimize the PAPR. Thus, we replace the maximum in (11) with the sum, to obtain the approximation
Ĵ(W, H) = Σ_{m=1}^{M_P} |s̄_m + W Σ_{i=0}^{N_s−1} H[i] s_{i,m}|²,   (12)
which is proportional to the sum of the first M_P m-PAPR's in (7). Differently from J in (11), Ĵ in (12) uses a sum to replace the maximum: therefore, the minimization of Ĵ is easier than that of J, because the differentiation of Ĵ admits a closed-form solution.
The approximated metric (12) is explained in the following. Under the hypotheses of stationarity and ergodicity of y[n] = |x[n]|², the sum in (12) satisfies, in the limit for M_P → ∞,
Σ_{m=1}^{M_P} y[ñ_m] ∝ ∫_{T_P}^{∞} y f_Y(y) dy,   (13)
where ñ_m is the index of the M_P peaks that exceed the threshold T_P, f_Y(y) is the pdf of the instantaneous power, and ∝ means proportional to. Fig. 4 shows the pdf f_P(ρ) of the PAPR calculated as in [74] and the pdf f_Y(y) of the instantaneous power. Every PAPR reduction scheme shifts part of the PAPR pdf f_P(ρ) to the left, as shown by the dashed red curve in Fig. 4a. This is equivalent to the cancellation of the shaded area in the pdf f_Y(y) of the instantaneous power in Fig. 4b. This shaded area is expressed by
∫_{T_P}^{∞} f_Y(y) dy,   (14)
while the approximated metric Ĵ(W, H) in (12)-(13) includes a weight y inside the integral. With reference to Fig. 4b, a PAPR reduction scheme annihilates either the integral in (14), or the integral in (13), which uses a larger weight for a bigger peak.
Instead of (14), the proposed algorithm minimizes the sum of m-PAPR's in (13), because bigger peaks require more reduction than the other peaks.
C. SECOND STEP: OPTIMAL SELECTION OF THE RESERVED CARRIERS
By differentiating (12) with respect to W and setting the result to zero, we find the weight
W⁺(H) = − Σ_{m=1}^{M_P} Re{s̄*_m h_m(H)} / Σ_{m=1}^{M_P} |h_m(H)|²,   (15)
where h_m(H) = Σ_{i=0}^{N_s−1} H[i] s_{i,m} is the additional data contribution, and
p_{i,m} = Re{s̄*_m s_{i,m}}   (16)
is the projection of the contribution s_{i,m} in (9) onto the data-pilot contribution s̄_m, as shown in Fig. 3b. Note also that, by substituting (15) back into (12), the problem reduces to an integer optimization over H. The carrier selection function H[i], used in (15), is determined as follows. Since W > 0, to avoid negative values of W⁺(H) in (15), the OSF is chosen as
Ĥ_O[i] = 1 if Σ_{m=1}^{M_P} p_{i,m} < 0, and Ĥ_O[i] = 0 otherwise.   (17)
As shown in Fig. 3b, the N_h reserved carriers with negative sum projection Σ_{m=1}^{M_P} p_{i,m} contribute to a peak reduction and hence are helping; differently, the N_s − N_h reserved carriers with positive projection produce a peak increase and therefore are dangerous.
D. THIRD STEP: OPTIMAL POWER WEIGHTING
Using the OSF in (17), the optimal weight Ŵ_O in (20) for the original metric J in (11) is obtained by the search procedure of Appendix A, which evaluates the candidate weights in (18) over the M peaks in (19) and selects the minimizer. The use of the weight Ŵ_O in (20) increases the power of x[n] in (1): simulations in a typical scenario show a 2.6% or 0.1 dB power increase, as detailed in Section VI. The power increase is small because the weight is applied to the reserved carriers only. In any case, the power increase can be controlled in two ways. A first strategy consists in limiting the maximum permitted value of Ŵ_O: this can be done by excluding from (20) those weights in (18) that exceed a predetermined threshold W_th that controls the maximum allowable power increase. A second strategy simply avoids the OPW (20) and selects the weight according to a predetermined power increase: for instance, if we desire not to increase the power, we can adopt a same power weighting (SPW) strategy, which replaces Ŵ_O with W_s = √(N_s/N_h).
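A compact numerical sketch of the selection rule (17) and the closed-form weight (15) as reconstructed above (our own naming and unnormalized projections; the paper's exact normalization may differ):

```python
# Sketch of the OSF (17) and of the simplified-metric weight (15).
import numpy as np

def osf_opw(s_bar, s_im):
    """s_bar: (Mp,) data+pilot contribution to each peak;
    s_im: (Ns, Mp) contribution of each reserved carrier to each peak."""
    p = np.real(np.conj(s_bar)[None, :] * s_im)   # projections p_{i,m}
    H = (p.sum(axis=1) < 0).astype(float)         # keep carriers helping overall
    h = (H[:, None] * s_im).sum(axis=0)           # h_m(H)
    denom = np.sum(np.abs(h) ** 2)
    W = 0.0 if denom == 0 else -np.sum(np.real(np.conj(s_bar) * h)) / denom
    return H, max(W, 0.0)                         # W > 0 by construction of H
```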
E. SUMMARY OF THE PROPOSED ALGORITHM
The proposed OSF-OPW algorithm, detailed in Sections III-B, III-C, and III-D, is summarized in Algorithm 1. Since QAM data and pilot carriers are left unchanged by the algorithm, the proposed PAPR reduction algorithm will not degrade the modulation error ratio (MER) on the data carriers. Note that the proposed algorithm does not require side information at the receiver, because the receiver will perform the recovery of the DBPSK additional data without knowledge of which reserved carriers are amplified or zeroed.
IV. COMPUTATIONAL COMPLEXITY
Computational complexity has a significant effect on the power consumption of the digital section of the modulator.
Herein we evaluate the computational complexity of the proposed PAPR reduction algorithm and of some alternative techniques. In this computation, we omit the complexity of all the transmitting blocks that are common to all the PAPR reduction techniques (channel coding, interleaving, QAM mapping, multiplexing, IFFT of OFDM, cyclic prefix insertion, etc.). The complexity is calculated as the number of real multiplications (RM), real additions (RA), and value comparisons (VC) per OFDM symbol, assuming that the exponential terms of size LN for OSF-OPW and the kernels for the TR method are precomputed, and that sign inversions, multiplications by powers of two, and sign comparisons have negligible cost. In the following, we provide the total estimated cost for each compared technique (details are reported in Appendix B).
A. COMPUTATIONAL COST OF OSF-OPW
We neglect the cost of the simple binary and mapping operations for DBPSK in (2)-(4). To reduce the computational complexity, instead of using (1) with (6), the OSF-OPW algorithm is equivalently implemented by first constructing the original signal (1) with (5) and by the successive addition of a correction signal, expressed by (21), that produces the same result as (6). By combining DBPSK modulation, scrambling, and reserved carrier selection, and using an oversampling factor L = L_OSF, the proposed OSF-OPW method requires 4L_OSF·N + 2N_s·M_P + M_eff(3M + 4) + 8M RM and L_OSF·N(N_s + 1) + N_s·M_P + 4M_eff RA. In addition to the techniques described in the DVB-T2 standard, we also consider the CF algorithm of [8]. Fig. 5 also shows the combined complexity, evaluated as a weighted average of RA and RM. According to [75], the weights for RA and RM are 1.0 and 9.3264, respectively, for FPGA-based implementation, and 1.0 and 6.2711, respectively, for ASIC-based implementation [75]. In Fig. 5, we used a typical DVB-T2 setting with M_P = 10, M = 27, and M_eff ≈ 4 (the values of the other parameters are detailed in Section VI), but the results of the comparison would be similar also for other settings. From Fig. 5, it is clear that OSF-OPW has a computational advantage over the other techniques, in terms of RM and combined complexity. Although the number of RA of the proposed algorithm is somewhat larger than for TR, our method is less complex because an RM is significantly more costly than an RA [75], [76, p. 320]. As a consequence of the reduced complexity, the latency of the proposed method is not larger than that of the other compared methods.
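The combined-complexity figure of merit of Fig. 5 is simply a weighted sum; a sketch using the RM weights quoted from [75] (the sample operation counts in the usage line are illustrative, not the paper's):

```python
# Sketch of the combined complexity of Fig. 5: a weighted sum of real
# additions (RA) and real multiplications (RM), with the RM weights for
# FPGA and ASIC implementations taken from [75]; the RA weight is 1.0.
def combined_cost(ra, rm, target="fpga"):
    w_rm = {"fpga": 9.3264, "asic": 6.2711}[target]
    return ra + w_rm * rm

print(combined_cost(ra=1.0e5, rm=1.3e5))   # illustrative operation counts
```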
V. THEORETICAL BER OF THE ADDITIONAL DATA
This section focuses on the BER performance of the additional data inserted in the reserved carriers. The additional data bits on the reserved carriers are DBPSK modulated along the time direction and repeated over N_s reserved carriers. We focus on additive white Gaussian noise (AWGN) channels. If all the reserved carriers are used with the same power, the theoretical bit error rate (BER) of the additional bits is expressed as in [77, p. 740, eqs. (11.1-13)-(11.1-14)], using N_s independent channels. When some reserved carriers are nulled, the theoretical BER can be derived by generalizing (to non-identically distributed variables) the results described in Appendix B of [77, pp. 1090-1095]. In this case, for N_h = N_s/2 and a random assignment of the helping carriers, the generalization of [77, p. 1094, eq. (B-21)] produces the theoretical BER in (22), where Q_1(·, ·) is the Marcum Q function of the first order, I_n(·) is the modified Bessel function of the first kind and order n, γ = Γ/F is the SNR on the reserved carriers, Γ is the overall carrier-to-noise ratio (CNR) of the DVB-T2 signal (including the power of data, pilots, and reserved carriers), and F is a correction factor to include the power of the N_p pilot carriers. The derivation of (22) is detailed in Appendix C.
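The closed-form expression can be cross-checked by a short Monte Carlo run; a sketch of the kind of simulation involved (our code; the repetition factor and bit count are illustrative, not the paper's settings):

```python
# Monte Carlo sketch of DBPSK with repetition over AWGN, as a sanity check
# of the theoretical BER (22). n_rep plays the role of the number of active
# reserved carriers per bit; the symbol energy is normalized to 1.
import numpy as np

def dbpsk_rep_ber(snr_db, n_rep=36, n_bits=20000, seed=2):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))   # per-dimension noise std
    bits = rng.integers(0, 2, n_bits)
    prev = np.ones((n_bits, n_rep))              # reference DBPSK symbols
    curr = prev * (1 - 2 * bits)[:, None]        # differential encoding

    def awgn(shape):
        return sigma * (rng.standard_normal(shape)
                        + 1j * rng.standard_normal(shape))

    r_prev = prev + awgn(prev.shape)
    r_curr = curr + awgn(curr.shape)
    # differential detection, combining the n_rep repeated carriers
    metric = np.real(np.sum(r_curr * np.conj(r_prev), axis=1))
    return np.mean((metric < 0) != (bits == 1))

print(dbpsk_rep_ber(0.0))   # BER at 0 dB SNR per carrier
```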
VI. SIMULATION RESULTS
We choose the following DVB-T2 parameters [4]: 8K mode with N_a = 6817 active carriers, N_d = 6562 data carriers, and N_s = 72 reserved carriers; 64-QAM with code rate R_c = 3/4; and PP5 pilot pattern. M_P is the number of largest peaks used by the OSF in (12) and in (17), while M is the size of the extended set of peaks used by the OPW in (11) and in (20): indeed, the selection of the reserved carriers is guided by the largest peaks only, while the selection of the weight must ensure that also the other M − M_P peaks do not regrow. To keep complexity under control, we have assumed a fixed value M − M_P = 17 for all the cases. To control the power increase, the OSF-OPW algorithm uses a threshold W_th = 5, which produces a 2.6% increase of signal power, as shown in Tab. 1 for a typical simulation scenario. As highlighted by Fig. 6, when the number of canceled peaks rises up to M_P = 10, the PAPR reduces; then, when the number of canceled peaks increases further, the PAPR does not reduce anymore and tends to increase. This behavior can be explained as follows. When the number of canceled peaks M_P is small, the increase of M_P improves the correct identification of the helpful reserved carriers. However, since the signs of the projections p_{i,m} may be different for different peaks (i.e., for different m), when the number of canceled peaks M_P increases further, the number of reserved carriers that are helpful for all the M_P peaks reduces, and the selection of the reserved carriers inevitably must include some carriers that are dangerous for some peaks. Indeed, we should bear in mind that the number N_s = 72 of reserved carriers is low, compared with the number of active carriers N_a = 6817. Fig. 7 highlights the obtained PAPR ρ′ at F̄_P(ρ′) = 10⁻⁴ as a function of M_P. To better appreciate the behavior of the curve in Fig. 7, we have rounded the simulated PAPR results to one tenth of a dB. Fig. 7 clearly shows that the optimal value of M_P is around M_P ≈ 8, and that nearby values are nearly optimal as well. This behavior simplifies the selection of the M_P parameter. Fig. 8 compares the CCDF of the PAPR for different methods. We also include the OSF-SPW method and a random selection function (RSF) OPW method that randomly activates N_h = N_s/2 reserved carriers. The PAPR ρ′ is compared at F̄_P(ρ′) = 10⁻⁴. With respect to the original signal, the PAPR reduction of the proposed OSF-OPW algorithm is 2.1 dB, when M_P = 10 and M = 27. As shown in Fig. 7, the attempt to cancel more than M_P = 10 peaks does not reduce the PAPR further. Fig. 8 also shows that the OSF-SPW method is suboptimal, yielding a PAPR reduction of 1.0 dB only (with respect to the original signal): therefore, the proposed OPW approach outperforms the SPW approach, thereby confirming the correctness of our proposed weighting strategy. In addition, Fig. 8 reveals that the RSF-OPW approach gives a bad result with almost no PAPR reduction: this confirms the correctness of our proposed subset selection strategy. Fig. 8 also compares the PAPR CCDF for the TR and ACE methods described in the DVB-T2 standard [4, pp. 116-121]. We have exhaustively simulated both TR and ACE techniques over a wide range of parameters with a fine grid. For both TR and ACE, herein we present the results for the parameters that produce the best PAPR reduction: for TR, V_CLIP,TR = 2.9 and a maximum number of iterations I_MAX = 20; for ACE, V_CLIP,ACE = 2.2, gain G_ACE = 10, and extension limit L_E = 1.4 [4]. Since the ACE method can modify up to N_d = 6562 carriers, ACE achieves a better PAPR reduction than OSF-OPW, which can modify N_s = 72 carriers only. However, ACE is significantly more complex than OSF-OPW.
The TR technique of DVB-T2 [4] is less effective than OSF-OPW from the PAPR reduction viewpoint and is also more complex than OSF-OPW. Fig. 8 also includes the PAPR of CF with parameters M_CF = 8 iterations and V_CLIP,CF = 3.1, 3.4, and 3.7, which produce a MER of 48.8, 57.4, and 66.4 dB, respectively. For V_CLIP,CF = 3.4, the PAPR ρ′ for CF is similar to the PAPR for the proposed OSF-OPW, but CF introduces a MER of 57.4 dB and has an increased complexity, as detailed in the next sections. The CF algorithm can achieve the PAPR performance of the ACE algorithm by using V_CLIP,CF = 3.1, at the price of a MER of 48.8 dB and increased data distortion. Differently, the proposed OSF-OPW method does not alter the data. Table 1 compares the PAPR ρ′, the transmitted power μ_y modified by the effect of reserved carriers or extended constellations, and the consequent SNR loss ΔSNR_dB = 10 log₁₀ μ_y experienced by the QAM data when all the techniques transmit with the same power.
FIGURE 8. Simulated CCDF of the tested DVB-T2 PAPR reduction methods.
B. COMPLEXITY COMPARISON
Table 1 summarizes the computational costs of the algorithms for the parameters used in Fig. 8. For complexity comparison, like in [7], we use the number of RM, since a multiplication is more costly than an addition for any hardware device or software algorithm [76, p. 320]. The proposed OSF-OPW method with M_P = 10, M = 27, and M_eff ≈ 4 requires nearly 4L_OSF·N + 5M_P·N_s + 3M·M_eff ≈ 1.3 · 10⁵ RM. The DVB-T2 TR method requires on average M_TR ≈ 6.3 iterations and consequently costs about M_TR(8L_TR·N + 13N_s) ≈ 4.2 · 10⁵ RM. For ACE, we have found M_ACE ≈ 80.1 and an average number of N_d,ext = 1243.5 extended points, which is similar to the estimated value N_d(2√M_qam − 3)/M_qam ≈ 1347.5: this method is more complex and requires roughly 4L_ACE·N log₂(L_ACE·N) ≈ 2.0 · 10⁶ RM per OFDM symbol. For CF, the complexity amounts to 4M_CF·L_CF·N log₂(L_CF·N) ≈ 1.6 · 10⁷ RM per OFDM symbol. Hence the RM complexity of the OSF-OPW algorithm is approximately one third of that of TR, roughly 15 times lower than that of ACE, and approximately 120 times lower than that of CF.
C. SIMULATED BER OF THE ADDITIONAL DATA
Fig. 9 displays the BER of the DBPSK additional data on the reserved carriers. For the theoretical evaluations, we assume an AWGN channel, while for the simulations we also consider multipath channels: the Rician channel F1 and the NLOS channel P1 [4], [71]. As shown in the legend of Fig. 9, we also considered approaches that use a fixed number N_h of helpful carriers, which are those with the lowest projections. As shown in Fig. 9, the best BER performance is obtained by using all the N_s carriers, but in this case there is no PAPR reduction. The OSF-based PAPR reduction strategies null some reserved carriers and hence reduce the repetition factor of the DBPSK coded bit. Nevertheless, the BER performance is still good: thanks to repetition coding, the CNR required for a BER of 10⁻⁴ is below 3.5 dB and hence is significantly lower than the CNR > 15 dB required for the correct detection of 64-QAM data with code rate R_c = 3/4 [71]. For OSF-SPW with N_h = N_s/2, there is some difference between the theoretical curve (22) and the simulations, because in the simulations the selected N_h carriers randomly change from one OFDM symbol to the next OFDM symbol of DBPSK: therefore, the carriers that are helpful in both OFDM symbols are no longer exactly N_h/2 = N_s/4, as assumed in (22); i.e., for OSF-SPW with N_h = N_s/2, the effective repetition factor is a random variable with mean N_h/2 but with variance larger than zero, differently from (22). In multipath channels, the BER for OSF-OPW is close to the BER in the AWGN channel: this is due to the large diversity order (around N_s/4 = 18) in multipath channels.
D. SIMULATED BER OF THE QAM DATA
The simulated BER of the DVB-T2 QAM data in AWGN is shown in Fig. 10 for the tested PAPR reduction methods. Both linear and nonlinear amplification are considered: the nonlinear amplifier is an ideally predistorted amplifier operating at 4 and 5 dB of input back-off (IBO). For reference, all the simulated cases comprise also the unmodified original signal (indicated with NO in the legend of Fig. 10). The BER is estimated at the output of the LDPC decoder, assuming ideal channel knowledge, for the transmitter configuration defined before, i.e., 8K, 64-QAM, R_c = 3/4, and PP5 pilot pattern. The results of Fig. 10 show that, in the linear case, TR gives the best performance, followed by the original signal, ACE, and CF, which are practically equivalent. In this condition, the OSF-OPW method has a loss of 0.1 dB. The BER improvement of TR and the loss of OSF-OPW with respect to the original signal can be explained by how the two methods modify the reserved carriers. Indeed, while TR modifies the reserved carriers with a correction signal that has a relative power lower than that of the data, OSF-OPW does the opposite. Thus, in total, more power is dedicated to the QAM data for TR, while less power is available when using OSF-OPW. This power loss of OSF-OPW is the price to be paid for giving increased robustness to the additional data on the reserved carriers. Indeed, the comparison of Fig. 10 with Fig. 9 shows that the additional data of OSF-OPW achieve a BER of 10⁻⁵ when the SNR is about 5.5 dB, while the QAM data achieve the same BER when the SNR is larger than 15.5 dB. Therefore, the OSF-OPW method trades the energy loss (0.1 dB) on the QAM data for the increased reliability of the additional data (more than 10 dB gain). Fig. 10 also shows that, at an IBO of 5 dB, ACE stands out and achieves the best performance, with TR practically keeping the same advantage with respect to the original signal and CF. For the reasons detailed before, OSF-OPW has a power loss with respect to the original signal (however, lower than in the previous case). The BER improvement of ACE is due to a combination of factors. First, ACE extends the QAM points located along the border of the constellation square, strengthening these points against noise and other impairments. In other words, the ACE method improves the performance of thousands of QAM data carriers, while the OSF-OPW and TR methods act on dozens of reserved carriers only. Second, the improved PAPR reduction of ACE yields a reduced nonlinear distortion, which produces a BER improvement with respect to the other methods. In the same Fig. 10, we have also simulated 4 dB of IBO. The curves show the same performance ranking as at 5 dB, with a loss for all methods due to the higher level of nonlinear noise. With respect to 5 dB of IBO, the performance advantage of ACE is more evident, and we note that OSF-OPW keeps the same performance loss with respect to the original signal.
In summary, OSF-OPW is able to achieve a similar BER performance to that of the other methods, with an energy loss lower than 0.1 dB; it has the advantage that it can also deliver an additional signaling stream with increased reliability, which the other methods cannot.
VII. CONCLUSION
We have proposed a novel algorithm that reduces the PAPR of the DVB-T2 signal by insertion of a coded DBPSK sequence on the reserved carriers. The proposed OSF-OPW algorithm amplifies the reserved carriers that are helpful to reduce the PAPR, while zeroing the reserved carriers that would increase the PAPR. The proposed algorithm achieves a PAPR reduction of about 2 dB (for the 64-QAM case) with respect to the original OFDM signal, and of about 1 dB with respect to the TR algorithm of the DVB-T2 standard. The proposed algorithm is also 15 times less complex than ACE, which however has better PAPR performance, and 120 times less complex than CF (with MER = 57.4 dB) with similar PAPR performance. A nice feature of the proposed method is the introduction of additional signaling with respect to the TR PAPR reduction scheme of the DVB-T2 standard. Moreover, we highlight that the proposed PAPR reduction method is fully backward-compatible with all receivers compliant with the DVB-T and DVB-T2 standards. The proposed method could be extended to generic OFDM systems with reserved carriers by optimizing the number of reserved carriers and their position.
APPENDIX A OPTIMAL WEIGHT SEARCH ALGORITHM
We want to minimize the metric J(W, Ĥ_O) in (11). To this purpose, we first rewrite every element inside the maximum in (11) as
|s̄_m + W h_m(Ĥ_O)|² = A_m W² + B_m W + C_m,   (23)
with A_m = |h_m(Ĥ_O)|², B_m = 2Re{s̄*_m h_m(Ĥ_O)}, and C_m = |s̄_m|², which is quadratic in W. Fig. 11 shows a visual example for M = 27: each solid line represents one of the M = 27 terms in (11), i.e., a parabola expressed by (23). The maximum operator in (11) returns M_eff parabolic segments (in our example, M_eff = 3) intersecting at points I_1, …, I_{M_eff−1}. From Fig. 11, it is clear that the minimum of (11) occurs either at one of the M_eff + 1 edges of the parabolic segments, or between two edges of the parabolic segments (this second case happens when the parabolic segment contains the minimum of the parabola). The arrangement of the parabolic segments suggests a reduced-complexity strategy for the search of the W that minimizes the metric (11): 1) given an initial value W = W_I0, evaluate (23) and find the index m_0 = arg max_m (A_m W²_I0 + B_m W_I0 + C_m) of the parabola with the highest value at this point; this identifies the first dashed segment in Fig. 11; 2) compute the intersections of parabola m_0 with the other parabolas; 3) choose the intersection with the lowest abscissa, corresponding to I_1; this operation defines the first parabolic segment I_0-I_1 in Fig. 11; 4) find the index m_1 of the parabola with the highest value beyond I_1; 5) if the abscissa W^(m_1)_min of the vertex of the current parabola falls inside the current segment, retain the couple (m_1, W^(m_1)_min); 6) in an iterative way, repeat step 5 for all the following segments, until the end of the search range for W is reached; 7) for each couple (m_n, W_In), and for each included couple (m_n, W^(m_n)_min), calculate (23) and select the optimal weight Ŵ_O that minimizes (23).
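A short sketch of the same upper-envelope minimization, in a brute-force candidate-enumeration form (our code; it evaluates the envelope at range edges, parabola vertices, and pairwise intersections rather than walking the segments 1)-7), so it is equivalent in result but not in cost):

```python
# Sketch: minimize max_m (A_m W^2 + B_m W + C_m) over W in [w_lo, w_hi].
import numpy as np

def min_of_max_parabolas(A, B, C, w_lo=0.0, w_hi=10.0):
    A, B, C = map(np.asarray, (A, B, C))
    cands = [w_lo, w_hi]
    cands += [-b / (2 * a) for a, b in zip(A, B) if a > 0]   # vertices
    for i in range(len(A)):                                  # intersections
        for j in range(i + 1, len(A)):
            roots = np.roots([A[i] - A[j], B[i] - B[j], C[i] - C[j]])
            cands += [r.real for r in roots if abs(r.imag) < 1e-9]
    cands = [w for w in cands if w_lo <= w <= w_hi]
    vals = [np.max(A * w ** 2 + B * w + C) for w in cands]
    return cands[int(np.argmin(vals))]
```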
APPENDIX B DETAILS OF THE COMPUTATIONAL COMPLEXITY
In this appendix we detail the computational costs of the proposed OSF-OPW algorithm and of the alternative methods TR and ACE of [4].
The proposed OSF-OPW method requires 2LN RM and LN RA to find the magnitude of the signal samples, and M(LN − 1) VC to find the M peaks in (19); 2N_s·M_P RM and N_s·M_P RA to compute the projections p_{i,m} in (16); M_eff(2M − 3) VC to find the lowest abscissa; 6M_eff RM and 4M_eff RA to compute the terms in (23); 2M_eff − 1 VC to find the OPW weight in (20); the weighted carrier values in (6) are computed at no cost; in (21), we recall that X_s[i, l] = ±1. Then, the first term in the right-hand side of (21) requires 2LN RM and L(N_s/2 − 1)N RA, the second term also requires only L(N_s/2 − 1)N RA, and the composition of the two terms requires 2LN further RA. Thus, this step needs 2LN RM and LN_s·N RA. In total, OSF-OPW requires 4LN + 2N_s·M_P + M_eff(3M + 4) + 8M RM, LN(N_s + 1) + N_s·M_P + 4M_eff RA, and M(LN − 1) + M_eff(2M − 3) + 2M_eff − 1 VC.
About the alternative PAPR reduction methods, we consider the TR algorithm in Clause 9.6.2.1 of [4, p. 119] and the ACE algorithm in Clause 9.6.1 of [4, p. 117]. With reference to steps 1-9 of the TR algorithm in [4, p. 119], which uses L = L_TR = 1, the cost of TR for each iteration is 2LN RM, LN RA, and LN − 1 VC for the maximum magnitude in step 2; 2 RM and 2 RA for the unit-magnitude phasor in step 3; 11N_s RM and 6N_s RA for the correction magnitude in step 4; 1 RA and N_s + 1 VC to find the largest magnitude of correction in step 5; 6LN RM and 4LN RA to update the peak reduction signal in step 6 (we suppose the kernels are precomputed); 2N_s RM and 2N_s RA to update the frequency-domain coefficients in step 7. After the last step of the last iteration, 2LN RA are needed to add the peak reduction signal to the data signal. In total, the TR algorithm requires M_TR(8LN + 13N_s) RM, M_TR(5LN + 8N_s) + 2LN RA, and M_TR(LN + N_s) VC. The ACE algorithm [4, p. 118] requires, among other operations, an IFFT to generate the PAPR-reduced OFDM symbol, with 2LN log₂(LN) RM and 3LN log₂(LN) RA. In total, the ACE algorithm needs 2LN(2log₂(LN) + 1) + 2N_d,ext + 4M_ACE RM, LN(6log₂(LN) + 1) + 4N_d,ext RA, and LN + 10N_d,ext VC.
APPENDIX C THEORETICAL BER OF THE ADDITIONAL DATA
This appendix demonstrates that the bit error probability of the additional DBPSK data on the reserved carriers, using the proposed OSF approach, is equal to (22) in AWGN channels. Let us assume that the complex-valued baseband received signal on the i-th reserved carrier is expressed by (25). The estimated bit is b̂_{q,d} = 0 when the decision variable Λ_{Dq+d} ≥ 0, and b̂_{q,d} = 1 when Λ_{Dq+d} < 0. Without loss of generality, we assume that the transmitted bit is b_{q,d} = 0, which leads to X_s[i, l − D] = X_s[i, l], and we evaluate the error probability as P_b = Pr{Λ_l < 0 | b_{q,d} = 0}. Using the approach of [77, pp. 1090-1095], the error probability can be expressed by (27), where ψ_{Λ_l}(jv) is the characteristic function of the decision variable Λ_l and ϵ is a positive constant. Since the noise terms on different carriers are independent, the random variables θ_{i,l} in (26) are independent, and therefore ψ_{Λ_l}(jv) = Π_{i=0}^{N_s−1} ψ_{θ_{i,l}}(jv). The characteristic function of θ_{i,l} can be calculated as
ψ_{θ_{i,l}}(jv) = [(2/N_0)² / (v² + (2/N_0)²)] exp[(2/N_0)² g_{i,l}(jv) / (v² + (2/N_0)²)],   (28)
where G(jv) = Σ_{i=0}^{N_s−1} g_{i,l}(jv) = −v² N_s E_b N_0/4 + jv N_s E_b/4. Combining (27) and (29) produces (30), where Q_1(·, ·) is the Marcum Q function of the first order, I_n(·) is the modified Bessel function of the first kind and order n, and γ = E_b/N_0 is the SNR on the reserved carriers. Equation (22) is obtained from (30) by exploiting
"Engineering",
"Computer Science"
] |
The First Two Complete Mitochondrial Genomes of Neoephemeridae (Ephemeroptera): Comparative Analysis and Phylogenetic Implication for Furcatergalia
Mayflies of the family Neoephemeridae are widespread in the Holarctic and Oriental regions, and the phylogenetic position of the family is still unstable within the group Furcatergalia (mayflies with fringed gills). In the present study, we determined the complete mitogenomes of two species of this family, namely Potamanthellus edmundsi and Pulchephemera projecta. The lengths of the two mitogenomes were 15,274 bp and 16,031 bp, with A + T contents of 73.38% and 73.07%, respectively. The two neoephemerid mitogenomes had a similar gene size, base composition, and codon usage of protein-coding genes (PCGs), and the sequenced gene arrangements were consistent with the putative ancestral insect mitogenome as understood today. The most variable gene of Furcatergalia mitogenomes was ND2, while the most conserved gene was COI. Meanwhile, the analysis of selection pressures showed that ND6 and ATP8 exhibited relaxed purifying selection, whereas COI was under the strongest purifying selection. Phylogenetic trees reconstructed from two concatenated nucleotide datasets using both maximum likelihood (ML) and Bayesian inference (BI) estimations yielded identical, robust topologies. These results corroborated the monophyly of the seven studied families and supported the family Leptophlebiidae as the basal lineage of Furcatergalia. Additionally, the sister-group relationship of Caenidae and Neoephemeridae was well supported. Methodologically, our present study provides a general reference for future phylogenetic studies of Ephemeroptera at the mitogenome level.
Introduction
The mitochondrion is an important organelle in metazoan cells, as it is mainly involved in the life cycle, apoptosis, and metabolism [1]. In most insects, the mitochondrial genome (mitogenome) is a small double-stranded circular molecule of 14-20 kb in size and has a relatively stable organization and structure [2]. It generally encodes 37 genes, including 13 protein-coding genes (PCGs), 22 transfer RNA genes (tRNAs), and two ribosomal RNA genes (rRNAs). In addition, it contains a control region (also called the A + T-rich region) which contains the initiation sites for transcription and replication [2,3]. Compared to the nuclear genome, the mitogenome possesses multiple obvious advantages, such as maternal inheritance, absence of introns, conserved gene composition, relatively rare recombination, and a high evolutionary rate [4]. Given the vast diversity of insects, mitogenome sequences are usually considered to be effective molecular markers for species identification and play an increasingly important role in studies of intraspecific and interspecific genetic differences, phylogenetic implications, molecular evolution, and phylogeography across various taxa [5][6][7][8]. With the rapid advance of high-throughput sequencing (whole-genome sequencing (WGS) and next generation sequencing (NGS)) lowering the processing requirements and expense of DNA sequencing, increasing numbers of mitogenomes have been obtained in diverse insect orders [9][10][11]. However, the mitogenomes of Ephemeroptera have been studied only to a limited extent, and approximately only 50 reliable sequences (with an exact Latin name of the species) have been made public in the NCBI database (National Center for Biotechnology Information). More importantly, most of these available mitogenomes were mainly sequenced from, and focused on, several families (such as Heptageniidae and Ephemerellidae), and no sequence of the vast majority of mayfly families has been reported up to now [4,12]. This unbalanced distribution of mitogenomes has limited our comprehensive understanding of the evolutionary and phylogenetic relationships within Ephemeroptera at the mitogenome level.
Ephemeroptera (mayflies) is one of the most archaic orders of extant winged insects, originating in the late Carboniferous or early Permian period (about 300 Mya) [13]. Mayflies occupy freshwater habitats throughout the world, except Antarctica, with over 3700 described species belonging to 460 genera (42 families) [14][15][16][17]. The monophyly of Ephemeroptera within the pterygote insects is well supported [18,19]. However, the phylogenetic relationships among mayflies themselves remain partially unresolved [20,21]. Additionally, the higher-level classification and relationships are still unstable, especially for the suborder Furcatergalia, which includes the major clades Leptophlebiidae, Pannota, and the burrowing mayflies [22]. The phylogenetic hypothesis of McCafferty supported the idea that the burrowing mayflies (Behningiidae and Ephemeroidea) form a monophyletic group [22]. Leptophlebiidae was hypothesized as a sister group to (Behningiidae + Ephemeroidea), with the Pannota (Caenidae, Neoephemeridae, and Ephemerelloidea) sister to this monophyletic group (Figure 1A). The Kluge hypothesis proposed that the burrowing mayflies clustered together with two pannote lineages (Caenidae and Neoephemeridae) [23]. Under this system, the infraorder Pannota was not supported as monophyletic (Figure 1B). According to a study based on 440 targeted genomic protein-coding regions (exons), the phylogenetic results were consistent with the Kluge hypothesis in that two families (Caenidae and Neoephemeridae) were grouped together with the burrowing mayflies (Figure 1C). Furthermore, this molecular analysis showed that the monophyletic Leptophlebiidae was sister to all other clades within Furcatergalia [21]. However, our previous study using mitogenomes showed that Leptophlebiidae clustered with the group (Caenidae + Baetidae) [24]. In the newly published research by Xu et al. (2021), the results placed Caenidae as a sister group of (Baetidae + Teloganodidae), with Leptophlebiidae then joining this clade [25]. Over the past decade, multiple phylogenetic trees using mitogenomes have been reconstructed, but all of these included only a subset of Ephemeroptera families due to limited taxon sampling [24,25]. Until now, the phylogenetic relationships of the different families in Furcatergalia remain unresolved at the mitogenome level. Mayflies of the family Neoephemeridae are widespread in the Holarctic and Oriental regions [26]. So far, 13 species of Neoephemeridae have been described worldwide [27].
Larvae can be found from mountain torrents to large streams and rivers, generally being either clingers on erosional substrates or sprawlers on depositional substrates [26,27]. Studies of Neoephemeridae have mainly focused on morphological classification and biogeography [26][27][28][29][30]. However, no mitogenome sequence of this family is available in GenBank. In order to better understand the characteristics of the neoephemerid mitogenome and the phylogenetic relationships within Furcatergalia, we sequenced and analyzed two complete mitogenomes, of Potamanthellus edmundsi and Pulchephemera projecta. Subsequently, we performed comparative analyses of mitogenome features among Furcatergalia species concerning genomic structure, nucleotide composition, and the secondary structure of tRNAs. In addition, we incorporated the new mitogenome sequences into the Furcatergalia dataset to obtain a more reliable and robust phylogeny and to understand the phylogenetic relationships within Furcatergalia.
Features of the Sequenced Mitogenomes
A total of 2.82 Gb and 2.25 Gb of paired-end clean data from P. edmundsi and P. projecta, respectively, were generated by next-generation sequencing on the Illumina platform. The sequencing qualities were high for both mayflies, and the Q20 (quality score 20) base percentages of the two samples were 98.41% and 98.17%, respectively. Both complete mitogenomes were circular double-stranded structures (GenBank accession numbers: OK272542 and OK272543), and the sequences were 15,274 bp (P. edmundsi) and 16,031 bp (P. projecta) in size (Table 1). Circular maps of the two newly sequenced mitogenomes are shown in Figure 2. These sizes are comparable to those of other reported complete Ephemeroptera mitogenomes, which range from 14,589 bp in Alainites yixiani [31] to 16,616 bp in Siphluriscus chinensis [32]. Differences between species were predominantly driven by the overall length of the non-coding regions, especially the control region (CR; Table 1). The mitogenomes of both neoephemerid species encoded a complete set of 37 genes (13 protein-coding genes (PCGs), two rRNA genes (rrnL and rrnS), and 22 tRNA genes) and a control region (CR; Figure 2 and Table 1). Twenty-three genes (nine PCGs and 14 tRNAs) were located on the majority strand (H-strand), while the other 14 genes (four PCGs, two rRNAs, and eight tRNAs) were oriented on the minority strand (L-strand). The two mitogenomes showed an identical gene order and organization (Figure 2). All genes of both sequences were arranged in the same way, without rearrangement or the cracking phenomenon, compared with the putative ancestral insect mitogenome. Among the reported mayfly mitogenomes, several gene rearrangement events have been validated in four families (Siphluriscidae, Baetidae, Heptageniidae, and Ephemerellidae) [31][32][33][34][35]. Previous studies demonstrated that gene rearrangements could effectively resolve the relationships of some groups, providing additional evidence for phylogenetic reconstruction [35]. For Furcatergalia, no rearrangement has been found in the limited number of sequenced species, except for ephemerellids, up to now [12]. It is apparent that more mitogenomes from diverse groups of Furcatergalia are needed to fully explore gene rearrangement events in future studies.
Nucleotide Composition
The alignment analysis showed high sequence similarity across most of the length of the two mitogenomes, and the overall sequence identity was 79.54% (Supplementary Materials, Figure S1). The nucleotide compositions of the two mitogenomes are summarized in Table 2. Both whole mitogenomes were biased in base composition ((A + T)% > (G + C)%), which is consistent with the genomes of other insects [25]. The A + T content of the complete sequence was 73.38% for P. edmundsi and 73.07% for P. projecta (Table 2). In addition, the A + T contents of the PCGs (n = 13), rRNAs (n = 2), tRNAs (n = 22), and the control regions (CR) also showed a bias towards A and T nucleotides. The skew metrics of the two mitogenomes showed a negative GC-skew, indicating an obvious bias towards the use of Cs in the complete sequences (Table 2). Meanwhile, the AT-skews differed between the two mitogenomes: that of P. projecta was negative and that of P. edmundsi was positive. A comprehensive analysis of all the components showed that most of the AT-skews were negative, except for the tRNAs. These asymmetries of nucleotide composition, arising during replication and transcription, are generally regarded as indicators of gene direction and replication orientation [36,37].
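For readers who want to reproduce these composition statistics, the following minimal Python sketch (not the authors' pipeline) computes the A + T content and the skew metrics defined in the Methods from a raw mitogenome string; the example sequence is hypothetical.

```python
# Minimal sketch: base composition and skew metrics,
# AT-skew = (A - T)/(A + T) and GC-skew = (G - C)/(G + C).

def composition_skews(seq: str):
    seq = seq.upper()
    a, t, g, c = (seq.count(b) for b in "ATGC")
    total = a + t + g + c
    return {
        "AT%": 100.0 * (a + t) / total,
        "AT-skew": (a - t) / (a + t),
        "GC-skew": (g - c) / (g + c),
    }

# A negative GC-skew indicates a bias towards Cs on the reported strand.
print(composition_skews("ATTAAGGCCTTATAAC"))
```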
Protein-Coding Genes
The two newly sequenced mitogenomes comprised the usual set of 13 PCGs: nine PCGs were coded on the H-strand and the other four (ND1, ND4, ND4L, and ND5) on the L-strand (Figure 2 and Table 1). In the mitogenome of P. projecta, the total size of all PCGs was 11,223 bp, accounting for 69.38% of the complete sequence. The total size in P. edmundsi was 11,246 bp, accounting for 73.62% of the whole mitogenome (Table 2). Both Neoephemeridae species showed a negative AT-skew and positive GC-skew in the PCGs, while the third codon position had an A + T content (85.58% and 86.71%) much higher than that of all positions combined (73.38% and 72.24%; Table 2). All PCGs of both mitogenomes started with the typical initiation codon ATN (ATG, ATT, and ATC), as seen in other mayflies [4,[31][32][33][34]. For the stop codons, most PCGs terminated with the typical TAN codon (TAG and TAA), apart from several genes (COI, COII, ND5, ND4, CYTB, and ND1) which ended with an incomplete codon T. This incomplete stop codon (T) has been reported in other insect mitogenomes; it adjoins transfer RNAs at the 3′ ends, and the processing is presumed to be completed through posttranscriptional polyadenylation [38].
The total numbers of amino acids (without the termination codons) were 3739 (P. edmundsi) and 3732 (P. projecta), and the codon AGG was not found in P. edmundsi. Relative synonymous codon usage (RSCU) values for the 13 PCGs of the two mitogenomes are summarized in Figure 3 and Tables S1 and S2. The four most frequently used codons, UUA (Leucine), UUU (Phenylalanine), AUU (Isoleucine), and AUA (Methionine), observed in the two mitogenomes also fit the pattern of some other insects, such as Coleoptera [9] and Hemiptera [39]. Meanwhile, the RSCU analysis showed that codons were biased towards A/T at the third codon position (Figure 3). Similarly, the biased usage of A + T nucleotides was reflected in the codon frequencies (Table 2).
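The RSCU values reported here follow the standard definition: the observed count of a codon divided by the count expected if all synonymous codons for that amino acid were used equally. The sketch below uses one hypothetical two-codon family (Leu2: UUA/UUG) rather than the full invertebrate mitochondrial code used by DnaSP.

```python
# Illustrative RSCU sketch: for a codon c in a synonymous family F,
# RSCU(c) = count(c) * len(F) / sum(count(x) for x in F).
from collections import Counter

def rscu(codon_counts: Counter, family: list[str]) -> dict[str, float]:
    total = sum(codon_counts[c] for c in family)
    return {c: codon_counts[c] * len(family) / total for c in family}

counts = Counter({"UUA": 90, "UUG": 10})  # hypothetical codon counts
print(rscu(counts, ["UUA", "UUG"]))       # {'UUA': 1.8, 'UUG': 0.2}
```

An RSCU above 1 (here UUA) marks a codon used more often than expected under equal usage, which is the A/T-ending bias described above.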
Ribosomal and Transfer RNA Genes
The size, structure, and distribution of RNA genes between two Neoephemeridae mayflies showed high similarities. The two rRNAs (rrnL and rrnS) were situated between trnL (CUN) and the control region in the L-strand, separated by trnV ( Figure 2 and Table 1). The length of rrnL was 1255 bp for P. edmundsi and 1250 bp for P. projecta, and the size of rrnS was 780 bp for P. edmundsi and 781 bp for P. projecta.
The two neoephemerid mitogenomes included the 22 typical tRNA genes, with sizes ranging from 62 bp (trnC and trnP of P. projecta) to 71 bp (trnV of P. edmundsi). All tRNA secondary structures of the two mayflies were inferred and, among all the tRNA genes, 21 could be folded into the regular clover-leaf secondary structure (Figures S2 and S3). The dihydrouridine (DHU) arm of trnS (AGN) was missing (forming a simple loop) in both species, as is commonly found in other insects, including all other available mitogenomes of mayflies [6][7][8][9]. In the conserved structures, apart from the normal base pairs A-U and G-C, there were also non-standard G-U pairs and other mismatches (U-U and C-A): 29 such pairs (including one C-A and four U-U) were found in the stems of P. edmundsi and 27 (including three U-U) in P. projecta. G-U pairs can be corrected through the editing process and should not affect the transport function [40].
Non-Coding Regions
The two new mitogenomes had gene overlaps ranging from 1 to 35 bp (P. edmundsi: eight gene junctions and 66 bp of overlap; P. projecta: eight gene junctions and 62 bp of overlap). The common pairs of gene overlaps, including ATP8-ATP6 (7 bp) and ND4-ND4L (7 bp), were also found in the two mitogenomes (Table 1). Additionally, the two species shared the largest gene overlap, trnY-COI, with a size of 35 bp. Apart from the control region (CR), there were 12 and 10 intergenic spacer (IGS) regions, totaling 402 and 75 bp of non-coding bases in P. projecta and P. edmundsi, respectively. The two new mitogenomes possessed the same four intergenic spacer patterns between ND5-trnH (1 bp), ND4L-trnT (2 bp), trnP-ND6 (2 bp), and ND1-trnL (1 bp). The two largest IGS regions (162 and 165 bp), between ND3 and trnR (separated by trnA), were found in P. projecta. Subsequent Sanger sequencing confirmed the accuracy of the Illumina sequencing and assembly. Interestingly, none of the other Ephemeroptera mitogenomes showed these IGSs; hence, we suppose that this was an isolated event during the long evolutionary process of the P. projecta mitogenome.
Of the non-coding regions, the putative control region (CR) is usually the longest one in the whole sequence and contains signals for the regulation and initiation of mitochondrial DNA transcription and replication [41][42][43]. Like in most mayflies, the CRs of the neoephemerid mitogenomes were located in the conserved position between two genes, rrnS and trnI. The full length of the CR in P. edmundsi was 547 bp, while that of P. projecta was 998 bp. Overall, the non-coding regions differ considerably between the two mayflies, which is in accordance with other insect mitogenomes, considering that differences in overall sequence size are generally due to size variation of the non-coding regions (intergenic spacer regions and control region) [44,45].
Comparative Analysis of Furcatergalia Mitogenomes
All complete mitogenomes of the 13 Furcatergalia mayflies exhibited an AT nucleotide bias (60.32-73.96%), and the AT content (59.60-70.19%) of the PCGs was slightly lower than that of the other regions (Figure 4). AT- and GC-skews are measures of compositional asymmetry. The AT-skews of the tRNAs and rRNAs were mainly positive, while those of the other regions were negative, indicating that the number of Ts was higher than the number of As in those regions (Figure 5). For the GC-skew, the whole sequences of the 13 species were clearly negative (−0.14 to −0.25), while the tRNAs and rRNAs were positive. The GC-skews of the three matrices of PCGs differed across species, even among closely related ones, which showed that the GC content is not conserved in Furcatergalia. Beyond that, the AT content of PCGs3 was significantly higher than that of PCGs and PCGs12. The codon usage of all the PCGs of the 13 mayflies was calculated and used to establish the frequently used amino acids. Comparative analysis showed that the codon usage patterns and the most commonly utilized codons of the 13 mitogenomes were highly conserved (Figure 6). The most frequently used amino acids were Leucine2 (Leu2), Phenylalanine (Phe), and Isoleucine (Ile), which were represented by codons with high A or T contents. The AT content and codon usage bias could be attributed to the fact that synonymous codon usage in mitogenomes tends to favor codons ending with A/T to promote transcription [46]. The nucleotide diversity of the PCGs of the Furcatergalia mitogenomes is shown in Figure 7A (sliding window = 100 bp). The most variable region was in ND2, while the most conserved fragment was in COI (lowest Pi), as also found in other mayflies [4,35].
Ka (nonsynonymous substitutions) and Ks (synonymous substitutions) are used as indicators of selective constraint [47]. The average Ka/Ks ratios were estimated to investigate the evolutionary rates of the Furcatergalia PCGs. Ratios ranged from 0.075 for COI to 0.669 for ND6, which suggested that all PCGs were under purifying selection (Ka/Ks < 1). Our results showed that ND6 and ATP8 exhibited relaxed purifying selection, while COI was under the strongest purifying selection (Figure 7B), indicating that this gene is under strong functional constraint in Furcatergalia.
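As a rough illustration of the sliding-window scan behind Figure 7A (window = 100 bp, step = 25 bp in the Methods), the sketch below computes per-window nucleotide diversity (Pi) as the mean pairwise proportion of differing sites among aligned sequences; it is a simplification of the DnaSP calculation (for instance, gapped positions are simply skipped).

```python
# Sliding-window nucleotide diversity (Pi) over an alignment of equal-length
# sequences; yields (window_start, Pi) pairs.
from itertools import combinations

def window_pi(aligned: list[str], window: int = 100, step: int = 25):
    length = len(aligned[0])
    for start in range(0, length - window + 1, step):
        diffs, comparisons = 0, 0
        for s1, s2 in combinations(aligned, 2):
            for a, b in zip(s1[start:start + window], s2[start:start + window]):
                if a != "-" and b != "-":       # ignore gapped positions
                    comparisons += 1
                    diffs += a != b
        yield start, diffs / comparisons if comparisons else 0.0
```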
Phylogenetic Analysis
Phylogenetic trees based on two datasets of 26 specimens (seven families included as the ingroups) were reconstructed to further investigate the phylogenetic relationships of Furcatergalia. The two datasets used in this study were the matrix PCG123, containing 11,027 sites, and the matrix PCG12, containing 7378 sites. The four phylogenetic trees generated by the two analytical methods (BI and ML) had identical topologies and were therefore combined (Figure 8). Our phylogenetic analyses were hence considered stable, with high nodal support values from the mitogenome data.
All analyzed families of Furcatergalia were recovered as monophyletic in our phylogenetic trees. Our findings revealed that the family Leptophlebiidae was sister to all other clades, which supports Leptophlebiidae as the basal lineage of Furcatergalia [48]. The trees were then split into two large branches. In the lower branch, the species of Ephemerellidae clustered together with those of Vietnamellidae. This sister-group relationship of the two families was supported by many previous studies [24,34]. The upper branch was composed of two lineages: the burrowing mayflies (Ephemeridae and Potamanthidae) and the superfamily Caenoidea (containing two families: Neoephemeridae and Caenidae; Figure 8).
The morphological studies revealed that adults of the family Neoephemeridae possessed potamanthid-like wing venation and genitalia, but the nymphs possessed caenid-like body and gill patterns [27]. In our analysis, the superfamily Caenoidea (Caenidae + Neoephemeridae) was well supported (Figure 8). The results were in accordance with other research based on different markers (molecular and morphological) [14,21,49,50]. Additionally, the topologies of our phylogenetic trees were almost consistent with the Kluge hypothesis and the newly published phylogenetic study of Ephemeroptera based on over 440 targeted genomic exons [21]. The phylogenetic relationships of Furcatergalia were well supported in our mitogenomic analyses.
Sample Collection, Identification, and DNA Extraction
Specimens of P. edmundsi were obtained from Maolan, Guizhou province of China, in July 2019, while P. projecta was obtained from Shangri-La, Yunnan province of China, in April 2021. Fresh samples were originally preserved in 100% ethyl alcohol and then cryopreserved at −20 °C for further storage in the laboratory. All specimens were then morphologically identified by Dr. Changfa Zhou using available taxonomic keys, and voucher specimens were deposited in the specimen room of Nanjing Normal University (College of Life Sciences). Total genomic DNA was extracted separately from the whole body of each individual sample using the TIANamp Genomic DNA Kit (Tiangen, China) according to the manufacturer's instructions. The quality of the extracted DNA was tested on 1% agarose gels, and the concentration was measured using a Nanodrop 2000 spectrophotometer (Thermo, Wilmington, DE, USA).
Whole-Genome Sequencing and Mitogenome Assembly
The DNA samples were sent to Personal Biotechnology Co., Ltd. (Nanjing, China), for library construction and sequencing. Whole-genome data were generated on the Illumina NovaSeq platform (Illumina, San Diego, CA, USA) with a PE150 strategy (2 × 150 base, paired-end reads). Each library (insert size of 400 bp) with two indexes was constructed using the Illumina TruSeq® DNA PCR-Free HT Kit, and the two libraries were then pooled and sequenced together with other projects.
More than 2 Gb of raw data were yielded for each sample and filtered into clean reads prior to assembly. Reads with adapter contamination were trimmed with the NGS-Toolkit [51]; low-quality reads (Phred score < 20), reads comprising more than 10% unknown bases (N), and duplicated sequences were removed by Prinseq [52]. The de novo assemblies of the mitogenomes were performed with the clean data using NOVOPlasty 4.2.1 [53]. This software assembles organelle genomes from whole-genome sequencing data with a seed-and-extend algorithm, starting from a related or distant single "seed" sequence and an optional "bait" reference mitogenome. For the mitogenome assembly of the two neoephemerid species, we used the COI gene and the complete mitogenome of Caenis pycnacantha (Caenidae, GenBank accession number: GQ502451) as the seed and bait reference sequence, respectively. The two assemblies were performed following the developer's suggestions and using an empirical k-mer size of 33. Subsequently, to verify the accuracy of the next-generation sequencing and de novo assembly, three different fragments (partial COI gene, partial ND5 gene, and a fragment containing the ND3, trnA, trnR, and trnN genes) were amplified from both species by polymerase chain reaction (PCR) and then Sanger-sequenced. Six pairs of specific primers for the two mitogenomes were designed and are shown in Table S3. The sequencing results were finally aligned with the assembled draft mitogenomes using MEGA X [54].
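To make the seed-and-extend idea concrete, here is a deliberately simplified toy sketch; NOVOPlasty's actual algorithm (coverage checks, paired-end information, circularization) is far more elaborate, and all reads below are hypothetical.

```python
# Toy seed-and-extend: starting from a seed (e.g., a COI fragment), repeatedly
# find a read whose prefix overlaps the contig's k-mer tail and extend.
def extend_contig(seed: str, reads: list[str], k: int = 33) -> str:
    contig = seed
    grew = True
    while grew:
        grew = False
        tail = contig[-k:]                    # current k-mer at the contig end
        for read in reads:
            pos = read.find(tail)
            if pos != -1 and len(read) > pos + k:
                contig += read[pos + k:]      # append the overhanging bases
                grew = True
                break
    return contig

reads = ["ATGGCAGATTAGTGCA", "GTGCAATGGCC", "TGGCCTTTAA"]  # hypothetical reads
print(extend_contig("ATGGCAGATT", reads, k=5))
```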
Mitogenome Annotation and Bioinformatic Analysis
MITOS 2 on the MITOS web server (http://mitos2.bioinf.uni-leipzig.de/ accessed on 11 July 2021) was used for the primary annotation of the two assembled mitogenomes [55]. The invertebrate mitochondrial genetic code was used as the optional setting, and BLAST searches in NCBI were used to confirm the accuracy. The start and stop positions of the 13 PCGs were manually adjusted and corrected by aligning published data of Ephemeroptera species in GenBank. Additionally, the initiation and termination codons of those PCGs were identified by the ORF Finder (https://www.ncbi.nlm.nih.gov/orffinder/ accessed on 11 July 2021). To verify the results of the primary annotation and to make modifications, ARWEN 1.2 [56] and tRNAScan-SE 1.2.1 [57] were used to identify tRNAs. The secondary structures were also predicted by the MITOS web server [55], followed by manual plotting with Adobe Illustrator CS6. Blastp and Blastn (https://blast.ncbi.nlm.nih.gov/Blast.cgi/ accessed on 11 July 2021) were used to compare the rRNA genes with those of the mitogenomes of related species reported previously. Intergenic spacers and overlapping regions between genes were counted manually. The graphical maps of the mitochondrial circular genomes of P. edmundsi and P. projecta were visualized using the visualization module of the program MitoZ [58]. To assess interspecific variation, a pairwise comparison of the two new mitogenomes was made with the web tool mVISTA in the Shuffle-LAGAN mode [59]. In addition to the two mitogenome sequences of P. edmundsi and P. projecta, the complete mitogenomes of the other 11 species of Furcatergalia (three species of Caenidae, a single species of Vietnamellidae, five species of Ephemerellidae, a single species of Leptophlebiidae, and a single species of Ephemeridae) were downloaded from GenBank (NCBI). A total of 13 species were used for the analysis of nucleotide composition, codon usage, and relative synonymous codon usage (RSCU) values. Codon usage statistics were calculated using DnaSP 6.0 [60]. The composition skew (base compositional differences) was calculated on the basis of the formulas AT-skew = (A − T)/(A + T) and GC-skew = (G − C)/(G + C) [36]. The analysis of nucleotide diversity with a sliding window of 100 bp and a step size of 25 bp was conducted using DnaSP 6.0 [60]. The numbers of synonymous substitutions (Ks) and non-synonymous substitutions (Ka), and the ratios of Ka/Ks for each PCG, were also measured in DnaSP 6.0 [60].
Phylogenetic Analysis
Including the two newly sequenced mitogenomes of P. edmundsi and P. projecta, a total of 24 mitogenome sequences of Furcatergalia (seven families) were used for the phylogenetic analyses (Table S4). Additionally, the mitogenomes of two species, from the families Siphluriscidae (Ephemeroptera: S. chinensis) and Coenagrionidae (Odonata: Ischnura pumilio), were used as outgroups. The nucleotide sequences of all 13 PCGs were used for our phylogenetic analyses. Sequences for each PCG were aligned individually with codon-based multiple alignment using the MAFFT algorithm [61] within the TranslatorX online platform (with the L-INS-i strategy and default settings) [62]. The program Gblocks 0.91b was used to identify the conserved regions with default settings [63]. The individual aligned sequences were then concatenated in PhyloSuite [64] and split into two datasets: (1) PCG123, including all codon positions of the 13 PCGs, and (2) PCG12, including the first and second codon positions of the 13 PCGs.
The best partitioning schemes and best-fitting substitution models for the two datasets were determined by the program PartitionFinder implemented in PhyloSuite under a greedy search algorithm with linked branch lengths on the basis of the Bayesian information criterion (BIC; Tables S5 and S6) [65]. Two methodologies, Bayesian inference (BI) and maximum likelihood (ML), were selected to reconstruct the phylogenetic trees based on both datasets. For the BI analysis, MrBayes 3.2.7a [66] was run through the online CIPRES Science Gateway [67]. Each run of four chains was set for 10 million generations with sampling every 1000 generations. The first 25% of the trees of each run were discarded as burn-in during the BI analysis, and the consensus tree was computed from the remaining trees. The ML analysis was performed in RAxML 8.2.12 [68] under a GTRGAMMAI model, and branch support for each node was estimated with 1000 bootstrap replicates. The phylogenetic trees were viewed and edited with FigTree 1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/ accessed on 11 July 2021).
Conclusions
In this study, two neoephemerid mitogenomes were newly sequenced, representing the genera Potamanthellus (P. edmundsi) and Pulchephemera (P. projecta). The results showed that the gene arrangements of the sequenced genomes were consistent with the putative ancestral insect mitogenome as understood today. The two neoephemerid mitogenomes generated in this study had a similar AT nucleotide bias, similar AT- and GC-skews, and similar PCG codon usage, and were comparable overall to other sequenced Furcatergalia mayflies. The nucleotide diversity of the PCGs of the Furcatergalia mitogenomes showed that the most variable gene was ND2, while the most conserved gene was COI. Meanwhile, the analysis of the selection pressures on each gene showed that ND6 and ATP8 exhibited relaxed purifying selection, while COI was under the strongest purifying selection. These comparative analyses improve our understanding of the evolution of mayfly mitogenomes. Phylogenetic analyses based on the PCGs from the mitogenomes of 26 species clarified the phylogenetic relationships of Furcatergalia. The results corroborated the monophyly of the seven families and supported the family Leptophlebiidae as the basal lineage of Furcatergalia. Additionally, the sister-group relationship of Caenidae and Neoephemeridae was well supported at the mitogenome level. Our findings also suggest that mitogenome sequences are effective molecular markers for studying the phylogenetic relationships within Furcatergalia.
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/10.3390/genes12121875/s1, Figure S1: Alignment of the mitogenome sequences of P. edmundsi and P. projecta; Figure S2: The putative tRNA secondary structures for the P. edmundsi mitogenome; Figure S3: The putative tRNA secondary structures for the P. projecta mitogenome; Table S1: Codon number and RSCU in the mitogenome of P. edmundsi; Table S2: Codon number and RSCU in the mitogenome of P. projecta; Table S3: The specific primers for the mitogenomes of P. edmundsi (PE) and P. projecta (PP) used in this study; Table S4: The species information used in this study; Table S5: Partition schemes and best-fitting models selected for the PCG123 dataset; and Table S6: Partition schemes and best-fitting models selected for the PCG12 dataset. | 7,365.4 | 2021-11-24T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Continuous Operation of a Bragg Diffraction Type Electrooptic Frequency Shifter at 16 GHz with 65% Efficiency
We demonstrate for the first time the continuous operation of a Bragg diffraction type electrooptic (EO) frequency shifter using a 16 GHz modulation signal. Because frequency shifting is based on Bragg diffraction from an EO traveling phase grating (ETPG), this device can operate even in the millimeter-wave (>30 GHz) range or higher frequency ranges. The ETPG is generated through the interaction between a modulation microwave guided by a microstrip line and a copropagating lightwave guided by a planar waveguide in a domain-engineered LiTaO3 EO crystal. In this work, the modulation power efficiency was improved by a factor of 11 compared with that of bulk devices by thinning the substrate so that the modulation electric field in the optical waveguide was enhanced. A shifting efficiency of 65% was achieved at a modulation power of 3 W.
Introduction
Coherent optical frequency conversion based on external modulation is an important technique not only for optical communication and optical measurement but also for microwave and millimeter-wave photonics, because it corresponds to an upconversion from the RF to the optical domain. Photomixing of two optical modulation sidebands generated by an electrooptic phase modulator (EOM) or a frequency comb generator has been used to generate low-phase-noise coherent microwaves or millimeter-waves, which are desirable for many applications such as radar [1], sensing [2][3][4], and wireless communications [5]. The advantages of the external modulation method over other methods, such as optical injection locking, optical phase-locked loops, and dual-wavelength laser sources, are the system's simplicity, stability, and frequency tunability. However, typical sinusoidal phase modulation is an inherently inefficient method of frequency conversion. At best, the fraction of the power in the first-order sideband generated by normal phase modulation is theoretically (J_1(β))² ≈ 34%, where J_1() is the first-order Bessel function of the first kind and β is the modulation depth. The conversion efficiency of 34% corresponds to an extra loss of about 5 dB. Because the noise figure (NF) of an optical amplifier is relatively poor compared to electronic amplifiers, extra loss due to a low conversion efficiency impacts most key aspects of microwave and millimeter-wave photonics, in which low-loss 1550 nm components should be used [6,7]. A lower conversion efficiency results in a lower carrier-to-noise ratio (CNR). A higher conversion efficiency is essential not only for microwave and millimeter-wave photonics applications but also for other photonic applications in which the system performance is governed by the CNR.
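The ~34% figure quoted above can be checked numerically; below is a short sketch (our script, assuming SciPy is available) that maximizes the first-order sideband power over the modulation depth.

```python
# First-order sideband power under pure sinusoidal phase modulation is
# J1(beta)^2; find its maximum over the modulation depth beta.
import numpy as np
from scipy.special import jv

beta = np.linspace(0, 5, 100001)
power = jv(1, beta) ** 2
i = power.argmax()
print(f"max J1(beta)^2 = {power[i]:.4f} at beta = {beta[i]:.3f}")
# -> max J1(beta)^2 = 0.3386 at beta = 1.841 (about 34%, i.e. ~4.7 dB loss)
```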
Recently, we proposed a new type of EO frequency shifter and experimentally demonstrated a 16.25 GHz frequency shift with 82% efficiency [8]. The operation is based on Bragg diffraction from an EO traveling phase grating (ETPG). Because the ETPG is produced via the Pockels effect, this shifter can operate in the millimeter-wave (>30 GHz) range or in higher frequency ranges. Moreover, multiplication of the frequency shift can be realized by cascading ETPGs. We demonstrated a frequency shift of ±32.5 GHz with 60% efficiency, using a 16.25 GHz modulation signal (doubler operation) [9]. Together with recent progressive high-speed modulation technologies [10], multiplication of the Bragg diffraction from the ETPG promises to realize terahertz (>100 GHz) coherent links with high efficiency (exceeding the Bessel-function limit). However, the bulk devices previously reported required relatively high modulation power, typically several tens of watts at 16 GHz; therefore, they were driven with a pulsed magnetron.
In this paper, we demonstrate a waveguide-based frequency shifter in which the modulation power is significantly reduced compared with bulk shifters. A planar waveguide is fabricated using a conventional proton exchange method in a congruent LiTaO3 (CLT) EO crystal [11]. The modulation electrode is a microstrip line. To improve the modulation efficiency, we reduced the thickness of the substrate, that is, the distance between the hot and ground electrodes, from 0.5 mm in the former bulk device to 0.1 mm.
In Section 2, the principle of the ETPG and frequency shifting is briefly summarized. Section 3 discusses the improvement of the modulation power efficiency achieved by reducing the thickness of the substrate. In Section 4, we summarize the design and fabrication of the waveguide-based frequency shifter. Experimental results and discussions are summarized in Section 5.
Principle of the Frequency Shift Based on the ETPG
We briefly explain the principle of the frequency shift based on the ETPG. The ETPG is a time-varying phase distribution within the optical beam cross-section produced by an interaction between the traveling modulation wave and the copropagating lightwave in a periodically domain-inverted (polarization-reversed) EO crystal. Figure 1(a) shows the basic structure of the domain-inverted EO crystal. The device is formed by repeating this basic structure and arranging the repeated structure in two dimensions on the x-y plane. When a lightwave with the group velocity v_o propagates along the y-axis with a collinear traveling modulation wave with the phase velocity v_m in the crystal, the modulating electric field E(y, t_0), which the lightwave entering the device at time t_0 encounters at position y, can be written as E(y, t_0) = E_m sin{2π f_m [t_0 + (1/v_o − 1/v_m) y]}, where E_m is the amplitude of the modulating wave and f_m is the modulation frequency, and the half period L of the domain inversion is set by the quasi-velocity-matching (QVM) condition 2 f_m (1/v_m − 1/v_o) L = 1 (equation (1)). From this equation, we find that the traveling phase grating can be realized by setting the length and the center position of the domain-inverted region to be L and y_0(x) = 2Lx/Λ, respectively. The phase shift induced on a light ray passing through the length 2L of this domain-inverted crystal, shown in Figure 1(a), is expressed as φ(x, t_0) = Δφ_m sin(2π f_m t_0 − βx), where Δφ_m is the modulation index and β = 2π/Λ. This resultant phase distribution expresses a traveling sinusoidal phase grating pumped by the modulation wave; we refer to this time-varying phase distribution as the ETPG. Operation of the acoustooptic (AO) frequency shifter is based on Bragg diffraction from a traveling phase grating pumped by an acoustic wave. Based on the analogy with the AO frequency shifter, we can realize the EO frequency shifter based on Bragg diffraction from the ETPG [8].
The remarkable features of the ETPG are that (1) the period of the ETPG is determined by the period (Λ) of the domain inversion in the x direction, not by the wavelength of the modulating wave, and (2) the traveling direction of the ETPG is determined by the sign of β, that is, the sign of the tilting angle (θ_TPG) of the periodic domain inversion. The first feature allows the ETPG to operate at higher modulation frequencies, even at millimeter-wave frequencies. The second feature allows us to cascade ETPGs to accumulate frequency shifting (multiplication) [9].
Improvement of the Modulation Power Efficiency
Although the proton exchange process might degrade the Pockels effect [12], in this section we confirm experimentally the effect of reducing the thickness of the substrate on the improvement of the modulation efficiency. We fabricated waveguide devices with four different thicknesses t and measured the modulation efficiency, which is defined as the modulation index per interaction length per modulation power (rad/mm/W). Figures 2(a) and 2(b) show cross-section views of the conventional bulk device and the proposed waveguide device, respectively. The substrate is a congruent LiTaO3 (CLT) EO crystal. The lightwave and modulation microwave propagate along the y direction. The traveling modulation microwave is guided by the microstrip line. The SiO2 buffer layer is deposited on the waveguide before evaporating the hot electrode of the microstrip line to reduce propagation losses. The thickness of the buffer layer is about 0.1 μm. The depth of the waveguide is about 0.8 μm for single-mode operation, which will be discussed later. Figure 3 plots the modulation efficiency of the waveguide device for t = 0.1, 0.15, 0.3, and 0.5 mm. The modulation efficiencies are normalized by the value obtained with the 0.5-mm bulk device. The dashed line is the theoretical function (∝ 1/t²) fitted to the data. Because of the proton exchange process, the experimentally achieved modulation efficiency of the waveguide device was reduced to about 50% of that of the bulk device. The modulation efficiency improved in inverse proportion to the square of the substrate thickness t². The modulation power efficiency of the 0.1-mm waveguide device was enhanced by a factor of 11 compared with that of the 0.5-mm bulk device.
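As a quick consistency check of the factor of 11 (our arithmetic, assuming the simple 1/t² scaling and the ~50% proton-exchange degradation quoted above):

```python
# Expected power-efficiency improvement from thinning 0.5 mm -> 0.1 mm,
# reduced by the Pockels-effect degradation from proton exchange.
t_bulk, t_wg = 0.5, 0.1          # substrate thicknesses in mm
pockels_factor = 0.5             # waveguide efficiency relative to bulk
gain = (t_bulk / t_wg) ** 2 * pockels_factor
print(f"expected improvement: x{gain:.0f}")  # x12, close to the measured x11
```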
Design and Fabrication of the Waveguide Frequency Shifter
Figure 4 schematically shows the waveguide frequency shifter. Periodic domain inversion was performed on a CLT substrate with the period Λ_p = 60 μm. The interaction length was 34 mm. The thickness of the substrate was 0.1 mm. The extraordinary refractive index n_s and the group refractive index n_g of the CLT substrate were calculated to be n_s = 2.21 and n_g = 2.41, respectively, using the Sellmeier equation for 514.5 nm (Ar laser) [13]. The refractive index of the waveguide was measured to be n_f = 2.26. The refractive index of the SiO2 buffer layer was also calculated, using the Sellmeier equation, to be 1.47 [14]. Using these refractive indices, the depth of the optical waveguide was designed to be d = 0.8 μm, based on a mode calculation using the effective index method, to realize single TM-guided-mode operation. The 0.8-μm waveguide was fabricated based on the standard proton exchange technique using melted benzoic acid. The temperature of the melted benzoic acid for the proton exchange was set at 240 degrees Celsius. The relation between the exchange time (t_ex) and the depth (d) of the waveguide can be expressed as d = 2√(D_ex t_ex), where D_ex = 8.41 × 10⁻² (μm²/h) [15]. The exchange time was set to 2 hours. After the exchange, the device was thermally annealed for 30 minutes at 400 degrees Celsius to reduce the propagation losses and recover the Pockels effect.
The effective relative permittivity for the modulation microwave should be calculated to determine the half period L of the domain inversion. We calculated the effective relative permittivity using an electromagnetic field simulator (method of moments). Figure 5 shows the results. The filled circles represent the results of the simulations without the SiO2 buffer layer. The solid line is calculated using an accurate approximate formula for an isotropic substrate [16]. The formula fits the simulated results well; therefore, the effective relative permittivity with the SiO2 buffer layer can be calculated through the simulation (open circles in Figure 5). From the simulated result, the phase velocity of the modulation microwave for the t = 0.1 mm substrate is determined to be v_m = 5.48 × 10⁷ m/s. Together with the group velocity of the optical wave, the half period of the domain inversion for t = 0.1 mm is determined to be L = 3.03 mm using (1).
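The quoted half period follows directly from the QVM condition of equation (1); a quick numerical sketch with the values given above:

```python
# Half period from L = 1 / (2 * f_m * |1/v_m - 1/v_o|), i.e. the domain sign
# flips every half period of the light/microwave walk-off.
c = 2.998e8            # speed of light, m/s
f_m = 16.25e9          # modulation frequency, Hz
n_g = 2.41             # optical group index of CLT at 514.5 nm
v_o = c / n_g          # optical group velocity
v_m = 5.48e7           # microwave phase velocity from the simulation, m/s
L = 1.0 / (2 * f_m * abs(1 / v_m - 1 / v_o))
print(f"L = {L * 1e3:.2f} mm")  # ~3.0 mm, close to the quoted 3.03 mm
```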
Experimental Results and Discussion
The frequency of the modulation signal was 16.25 GHz, which was supplied from a signal generator and amplified by a commercially available power amplifier (R&K: AA380). The available maximum modulation power was 3 W.
Figure 6 shows the frequency-resolved far-field pattern of the output beam. The frequency of the output beam is resolved in the vertical direction with the diffraction grating and the Fourier transform mirror; therefore, the horizontal axis is space, whereas the vertical axis corresponds to the frequency. The modulation power was about 1.7 W.
We observed two optical beam spots resolved in both space and frequency.The frequency of the 1st diffracted component was shifted by 16.25 GHz and there were no unwanted components at this modulation power.We define the frequency shifting efficiency as the output power ratio of the 1st order diffracted components to the whole output optical power.A frequency shifting efficiency of about 50% was achieved at the modulation power of 1.7 W.
Figure 7 shows the diffraction efficiency of the 1st-order diffracted component as a function of the square root of the modulation power. The dashed curve is the theoretical curve fitted to the data. A diffraction efficiency of about 65% was achieved at the modulation power of 3 W. About 35% of the optical power was observed in other diffracted components, such as the −1st- and 2nd-order diffracted components. From the fitting, we conclude that the maximum diffraction efficiency can be achieved at a modulation power of 3.6 W. Previously, we achieved the maximum diffraction efficiency at a modulation power of about 300 W for an interaction length of 12.5 mm in the 0.5-mm bulk device [9]. Compensating for the difference in interaction length, the power efficiency of the current waveguide device should be improved by a factor of 84, which corresponds to an expected modulation power of 3.8 W. The achieved value of 3.6 W is close to this predicted value.
On the other hand, the maximum diffraction efficiency we achieved with the 0.5-mm bulk device was 82% [8]. The degradation of the diffraction efficiency might be due to the degradation of the Q factor of the device. The boundary between the Bragg regime and the Raman-Nath regime can be described in terms of the so-called Q factor, given by Q = 2πλl/Λ². The Q factor of the current waveguide device was 13.5, whereas that of the former bulk device was 23. Shortening the period of the domain inversion will recover the diffraction efficiency.
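The quoted Q of 13.5 is reproduced if λ in the Q-factor formula is taken as the wavelength inside the guide (λ0/n_f), which we assume here; a quick check:

```python
# Q = 2*pi*lambda*l / Lambda^2, with lambda taken as the in-guide wavelength.
import math

lam0 = 514.5e-9        # vacuum wavelength, m
n_f = 2.26             # waveguide refractive index
l = 34e-3              # interaction length, m
Lam = 60e-6            # domain-inversion period, m
Q = 2 * math.pi * (lam0 / n_f) * l / Lam**2
print(f"Q = {Q:.1f}")  # ~13.5 for the present waveguide device
```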
We confirmed experimentally the improvement of the modulation power efficiency at 514.5 nm by reducing the gap distance between the hot and ground electrodes of the microstrip line. Operation of the shifter at 1550 nm requires further improvement of the modulation power efficiency, by a factor of 9. This can be achieved by narrowing the gap to about 0.03 mm for a 34-mm-long device. The 0.03-mm-thick device can be fabricated by mechanical polishing combined with the proton exchange process. Crystal ion slicing and wafer bonding (smart guide) [17] is another technique for fabricating the thin-film device without degradation of the optical quality compared with the bulk crystal. Rabiei and Steier demonstrated a LiNbO3 waveguide modulator at 1550 nm with a low propagation loss in which the waveguide was fabricated using the smart-guide technique [18]. Using the Sellmeier equation, the group refractive index is calculated to be about 2.17 at 1550 nm, which requires a slight modification of the period of the domain inversion for the QVM condition. On the other hand, the propagation characteristics of a thin-film microstrip line can be calculated using an equivalent-circuit model [19]. The effective relative permittivity for the 0.03-mm-thick microstrip line with the SiO2 buffer layers is estimated to be about 28. Using the calculated values of the group refractive index at 1550 nm and the effective relative permittivity, the modified half period of the periodic domain inversion is calculated to be 2.96 mm. The Q factor of the device with 60 μm domain inversion at 1550 nm will be about 40, which promises a frequency conversion efficiency of more than 80% [8]. It is feasible to implement a frequency shifter for the 1550 nm band operating at 16 GHz with more than 80% efficiency.
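A quick numerical sketch of the 1550 nm projections above, under the same assumptions (the QVM form for L, the in-guide wavelength for Q, and an assumed waveguide index of about 2.2 at 1550 nm):

```python
# 1550 nm redesign: half period from the QVM condition and Q factor estimate.
import math

c, f_m, Lam, l = 2.998e8, 16.25e9, 60e-6, 34e-3
v_o = c / 2.17                      # optical group velocity at 1550 nm
v_m = c / math.sqrt(28)             # from the estimated effective permittivity
L = 1.0 / (2 * f_m * abs(1 / v_m - 1 / v_o))
Q = 2 * math.pi * (1550e-9 / 2.2) * l / Lam**2
print(f"L = {L * 1e3:.2f} mm, Q = {Q:.0f}")  # close to the quoted 2.96 mm and Q ~ 40
```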
Improvement of the modulation power efficiency will also increase the achievable operation frequency of the device. The thin-film microstrip line supports THz wave transmission, although stronger attenuation appears above 300 GHz, caused by an increasing amount of radiation loss [20]. A 2 W Q-band (30 to 50 GHz) power amplifier is commercially available. Our technique therefore promises a low-loss upconverter connecting millimeter-waves and 1550 nm optical waves.
Conclusion
We proposed a waveguide-based frequency shifter in which the modulation power efficiency is improved by a factor of 11 compared with the conventional bulk device. At the modulation power of 3 W, we achieved a frequency shift of 16.25 GHz with an efficiency of 65%. The improvement of the modulation power efficiency enabled us to demonstrate the continuous operation of the shifter.
Figure 1: (a) Basic structure of the domain-inverted ferroelectric crystal. The width and the center position of the domain-inverted region are L and y_0, respectively. (b) Schematic of the frequency shifting operation.
Figure 2: Cross-section views of (a) the conventional bulk device and (b) the proposed waveguide device.
Figure 3: Normalized modulation efficiency of the waveguide device versus substrate thickness.
Figure 4: Schematic of the waveguide frequency shifter.
Figure 5: Effective relative permittivity of the microstrip line. Filled and open circles are calculated based on the electromagnetic field simulator (method of moments) for the substrate of CLT only and CLT with the SiO2 buffer layer, respectively. The solid curve is calculated using an accurate approximate formula for an isotropic substrate.
Figure 6: Far-field pattern of the output beam with resolved frequency. The horizontal axis is space. The vertical axis corresponds to the frequency.
Figure 7: Diffraction efficiency as a function of the square root of the modulation power. The filled circles show the experimental data. The dashed curve shows the theoretical curve fitted to the data. | 3,854.2 | 2012-10-18T00:00:00.000 | [
"Physics",
"Engineering"
] |
Practical reference-frame-independent quantum key distribution systems against the worst relative rotation of reference frames
Reference-frame-independent quantum key distribution (RFI-QKD) can generate secret keys under a slow drift of reference frames. However, the performance of practical RFI-QKD systems deteriorates with the increasing drift of reference frames. In this paper, we mathematically demonstrate the worst relative rotation of reference frames for practical RFI-QKD systems, and investigate the corresponding performance with optimized system parameters. Simulation results show that practical RFI-QKD systems can achieve quite good performance even against the worst relative rotation of reference frames, which exhibits the feasibility of practical QKD systems with freely drifting reference frames. Furthermore, we propose a universal estimation method for the secret key rate in practical RFI-QKD systems, which conforms to the nature of RFI-QKD better than the usual estimation method.
Introduction
Based on the theory of quantum physics, quantum key distribution (QKD) [1] provides a way of generating information-theoretically secure keys between two distant parties, Alice and Bob, even in the presence of a malicious eavesdropper, Eve. Since the first Bennett-Brassard-1984 (BB84) protocol was proposed in 1984, a series of works has been devoted to improving the practical performance of QKD systems [2][3][4][5][6][7][8][9][10][11][12][13][14]. Generally, in most of these systems, the complicated, real-time operation of calibrating reference frames is indispensable to ensure the regular running of practical QKD systems [4,[11][12][13][14]. However, the calibration of reference frames may complicate QKD systems and reduce key generation rates.
Fortunately, reference-frame-independent QKD (RFI-QKD) [15] can bypass the calibration of reference frames and generate secret keys under a slow drift of reference frames. In RFI-QKD, Alice (Bob) adopts three orthogonal bases Z, X, and Y to encode (decode) quantum states, and her (his) local bases are denoted as Z_A(B), X_A(B), and Y_A(B), where Z, X, and Y are the three Pauli matrices. The Z basis can be well aligned for common encoding schemes, and the X and Y bases may drift slowly with an unknown angle β. In other words, the bases satisfy Z_B = Z_A, X_B = cos β X_A + sin β Y_A, and Y_B = cos β Y_A − sin β X_A. Alice and Bob can distill secret keys from data in the Z basis, and estimate Eve's information from data in the X and Y bases.
To date, there have been a lot of theoretical and experimental reports on RFI-QKD [16][17][18][19][20][21][22][23][24][25][26] due to its immunity against the slow drift of reference frames. However, the performance of practical RFI-QKD systems worsens with the increasing drift of reference frames, and their performance will become the worst when the relative rotation of reference frames reaches its maximum. Hereafter, the maximum relative rotation of reference frames is termed the worst relative rotation of reference frames. Hence, it is vital to investigate the worst relative rotation of reference frames of practical RFI-QKD systems and the corresponding performance.
In this paper, we mathematically demonstrate the worst relative rotation of reference frames for practical RFI-QKD systems, and investigate the corresponding performance with fully optimized parameters. Simulation results show that practical RFI-QKD systems can achieve quite good performance even against the worst relative rotation of reference frames, which obviously exhibits the possibility of practical QKD systems with freely drifting reference frames. Moreover, we propose a universal estimation method for the secret key rate in practical RFI-QKD systems, which does not need a good estimation of the relative rotation β.
The worst relative rotation of reference frames for practical RFI-QKD systems
For simplicity, we demonstrate the worst relative rotation of reference frames for RFI-QKD with the weak coherent source without statistical fluctuations. In order to make the demonstration more practical, we introduce the three-intensity decoy-state method [27][28][29] into RFI-QKD, where Alice randomly modulates her quantum states into signal states of intensity μ, decoy states of intensity ν, and vacuum states of intensity ω. For signal (decoy) states of intensity μ(ν), Alice randomly chooses bases Z A , X A , or Y A , and for vacuum states of intensity ω, she does not choose any bases. Bob randomly chooses bases Z B , X B , or Y B to measure the quantum states he received. After the quantum communication phase, Alice and Bob can obtain a series of gains and quantum bit error rates (QBERs).
Here, we first give the related system parameters and models used in this demonstration. d (η_d) denotes the dark count rate (detection efficiency) of the single-photon detectors, α (L) represents the loss coefficient (length) of the standard fiber link, η = η_d × 10^(−αL/10) denotes the overall transmission efficiency between Alice and Bob, e_d characterizes the intrinsic error rate of the optical system, and f measures the inefficiency of the key reconciliation process. In this paper, we use the typical parameters d = 3 × 10⁻⁶, η_d = 14.5%, α = 0.2 dB/km, e_d = 1.5%, and f = 1.16 for simulation. Moreover, two threshold single-photon detectors are adopted at Bob's side, and the valid detection events are those in which only one detector clicks. As for the double-click events in real-life RFI-QKD systems, Bob assigns a random bit value to one detector. The photon-number distribution of the phase-randomized weak coherent source is P_i(λ) = e^(−λ) λ^i / i!, where λ is the intensity of the weak coherent source and i is the number of photons. Obviously, the gain of vacuum states, denoted as Q_ω, is Q_ω = 2d(1 − d), and the QBER of vacuum states, denoted as E_ω, is E_ω = 1/2. The gain of signal or decoy states of intensity λ prepared in basis ζ_A and measured in basis ζ_B is denoted as Q^λ_(ζ_A ζ_B); with the above detection and channel model, Alice and Bob can obtain explicit expressions for the different gains (for example, Q^λ_(X_A X_B)) and, with the same procedure, the corresponding QBERs E^λ_(ζ_A ζ_B).
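A minimal numerical sketch of this channel model (our code; it reproduces the overall transmission, the Poissonian photon-number statistics, and the vacuum-state gain, not the full gain/QBER expressions):

```python
# Channel-model building blocks for the decoy-state RFI-QKD simulation.
import math

d, eta_d, alpha, L_km = 3e-6, 0.145, 0.2, 100.0
eta = eta_d * 10 ** (-alpha * L_km / 10)           # overall transmission
P = lambda i, lam: math.exp(-lam) * lam**i / math.factorial(i)

Q_vac = 2 * d * (1 - d)                            # vacuum-state gain, E_vac = 1/2
print(f"eta = {eta:.3e}, P_1(0.6) = {P(1, 0.6):.3f}, Q_vac = {Q_vac:.2e}")
```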
Obviously, the gain of vacuum states, denoted as Q ω , is = - , 2 and the QBER of vacuum states, denoted as E ω , is The gain of signal or decoy states of intensity λ prepared in basis ζ A and measured in basis ζ B , denoted as z z l Q A B , can be given as, (4), Alice and Bob can obtain explicit expressions of different gains. We take l Q X X A B for example to show how to get these explicit expressions. First, l P X X Similarly, we can easily get With the same procedure, we can obtain . With the above gains and QBERs, they can estimate the lower bound of the single-photon yield [29] and the upper bound of the single-photon error rate [29] in ζ A ζ B n = - where H is the binary entropy function given by 1 log 1 2 2 , and I E denotes Eve's information. Particularly, Eve's information is = - As shown in equation (17), the relative rotation of reference frames β only affects Eve's information I E . More specifically, β only affects C. Hence, to investigate the performance of practical RFI-QKD systems against the worst relative rotation of reference frames β, we only need to investigate the performance of C against the worst β. In our demonstration, we consider RFI-QKD with the slow drift of reference frames, where β can be simply deemed as constant in a short period of time. Interestingly, [22,24] assume β suffers from certain variation.
To investigate the performance of C against β, we temporarily set μ = 0.6, ν = 0.1, and L = 100 km, which obviously will not change the behavior of C against β. First, with equations (16)-(21), we can demonstrate by calculation that C is a periodic function of β, and the minimal positive period of C is π/2. In other words, C(β) = C(β + nπ/2), where n = ±1, ±2, …. Moreover, C satisfies C(π/4 + β) = C(π/4 − β), which means the symmetry axis of C is β = π/4. Figure 1 shows the performance of C against the relative rotation of reference frames β over two periods. It can be easily seen that the minimal positive period of C is π/2, and the performance of C reaches its worst at the symmetry axes, where β is π/4 or 3π/4. Then, in the domain (0, π/2), we calculate the derivative of C, denoted as C′, and find that C′ < 0 for β < π/4 and C′ > 0 for β > π/4, which illustrates that C reaches its minimum value at β = π/4. Figure 2 shows the performance of C′ against the relative rotation of reference frames β in (0, π/2). Hence, considering the periodicity of equations (17) and (21), we demonstrate that the worst relative rotation of reference frames for RFI-QKD is β = mπ/4, where m is odd. The physics behind this demonstration can be explained by means of the Bloch sphere: (1) β = 0 means that Alice's X_A and Y_A bases align perfectly with Bob's X_B and Y_B bases, which is the ideal case; (2) β = π/2 means that Alice's X_A is Bob's Y_B and Alice's Y_A is Bob's X_B, which is in essence the ideal case due to the high symmetry of C; and (3) β = π/4 means the worst-aligned reference frames between Alice and Bob, which is the worst relative rotation of reference frames for RFI-QKD. Considering the periodicity of equation (17), we can deduce that the performance of RFI-QKD is worst at β = mπ/4 (m odd).
Simulation
With the above demonstration, we consider a reasonable data size of N = 10¹¹ for statistical fluctuations [31] in practical RFI-QKD systems, and investigate the performance in [0, π/2] by the full parameter optimization method [32]. Notice that the domain [0, π/2] is ample for studying the performance of practical RFI-QKD systems due to the periodicity of equations (17) and (21). For comparison purposes, we investigate the performance of practical RFI-QKD with β = 0, π/8, π/4, and π/2. Simulation results are shown in figure 3. In figure 3, the solid lines from right to left denote the secret key rate of RFI-QKD with β = 0, π/8, and π/4, respectively, and the dotted lines denote the remaining case. It should be noted that, to clearly show the performance of RFI-QKD against the worst relative rotation of reference frames, the results in figure 3 are simulated with the usual method.
Conclusions
In conclusion, we have demonstrated the worst relative rotation of reference frames for practical RFI-QKD, and investigated its performance with optimized parameters. Simulation results show that RFI-QKD can achieve quite good performance against the worst relative rotation of reference frames, which clearly demonstrates the possibility of practical QKD systems with freely drifting reference frames. We believe practical QKD systems with freely drifting reference frames will easily find a place in the satellite-to-ground QKD scenario [11,12] or other scenarios where it is difficult to calibrate the reference frames. Furthermore, we propose a universal estimation method for the secret key rate in practical RFI-QKD systems, which conforms to the nature of RFI-QKD better than the usual estimation method. | 2,515.8 | 2018-05-25T00:00:00.000 | [
"Computer Science"
] |
Analyzing the Impact of Ambient Temperature Indicators on Transformer Life in Different Regions of Chinese Mainland
Regression analysis is applied to quantitatively analyze the impact of different ambient temperature characteristics on transformer life at different locations of Chinese mainland. 200 typical locations in Chinese mainland are selected for the study. They are specially divided into six regions so that the subsequent analysis can be done in a regional context. For each region, the local historical ambient temperature and load data are provided as input variables of the life consumption model in IEEE Std. C57.91-1995 to estimate the transformer life at every location. Five ambient temperature indicators related to the transformer life are included in the partial least squares regression to describe their impact on the transformer life. According to a contribution measurement criterion of partial least squares regression, three indicators are conclusively found to be the most important factors influencing the transformer life, and an explicit expression is provided to describe the relationship between the indicators and the transformer life for every region. The analysis result is applicable to areas where the temperature characteristics are similar to those of Chinese mainland, and the expressions obtained can be applied to other locations that are not included in this paper if these three indicators are known.
Introduction
The life of a transformer is defined as the life of the paper insulation [1]. The paper insulation deteriorates under the influence of several factors, primarily temperature, moisture content, and oxygen content. Among these factors, temperature has the major impact because the contribution of moisture or oxygen to insulation deterioration can be minimized with modern oil preservation systems [2].
The temperature of a transformer, often considered as the hot spot temperature (HST), is primarily controlled by ambient temperature and load, with ambient temperature being the dominant factor. On one hand, some types of load are directly affected by ambient temperature, such as the cooling load in summer: the higher the ambient temperature, the greater the load. On the other hand, the load capability of a transformer is generally governed by ambient temperature. Transformers have to operate under the prescribed limit of HST, and ambient temperature is the uncontrollable one of the two influential factors of HST (ambient temperature and load); therefore, if the ambient temperature is high, the load capability of a transformer is correspondingly low, and vice versa. From these two aspects, it can be seen that ambient temperature is an important factor in estimating the transformer life.
Several studies have examined the impact of an increase in ambient temperature on transformer life. Reference [3] used a standardized engineering approach according to IEEE Standard C57.91 [4] to preliminarily assess the impact of increased ambient temperature on transformer loss of life at five locations in the USA. For an assumed 4 °C rise in ambient temperature during 1900-2100, the predicted loss of life in the interval from 1990 to 2045 rises approximately 32% at a relatively warm location and 8% at a relatively cold location. Reference [5] applied an improved model of top oil temperature rise, differing from the IEEE C57.91 model, to assess the impact of ambient temperature change on transformer life, based on seven transformers and four loading test beds specified in IEEE C57.115. With the assumption of a temperature rise of 3.5 °C in 100 years, a probabilistic model of ambient temperature rise was used to conclude that the reduction in the life of a transformer was about 3-6 years for the case studied, and that there was a marked difference in the mean life of a transformer for several different loading conditions. These studies focus on the impact of ambient temperature rise on transformer life; the impact of different temperature characteristics on transformer life at different locations is not included. Actually, the ambient temperature characteristics of one location have a great impact on the local transformer life. For example, the transformer life in a warmer area is shorter than that in a colder area. Furthermore, there are many indicators portraying temperature characteristics in meteorology, and the key issue related to transformer life prediction and power system operation is which indicators are most important for the transformer life.
This paper focuses on quantitatively analyzing the impact of different ambient temperature characteristics on transformer life at different locations of Chinese mainland and attempts to find the most important temperature indicators for transformer life estimation based on regression analysis. Chinese mainland is selected for study due to its vast territory and diverse climates. In practical situations, differences in latitude, longitude, or altitude result in complex temperature characteristics; different temperature characteristics cause different values of transformer life. The life consumption model in IEEE Std. C57.91-1995 [2] is employed to estimate the transformer life at 200 typical locations of Chinese mainland. These locations are specially divided into six regions. For each region, the local historical temperature and load data are provided as input variables of the life consumption model to estimate the transformer life at every location. Then, the partial least squares regression (PLSR) method is applied to construct the regression between the transformer life and five temperature indicators. Finally, based on a criterion measuring the contribution of temperature indicators in PLSR, three indicators are identified as the most important factors and involved in the regression analysis for every region. The relationship between the transformer life and these three temperature indicators is formulated with a simple and acceptable equation for every region, and the equations can be used for life estimation at locations that are not included in this paper.
Transformer Life Estimation at Different Locations
When the hot spot temperature θ_H is at the reference temperature 110 °C, the aging acceleration factor F_AA is equal to 1. Based on this definition, the loss of life in a given time period is proposed as [2]

Loss of life (%) = F_EQA × t × 100 / (Normal insulation life),  with  F_EQA = Σ_{n=1}^{N} F_AA,n Δt_n / Σ_{n=1}^{N} Δt_n,   (3)

where F_EQA is the equivalent aging factor over the total time period t, F_AA,n is the aging acceleration factor during the time interval Δt_n, N is the total number of time intervals, and "Normal insulation life" is the proposed value of transformer life at the reference temperature 110 °C.
According to the model, ambient temperature and load are two important root causes of life consumption. The configuration of these two variables is described subsequently to calculate the transformer life. In addition, to simplify the calculation of the transformer life, ambient temperature and load are assumed to be annually cyclic, so that only a sample annual temperature curve and a sample annual load curve on an hourly basis are needed. With this assumption, the life value obtained for one location may not be very accurate, but this has little effect on the analysis result because the aim of this paper is to compare the life values at different locations, and the assumptions and simplifications in the calculation process are the same for all locations.
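To make the aging model concrete, the following is a minimal Python sketch of equation (3) under the standard IEEE C57.91 aging-acceleration formula for 65 °C average-winding-rise insulation; the computation of the hot spot temperature from ambient temperature and load (top-oil and hot-spot rise equations) is omitted here, and the 180000 h normal insulation life anticipates the value used later in this paper:

```python
import numpy as np

def aging_acceleration_factor(theta_h):
    """IEEE C57.91 aging acceleration factor F_AA; equals 1 at the
    110 C reference hot spot temperature (theta_h in degrees C)."""
    return np.exp(15000.0 / 383.0 - 15000.0 / (theta_h + 273.0))

def life_years(theta_h_hourly, normal_life_hours=180000.0):
    """Transformer life implied by one annually cyclic hourly HST curve
    (8760 values), via the equivalent aging factor F_EQA of equation (3)."""
    f_eqa = aging_acceleration_factor(np.asarray(theta_h_hourly)).mean()
    return normal_life_hours / f_eqa / 8760.0

# A constant 110 C hot spot consumes exactly the normal insulation life:
print(life_years(np.full(8760, 110.0)))   # ~20.5 years (= 180000 h / 8760)
```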
Hourly Ambient Temperature Curve
Ambient temperature is an input variable of the IEEE model. We obtain historical temperature records from a total of 200 meteorological stations for this research [6]. These locations have a variety of temperature and load characteristics, so they can provide a relatively comprehensive description of the different climate and load types in Chinese mainland. Figure 1 illustrates the 200 locations on the map, and the important cities (capital cities, municipalities directly under the central government, or capitals of autonomous regions) are labeled specially. The locations cover the latitude range approximately from 19°N to 51°N and the longitude range from 76°E to 132°E, and historical temperature records are available from 1951 to 2011 for most meteorological stations. The temperature data include the daily maximum temperature (T_max,i,j, i = 1, 2, ..., 365, j = 1, 2, ..., 61) and the daily minimum temperature (T_min,i,j).
To obtain a sample annual temperature curve on an hourly basis for every location, the mean daily maximum temperature T̄_max,i and the mean daily minimum temperature T̄_min,i are calculated by averaging the corresponding records over the 61 years. They are used to obtain the hourly temperature curve according to [7]

T_{h,i} = T̄_max,i - r_h (T̄_max,i - T̄_min,i),  h = 1, 2, ..., 24, i = 1, 2, ..., 365,

where T_{h,i} is the temperature at hour h of day i and r_h is the temperature ratio shown in Figure 2 [7]. Hence, the hourly temperature curves in the sample year for the 200 locations are formed.
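A small sketch of this expansion is given below; the 24 temperature-ratio values r_h come from Figure 2 of the paper, so the profile used here is only a placeholder shape:

```python
import numpy as np

# Placeholder for the temperature ratio r_h of Figure 2: 0 at the hour of
# the daily maximum (taken as 15:00 here) and 1 at the daily minimum.
r_h = np.clip(np.abs(np.arange(24) - 15) / 15.0, 0.0, 1.0)

def hourly_temperature(t_max_daily, t_min_daily, ratio=r_h):
    """Expand mean daily max/min temperatures (365 values each) into an
    8760-value hourly curve via T[h,i] = Tmax[i] - r[h]*(Tmax[i]-Tmin[i])."""
    t_max = np.asarray(t_max_daily)[None, :]            # shape (1, 365)
    t_min = np.asarray(t_min_daily)[None, :]
    curve = t_max - ratio[:, None] * (t_max - t_min)    # shape (24, 365)
    return curve.T.ravel()                              # hour by hour

# Example with a synthetic seasonal cycle:
days = np.arange(365)
t_max = 20 + 12 * np.sin(2 * np.pi * (days - 100) / 365)
print(hourly_temperature(t_max, t_max - 8).shape)       # (8760,)
```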
Hourly Load Curve.
Load is the other input variable of the IEEE model. We set the hourly load curve for every location according to the statistics data [8, 9]. The load repeats both daily and annually, described by hourly and monthly load variation curves. For the hourly load variation curves, all 200 locations are hypothesized to have the same shape as Figure 3 because the actual differences are not significant. However, the monthly load variation curves differ significantly from location to location due to many influential factors. Based on the load statistics [8, 9], the locations are divided into six regions as Figure 4 illustrates, namely Dongbei, Huabei, Huadong, Huazhong, Xibei, and Nanfang; the number of locations included in every region is given in Table 1. The locations in one region have similar load characteristics, while the difference between two regions is large. The regional average monthly load variation curves are given in Figure 5 according to [8, 9]. They show that every region generally has a summer load peak and a winter load peak in one year, but some have a larger winter load (Figure 5(a)) and some a larger summer load (Figure 5(b)). Accordingly, the monthly load variation curves can be roughly classified into two categories.
The hourly load factor is then obtained as

L_{h,m} = K_m × k_h / k̄,

where L_{h,m} is the load factor in hour h of month m, K_m is the average monthly load factor in Figure 5, k_h is the hourly load variation in Figure 3, and k̄ is the average load factor in Figure 3.
As a result, all locations in one region have the same hourly load curve. Such treatment facilitates the analysis in Section 3.
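A minimal sketch of this construction (the monthly factors and hourly variation below are made-up placeholders for the Figure 5 and Figure 3 curves):

```python
import numpy as np

def hourly_load_curve(monthly_factor, hourly_variation, days_in_month):
    """Annual hourly load curve via L[h,m] = K[m] * k[h] / mean(k)."""
    k = np.asarray(hourly_variation)
    day_profile = k / k.mean()                      # normalized daily shape
    months = [np.tile(K * day_profile, d)           # repeat for each day
              for K, d in zip(monthly_factor, days_in_month)]
    return np.concatenate(months)                   # length 8760

K_m = [0.8, 0.75, 0.7, 0.65, 0.7, 0.85, 0.95, 1.0, 0.9, 0.75, 0.8, 0.9]
k_h = 0.7 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, 24, endpoint=False))
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
print(hourly_load_curve(K_m, k_h, days).shape)      # (8760,)
```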
Local Transformer Life Estimation.
The local hourly ambient temperature and load curves in the sample year are input into the IEEE life estimation model. The parameters of the sample transformer used are shown in Table 2, most of which are extracted from [10]. The "Normal insulation life" in (3) is set to 180000 h according to [2]. The calculated transformer life values at all locations are shown in Figure 6; the life values range from about 10.6 to 149.3 years. It can be seen that the distribution of transformer life has a certain geographical feature, which will be discussed in detail in the next section.
Analyzing the Impact of Ambient Temperature on Transformer Life
Section 2.4 obtained the transformer life values at 200 locations and concluded that the transformer life has a geographical feature. In this section, we will discuss the reason for it by analyzing the temperature and load characteristics. Then, the relationship between ambient temperature indicators and transformer life will be quantified by regression analysis.
The Temperature and Load Characteristics in Chinese Mainland
Figure 6 shows that the distribution of transformer life has a certain geographical feature, which can be inferred from the geographical features of temperature and load. After the hourly temperature curve in the sample year is obtained in Section 2.2, many indicators describing the temperature characteristics can be acquired, such as the annual maximum (minimum) temperature and the annual mean temperature (AMT). Figure 7 illustrates the annual mean temperature and the annual temperature range (ATR) at the 200 locations to give a general description of the temperature characteristics in Chinese mainland. AMT shows the annual mean temperature level at one location, and ATR is the mean temperature difference between the warmest and coldest months during one year, demonstrating the temperature amplitude variation mostly between summer and winter. ATR can be obtained according to

ATR = max_m T̄_m - min_m T̄_m,  T̄_m = (1 / (24 D_m)) Σ_{d=1}^{D_m} Σ_{h=1}^{24} T_{h,d,m},

where T_{h,d,m} is the temperature in hour h of day d of month m and D_m is the number of days in month m. Figure 7(a) shows that the locations can be approximately divided into north and south, and it is warmer in the south than in the north on an annual mean temperature basis. Figure 7(b) illustrates that the temperature difference from large to small is in the northern area, central area, and southern area. Generally, the temperature levels in the warmest month are similar all across the country, and the differences in ATR primarily come from the coldest month. Therefore, the northern area has the lowest temperature level in the coldest month, and the climate of the southern area is correspondingly more moderate.
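Both indicators follow directly from the hourly curve; a minimal sketch:

```python
import numpy as np

def temperature_indicators(hourly_temp, days_in_month):
    """AMT and ATR from an 8760-value hourly temperature curve."""
    hourly_temp = np.asarray(hourly_temp)
    amt = hourly_temp.mean()                        # annual mean temperature
    monthly_means, start = [], 0
    for d in days_in_month:                         # mean of each month
        monthly_means.append(hourly_temp[start:start + 24 * d].mean())
        start += 24 * d
    atr = max(monthly_means) - min(monthly_means)   # warmest minus coldest
    return amt, atr
```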
For the load characteristics, it can be found from Figures 4 and 5 that North China has a larger winter load and South China a larger summer load. The load has characteristics similar to the temperature. One reason is that temperature is a primary influential factor of the load. For the regions in Figure 5(a), the cold winter weather causes a larger heating load in winter compared to the cooling load in summer. This phenomenon is more pronounced in Dongbei than in the other two regions in Figure 5(a) due to the significantly larger winter load resulting from its extremely cold winters. For the regions in Figure 5(b), their indistinct seasons and mild temperatures throughout the year result in a larger cooling load in summer than the heating load in winter.
The discussion above demonstrates the preliminary characteristics of temperature and load distribution in Chinese mainland. Different types of ambient temperature and load will result in different values of life; therefore, the distribution of transformer life also has its characteristics at these locations. In addition, the ambient temperature characteristics largely affect the characteristics of the load. Therefore, the ambient temperature is an extremely important factor influencing the life of a transformer in normal operation.
The Impact of Ambient Temperature on Transformer Life.
From Figures 6 and 7, we can find that there is a rough positive correlation between the transformer life and AMT and a rough negative correlation between the life and ATR. It is worth mentioning that because all locations in one region have the same hourly load curve in the sample year, as discussed in Section 2.3, the differences in transformer life among the locations in one region are considered to come primarily from the different temperature characteristics. This treatment is a starting point for the analysis of the relationship between temperature characteristics and transformer life. Consequently, the similarities between the life distribution and the temperature distribution can be found, and their relationship can be investigated and analyzed in a regional context.
The Selection of Regression Variables
The regression technique [11], a widely used relationship assessment method, is performed to assess the relationship between transformer life and ambient temperature and to find the significantly influential temperature indicators. The temperature indicators involved in the regression analysis are selected first. AMT is chosen as a representative temperature indicator reflecting the overall temperature level of one location. Generally, it affects the transformer life strongly because the life is consumed greatly when the ambient temperature is high, according to the Arrhenius theory [12] (when the ambient temperature is high, the transformer is generally subjected to a heavier load at the same time, which also increases the life consumption). Based on this theory, another two indicators, the mean temperature of the warmest day (MTWD) and the mean temperature of the warmest month (MTWM), are included. Additionally, the diurnal temperature range (DTR) and ATR are included to take the temperature range into account. DTR is defined as the temperature difference between the maximum and minimum temperatures during the day, describing the temperature amplitude variation within one day. These five temperature indicators provide a relatively comprehensive description of the temperature characteristics of one location, and they are denoted x_1 (AMT), x_2 (MTWD), x_3 (MTWM), x_4 (DTR), and x_5 (ATR).
The dependent variable involved in the analysis is the transformer life. An indicator-life scatterplot can preliminarily describe the relationship between transformer life and temperature indicators. For example, Figure 8 illustrates the relationship for two indicators in Huazhong, where it is approximately of an exponential form. A similar conclusion can also be drawn for the other five regions. Actually, this fact is associated with the exponential form of (1). The natural logarithm of the variable "transformer life" is therefore taken before the regression analysis so that multiple linear regression (MLR) can be conducted conveniently. The log-transformed transformer life, denoted y, is the dependent variable of the regression analysis.
Partial Least Squares Regression.
Correlation coefficient analysis is applied to check the correlations among these variables. Table 3 shows the result for Huadong as an example. It can be seen that there is a strong correlation between y and every x_i (i = 1, 2, ..., 5), which confirms the important influence of the temperature indicators on the transformer life.
In addition, there are also close correlations among the x_i themselves, which indicates multicollinearity among the independent variables. This multicollinearity problem must be addressed in order to find the most crucial temperature indicators influencing the transformer life and to analyze the relationship between the indicators and the transformer life.
Under this situation, the classical MLR is hard to employ because it will produce an unstable or impractical solution [13], and PLSR is a good alternative because it is more robust [13][14][15][16]. PLSR combines the functions of principal component analysis and MLR, aiming at predicting a set of dependent variables from a set of independent variables [15]. It constructs a new set of orthogonal factors extracted from the independent variables, often called latent variables or components, to predict the dependent variables. The latent variables capture most of the information in the independent variables and have the best predictive power; therefore, the regression can be constructed using fewer components than the number of independent variables, and the dimensionality of the regression problem is reduced [14,15]. In addition, because finding the most influential variables on the dependent variables interests many analysts, the variable importance in the projection (VIP) scores obtained by PLSR have received increasing attention as an important measure of each independent variable [17,18]. VIP is a criterion measuring the contribution of the independent variables to the regression in PLSR, and its analytical definition can be found in [17]. VIP will be applied to measure the importance of each temperature indicator.
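A sketch of PLSR with VIP scores using scikit-learn is shown below; the VIP computation follows the standard analytical definition in [17], and the data is a synthetic stand-in, not the paper's 200-location dataset:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable Importance in the Projection for a fitted PLSRegression."""
    t = pls.x_scores_                     # (n, A) latent-variable scores
    w = pls.x_weights_                    # (p, A) X weights
    q = pls.y_loadings_                   # (1, A) y loadings
    p, _ = w.shape
    ss = (q[0] ** 2) * np.sum(t ** 2, axis=0)     # explained SS per component
    w_norm = w / np.linalg.norm(w, axis=0)        # column-normalized weights
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# Example: y = ln(life) regressed on x1..x5 (synthetic stand-in data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 4.0 - 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)
pls = PLSRegression(n_components=2).fit(X, y)
print(vip_scores(pls))                    # VIP > 1 flags influential variables
```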
The Calculation Result and Discussion.
Based on the PLSR, one can construct an estimation formula of transformer life:

y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_4 + b_5 x_5,   (8)

where b_i (i = 0, 1, ..., 5) are the regression coefficients. The red markers in Figure 9 demonstrate that, with these five indicators, the predicted value of the transformer life from (8) (horizontal axis) and the actual value of life (vertical axis) are in good agreement, which verifies the correctness of the regression result. Figure 10 illustrates the VIP values of the five variables in the six regions. It can be seen that the variables have different importance levels in different regions. According to their relevance in explaining y, the x_i can be classified as follows: VIP > 1.0 (highly influential), 0.8 < VIP < 1.0 (moderately influential), and VIP < 0.8 (less influential) [18]. It can be found that x_1 and x_3 have great influence on the transformer life in every region. In other words, the annual mean temperature and the mean temperature of the warmest month are important for the transformer life. For AMT, this is not surprising because transformer life has a strongly positive correlation with it based on the aforementioned discussion. The importance of MTWM implies that the loss of life in the warmest month accounts for a large percentage of the whole year. Accordingly, controlling the load level in the warmest month becomes especially significant in order to extend the lifetime of transformers in warm regions.
It can also be seen from Figure 10 that only some of the temperature indicators are strongly related to the transformer life in every region; some indicators are apparently less relevant than others, with VIP values smaller than 0.8. Additionally, there are similarities in that x_1, x_2, and x_3 are highly influential in most regions except Huadong. The difference between Huadong and the other regions may be attributed to the smaller sample in Huadong compared with the other regions, as shown in Table 1. Considering the convenience of practical application, we attempt to construct the PLSR with only x_1, x_2, and x_3. The regression coefficients are listed in Table 4, and Figure 9 also gives the regression result with three indicators. It can be seen from Figure 9 that the difference between the results with five indicators and with three indicators is small, and an acceptable level of accuracy can still be achieved with only x_1, x_2, and x_3. In this case, the VIP values of these three temperature indicators are almost equal in every region, except that the VIP of x_2 is a little smaller than those of x_1 and x_3 in Huadong.
Conclusively, AMT, MTWD, and MTWM are considered the most important indicators influencing the life of transformers in all regions of Chinese mainland. The conclusion is meaningful due to the large span in latitude and longitude and the wide variety of climatic conditions in Chinese mainland, and it is applicable to areas where the temperature characteristics are similar to those of Chinese mainland. In addition, a unified and common regression model is constructed with these three variables for all six regions in Chinese mainland. It can be applied to other locations not included in this paper if these three variables are known. Additionally, the assumption of the load configuration causes the difference in transformer life at different locations within one region to be considered as arising primarily from the different temperature characteristics. Consequently, the effect of the temperature indicators can be investigated and analyzed in a regional context.
Conclusion
To quantitatively analyze the impact of different ambient temperature characteristics on transformer life, PLSR is performed with five independent variables (annual mean temperature, mean temperature of the warmest day, mean temperature of the warmest month, diurnal temperature range, and annual temperature range) for every region. These five indicators provide a comprehensive description of the temperature characteristics and give a good prediction of the transformer life at one location. Furthermore, considering the convenience of practical application, we use VIP as a contribution criterion to assess the importance of the independent variables; annual mean temperature, mean temperature of the warmest day, and mean temperature of the warmest month are found to be the most important indicators influencing the life of transformers in all regions. The conclusion is applicable to areas where the temperature characteristics are similar to those of Chinese mainland, and the regression model with these three variables can be applied to other locations not included in this paper.
A semi supervised approach to Arabic aspect category detection using Bert and teacher-student model
Aspect-based sentiment analysis tasks are well researched in English. However, such research is lacking in the context of the Arabic language, especially with reference to aspect category detection, and most of it focuses on supervised machine learning methods that require large, labeled datasets. Therefore, the aim of this research is to implement a semi-supervised self-training approach which utilizes a noisy student framework to enhance the capability of a deep learning model, AraBERT v02. The objective is to perform aspect category detection on both the SemEval 2016 hotel review dataset and the Hotel Arabic-Reviews Dataset (HARD) 2016. The four-step framework first entails developing a teacher model that is trained on the aspect categories of the labeled SemEval 2016 dataset. Second, it generates pseudo labels for the unlabeled HARD dataset based on the teacher model. Third, it creates a noisy student model that is trained on the combined datasets (∼1 million sentences), with the aim of minimizing the combined cross-entropy loss. Fourth, an ensembling of both teacher and student models is carried out to enhance the performance of AraBERT. Findings indicate that the ensembled teacher-student model demonstrates a 0.3% improvement in micro F1 over the initial noisy student implementation in predicting the aspect categories of the combined datasets, and a 1% increase over the micro F1 of the teacher model. These results outperform both baselines and other deep learning models discussed in the related literature.
INTRODUCTION
Recent applications in business intelligence have considerably profited from access to beneficiaries' online communication via various platforms (e.g., social networks, forums, e-commerce sites, etc.). Those users' views and opinions about numerous coarse- or fine-grained services/products have impacted the design of such applications. Automatically analyzing this huge web of unstructured opinion data has been a practical research focus in recent years. Specifically, there is a surge in researchers' work on improving machine learning (ML) and deep learning (DL) techniques used for sentiment analysis (SA) in business-oriented domains (Sun et al., 2019). In this research context, SA's main objective is to discover opinions, extract the sentiments they represent, and then classify these sentiments into polarities (Pang, Lee & Vaithyanathan, 2002). Recently, the focus of SA research has shifted towards enhancing sentiment polarity detection and classification. This, alongside developments in the field of natural language processing (NLP) and DL models, and the emergence of aspect-based sentiment analysis (ABSA), has impacted current research in the field (Mowlaei, Saniee Abadeh & Keshavarz, 2020). ABSA's sentiment classification is primarily conducted on the aspect, sentence, or document level. Generally speaking, the ABSA process involves aspect term extraction (ATE), aspect category detection (ACD), opinion term extraction (OTE), and aspect sentiment classification (ASC) (Mowlaei, Saniee Abadeh & Keshavarz, 2020).
As a sub-task of ABSA, ACD involves the categorization of a given sequence into a group of pre-defined aspect categories (Zhang et al., 2021). In business, detecting the aspect categories which demarcate the domain's intelligence has recently become a priority research agenda. Particularly from a DL perspective, various supervised, unsupervised, and semi-supervised methods have been implemented to enhance the process of aspect extraction from reviews, forum content, social media, etc. These methods are implemented in practical business and marketing contexts like hotels and hospitality, tourism, dining and restaurants, e-commerce, gaming, etc. (Valdivia et al., 2020;Chauhan et al., 2020).
Implementing these DL methods for detecting coherent and representative aspects is always a challenging task for several reasons. Firstly, most current ACD research still focuses on testing supervised DL methods on labeled data, for example, datasets of user hotel reviews for which both aspect and sentiment polarity are defined. The research problem, in this case, is represented as a sequence labeling experiment which implements a sequence transduction model, e.g., long short-term memory (LSTM). Such solutions are basically context-independent (Sun et al., 2020). Attention layers enhance long-range dependencies in LSTMs and RNNs; nonetheless, they do not support parallelization, which makes them inefficient in training and implementation. What is lacking in this research design is attention to the overall semantic or contextual features of the data. These features are significant for defining the most representative aspect entities which could be utilized in an ABSA task. Therefore, what is needed are models which ensure that the contextual semantic domain is treated as input.
Secondly, reports on these supervised DL methods achieving best performance are understandably useful. Nonetheless, these reports are impractical when considering the cost of developing labeled data. Moreover, real-life user review data is unstructured, ever-changing, and unlabeled. This means it lacks a definitive identification of its main aspect entities (Dragoni, Federici & Rexha, 2019;Zhuang et al., 2020). In current ABSA research, the discussed DL solutions are primarily model-centric. These solutions seek to test and refine various implementations of models that proved their success or can be enhanced by the capabilities of other algorithms to achieve better performance (Yadav & Vishwakarma, 2020).
Data-centric (alternatively, customer-centric) designs are rarely used, especially in the context of ACD research. In such contexts, a small portion of the opinion data is available in labeled formats, whereas a larger portion of this data is unlabeled. With the former type of data, supervised learning methods are implemented, whereas unsupervised learning is chosen in most cases with the latter. Alternatively, semi-supervised learning (SSL, which here implements self-training) differs from unsupervised learning in that it can be applied to a range of problems such as clustering, regression, classification, and association. It also differs from supervised learning, which relies solely on labeled data, in that SSL utilizes both labeled and unlabeled data, thereby leading to lower manual data annotation expenses and faster data preparation (Zhu, 2005; Van Engelen & Hoos, 2020).
To these challenges, one might add the challenge of researching ABSA, more specifically ACD, in a foreign language like Arabic. Arabic, for example, demonstrates a recognizable linguistic diversity and complexity as well as well-documented dialectical variation. This complexity is further observed in business-oriented contexts. In addition, research in Arabic ABSA has mainly applied methods that required extensive preprocessing and feature extraction steps. Moreover, researchers relied on external resources such as lexicons (Bensoltane & Zaki, 2021). Even when applying DL techniques, the focus was not on context-dependent applications. Instead, solutions like word embeddings (e.g., word2vec), where each word has a fixed representation independent of its context, have been used (Abdelgwad, Soliman & Taloba, 2022).
Based on the previously discussed issues, this article aims to: (1) Implement a semi-supervised learning (self-training) approach for aspect category detection (ACD) in Arabic hotel reviews, which utilizes both labeled (SemEval 2016) and unlabeled (HARD 2016) datasets. The former is smaller in comparison to the latter, thus encouraging the adoption of a data-centric approach.
(2) Use a four-step noisy student framework to improve the performance of a base Bert model (AraBERT) in an ACD task. This process involves training a teacher model on the labeled SemEval 2016 dataset, generating pseudo labels for HARD, training a noisy student on the combination of SemEval 2016 and HARD; finally, ensembling both teacher and student models to improve the performance of the proposed model. (3) Evaluate the model on the combined datasets of labeled and unlabeled reviews using the micro F1 metric. Thus, one can summarized the main contributions of this article as follows: (a) To our knowledge, using BERT in a semi-supervised learning, self-training context to extract aspect categories from Arabic language hotel reviews has not been attempted. (b) As with any semi-supervised DL experiment, we are seeking to use the aspects extracted from a labeled hotel review dataset (SemEval 2016) in a self-training learning context, to detect and classify the aspects in a larger unlabeled Arabic hotel review dataset (HARD 2016). This contributes as well to unexplored research problems where more emphasis is placed on the data and less attention is given to model complexity. (c) To overcome the sequence dependence of some deep learning models, we use the transformer BERT (AraBERT v02) which is context-dependent, hence the contextual aspects of the reviews will be incorporated. This model overcomes the issues encountered with context-dependent solutions and even performs relatively better than attention enhanced models like RNNs and LSTMs. BERT facilities parallelization, which leads to efficient training and implementation. (d) Benefiting from research done in the field of computer vision, the Teacher-student framework is used to enhance the BERT's detection of aspect categories. To our knowledge, implementing this framework in Arabic ABSA or ACD research has not been attempted. This study is motivated by its attempt to contribute to improving ACD accuracy in Arabic language settings through implementing a self-training approach that does not rely on vast sets of labeled data, unlike previous studies. This can help organizations and researchers in getting more accurate insights from Arabic textual data, improving the quality of products and services, and getting a better understanding of customers' needs and preferences in Arabic-speaking regions.
The structure of the article is as follows. 'Review of the Related Literature' briefly reviews related work. Next, the experiments' details and the proposed solution are explained in 'Materials & Methods'. Results and findings are presented in 'Results'. In 'Discussion', we discuss the major points of the findings. Section 6 highlights future research directions. Finally, we conclude the article in 'Conclusions'.
REVIEW OF THE RELATED LITERATURE
Early research in the field of SA has considered the overall polarity of sentences in review datasets. For example, a text's polarity is perceived as representing one sentiment category, regardless of the internal entities to which this polarity might refer. Nonetheless, a sentence might include explicit or implicit references to more than one aspect: the sentence "This mobile's 6.53-inch display is suitable to my needs, but its camera is its greatest failure" has two sentiment polarities. One sentiment is positive in relation to the display, and the other is negative with reference to the camera.
The ABSA task is mainly composed of two interrelated sub-tasks. The first is aspect category detection (ACD) and the other is aspect category polarity (ACP). What is of interest to us in this research is the first sub-task. An ACD sub-task is basically a multi-label classification problem, where several review sentences, for example, are introduced as including a set of pre-defined aspect categories, e.g., (Dining, Hygiene, etc.). Hence, the aim in such a task is to detect all the aspect categories which are presented in each sentence. Some of these aspect categories will be explicitly presented whereas others will be implicit. Ultimately, one review sentence can belong to one or more of the detected categories (Majumder et al., 2020;Xu et al., 2019;Do et al. , 2019).
Aspect extraction in sentiment analysis detects different aspect terms in reviews. Topic modeling and latent Dirichlet allocation (LDA) have been adopted for extracting product aspects from user reviews, and the experimental results show that the proposed method is competitive in extracting product aspects from user reviews (Ozyurt & Akcayol, 2021). Considering Arabic sentiment analysis, and mainly hotel reviews, the authors in (Abdulwahhab et al., 2019) proposed an algorithm that analyzes the dataset using LDA to identify the aspects and their essential representative words, extracting nouns and adjectives as possible aspects in the review.
Most recent DL research has indicated the improved performance of the developed language models when using transfer learning (TL), especially, in tasks related to SA and ABSA (Chen et al., 2017;Zhang, Wang & Liu, 2018). Basically, TL involves the process of pre-training a deep neural network (DNN) with weights learned from a different task where large, mostly, web-crawled data is being used. In a DL context, this entails reusing the weights of a pre-trained model in a new model while performing one of two tasks. One task entails retaining the fixity of model weights. The other involves fine-tuning those weights depending on the new task and data. Lower generalization errors and training time reduction are reported to be two of the major outcomes of implementing TL.
In ABSA research, TL has been implemented and resulted in improved language models. In (Fang & Tao, 2019), for example, the researchers propose a TL approach to solve a multi-label classification problem in the context of an ABSA task. They hypothesize that through using bidirectional encoder representations from transformers (BERT) to classify restaurant visitors' opinions, they have been able to avoid additional preprocessing or transformation of their original data. The model achieved an accuracy of 61.65 in comparison to other tested approaches. In Tao & Fang (2020), the same researchers experimented with BERT and XLNet on different datasets: Restaurant, Wine, and Movies review datasets, and with an enhanced multi-hot encoding (reversible binary encoding). XLNet out-performed BERT on two datasets (restaurants = 66.65, and movies = 89.86), but generally, results of the experiments using BERT and XLNet have out-performed other DL techniques using LSTM, BiLSTM, CNN+LSTM plus other ML algorithms that have been used for baseline comparison.
With a specific focus on ACD, Ramezani, Rahimi & Allan (2020) have also used BERT with a supervised classifier to extract and classify 7 aspect categories from the OpoSum review dataset. Their base BERT model achieves a high F1 on 5 of the identified categories in comparison to the four other models developed as baselines. Experimentation with TL in multi-lingual ABSA or ABSA-related tasks has also been reported to improve model predictive performance. For instance, Winatmoko, Septiandri & Sutiono (2019) demonstrated the effectiveness of BERT and BERT+CRF models in OTE and ACD experiments, where BERT-CRF outperformed BERT on the Indonesian Airy Room hotel review dataset. Their models achieved an F1 of 0.92 and 0.93 on OTE and ACD respectively, which shows how enhancing context awareness with the CRF layer reflected positively on model performance. Similarly, Van Thin et al. (2021) pre-trained five BERT models (mBERT, mDistilBert, viBert4news, viBert FPT, and PhoBERT) on both mono-lingual and multilingual datasets from SemEval 2016 in the domains of hotels and restaurants. The researchers' findings indicate that PhoBERT performed better than the other models on the mono-lingual datasets, achieving an F1 of 86.53 on the restaurant dataset and 79.16 on the hotel dataset. Similarly, researchers in Pathak et al. (2021) developed two BERT-based ensemble models (mBERT E MV and mBERT E AS) to detect aspects in Hindi user reviews and perform sentiment analysis through constructing auxiliary sentences from the detected aspects and feeding them into the network. They implemented these models on the labeled IIT-Patna Hindi dataset, which covers topics related to Electronics, Mobile Apps, Travel, and Movies. Their results indicate that mBERT E MV outperforms mBERT E AS and three other ML baselines (naive Bayes (NB), decision tree (DT), and sequential minimal optimization (SMO)) on the four sub-datasets.
Regarding ACD studies on the Arabic language, baseline research experimented with ML approaches to detect aspect categories in Arabic datasets. In Al-Smadi et al. (2016), for example, an SVM has been trained on the SemEval2016 Arabic dataset with special focus on tasks like aspect category detection (ACD), Opinion target expression (OTE), etc. The SVM achieved an F1 score of 40.3 on the ACD task. Researchers in Pontiki et al. (2016) also utilized ML techniques to perform ACD on the Arabic hotel reviews in the SemEval2016 dataset. The SVM implementation achieved an F1 score of 52.114.
ACD Arabic research that implemented DL techniques has started to demonstrate comparable results. Bensoltane & Zaki (2021) evaluated four types of word embedding approaches (AraVec, FastText, domain-specific, BERT) to enhance four models (LSTM, BiLSTM, GRU, BiGRU) using the SemEval 2016 Arabic hotel reviews dataset. The BERT embeddings achieved the highest F1 of 63.6 and 65.5 when added to the GRU and BiGRU models respectively. Kumar, Trueman & Cambria (2021) have proposed a convolutional stacked bidirectional LSTM with a multiplicative attention mechanism for aspect category detection and sentiment analysis. When evaluated, the CNN and stacked BiLSTM Multi Attn model achieved the same weighted F1 (=0.52) on the SemEval 2015 dataset; however, it achieved a higher weighted F1 of 0.60 on the SemEval 2016 dataset than the CNN stacked BiLSTM Attn, which scored 0.58. Similarly, researchers in Tamchyna & Veselovská (2016) developed an LSTM network with a logistic regression layer to classify a sentence as either positive (i.e., including the E#A pair) or negative, which achieved the best F1 (=52.11) on the SemEval-2016 hotel review dataset. Table 1 summarizes the main contributions of the related literature in the field of ACD using either ML or DL methods.
Despite the reported effectiveness of transfer learning, there has been a predominant concern about how it utilizes labels in the source task to learn network weights. This leads to bias and to a less informative generalization on the target task because the classes found in the target labels are dissimilar to those in the source task (Yang et al., 2020). Therefore, researchers started experimenting with other, less biased pretraining tasks like self-training, which is a wrapper method based on a semi-supervised learning approach. Initially, a classifier is trained on a small dataset of labeled instances and then used to classify a larger unlabeled dataset. The process involves adding the most confident predictions of the supervised model to the labeled dataset, then iteratively re-training the model (Pavlinek & Podgorelec, 2017; Triguero, García & Herrera, 2015).
Though mainly achieving its state of the art in image classification tasks, self-training combined with pre-trained language models has demonstrated its usefulness in several text classification tasks (Du et al., 2020; Mukherjee & Awadallah, 2020). The scarcity of ABSA research harnessing the outcomes of self-training using the teacher-student framework was noted while conducting this review of the literature, especially in Arabic ACD. For example, researchers in Bhat, Sordoni & Mukherjee (2021) reported that a self-training approach with a teacher-student framework was implemented on the ERASER benchmark dataset, which encompasses five tasks. With specific focus on the SA task, they experimented with Base-BERT and achieved an F1 of 87 in a few-shot (25% training data) rationalization.
MATERIALS & METHODS
Generally, the current research falls within the domain of ''Subtask 1: Sentence level ABSA'', where researchers seek to identify an expressed opinion in documents regarding a target entity (e.g., a TV, a store, or a hotel). More specifically, we engage with aspect category detection (ACD) where a pair of entities (laptop) and attribute (DESIGN) are to be detected. For this, we adapt the noisy student approach proposed and mostly researched in the field of computer vision.
Datasets
To test the main assumptions of this research, two datasets have been used: the Arabic hotel reviews dataset from SemEval 2016 and HARD, the Hotel Arabic-Reviews Dataset 2016. The first dataset is one of the contributions to the multilingual fifth task in the SemEval 2016 competition, which required the provision of customer reviews for ABSA in eight languages (Arabic, English, Chinese, Dutch, French, Russian, Spanish, and Turkish) sorted under seven topics (restaurants, laptops, mobile phones, digital cameras, hotels, museums, and telecommunications). The Arabic hotel reviews dataset is extracted from booking.com and tripadvisor.com and is annotated both on the text level (2,291 texts) and the sentence level (6,029 sentences). For this dataset, aspects have been extracted, where an aspect category is decided based on a pairing of Entity (hotel, rooms, facilities, room amenities, service, location, food drinks) and Attribute (general, price, comfort, cleanliness, quality, style options, design features, miscellaneous). Each sentence can be allocated to multiple categories, resulting in a total of 34 categories. These categories are not distributed evenly in the dataset (Pontiki et al., 2016; Bensoltane & Zaki, 2021), and some of them are rare, as Fig. 1 shows.
We divided this dataset into a training set which includes 1,839 reviews, with a total of 4,802 sentences, and a Gold-standard Test Set. Figure 2 is an example from the SemEval 2016 dataset for hotel reviews. The test set contains 1,227 sentences extracted from 452 reviews as shown in Table 2.
On the other hand, the HARD dataset is larger than the SemEval 2016 dataset and comprises 373,750 uncategorized hotel reviews about 1,858 hotels in a mix of standard and dialectal Arabic. This review dataset was obtained from Booking.com in 2016 and consists of both a balanced and an unbalanced set. Whereas the balanced dataset consists of 93,700 reviews equally divided into positive and negative records (46,850 each), the unbalanced one is made of the raw 373,750 reviews. The data is arranged in several columns: hotel name, a rating (out of 5), user type (family, single, couple), room type, nights, title, positive review (+1), negative review (−1). The positive reviews make up 68% of the unbalanced HARD dataset, whereas the negative ones constitute 13% and the neutral 19%. Each record contains the review text in Arabic and the reviewer's rating on a scale of 1 to 5. In this research, the unbalanced dataset, consisting of review texts rated as either positive or negative, is used. Reviews with ratings 4 and 5 are mapped as positive, whereas reviews rated 1 and 2 are considered negative; the neutral rating is excluded. In its resulting state, the dataset comprises 93,700 reviews: 46,850 for each of the positive and negative classes (Elnagar, Khalifa & Einea, 2018). Figure 3 is an example from the HARD dataset.
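The rating-to-label mapping can be expressed compactly; in this sketch the file and column names are assumptions about the HARD release, not guaranteed field names:

```python
import pandas as pd

df = pd.read_csv("hard_unbalanced.tsv", sep="\t")   # hypothetical file name

def map_rating(r):
    """Map 1-5 ratings to sentiment labels; neutral (3) is excluded."""
    if r >= 4:
        return "positive"
    if r <= 2:
        return "negative"
    return None

df["label"] = df["rating"].apply(map_rating)        # "rating" is assumed
df = df.dropna(subset=["label"])                    # drop neutral reviews
print(df["label"].value_counts())
```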
Preparation and pre-processing
Regarding the experiment set-up, it is conducted using Python, where Ktrain is utilized as a lightweight wrapper for our TensorFlow Keras implementation. Basically, the Ktrain library supports NLP experiments and working with text data in DL problems in supervised, semi-supervised, or unsupervised learning tasks. Ktrain provides a learning rate finder, which can be used to discover an optimal learning rate for a model fitted to a specific dataset. Moreover, the Ktrain wrapper is useful for achieving improved generalization and decreasing the model loss value through various types of learning rate schedules, for example, the triangular policy and Stochastic Gradient Descent with Restarts (SGDR).
Regarding the data, both datasets have undergone a uniform preprocessing step, in which we removed punctuation and normalized the words by dropping diacritics, replacing hamzated Alif with Alif, replacing Alif Maqsura with Yaa, normalizing Teh Marbuta, and removing Waaw at the beginning of words. As for the HARD dataset, an alignment with the SemEval 2016 dataset is performed by segmenting the full HARD reviews into sentences and removing extremely long sentences, so that the distribution of sentence lengths between the two datasets becomes similar.
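A minimal sketch of the normalization described above (the Teh Marbuta and word-initial Waaw rules are common conventions and are assumptions about the exact mapping used):

```python
import re

def normalize_arabic(text):
    text = re.sub(r'[\u064B-\u0652]', '', text)              # drop diacritics
    text = re.sub(r'[\u0622\u0623\u0625]', '\u0627', text)   # hamzated Alif -> Alif
    text = text.replace('\u0649', '\u064A')                  # Alif Maqsura -> Yaa
    text = text.replace('\u0629', '\u0647')                  # Teh Marbuta -> Haa
    text = re.sub(r'(^|\s)\u0648', r'\1', text)              # strip initial Waaw
    text = re.sub(r'[^\w\s]', ' ', text)                     # remove punctuation
    return re.sub(r'\s+', ' ', text).strip()

print(normalize_arabic("وَالفُندقُ رائعٌ، والخِدمةُ مُمتازةٌ!"))
```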
Noisy student to improve BERT classification
In this study, we implemented a semi-supervised learning approach that utilizes a noisy student training framework, a promising technique that uses self-training and teacher-student learning to enhance the classification capability of AraBERT v02. The pretrained AraBERT v02 (Antoun, Baly & Hajj, 2020) was originally trained on about 8.2 billion words of MSA and dialectal Arabic. For the current implementation of AraBERT v02, the architecture and hyperparameters are identical for the teacher and student models in our experiments. The model consists of 12 hidden layers, 12 attention heads, and a hidden size of 768. After using the model to pre-process, tokenize, and prepare the dataset, the classifier head of AraBERT v02 is used to perform sequence classification for a multi-label classification problem (Herrera et al., 2016).
As far as hyperparameters are concerned, the maximum sequence length has been set to 86. The Adam optimizer is used to fine-tune the model. A triangular learning rate policy is applied with a maximum learning rate of 1e−4 to reduce the need for constant tuning of the learning rate; the objective is to achieve the best classification accuracy (Smith, 2017). The dropout rate is set to 0.1 with a hidden dropout probability of 0.3. Similarly, a batch size of 32 is applied, and the number of epochs is set to 10. Our noisy student framework adapts the methodology described in Xie et al. (2020).
(1) Simplified, the first step in this methodology involves processing the inputs from the labeled SemEval 2016 dataset to train a teacher model using a standard cross-entropy loss. (2) The second step entails generating pseudo labels for the unlabeled HARD dataset using the previously developed teacher model. It is important to note that we did not add noise to the teacher model at this stage. (3) In the third step, a noised student model (equal in size to the previously developed teacher) is trained on the combined datasets with the objective of minimizing the combined cross-entropy loss. Model noise is introduced into the student so that it does not only learn the teacher's knowledge but goes beyond it. The noise parameters implemented are specifically dropout and the stochastic depth function. We did not introduce noise into the data through augmentation, which is considered input noise. Using dropout removes neurons from the neural network (AraBERT); as this is repeated for each training example, we end up with different models whose probabilities are later averaged during the testing stage, which is similar to an ensembling step. Of significance to our research, and for purposes of comparison with ImageNet 2012 used in Xie et al. (2020), where it is assumed that a balanced dataset has a positive effect on the performance of the student model, it must be observed that our unlabeled dataset, HARD 2016, is not balanced. (4) Diverging from the fourth step described in Xie et al. (2020), which suggests an iteration process where the student becomes the teacher to generate pseudo labels for the unlabeled dataset, we opted for an ensembling technique. The objective has been to utilize the moderately better performance of the student model in combination with the teacher. The input data of the combined datasets is run through both student and teacher models, resulting in different probabilities, which are ultimately averaged. This methodology is implemented in four main steps; the general workflow of the proposed solution is shown in Fig. 4, and a code sketch is given below.
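A condensed sketch of the four steps with Ktrain follows. The checkpoint name, label list, and data variables (x_semeval, y_semeval, x_hard, x_test, aspect_categories) are illustrative placeholders, and dropout in the transformer serves as the model noise (stochastic depth would require a custom architecture):

```python
import ktrain
import numpy as np
from ktrain import text

MODEL = "aubmindlab/bert-base-arabertv02"   # assumed checkpoint name

# Step 1: train the teacher on the labeled SemEval 2016 sentences.
t = text.Transformer(MODEL, maxlen=86, class_names=aspect_categories)
trn = t.preprocess_train(x_semeval, y_semeval)        # multi-hot labels
teacher = ktrain.get_learner(t.get_classifier(), train_data=trn, batch_size=32)
teacher.fit_onecycle(1e-4, 10)
teacher_pred = ktrain.get_predictor(teacher.model, preproc=t)

# Step 2: pseudo-label HARD; keep only confident predictions (threshold 0.4).
probs = np.array(teacher_pred.predict_proba(x_hard))
keep = probs.max(axis=1) >= 0.4
x_comb = list(x_semeval) + [s for s, k in zip(x_hard, keep) if k]
y_comb = list(y_semeval) + [(p >= 0.4).astype(int)
                            for p, k in zip(probs, keep) if k]

# Step 3: train an equal-size noisy student (dropout acts as model noise).
student = ktrain.get_learner(t.get_classifier(),
                             train_data=t.preprocess_train(x_comb, y_comb),
                             batch_size=32)
student.fit_onecycle(1e-4, 10)

# Step 4: ensemble teacher and student by averaging their probabilities.
student_pred = ktrain.get_predictor(student.model, preproc=t)
p_ens = 0.5 * np.array(teacher_pred.predict_proba(x_test)) \
      + 0.5 * np.array(student_pred.predict_proba(x_test))
```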
Model evaluation
The Gold-standard Test Set, an unseen set extracted from the SemEval 2016 dataset, has been used to evaluate the teacher, student, and ensemble implementations of our methodology. To evaluate our model, we used the micro F1, a micro-averaged F1 often employed to estimate the quality of multi-label binary problems. It computes the F1 over the combined inputs of all classes (Goutte & Gaussier, 2005) by

P_micro = Σ_c TP_c / Σ_c (TP_c + FP_c),  R_micro = Σ_c TP_c / Σ_c (TP_c + FN_c),  F1_micro = 2 P_micro R_micro / (P_micro + R_micro),

where TP_c, FP_c, and FN_c are the true positives, false positives, and false negatives for class c.
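With multi-hot labels, this is simply scikit-learn's micro-averaged F1; a toy example:

```python
import numpy as np
from sklearn.metrics import f1_score

# Multi-hot ground truth and predictions for 3 sentences x 4 categories.
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0]])

print(f1_score(y_true, y_pred, average="micro"))   # TP/FP/FN pooled: 0.8
```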
RESULTS
The results, as demonstrated in Table 3, reveal a promising improvement in the performance of the AraBERT classification when implementing the noisy student framework. Generally, these results outperform baseline research and can be summarized as follows: the ensembled teacher-student model achieves the best micro F1 (0.68), improving on the initial noisy student implementation by 0.3% and on the teacher model by 1%.
DISCUSSION
Implementing a semi-supervised self-training approach using a noisy student framework, as described in 'Materials & Methods', has led to a recognizable improvement: the AraBERT classification results outperform other models presented in related work on the SemEval 2016 hotel review dataset. A preliminary observation of the results confirms our assumption that the noisy student framework improves the performance of the AraBERT language model used in this study. The ensemble teacher-student model's F1 score achieves a 0.3% improvement compared with the noisy student model and a 1% increase over that of the teacher model. As explored in detail in the findings, the proposed model with its ensemble teacher-student framework has considerably outperformed the baseline SVM-based models (Al-Smadi et al., 2016; Pontiki et al., 2016) by 27.9% in micro F1. Compared to the RNN, LSTM, and CNN implementations reported in the related literature (Tamchyna & Veselovská, 2016; Bensoltane & Zaki, 2021; Kumar, Trueman & Cambria, 2021), our model has demonstrated a significant improvement ranging from 16% to 20% in the micro F1 metric. The proposed model has also performed better than the highest-performing DL model reported in the literature (BERT embedding with BiGRU) by 2.7% on the ACD task. We can attribute the progressive improvement across the various implementations of the framework to the addition of more data instances through combining the SemEval 2016 dataset with the HARD dataset to create a large dataset consisting of ∼1 million sentences. Moreover, with the noisy student implementation, we used a probability threshold to drop the examples of unlabeled data that the model predicted with low confidence, since these examples usually indicate out-of-domain aspects. The best results were obtained with a probability threshold of 0.4: any example of the unlabeled data that the model did not predict with a probability of 0.4 or above was dropped. In addition, it was observed that the teacher model, though inferior to the student model overall, performed better on some classes. Hence, the ensemble model was created from the predictions of the teacher and noisy student implementations to exploit this by-class difference. To achieve this, different weights were tried, and the best result was obtained by giving each model the same weight of 0.5, which is equivalent to a simple mean.
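The weight search itself is a short loop over a validation split; the sketch below uses synthetic probabilities purely to illustrate the procedure:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
y_val = rng.integers(0, 2, size=(100, 34))                    # multi-hot labels
p_t = np.clip(y_val + rng.normal(0, 0.4, y_val.shape), 0, 1)  # teacher probs
p_s = np.clip(y_val + rng.normal(0, 0.3, y_val.shape), 0, 1)  # student probs

# Try ensemble weights; 0.5 (a simple mean) was best in the paper's runs.
scores = {round(w, 1): f1_score(y_val, (w * p_t + (1 - w) * p_s) >= 0.5,
                                average="micro")
          for w in np.linspace(0, 1, 11)}
print(max(scores, key=scores.get), scores)
```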
A detailed examination of the aspects, as shown in Fig. 5, reveals that out of the 34 aspects, the teacher detected 23 and failed to identify 11, whereas the student identified 22 and failed to detect 12. The inability of the model to detect those aspects is mainly attributable to their limited representation in the datasets. For example, according to Table 4, the category count of the rooms amenities#prices aspect is 13 in both the SemEval 2016 and the combined datasets, hence its F1 of 0.00 due to the limited number of instances. The service#general category achieves an F1 of 0.91 and 0.90 in the teacher and student models because the category counts in the two implementations are 218,753 and 2,221 respectively, indicative of a higher representation of this class.
CONCLUSIONS
This study utilizes the Arabic version of the BERT model, AraBERT v02, in a semi-supervised self-training approach using a noisy student framework to perform an ACD task on two datasets: a labelled and an unlabeled hotel review dataset (SemEval 2016 and HARD 2016, respectively). Experimental findings revealed that the ensemble teacher-student model achieves the best F1 (0.68), outperforming the baseline and related work on the same dataset. The by-class F1 scores are noticeably affected by the size of category representation in the datasets. Our future work in ACD research targets experimenting with other methods of introducing noise into the student model, such as augmentation, and conducting student iterations. Moreover, while adopting the same data-centric perspective, we seek to enhance the quality of the data being used through feature engineering and by dealing with the dialectical and lexical challenges of the Arabic language. It is expected that further enhancements of both the data quality and the noisy student framework, such as increasing the size of the unlabeled dataset, performing several student iterations, and experimenting with more complex BERT architectures, might significantly improve AraBERT's text classification capability and allow it to outperform existing models in the ACD task on the Arabic language.
ADDITIONAL INFORMATION AND DECLARATIONS
Funding
This project is funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia, under grant No. (J-003-612-1443). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
• Reem Alotaibi conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the article, and approved the final draft.
Data Availability
The following information was supplied regarding data availability:
| 7,345.8 | 2023-06-08T00:00:00.000 | ["Computer Science"] |
Prey-Predator Model with a Nonlocal Bistable Dynamics of Prey
Spatiotemporal pattern formation in integro-differential equation models of interacting populations is an active area of research, which has emerged through the introduction of nonlocal intra- and inter-specific interactions. Stationary patterns have been reported for nonlocal interactions in prey and predator populations in models with a prey-dependent functional response, a specialist predator and a linear intrinsic death rate for the predator species. The primary goal of our present work is to consider nonlocal consumption of resources in a spatiotemporal prey-predator model with bistable reaction kinetics for prey growth in the absence of predators. We derive the conditions of the Turing bifurcation and of the spatial Hopf bifurcation around the coexistence homogeneous steady state and verify the analytical results through extensive numerical simulations. Bifurcations of spatial patterns are also explored numerically.
Introduction
Investigation of spatiotemporal pattern formation leads to an understanding of the interesting and complex dynamics of prey-predator populations. Reaction-diffusion systems of equations are conventionally used to study such dynamics. Various forms of reaction kinetics in the spatiotemporal model give rise to a wide variety of Turing patterns as well as non-Turing patterns, including traveling waves [1-5] and spatiotemporal chaos [6,7]. Such patterns can be justified ecologically with the help of field data and experiments which confirm the presence of patches in prey-predator distributions. For example, Gause [8] showed the importance of spatial heterogeneity for the stabilization and long-term survival of species in laboratory experiments on the growth of paramecium and didinium. Luckinbill [9,10] also studied the effect of dispersal on stability as well as the persistence/extinction of populations over longer periods of time. Based on these data, prey-predator models with spatial distribution have been considered for various ecological processes [11], such as plankton patchiness [12-14], semiarid vegetation patterns [15], and invasion by exotic species [16,17] (see also [18-21]). Such models have been successful in proving long-term coexistence of prey and predator populations along with the formation of stationary or time-dependent localized patches with periodic, quasi-periodic and chaotic dynamics [6,7].
The classical representation of two-species interacting populations including the spatial aspect consists of a reaction-diffusion system of equations in the form of two nonlinear coupled partial differential equations, with non-negative initial conditions and appropriate boundary conditions. Population densities of the prey and predator species at the spatial location x and time t are denoted by u(x, t) and v(x, t), respectively. The nonlinear functions F_1 and F_2 represent the interactions among individuals of the two species.
The diffusion coefficients d_u and d_v represent the rates of random movement of individuals of the two species within the considered domain.
A wide variety of spatiotemporal patterns are described by these models, namely traveling waves, periodic traveling waves, modulated traveling waves, waves of invasion, spatiotemporal chaos and stationary patchy patterns [20-23]. Among these, only the stationary patchy pattern, which results from the Turing instability, represents a distribution that is stationary in time but non-homogeneous in space. A stable coexistence of both species occurs due to the formation of localized patches where the average population of each species remains unaltered in time, whereas the other patterns are time-dependent, with the individuals of both species following continuously changing resources.
The general assumption for the consumption of resources in spatiotemporal models of interacting populations is that it is local in space. In other words, it is supposed that the individuals consume resources in some area surrounding their average location. Nonlocal consumption of resources is more general, since it incorporates the interspecific competition for food [24-26]. Such modifications enable the explanation of the emergence and evolution of biological species, as well as speciation, in a more appropriate manner [27-31]. Models with nonlocal consumption of resources present complex dynamics for single-species models [28,29,32-35] as well as for competition models including two or more species [32,36-38]. Furthermore, such complex dynamics cannot be found in the corresponding local models.
Interesting results have been obtained by introducing nonlocal consumption of prey by the predator in a reaction-diffusion system with Rosenzweig-MacArthur type reaction kinetics [39]. Contrary to the local model, where Turing patterns are not observed, this model satisfies the Turing instability conditions and gives rise to Turing patterns under proper assumptions on the parameters. In addition, the existence of non-Turing patterns such as traveling waves, modulated traveling waves, oscillatory patterns and spatiotemporal chaos is also observed for the nonlocal model. Some of the non-Turing patterns have been reported for the nonlocal model with modified Lotka-Volterra reaction kinetics [39,40].
In order to introduce the prey-predator model with a nonlocal bistable dynamics of prey, let us recall the classical models for a single population. The single-species population model with the logistic growth law is described by the following ordinary differential equation, assuming a homogeneous distribution of the species over their habitat, where r and k denote the intrinsic growth rate and the carrying capacity, respectively. Introducing a multiplicative Allee effect in this single-population growth model, the above equation becomes one in which l is the Allee effect threshold satisfying the restriction 0 < l < k [41-45]. This equation accounts for two significant feedback effects: positive feedback due to cooperation at low population density and negative feedback arising through competition for limited resources at high population density [46]. In the framework of this formulation, the cooperation and competition mechanisms are described by the linear factors (u − l) and (k − u), respectively. Introduction of the Allee effect through a multiplicative term has a significant drawback, since it represents a product of cooperation at low population density and competition at high population density. In this case, cooperation and competition influence each other, and their effects cannot be considered independently (see [46,47] for a detailed discussion). The per capita growth rate is described by the factor r(k − u)(u − l), which is positive for l < u < k; it is an increasing function for l < u < (k+l)/2 and a decreasing function for (k+l)/2 < u < k. To overcome such situations, an additive form of the per capita growth rate function was proposed by Petrovskii et al. [47], in which the functions f(u) and g(u) describe population growth due to reproduction and the density-dependent enhanced mortality rate, respectively. Here σ is the natural mortality rate, independent of the population density. Depending upon an appropriate parametrization and assumptions on the functional forms, this model describes various types of single-species population growth. In particular, if we choose f(u) = µu and g(u) = ηu^2, then we recover the growth equation (4) from (5) with appropriate relations between the two sets of parameters (r, k, l) and (µ, σ, η). With a different parametrization, f(u) = abu and g(u) = au^2, we obtain the single-species population growth model with sexual reproduction [34,35,48], where a is the intrinsic growth rate, b is the environmental carrying capacity and σ is the density-independent natural death rate. Introducing nonlocal consumption of resources and random motion of the population, we get an integro-differential equation model in which φ(z) is an even kernel function with bounded support and d is the diffusion coefficient. The kernel function is normalized, and it describes the efficacy of consumption of resources as a function of the distance (x − y). The integral describes the total consumption of resources at the point x by the individuals located at y ∈ (−∞, ∞). This model exhibits bistability, since the corresponding temporal model has two stable steady states, 0 and u_+, separated by an unstable steady state u_− [34].
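For reference, one consistent reading of the elided integro-differential model, assembled from the definitions above (f(u) = abu, g(u) = au^2, natural mortality σ, kernel φ), is the following. This is our reconstruction, not a verbatim quote of the paper's equation (7):

\[
\frac{\partial u}{\partial t} = d\,\frac{\partial^2 u}{\partial x^2}
+ a u^2 \left( b - \int_{-\infty}^{\infty} \varphi(x-y)\, u(y,t)\, dy \right) - \sigma u,
\qquad \int_{-\infty}^{\infty} \varphi(z)\, dz = 1.
\]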
Based on this model, we are interested in studying pattern formation described by the nonlocal reaction-diffusion system of prey-predator interaction with bistable reaction kinetics of prey in the absence of predators, where the predator is a specialist following a Holling type-II functional response. We obtain the conditions of the Turing instability and of the spatial Hopf bifurcation in Section 2. Section 3 describes the spatiotemporal pattern formation observed in numerical simulations; there we also present bifurcation diagrams for the model with nonlocal consumption. The main outcomes of this investigation are summarized in the discussion section.
Stability Analysis
In this section, we introduce the prey-predator models without and with the nonlocal consumption term and study the stability of the positive homogeneous stationary solution.
Local Model
We consider the following reaction-diffusion system for the prey-predator interaction, subject to a non-negative initial condition and the periodic boundary condition. The consumption of prey by the predator follows the Holling type-II functional response: α is the rate of consumption of prey by an individual predator, κ is the half-saturation constant and β is the rate of conversion of prey to predator biomass. Furthermore, β/α is the conversion efficiency, with a value between 0 and 1; consequently, β < α. The reproduction of prey is proportional to the second power of the population density, which is specific to sexual reproduction. In the absence of the predator (α = 0), the dynamics of prey are described by a bistable reaction-diffusion equation.
The coexistence (positive) equilibrium E^*(u^*, v^*) of the corresponding temporal model is given by the corresponding equalities, together with the associated feasibility conditions which ensure the positivity of solutions.
Here we briefly present the local asymptotic stability condition of E^* for the temporal model (10)-(11), which will be required afterwards. Linearizing the nonlinear system (10)-(11) around E^*, we find the associated eigenvalue equation, whose coefficients a_ij are the entries of the Jacobian matrix evaluated at E^*. The two eigenvalues of this characteristic equation have negative real parts if a_11 < 0, and hence E^* is locally asymptotically stable for a_11 < 0. The stationary point E^* loses its stability through a super-critical Hopf bifurcation at a_11 = 0.
It is well known that models of the form (8)-(9), that is, models for which a_22 = 0, are unable to produce any Turing pattern, as the Turing instability conditions cannot be satisfied [40]. However, these types of models are capable of producing non-Turing patterns if the temporal parameter values are well inside the Hopf bifurcation domain [6]. Spatiotemporal prey-predator models with a specialist predator and a linear death rate for the predator population can produce spatiotemporal chaos, waves of chaos, modulated traveling waves, waves of invasion and their combinations if the spatial domain is large enough [7].
Nonlocal Model
Under the assumption that prey can move from one location to another to access resources, model (8)-(9) can be extended to a model with nonlocal consumption of resources, subject to a non-negative initial condition and the periodic boundary condition. Various forms of kernel functions are considered in the literature. Here we consider the step function, for simplicity of the mathematical calculations [49]: the nonlocal consumption is confined within the range 2M, and the efficacy of consumption inside this range is constant.
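A minimal numerical sketch of this nonlocal term follows, assuming the normalized step kernel φ(z) = 1/(2M) for |z| ≤ M (and 0 otherwise) on a periodic grid; all names are ours.

import numpy as np

def nonlocal_consumption(u, dx, M):
    # J(u)(x) = integral of phi(x - y) u(y) dy with the step kernel
    # phi(z) = 1/(2M) on |z| <= M, evaluated as a circular (periodic)
    # convolution via FFT; the factor dx is the quadrature weight.
    n = u.size
    z = (np.arange(n) - n // 2) * dx
    phi = np.where(np.abs(z) <= M, 1.0 / (2.0 * M), 0.0)
    kernel = np.fft.ifftshift(phi)  # place the kernel's center at index 0
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(kernel))) * dx

# Sanity check: a constant density u = 1 gives J(u) close to 1 everywhere.
u = np.ones(1024)
print(nonlocal_consumption(u, dx=400 / 1024, M=6.0)[:4])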
We now analyze the stability of the homogeneous steady state (u^*, v^*), considering a perturbation around it in the usual exponential form. The characteristic equation writes as |H − λI| = 0 and takes the form of a quadratic in λ whose coefficients are expressed through two functions Γ(k, M) and ∆(k, M). The homogeneous steady state is stable under space-dependent perturbations if Γ(k, M) < 0 and ∆(k, M) > 0 for all positive real k and M. It loses its stability through the spatial Hopf bifurcation if Γ(k_H, M) = 0 and ∆(k_H, M) > 0 for some k_H, and through the Turing bifurcation if Γ(k_T, M) < 0 and ∆(k_T, M) = 0 for some k_T.
Spatial Hopf Bifurcation
First, we find the spatial Hopf bifurcation threshold in terms of the parameter d_2. It is important to note that Γ(k, M) < 0 and ∆(k, M) > 0 as M → 0+, provided that (u^*, v^*) is locally asymptotically stable for the temporal model (10)-(11). One can easily verify that lim_{M→0+} Γ(k, M) = a_11 and lim_{M→0+} ∆(k, M) = −a_12 a_21. For a suitable M, if one can find a unique value k ≡ k_H such that Γ(k, M) = 0, then k_H is the critical wavenumber for the spatial Hopf bifurcation. This critical wavenumber is obtained by simultaneously solving the two equations in (21), the marginal condition Γ(k, M) = 0 and the tangency condition ∂Γ/∂k = 0. Using the expression of Γ(k, M), we find d_2 from the equation Γ(k, M) = 0 as a function d_2(k). Substituting this expression into the second equation in (21), we obtain Equation (23), which can have more than one positive real root depending upon the values of the parameters. It is necessary to verify that the corresponding values of d_2(k) are positive. We choose the root k_H for which d_2(k_H) is the minimal positive number and ∆(k_H, M) > 0.
Turing Pattern for Nonlocal Prey-Predator Model
Next, we discuss the Turing bifurcation condition, assuming that lim_{M→0+} Γ(k, M) < 0 and lim_{M→0+} ∆(k, M) > 0; these conditions provide stability of the homogeneous steady state under space-independent perturbations. The critical wavenumber and the corresponding Turing bifurcation threshold in terms of d_2 are obtained as the solution of the two equations in (25), the marginal condition ∆(k, M) = 0 and the tangency condition ∂∆/∂k = 0. From ∆(k, M) = 0, we get d_2 as a function d_2(k) in (26). Substituting this expression into the second equation in (25), we obtain Equation (27). This equation can have more than one positive root. From now on, we assume that all parameter values are fixed except for M and d_2. Suppose that for a chosen value of M, (27) admits a finite number of positive roots k_i; the corresponding values d_2(k_i) from (26) are not necessarily positive. The Turing bifurcation threshold d_2T is given by the minimal positive value d_2(k_i), and k_T is the value of k_i at which this minimum is reached.
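Numerically, this threshold search can be organized as below, assuming callables d2_of_k (Eq. (26)) and root_fn (the left-hand side of Eq. (27)) are available; this is a generic sketch, since the explicit expressions are not reproduced in this excerpt, and the toy functions in the example are made up for illustration.

import numpy as np
from scipy.optimize import brentq

def turing_threshold(d2_of_k, root_fn, k_grid):
    # Bracket sign changes of Eq. (27) on a grid, refine each root with
    # brentq, discard roots whose d2(k) from Eq. (26) is non-positive,
    # and return the minimal positive d2 (= d_2T) with its k (= k_T).
    vals = [root_fn(k) for k in k_grid]
    roots = [brentq(root_fn, a, b)
             for a, b, fa, fb in zip(k_grid[:-1], k_grid[1:], vals[:-1], vals[1:])
             if fa * fb < 0]
    candidates = [(d2_of_k(k), k) for k in roots if d2_of_k(k) > 0]
    return min(candidates) if candidates else None  # (d_2T, k_T)

print(turing_threshold(lambda k: (k - 1.0) ** 2 + 0.1,   # toy d2(k)
                       lambda k: np.sin(3.0 * k),        # toy root function
                       np.linspace(0.1, 3.0, 300)))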
Let us consider the same set of parameters as in the previous subsection except for d_1 = 0.4. An interesting feature of the Turing bifurcation curve is that it is not smooth when plotted in the (M, d_2)-parameter plane (Figure 2). The point of non-differentiability arises around M = 12.65. For M = 12.5, we find four positive roots of (27). Finally, it is important to mention that the choice of parameters (24) leads to an interesting scenario in which the spatial Hopf and Turing bifurcation curves intersect. The two curves are shown in Figure 1 for d_1 = 1, and they divide the parametric domain into four different regions. Spatial patterns produced by the prey population for parameter values taken from the spatial Hopf domain and the Turing domain are presented in Figure 3a,b. In order to emphasize the fact that a stationary Turing pattern can be obtained for equal diffusion coefficients, we set d_1 = d_2 = 1 and observe a solution that is periodic in space and stationary in time (see Figure 3b). Various spatiotemporal patterns are observed for values of the parameters at the intersection of the Turing and Hopf instability regions; some of them are described in Section 3.
Spatiotemporal Patterns
In this section, we study the nonlinear dynamics of the prey-predator model without and with the nonlocal term in the equation for the prey population. We present the results of numerical simulations performed with a finite difference approximation of systems (8)-(9) and (13)-(14).
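The time stepping used for such simulations can be sketched as an explicit finite-difference scheme. The kinetic terms below follow the structure described in Section 2 (sexual reproduction with nonlocal consumption, Holling type-II predation, a specialist predator with a linear death rate), but the exact parameter values and the name of the predator death rate (delta) are our assumptions, since the paper's equations are not reproduced in this excerpt; nonlocal_consumption is the kernel routine sketched earlier (the local model (8)-(9) is recovered by replacing J with u).

import numpy as np

def laplacian(w, dx):
    # second-order central difference with the periodic boundary condition
    return (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx ** 2

def step(u, v, dt, dx, p, M):
    # p holds d1, d2, a, b, sigma, alpha, kappa, beta, delta (names ours)
    J = nonlocal_consumption(u, dx, M)  # from the earlier sketch
    predation = p["alpha"] * u * v / (p["kappa"] + u)
    du = p["d1"] * laplacian(u, dx) + p["a"] * u ** 2 * (p["b"] - J) \
         - p["sigma"] * u - predation
    dv = p["d2"] * laplacian(v, dx) + p["beta"] * u * v / (p["kappa"] + u) \
         - p["delta"] * v
    return u + dt * du, v + dt * dv

# Small perturbation of a (hypothetical) steady state at the domain center,
# as in the initial conditions described below; parameter values are
# illustrative only, not the paper's set (24).
x = np.linspace(-200, 200, 1024, endpoint=False)
u = 1.0 + 0.01 * np.exp(-x ** 2)
v = 0.5 * np.ones_like(x)
p = dict(d1=1.0, d2=1.0, a=1.0, b=2.0, sigma=0.2,
         alpha=0.5, kappa=0.4, beta=0.3, delta=0.1)
for _ in range(1000):
    u, v = step(u, v, dt=0.01, dx=x[1] - x[0], p=p, M=6.0)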
Patterns Produced by the Model (8)-(9)
In this subsection, we consider the non-Turing patterns described by system (8)-(9) in the interval −L ≤ x ≤ L with a non-negative initial condition and the periodic boundary condition. The results presented here are obtained for L = 200. We take a small perturbation around the homogeneous steady state at the center of the domain as the initial condition. The values of the parameters are fixed as listed, and the value of β will vary. It is known [6,7,16,26,50] that prey-predator models with a specialist predator can manifest time-dependent spatial patterns if the parameters of the reaction kinetics are far inside the temporal Hopf domain. In this case, the temporal Hopf bifurcation threshold is β^* = 0.339; that is, E^* is stable for β < β^* and unstable otherwise.
Solutions homogeneous in space and oscillatory in time are observed for β > β^* but close to it. The spatiotemporal pattern presented in Figure 4a is almost homogeneous in space but oscillatory in time for a value of β close to the temporal Hopf bifurcation threshold. For larger values of β, we find spatiotemporal patterns that are periodic both in space and in time (Figure 4b) and symmetric around x = 0. This symmetry is maintained due to the choice of a symmetric initial condition. With a further increase of β, we observe various complex aperiodic spatiotemporal regimes (Figure 4c). They are characterized by specific triangular patterns resulting from the merging of two peaks of the population density moving towards each other.
This model is capable of producing other types of spatiotemporal patterns, such as traveling waves, periodic traveling waves, waves of invasion and waves of chaos, similar to the prey-predator model with Rosenzweig-MacArthur reaction kinetics [26], but those results are beyond the scope of this work and will be addressed in the future.
Effect of Nonlocal Consumption
We now analyze how the nonlocal term influences the dynamics of the prey-predator model. The nonlinear dynamics of the prey-predator system with nonlocal consumption of resources by prey is summarized in the diagram in Figure 5a. Parameter regions with different regimes are shown in the (M, β)-plane for the values of the other parameters given in (24). For small β, the predator disappears while the population of prey is either homogeneous in space or forms a spatially periodic distribution. For large β, both populations go to extinction. More interesting behavior is observed for intermediate values of β: the solutions can be homogeneous or inhomogeneous in space, and stationary or non-stationary in time. Some of the spatiotemporal patterns are shown in Figure 6. For sufficiently small M, these patterns become similar to those presented in Figure 4. For M large enough, both prey and predator densities represent stationary distributions periodic in space (similar to Figure 3b). Figure 5b shows a similar diagram in the case of different diffusion coefficients of prey and predator, d_1 = 0.7, d_2 = 0.5, with the same values of the other parameters. The region of spatiotemporal patterns exists here for a narrower interval of β, while the regions of stationary patterns and of extinction change their shape.
Multiplicity of Stationary Solutions
Another interesting aspect of the stationary patterns arising through the Turing bifurcation in the spatiotemporal model with the nonlocal interaction term is the existence of multiple stationary solutions for a particular value of M. We used a forward and backward numerical continuation method to determine the range of M over which stationary patterns with different periodicities exist (Figure 7). The fixed parameter values are the same as in (24) except d_1 = 0.4 and d_2 = 0.2. For example, the stationary pattern with 33 patches (over a spatial domain of size L = 200) exists for 2 ≤ M ≤ 4.5, the pattern with 32 patches for 2.5 ≤ M ≤ 5, and so on.
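A naive version of this forward/backward continuation is sketched below, assuming a relaxation routine relax(u, v, M) that time-marches the PDE to a stationary state (for instance, built from the stepper above); all names are ours.

import numpy as np

def count_patches(u):
    # number of connected above-mean regions on the periodic domain
    above = (u > u.mean()).astype(int)
    return int(np.sum(np.diff(np.r_[above, above[0]]) == 1))

def continuation(u0, v0, M_values, relax):
    # March to a steady state at each M, reusing the previous solution as
    # the initial guess; sweeping M_values in increasing (forward) and then
    # decreasing (backward) order brackets the existence range of each
    # patch count (cf. Figure 7).
    u, v = u0.copy(), v0.copy()
    branch = []
    for M in M_values:
        u, v = relax(u, v, M)
        branch.append((M, count_patches(u)))
    return branch

# quick check of the patch counter on a 4-period cosine profile
print(count_patches(np.cos(np.linspace(0, 8 * np.pi, 400, endpoint=False))))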
Discussion
In this work, we study a prey-predator model with a bistable nonlocal dynamics of prey. Without the interaction with the predator, the prey density is described by a bistable reaction-diffusion equation taking into account the Allee effect, or the sexual reproduction of the population; in this case, the reproduction rate is proportional to the second power of the population density. The dynamics of the prey population change due to the introduction of nonlocal consumption of resources. The main difference compared to the results with conventional local consumption is that the positive stable equilibrium may become unstable, resulting in the appearance of solutions that are stationary in time but periodic in space.
The interaction with the predator provides an additional factor that influences the dynamics of solutions. If we characterize this interaction by the parameter β, which determines the reproduction rate in the equation for the predator density, then we can identify three main types of behavior depending on its value (Figure 5). If β is sufficiently small, then the predator population vanishes, since reproduction is not enough to overcome mortality. If this parameter is too large, then both populations go to extinction, in particular due to the bistability of the prey dynamics. Both populations coexist in a relatively narrow interval of this interconnection parameter. There are three different types of patterns inside this parameter domain. The spatially homogeneous equilibrium can be stable, or it can lose its stability, resulting in the emergence of spatiotemporal patterns. These are observed for limited values of the parameter M, which determines the range of nonlocal consumption. If the range of nonlocal consumption is sufficiently large, then both populations form a distribution periodic in space. Such solutions are specific to nonlocal consumption with a large range, in particular for the single prey population. Thus, nonlocal consumption takes over the spatiotemporal oscillations specific to the local prey-predator dynamics. Let us note that there is a multiplicity of stationary patterns for the same values of parameters; this effect is specific to problems with nonlocal interaction [39].
Spatiotemporal oscillations are specific to prey-predator dynamics [6,7,16,26,40,50]. Here we observe dynamics with "triangular" patterns (Figure 4) appearing when two pulses move towards each other and merge. Nonlocal consumption of resources modifies these patterns. Some questions related to the prey-predator dynamics with the nonlocal bistable model for prey remain beyond the scope of this work: we have not discussed traveling waves and pulses, which can also be observed in such models. We will study them in subsequent work.
For M = 6, we find d_2 = 0.51. Since ∆(0.51, 6) = 0.0094 > 0, these values of k and d_2 correspond to the desired spatial Hopf bifurcation thresholds, k_H = 0.297 and d_2H = 0.51. Furthermore, Γ(k, 6) > 0 for d_2 < d_2H; hence the spatial Hopf bifurcation takes place as d_2 crosses the critical threshold d_2H from higher to lower values. Therefore, the oscillatory in space and time patterns emerging due to the spatial Hopf bifurcation are observed below the stability boundary. The spatial Hopf bifurcation curve in the (M, d_2)-parameter space is shown in Figure 1. Spatiotemporal patterns for parameter values lying in the spatial Hopf domain are presented in Figure 3a.
Figure 2. Turing bifurcation curve in the (M, d_2)-parameter plane (a). Stationary Turing patterns exist above the bifurcation curve. The functions ∆(k, 12.5) (b) and ∆(k, 12.7) (c). The root corresponding to the Turing instability is shown in green.
Figure 3. (a) Spatiotemporal patterns produced by the nonlocal model (13)-(14) for d_2 = 0.3, M = 6.5 and the other parameter values as mentioned in the text. (b) Stationary pattern produced by the prey population for d_1 = 1, d_2 = 1 and M = 6; the other parameters are as mentioned in the text.
Figure 4. Spatiotemporal patterns (prey density) produced by the model (8)-(9) are presented in the left column for the parameter values mentioned in the text and (a) β = 0.342; (c) β = 0.3445; (e) β = 0.36. The corresponding distributions of the prey and predator populations over space at t = 1000 are presented in the right column; see (b), (d), (f).
Figure 7. Ranges of the nonlocal consumption range M over which stationary patterns with various numbers of patches exist. Parameter values are the same as in (24) except d_1 = 0.4 and d_2 = 0.2.
| 5,352.2 | 2018-03-08T00:00:00.000 | ["Mathematics", "Environmental Science", "Biology"] |
Routability-Enhanced Scheduling for Application Mapping on CGRAs
Coarse-Grained Reconfigurable Architectures (CGRAs) are a promising solution for domain-specific applications due to their energy efficiency and flexibility. To improve performance on a CGRA, modulo scheduling is commonly adopted on the Data Dependence Graph (DDG) of a loop by minimizing the Initiation Interval (II) between adjacent loop iterations. The mapping process usually consists of scheduling and placement-and-routing (P&R). As existing approaches do not fully and globally explore the routing strategies of the long dependencies in a DDG at the scheduling stage, the subsequent P&R is prone to failure, leading to performance loss. To this end, this paper proposes a routability-enhanced scheduling for CGRA mapping using an Integer Linear Programming (ILP) formulation, in which a globally optimized scheduling can be found to improve the success rate of P&R. Experimental results show that our approach achieves 1.12× and 1.22× performance speedup and 28.7% and 50.2% compilation time reduction, compared to two state-of-the-art heuristics.
I. INTRODUCTION
With both energy efficiency and program flexibility, Coarse-Grained Reconfigurable Architectures (CGRAs) are becoming attractive alternatives for embedded systems. By deploying configurable and parallel hardware resources efficiently, CGRAs enable pervasive use of highly energy-efficient solutions for a wide range of applications [1]-[3].
As shown in Fig. 1, a CGRA [4]-[6] usually consists of a host controller, a data memory, a configuration memory and a Processing Element Array (PEA). The host controller controls the whole system and exchanges data with the PEA through the data memory, which is composed of multiple banks so that the PE columns can access the data memory in parallel. The configuration memory stores configuration contexts that can dynamically change the data path of the PEA. The PEA is usually composed of dozens of PEs connected by 2-D interconnection networks. Each PE includes an ALU-like Function Unit (FU), a Local Register File (LRF), and an output register. With these basic resources in a CGRA, a long dependence in a Data Dependence Graph (DDG) can be routed by a PE, the LRF, or the data memory [7], which makes routing strategy exploration viable in mapping.
In order to improve loop execution performance on CGRAs, modulo scheduling [8], [9] is widely used. Modulo scheduling maps a DDG extracted from a loop onto the target CGRA, where the Initiation Interval (II) between adjacent loop iterations is the main concern for increasing the performance of a loop. Usually, DDG mapping involves 3 basic steps: i) scheduling, ii) operator placement, and iii) value routing. As the connection topology among PEs is sparse, placement and routing (P&R) are often done simultaneously for a better solution. Once the P&R step fails, rescheduling is usually performed to generate another scheduled DDG for P&R.
To route the long dependencies (those traversing more than 1 time step) in a DDG, 3 basic routing strategies can be explored: PE routing [10], register routing [11] and memory routing [7]. Some works [8], [12] integrate routing strategy exploration into the P&R step; due to the difficult search in the huge space, this is unusable for all but small DDGs. For fast compilation, most works [9]-[11], [13]-[15] integrate routing strategy exploration into the scheduling step. In BufAware mapping [9], as shown in the first row of Fig. 2, the authors argue that initial scheduling exploration should receive more attention for the purpose of higher performance and robust compilation, and they therefore comprehensively analyze and reschedule the DDG to help improve the success rate. However, the scheduling exploration in [9] generates only one modified DDG and directly increases II once the P&R of this modified DDG fails; that is to say, routing exploration during rescheduling is neglected in this work. Most works [10], [11], [15], as shown in the second row of Fig. 2, pay more attention to routing exploration during rescheduling, where routing strategy exploration is performed on the unmapped nodes left by a P&R failure. If all rescheduling schemes fail, II is increased to relax the resource constraints and the process is tried again. However, learning only from unmapped nodes may be too late and can easily get stuck in a local minimum.
From the analysis above, we note that because routing strategies are not fully (at both the initial scheduling and rescheduling stages) and globally (on the whole DDG rather than on unmapped nodes) explored, the subsequent P&R is prone to failure, leading to performance loss. To this end, as shown in the third row of Fig. 2, this paper proposes a Routability-Enhanced Scheduling for Mapping (RESMap) applications on CGRAs, supporting global routing exploration in both the initial scheduling and rescheduling stages. Our main contributions are summarized as follows: • We unify traditional operator scheduling and graph modification in an Integer Linear Programming (ILP) formulation. Besides basic time step adjustment, it also supports graph modification, such as node insertion and edge cutting. Therefore, various mainstream routing strategies, such as PE routing, register routing, memory routing and path sharing, and their combinations can be globally explored.
• We perform routing exploration at both the scheduling and rescheduling steps. By solving the ILP problem iteratively, we can fully explore routing strategies and generate optimized DDGs for the P&R attempts, such that P&R may succeed at the very beginning, the number of scheduling-P&R iterations is reduced, and compilation time is ultimately saved. The remainder of the paper is organized as follows: Section II gives the background of CGRA mapping. Section III presents the proposed method. Section IV shows the experimental results. Finally, we conclude in Section V.
II. BACKGROUND
In this section, we first introduce the routing strategies for long dependencies. Then, we illustrate modulo scheduling on CGRA with an example.
A. ROUTING STRATEGIES OF LONG DEPENDENCE
As shown in Fig. 3, the three types of routing strategies are illustrated with a 4-node DDG mapped on a 1×2 PEA time-extended to 4 time steps, where the yellow cells and blue cells indicate FUs and LRFs, respectively.
1) ROUTING DATA VIA LRF
LRF-routing supported mapping methods allow routing long dependencies via local registers [11], [16]. Although this method saves FUs, it also implies a strict constraint: the father node and the child node of a long dependence must be placed on the same PE. As shown in Fig. 3(a) and (b), the long dependence 1 → 4 indicates that the value will be delivered by the local registers in PE_0. To satisfy this strong constraint, operator 4 can only be placed on PE_0, too.
2) ROUTING DATA VIA PES
PE-routing supported mapping methods [10], [16], [17] allow routing long dependencies via extra PEs by inserting routing nodes into the DDG before P&R. As shown in Fig. 3(c) and (d), the input DDG is first modified by adding two routing nodes (1' and 1''). Then, P&R is performed for the new 6-node DDG. Without the placement constraints introduced by long dependencies, operator 4 can be placed on either PE_0 or PE_1.
3) ROUTING DATA VIA MEMORY
Memory-routing supported mapping methods allow routing a long dependence via the data memory [7] by adding memory accessing nodes and cutting the corresponding edges before P&R. As shown in Fig. 3(e) and (f), the long dependence of the input DDG is first cut off. Then, two memory accessing nodes, a storing node (S) and a loading node (L), are inserted. Finally, the new modified DDG is mapped. Without the placement constraints introduced by long dependencies, operator 4 can be placed on either PE_0 or PE_1.
From the different routing methods, we find that although LRF routing does not need graph modification, it brings a strong placement constraint to P&R. PE routing and memory routing can be friendly to P&R, but they involve complicated graph modification. Fig. 4 illustrates the basic process of modulo scheduling on CGRA. First, a loop kernel is transformed into a DDG, a Data Flow Graph (DFG) with loop-carried dependencies. Then, the DDG is modified (scheduling and graph modification) and transformed into a folded DDG with height II, satisfying the resource constraint, which means the number of operators with the same Modulo Time Step (MTS), i.e., time step modulo II, should be no more than the size of the PEA. Finally, mapping algorithms perform P&R for the folded DDG on a Time-Extended CGRA (TEC). As shown in Fig. 4, a 4-step DDG (Fig. 4(a)) is first folded into a 3-step DDG (Fig. 4(b)) by assigning operator 5 to T1, and the modulo operation on the operators' time steps is performed with II = 3. Next, the folded DDG is mapped on a 1 × 2 PEA (Fig. 4(c)), which is extended to a TEC with height II.
B. MODULO SCHEDULING ON CGRA
The TEC is shown in Fig. 4(d), where a node PE_i^j indicates the instance of PE_i at time step j, and each arrow indicates a direct hardware link between these PE instances. As a long dependence implicitly indicates register routing in P&R, and a local register can only be accessed by its corresponding FU, the endpoint operators of a long dependence must be placed on the same PE at different time steps. As shown in Fig. 4(d), operators 5 and 4 of the long dependence 5 → 4 are both placed on PE_1 for register routing. As the target time steps for the nodes in the modified DDG are determined in advance, P&R just needs to place each node on a PE instance at the corresponding time step and connect all the placed nodes according to the dependencies.
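The folding step and its resource check are easy to state in code. A minimal sketch follows; the toy schedule is ours, not the exact one in Fig. 4.

from collections import Counter

def fold_ddg(schedule, II, n_pe):
    # Assign each operator its Modulo Time Step (MTS = time step mod II);
    # the fold respects the resource constraint only if no MTS level holds
    # more operators than the PEA has PEs.
    mts = {op: t % II for op, t in schedule.items()}
    load = Counter(mts.values())
    feasible = all(c <= n_pe for c in load.values())
    return mts, feasible

schedule = {"op1": 0, "op2": 1, "op3": 1, "op4": 2, "op5": 3}
print(fold_ddg(schedule, II=3, n_pe=2))  # MTS per operator and feasibility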
III. ROUTABILITY-ENHANCED MAPPING METHOD
In this section, we first extend traditional scheduling to support global routing strategy exploration by formulating it as an ILP problem. Then, we solve the ILP problem with a simple and effective metric. Finally, we present the overall flow of DDG mapping based on the enhanced scheduling.
A. ILP FORMULATION
We use ILP to formulate the enhanced scheduling for two reasons: 1) As our enhanced scheduling involves graph modification, e.g., routing node insertion and edge cutting, traditional scheduling methods, such as list scheduling [18], force-directed scheduling [19], [20] or even SDC [21], cannot handle this problem. 2) The ILP approach can formulate most problems into clear mathematical representations, and it is possible to find the optimal solution [22], [23]. Before the ILP formulation, we first give some parameters, which are illustrated with the DDGs with II = 4 in Fig. 5: • Maximal DDG latency (T_l): the maximal latency of the loop DDG. It can be greater than or equal to the length of the critical path (T_c) of the DDG. In the case of greater latency, adjacent operators on the critical path may be scheduled apart, such that a better mapping solution may be achieved. For the DDG in Fig. 5(b), T_l is set to 7, equal to the length of the critical path.
• Routable operator set (R): an operator in this set has at least one output edge that could traverse more than one time step in some cases (a long edge candidate), such that a routing node could be inserted. For the DDG in Fig. 5(b), as edges 1 → 8, 8 → 5, 8 → 9 and 9 → 7 are long edge candidates, their father operators 1, 8 and 9 are in R. We also note that the back edge 4 → 2 (loop-carried dependence) is also a long edge candidate in the case of II = 4, as shown by the dashed arrow in Fig. 5(b); thus, operator 4 is also in R. Therefore, the R set for the DDG in Fig. 5 is {1, 4, 8, 9}. • Earliest time step for scheduling (S_n): the earliest time step at which operator n can be scheduled, easily obtained with As-Soon-As-Possible (ASAP) scheduling [19] for a given T_l. As shown in Fig. 5(b), the earliest time steps of operators 1, 4, 8 and 9 are 0, 3, 1 and 2, respectively. • Latest time step for scheduling (L_n): the latest time step at which operator n can be scheduled, easily obtained with As-Late-As-Possible (ALAP) scheduling [19] for a given T_l. As shown in Fig. 5(b), the latest time steps of operators 1, 4, 8 and 9 are 0, 3, 3 and 5, respectively.
• Latest time step for routing (L̃_n): the latest time step at which operator n's routing node can be inserted, equal to max_{n'∈OE_n}(L_{n'} − 1), where OE_n indicates the set of sub-operators of operator n. In the case where n → n' is a back edge (recurrence dependence), L̃_n is equal to max_{n'∈OE_n}(L_{n'} + II − 1), as its sub-operator actually belongs to the next iteration, II steps later. In Fig. 5(b), operator 8 has two sub-operators, 5 and 9, with L_5 = 4 and L_9 = 5, such that L̃_8 = L_9 − 1 = 4. Similarly, considering that 4 → 2 is a back edge from operator 4, L_2 = 1 and II = 4, such that L̃_4 = 1 + 4 − 1 = 4.
• Extended mobility (M̃_n): this newly defined mobility extends M_n to include the time steps of the operator's routing nodes, and is equal to [S_n, L̃_n]. As shown in Fig. 5(b), the extended mobilities of the routable operators are [3,4], [1,4] and [2,5], indicated by the white bars plus the yellow bars.
• Edge set (E): The dependence set in the original DDG.
• Memory accessing latency (T_m): the minimal latency of memory routing, including data storing, data holding and data loading. In the case where data storing and data loading can each be performed in one time step, T_m is 3.
• Number of PEs (N_pe): the number of PEs in the target PEA. We assume that the target PEA for the DDG in Fig. 5(a) is a 1 × 2 PEA, and thus N_pe is 2 there.
Variable Definition: Based on the parameters above, we define 3 binary variable sets and 1 integer variable for the ILP formulation. The first binary variable set defines operator scheduling and PE routing node insertion for an operator; the other two binary variable sets define memory routing node insertion; the integer variable defines the number of PEs consumed. These variables are elaborated as follows: • x_{n,i,j}, i ∈ [S_n, L_n], j ∈ [i, L̃_n]: operator n is scheduled at time step i and one of its PE routing nodes is inserted at time step j, where j is not less than i. In the case j = i, operator n is simply scheduled at time step i; in the case j > i, a PE routing node is inserted later than operator n's scheduled time step i. Fig. 5(a) presents all binary variables referring to node 8: as M_8 and M̃_8 span 3 and 4 time steps, respectively, there are nine x-variables for node 8.
• y_{n,i,j}, i ∈ [S_n, L_n], j ∈ [i + 1, L̃_n − T_m + 1]: the value of operator n scheduled at time step i is stored to memory by adding a storing node at time step j. Under this condition, as shown in Fig. 5(a), only one y-variable (y_{8,1,2}) is generated for operator 8.
• z_{n,i,j}, i ∈ [S_n, L_n], j ∈ [i + T_m, L̃_n]: the value of operator n scheduled at time step i, previously stored to memory, is loaded from memory by adding a loading node at time step j. Under this condition, as shown in Fig. 5(a), only one z-variable (z_{8,1,4}) is generated for operator 8.
• n_pe: the maximal number of PEs used. Under this definition, the number of operators sharing the same modulo time step should be less than or equal to n_pe. According to the scheduled time step, the ILP variables of operator n can be classified into M_n groups. As shown in Fig. 5(a), node 8 has 3 possible time steps, and its variables are classified into 3 groups, G1, G2 and G3. As these variables build a huge solution space, we further reduce the space with the following constraints.
C1: Operator's scheduled time step is unique. As in traditional scheduling, each operator should be scheduled at exactly one time step: Σ_i x_{n,i,i} = 1, ∀n ∈ R. (1)
C2: Routing node exclusivity. This constraint supplements the first one: once an operator is scheduled at a time step, the routing nodes generated for the operator at its other candidate time steps are eliminated.
C3: Storing and loading node concurrence. As a long dependence can be cut off and routed through the data memory, the memory accessing nodes (storing and loading) must occur together.
C4: Functional unit routing and memory routing exclusivity. Once a pair of memory routing nodes is inserted for an operator, FU routing nodes should not be inserted during the latency of the memory routing.
C5: Dependence constraint. Each operator should be scheduled earlier than its sub-operators, and an operator's routing nodes should be inserted earlier than the child node that may have the latest scheduling time step. As presented in Fig. 5(d), operator 8's scheduled time step should be earlier than both of its sub-operators 5 and 9, while operator 8's routing nodes need only be earlier than sub-operator 9.
C6: Functional unit resource constraint. At each time step, the total number of nodes, including operators and inserted routing nodes, should be no more than the number of functional units of the target CGRA: Σ (x_{n,i,j} + y_{n,i,j} + z_{n,i,j}) ≤ n_pe.
C7: Memory bank resource constraint. At each time step, the total number of memory accessing nodes, including storing and loading nodes, should be no more than the number of banks (N_b) of the data memory.
C8: Finding suboptimal solutions. As one scheduling result cannot guarantee that the corresponding P&R will succeed, we need to generate multiple suboptimal solutions. As in [24], a suboptimal solution of an ILP can be generated by adding extra constraints derived from the solutions already obtained; the new ILP problem is then solved again to find another solution. As our problem contains only 0-1 variables, we can use the binary cut method, which involves only one extra constraint and no additional variables. From an obtained solution S* composed of the variable sets {x_{n,i,j}}, {y_{n,i,j}} and {z_{n,i,j}}, we add the extra constraint Σ_{v∈A} v + Σ_{v∈B} (1 − v) ≥ 1, (9) where A = {v | v = 0, v ∈ S*} and B = {v | v = 1, v ∈ S*} are the binary cut sets from the obtained solution S*. By adding this constraint, we can find the k solutions with the same objective value.
C9: Bound constraint. As the suboptimal-solution constraints above make the ILP problem increasingly complicated, an extra bound constraint is introduced to simplify the formulation once the objective value changes; in that case, all the previously added suboptimal-solution constraints (C8) can be removed. The bound constraint is O ≥ O_lb, (10) where O is the objective of the ILP formulation introduced in the next subsection, and O_lb is the last value of the objective before it was changed by the ILP solver.
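As a concrete illustration of constraint C8, the binary cut of Eq. (9) can be added to a PuLP model as below; prob and the 0-1 variables are assumed to exist already, and pulp.value reads the incumbent solution.

import pulp

def add_binary_cut(prob, binary_vars):
    # Exclude the obtained solution S*: at least one 0-1 variable must
    # flip, i.e. sum over A of v + sum over B of (1 - v) >= 1, with A the
    # variables at 0 and B the variables at 1 in S*.
    A = [v for v in binary_vars if pulp.value(v) < 0.5]
    B = [v for v in binary_vars if pulp.value(v) >= 0.5]
    prob += pulp.lpSum(A) + pulp.lpSum(1 - v for v in B) >= 1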
B. MAPPING FLEXIBILITY EVALUATION
As the maximal width of the folded DDG affects the placement flexibility (how many PE positions a node can be placed on), while the number of long dependencies imposes the strong constraint that the source node and target node must be placed on the same PE, we evaluate the enhanced scheduling with respect to these two factors. The first factor is directly captured by the ILP variable n_pe. For the second factor, directly formulating the number of long dependencies in scheduling, especially in graph-modification-supported scheduling, is costly and difficult to solve. Alternatively, we use the number of inserted routing nodes to indirectly evaluate the number of long dependencies: the more routing nodes are inserted, the fewer long dependencies remain. Therefore, the second factor is expressed as n_ins = Σ_{j>i} x_{n,i,j} + α Σ (y_{n,i,j} + z_{n,i,j}), (11) where α is the weight factor for memory routing nodes. As memory accessing nodes cut off long dependencies and exclude more PE routing nodes (Equation (4)), we give greater weight to memory accessing nodes. Empirically, we find that our method generates good results when α is set to 10. Objective Function: Considering both factors above, we present the objective function of the ILP in linear form: min_{x,y,z,n_pe} O = β · n_pe − n_ins, (12) where β is equal to the upper bound of n_ins and can be trivially calculated by traversing each long dependence in the original DDG. The first term of the RHS indicates that minimizing n_pe has high priority, as it saves more PEs and improves routability at every modulo time step of the folded DDG. The second term of the RHS indicates that maximizing n_ins, under the premise of keeping the maximal width of the folded DDG, is encouraged in order to reduce the number of long dependencies.
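To make the formulation concrete, a heavily reduced PuLP sketch with only the x-variables, constraint C1 and the objective of Eq. (12) follows. The toy intervals are read off Fig. 5 as described above, constraints C2-C9 are omitted for brevity (so the solved values are only illustrative), and BETA stands for the upper bound β of n_ins (its value here is assumed).

import pulp

R = [8, 9]                          # toy routable operator set from Fig. 5
S, L = {8: 1, 9: 2}, {8: 3, 9: 5}   # ASAP / ALAP scheduling time steps
Ltil = {8: 4, 9: 5}                 # latest routing time steps
BETA = 10                           # upper bound of n_ins (assumed here)

prob = pulp.LpProblem("RESMap_sketch", pulp.LpMinimize)
x = {(n, i, j): pulp.LpVariable(f"x_{n}_{i}_{j}", cat="Binary")
     for n in R for i in range(S[n], L[n] + 1) for j in range(i, Ltil[n] + 1)}
n_pe = pulp.LpVariable("n_pe", lowBound=1, cat="Integer")

for n in R:  # C1: each operator is scheduled at exactly one time step
    prob += pulp.lpSum(x[n, i, i] for i in range(S[n], L[n] + 1)) == 1

# Eq. (12): minimize beta*n_pe - n_ins (memory-node term with alpha omitted)
n_ins = pulp.lpSum(x[n, i, j] for (n, i, j) in x if j > i)
prob += BETA * n_pe - n_ins
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))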
With the objective function and the 9 constraints, we can use a sophisticated ILP solver, such as PuLP [25], to obtain one scheduling solution. Then, the new DDG can easily be generated by adding extra nodes or edges according to the values of the binary variables. By iteratively adding the suboptimal constraint in Equation (9), a series of solution candidates can be generated. Therefore, our approach makes global routing strategy exploration possible at both the scheduling and rescheduling stages. Fig. 6 presents the overall mapping flow based on our enhanced scheduling. 1) The mapping flow starts with an input DDG and its MII, where the interconnection problem is preprocessed with re-computation or routing [15]. 2) Then, we analyze the extended mobility (M̃) for each operator in the DDG. 3) According to M̃, we define all the variables referring to scheduling and routing. 4) Next, we construct all the constraints listed above, where constraints C8 and C9 are not included in the first round. 5) With the objective defined in Equation (12), we run the ILP solver [25] to find an optimal scheduling. If no solution is found, we increase II and go to step 4. Otherwise, we record the objective value O_i of this i-th iteration and check whether it has changed. If yes, 6) we update the extra bound constraint (C9), 7) clear all previous suboptimal constraints (C8) and go to step 4. Otherwise, 8) we generate the modified DDG according to the determined decision variables and 9) perform P&R based on a clique searching algorithm [11]. If the P&R succeeds, the whole mapping is done and a valid mapping with this II is obtained. Otherwise, 10) we add a suboptimal constraint (C8) based on the current solution and go to step 4.
C. OVERALL FLOW
From the flow, we note that P&R is the most time-consuming stage. In case the P&R time is too long, an upper bound on the running time is set, usually several hours. Therefore, everything we do in this work aims to improve the capability of P&R from the perspective of the DDG form, involving operator scheduling and graph modification. With the routability-enhanced scheduling, the number of scheduling-P&R iterations is greatly reduced; as a result, it is helpful for finding a mapping with a smaller II and even less compilation time.
IV. EXPERIMENTAL RESULTS
In order to evaluate the effectiveness of our approach, we conduct a series of experiments measuring performance in terms of II, the overall compilation time, and its breakdown.
A. SETUP
1) BENCHMARKS
We conducted experiments by mapping 12 selected performance-critical loops from three benchmark suites, 1) MiBench [26], 2) SPEC2006 [27], and 3) PolyBench [28], for broad representation. Table 1 describes the DDGs converted from these loops in detail, where N_op, N_dep, N^M_op, N^M_dep, D^M_max and D^M_min indicate the number of operators, the number of dependencies, the number of operators with mobilities, the number of long dependencies, the maximal dependence length (i.e., the number of control steps traversed by a dependence) and the minimal dependence length, respectively.
2) TARGET ARCHITECTURES
The target architecture [16] is a 4 × 4 CGRA with a mesh-plus routing style. In a mesh-plus routing CGRA, FUs are connected in a mesh network, where each FU is connected to its immediate neighbors and to its near neighbors 1 hop away. In our setup, each PE includes 1 FU and 2 local registers, and the number of memory banks is 4.
3) BASELINES
We evaluated RESMap against other state-of-the-art approaches, namely RAMP [15] and BufAware [9]. In our approach, the parameter T_l is set to T_c + 1 (the length of the critical path plus 1) to give every operator in the DDG the opportunity of routing.
4) DEVICES AND SOFTWARE
Our experiments on the three approaches are conducted on the same Linux workstation with an Intel Xeon 2.4 GHz CPU and 64 GB of memory. In order to compare the effectiveness of our approach with RAMP and BufAware, the upper bound of each scheduling-P&R iteration is set to 2 hours. If the running time of one iteration exceeds this upper bound, BufAware directly increases II, while RAMP and RESMap perform rescheduling and try again.
B. RESMap GENERATES BETTER MAPPING
From Fig. 7, the first observation is that RESMap achieves lower II values than RAMP on 10 out of 12 kernels and than BufAware on 6 out of 12 kernels. On average, RESMap obtains 17.8% and 10.5% lower II than RAMP and BufAware, respectively. In other words, RESMap reaches 1.12× and 1.22× performance speedup compared to BufAware and RAMP, respectively.
Compared to RAMP, the improvement mainly comes from two aspects. 1) RAMP does not produce an ideal initial scheduling before P&R; consequently, the poor initial scheduling hampers P&R flexibility, and a valid P&R cannot be found at the given II. For example, without full routing exploration in the initial scheduling, RAMP cannot find a valid P&R for the kernels Susan, Lane, Inner, Csettle, Dummies, Jfdctlt, Adi, Couplong and R3000 at their respective MIIs, whereas, given a modified DDG from the global exploration of routing strategies at the initial scheduling stage, RESMap easily finds a valid P&R for each of these kernels at its MII. 2) During rescheduling, RAMP focuses on the unmapped nodes without considering the whole DDG; this partial routing exploration at the rescheduling stage may not generate a good enough DDG for P&R. For example, RAMP cannot find a valid P&R with partial routing exploration at the rescheduling stage for kernel Susan with II = 4, Lane with II = 4, or Lulesh with II = 14.
Compared to BufAware, the improvements of RESMap also come from two aspects. 1) BufAware generates only a feasible solution considering the constraints of resources, interconnection and path sharing; without an evaluation step, this solution may not be optimal or near-optimal, so the DDG generated from the initial scheduling in BufAware may not be good enough. For example, BufAware cannot find a valid P&R for the kernels Jfdctlt and Adi at their MIIs, even though routing exploration is performed at the initial scheduling stage. 2) BufAware lacks routing exploration at the rescheduling stage. For example, both BufAware and RESMap fail to find a valid P&R for kernel Susan with II = 4, Coupling with II = 7, Lane with II = 4, and Lulesh with II = 14; in those cases, BufAware directly increases II and starts the next iteration. Instead of sacrificing performance, RESMap generates a suboptimal DDG at the rescheduling stage and finally obtains valid mappings without increasing II. As a result, RESMap outperforms both RAMP and BufAware on performance.
C. REDUCED COMPILATION TIME
Fig. 8 presents the accumulated compilation time (in logarithmic form) and the number of scheduling-P&R iterations until a valid P&R is found with a tuned II for the different approaches. If a valid P&R cannot be found for a generated DDG, the upper bound of the iteration time is added to the total compilation time. For kernel Newmdlt, as all three approaches obtain a valid P&R in the first iteration, the compilation times of the three approaches are of the same order of magnitude. For the other kernels, as RESMap generates P&R-friendly DDGs, it is more likely to find valid P&Rs with less compilation time. As a result, RESMap achieves 28.7% and 50.2% compilation time reduction compared to BufAware and RAMP, respectively.
Compared to RAMP, the large number of iterations brings more compilation time, which mainly results from the poor routing exploration of the initial scheduling. For the kernels Inner, Csettle, Dummies, Jfdctlt, Adi, Couplong and R3000, RAMP fails to find a valid P&R for the first DDG generated from the initial scheduling; therefore, much of the compilation time is spent on the fruitless P&R of the first iteration. As a result, on average, RESMap outperforms RAMP on compilation time.
Compared to BufAware, although the comprehensive routing exploration at the scheduling stage can generate an optimized DDG form for P&R, it cannot ensure the success of the P&R. When BufAware fails to map the optimized DDG, it waits for the timeout of the current iteration, increases II and restarts another iteration, which brings more compilation time. For the kernels Jfdctlt and Adi, BufAware fails to obtain a valid P&R in the first iteration while RESMap succeeds; therefore, the accumulated compilation time of RESMap is within 2 hours while that of BufAware is beyond 2 hours. For the kernels Susan, Coupling, Lane and Lulesh, as neither BufAware nor RESMap finds a valid P&R in the first iteration, BufAware starts the next iteration with an increased II and RESMap starts the next iteration without changing II; therefore, the compilation times on these kernels are comparable. As a result, on average, RESMap achieves compilation time comparable to that of BufAware.
In order to fully demonstrate the breakdown of the compilation time under the same II, Fig. 9 presents the scheduling time and P&R time of RAMP, BufAware and RESMap over DDGs with different mobility sums (the sum of the mobilities of all nodes in a DDG), where 5 kernels are selected and reordered according to the mobility sum from low to high. On the horizontal axis in Fig. 9, the underlined number indicates the mobility sum of the corresponding DDG, while the other number indicates the total number of nodes in the original DDG. As the compilation time consists of DDG processing and P&R, which differ by orders of magnitude, Fig. 9 presents the P&R time in logarithmic form (left vertical axis) and the scheduling time on a linear scale (right vertical axis).
From Fig. 9, we first note that the scheduling time of RESMap increases sharply with the mobility sum, while that of the other approaches increases only slightly. This is because the ILP formulation of RESMap considers not only the time assignment of the original operators but also the graph modification associated with routing strategy exploration. Compared to the P&R time, however, the scheduling time is negligible. As RESMap performs full routing exploration and generates a P&R-friendly DDG form, it finds all 5 valid mappings, while RAMP and BufAware find only 1 and 3 out of 5, respectively. From the perspective of P&R time, the complexity of P&R mainly depends on the size of the DDG: as shown in Fig. 9, the P&R time of all approaches decreases greatly as the DDG size drops from 75 to 25 nodes. As a result, the enhanced scheduling in RESMap adds negligible complexity to the whole mapping flow while offering a P&R-friendly DDG form that improves the mapping success rate.
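For readers unfamiliar with the metric, the sketch below shows one standard way to compute the mobility sum of a DDG, taking a node's mobility as its ASAP/ALAP slack under a unit-latency dependence model; the graph, latency model, and function names are illustrative assumptions on our part, not RESMap's actual implementation.

```python
from collections import defaultdict, deque

def mobility_sum(nodes, edges, schedule_len):
    """Sum of ALAP(v) - ASAP(v) over all DDG nodes, assuming unit latencies."""
    succ = defaultdict(list)
    indeg = {v: 0 for v in nodes}
    for s, d in edges:
        succ[s].append(d)
        indeg[d] += 1
    # Topological order via Kahn's algorithm (the intra-iteration DDG is a DAG).
    queue = deque(v for v in nodes if indeg[v] == 0)
    topo = []
    while queue:
        v = queue.popleft()
        topo.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # ASAP: earliest cycle each node can occupy given its predecessors.
    asap = {v: 0 for v in nodes}
    for v in topo:
        for w in succ[v]:
            asap[w] = max(asap[w], asap[v] + 1)
    # ALAP: latest cycle that still fits within the schedule length.
    alap = {v: schedule_len - 1 for v in nodes}
    for v in reversed(topo):
        for w in succ[v]:
            alap[v] = min(alap[v], alap[w] - 1)
    return sum(alap[v] - asap[v] for v in nodes)

# Example: a 4-node diamond DDG in a 4-cycle schedule -> mobility sum of 4.
print(mobility_sum("abcd", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")], 4))
```

Under this reading, a larger mobility sum means more scheduling freedom per node, which is exactly where an ILP such as RESMap's has more graph modifications to consider.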
V. CONCLUSION
In modulo scheduling on CGRAs, inadequate routing exploration of the DDG leads to poor P&R solutions. To tackle this problem, a routability-enhanced scheduling method is proposed that formulates traditional scheduling and graph modification as a single integer linear programming problem. It explores routing strategies at both the initial scheduling and rescheduling stages and exposes a larger optimization space, making it more likely to find a globally optimized DDG for P&R. Our experimental results show that RESMap generates mappings with, on average, 1.12× and 1.22× better performance and 28.7% and 50.2% compilation time reduction, compared to two state-of-the-art heuristics.
"Computer Science"
] |
No difference of survival between cruciate retaining and substitution designs in high flexion total knee arthroplasty
The aim of this study was to compare the long-term implant survival and outcomes in patients with high-flexion cruciate-retaining (CR) or high-flexion posterior cruciate-substituting (PS) knee implants. A total of 253 knees (CR group: 159 vs. PS group: 94) were available for examination over a mean follow-up of 10 years. Clinical outcomes were assessed including the Hospital for Special Surgery score, Knee Society score and Western Ontario and McMaster Universities Osteoarthritis Index score at the final follow-up. Radiologic measurements were also assessed including the hip-knee-ankle angle and radiolucent lines according to the KSS system at the final follow-up. The survival rate was analyzed using the Kaplan–Meier method. At the final follow-up, the mean total HSS scores were similar between the two groups (p = 0.970). The mean hip-knee-ankle angle at the final follow-up was similar between groups (p = 0.601). The 10- and 15-year survival rates were 95.4% and 93.3% in the CR group and 92.7% and 90.9% in the PS group, respectively, with no significant difference. Similar clinical and radiographic outcomes could be achieved with both the high-flexion CR and high-flexion PS total knee designs without a difference in survival rate after a 10-year follow-up.
As the data were obtained from a retrospective study design, informed consent was not needed according to the ethics committee of Chonnam National University Hwasun Hospital institutional review board. All methods were performed based on the relevant guidelines and regulations, and all experimental protocols were approved by the Chonnam National University Hwasun Hospital institutional review board. All patients who underwent TKA at our institution were enrolled in a prospective data registry. In our institution, all patients are treated in accordance with a highly standardized regimen, and clinical data are collected prospectively. From January 2000 to December 2010, a total of 414 consecutive primary TKAs among 367 patients were identified for the study. Two senior surgeons performed all the TKA procedures using the same technique. Patients with a history of open knee surgery, those with a severe deformity of the knee joint (varus alignment > 20°, flexion contracture > 30°, or valgus alignment > 10°), those with a knee joint disease other than primary osteoarthritis, and those undergoing revision TKA for diseases other than primary osteoarthritis were excluded. During the follow-up, 120 patients (140 knees) could not be contacted, and 2 patients died. Furthermore, patients who underwent revision or experienced other complications severely affecting the clinical outcomes were also excluded. Finally, the cohort included 143 patients (159 knees) in the CR group and 87 patients (94 knees) in the PS group for the analysis of long-term results (Fig. 1). The minimum follow-up duration was 8 years (mean, 11.1 years; range, 8-17 years).
The CR group consisted of 10 men and 133 women with a mean age of 69.6 ± 7.1 years (range, 53-91 years) at the time of surgery. The PS group consisted of 5 men and 82 women with a mean age of 68.3 ± 7.2 years (range, 52-87 years) at the time of surgery. No significant differences were found between the two groups in terms of age (p = 0.206) and sex (p = 0.711) at the time of surgery. The mean preoperative non-weight-bearing ROM, Hospital for Special Surgery (HSS) score, Knee Society score (KSS, pain and function subscales), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) score were similar between the two groups (Table 1).
We used the same surgical procedure in both groups, except for retaining the posterior cruciate ligament (PCL) in the CR group and sacrificing the PCL in the PS group. All TKAs were performed with a longitudinal midline skin incision, followed by a medial parapatellar arthrotomy after patellar eversion. Then, the incision was extended into the quadriceps tendon by approximately 3-4 cm. In both groups, the gap balance technique was used for bone cutting and selecting the prosthesis size. To restore the knee to its premorbid state, care was taken to ensure that the thickness of the bone and cartilage removed was equal to the thickness of the prosthesis in all TKAs. Tibial cuts were performed using an extramedullary guide, with a posterior slope of 7° in the CR group and 3° in the PS group. The intramedullary rod was set at a 4-6° valgus cut for femoral alignment in all TKAs. We resected the distal femoral condyles first, followed by the posterior femoral condyles, to ensure that the thickness of the bone and cartilage removed was equal to the thickness of the femoral component to be inserted. We removed osteophytes with a rongeur and performed patellar denervation using an electrocautery device, but the patella was not resurfaced in either group. All prostheses were fixed with tobramycin-laden cement (Simplex P; Stryker, Mahwah, NJ, USA). Similar postoperative pain control and rehabilitation treatment were provided in both groups. In both groups, passive motion was not performed immediately after surgery, and rehabilitation protocols, including full weight bearing, were followed from the first postoperative day.
In all patients, physical examination and knee scoring were performed before surgery, at 6 months and 1 year after surgery, and annually thereafter. At the final follow-up, all the clinical results, including ROM, KSS, and WOMAC score, were compared. All the data were collected and evaluated by a surgical assistant from the surgical team to control bias. Supine anteroposterior and lateral radiographs and standing anteroposterior scanograms were obtained preoperatively and at each follow-up. An independent observer blinded to the clinical results evaluated the radiographic outcomes, such as the hip-knee-ankle (HKA) angle and radiolucent lines. The position of radiolucent lines was identified according to the recommendation of the KSS system 7. The intraclass correlation coefficient was used to assess the intra-observer reliability of the HKA angle and radiolucent-line evaluations, with two assessments performed two weeks apart (range, 0.81-0.87).
Statistical analysis.
A statistical power analysis was performed based on the sample size calculation of a previous study 8. To detect a difference of 7° (standard deviation, 11.2°) in flexion between the CR and PS groups, 41 patients per group were required for a power of 0.8 at a significance level of 0.05. This study included 143 patients in the CR group and 87 patients in the PS group, which satisfied this requirement.
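As a quick plausibility check, the quoted figure can be reproduced with the statsmodels power module; this is our choice of tool for illustration, and the original calculation may well have used different software.

```python
# Hedged sketch: verify that d = 7/11.2 with power 0.8 and alpha 0.05
# requires roughly 41 patients per group for a two-sided two-sample t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=7 / 11.2,  # standardized difference (Cohen's d ~ 0.625)
    power=0.8,             # target statistical power
    alpha=0.05,            # two-sided significance level
)
print(round(n_per_group))  # -> 41, matching the requirement stated above
```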
Statistical analyses were performed using SPSS for Windows (release 11.0; SPSS Inc., Chicago, IL, USA). Differences in parametric data, such as functional outcomes, between the two groups were analyzed using the independent t-test. Non-parametric data, such as ROM and follow-up duration, were compared with the Mann-Whitney U-test. The chi-square test was used to evaluate the non-numeric data in the two groups. The cumulative survival rates at the 10- and 15-year follow-up evaluations were compared between the two groups using the log-rank test. Differences with p < 0.05 were considered statistically significant.
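A minimal sketch of this test battery on toy data follows; scipy stands in here for the SPSS procedures actually used, and the score and ROM values are invented placeholders (only the sex counts are taken from the cohort described above).

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu, chi2_contingency

# Parametric scores (e.g., HSS) compared with an independent t-test.
cr_scores = np.array([85, 90, 88, 92, 86])
ps_scores = np.array([84, 91, 87, 89, 90])
print(ttest_ind(cr_scores, ps_scores).pvalue)

# Non-parametric data (e.g., ROM) compared with the Mann-Whitney U-test.
print(mannwhitneyu([125, 130, 128], [126, 129, 131]).pvalue)

# Non-numeric data (sex: men/women per group, as reported) via chi-square.
print(chi2_contingency([[10, 133], [5, 82]])[1])
```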
Results
The average preoperative ROM in the CR and PS groups was 123.9 ± 16.0° and 122.8 ± 15.6°, respectively (p = 0.544) (Table 1). At the final follow-up, the average ROM was 131.1 ± 17.4° and 129.5 ± 14.8°, respectively (p = 0.223). The preoperative clinical scores showed no significant differences between the two groups (Table 1). The clinical results at the final follow-up showed no statistically significant differences between the CR and PS groups with regard to the HSS score (p = 0.970), KSS pain score (p = 0.565), KSS function score (p = 0.125), and WOMAC score (p = 0.338) (Table 2).
According to the radiologic results, the HKA angle was similar between the CR and PS groups preoperatively and at the final follow-up (p = 0.312 and p = 0.601, respectively) (Table 3). In the coronal and sagittal radiographs, radiolucent lines were identified according to the zones reported by Ewald 7. A radiolucent line was found in the tibial compartment in 10 (6.3%) knees and in the femoral compartment in 11 (6.9%) knees in the CR group, and in the tibial compartment in 6 (6.4%) knees and in the femoral compartment in 10 (11.5%) knees in the PS group, with no significant difference between the groups (p = 0.538). In the CR group, the radiolucent line in the tibial compartment was in coronal zone 1 in nine cases and in sagittal zone 1 in one case. In the PS group, the radiolucent line in the tibial compartment was in coronal zone 1 in four cases and in sagittal zone 1 in two cases. No difference was found in the tibial-side radiolucent lines between the CR and PS groups (p = 0.978). For the femoral compartment, six knees showed a radiolucent line in femoral zone 1 and five knees in femoral zone 4 in the CR group, while eight knees showed a radiolucent line in femoral zone 1, one knee in zone 2, and four knees in zone 4 in the PS group. No difference was observed in the femoral-side radiolucent lines between the two groups (p = 0.257) (Table 3). No evidence of bearing dislocation or massive osteolysis (lesions > 1 cm) was found. However, one case of aseptic loosening (progressive lucency of > 2 mm) was detected in the CR group at 12 years postoperatively.
In the CR group, the estimated survival rate according to the Kaplan-Meier survival analysis was 96.4% (95% confidence interval: 96.1-96.6%) at 10 years, with an overall revision rate of 3.0% (5 of 166 knees), and 94.3% (95% confidence interval: 94.0-94.6%) at 15 years, with an overall revision rate of 4.2% (7 of 166 knees). In the PS group, the estimated survival rate was 93.5% (95% confidence interval: 93.0-94.0%) at 10 years, with an overall revision rate of 5.9% (6 of 102 knees), and 91.7% (95% confidence interval: 91.1-92.3%) at 15 years, with an overall revision rate of 6.9% (7 of 102 knees). In the Kaplan-Meier survival analysis, the endpoint was revision of the tibial and/or femoral component for aseptic loosening and/or osteolysis (excluding infection cases). The cumulative survival rates at the 10- and 15-year follow-up showed no significant differences between the CR and PS groups (p = 0.239 and p = 0.352, respectively) 9 (Fig. 2). Among the patients who remained under follow-up, 12 in the CR group and 10 in the PS group presented complications (Table 4). There were three cases of infection in the CR group and two in the PS group, all of which required revision TKA. Aseptic loosening was found in seven patients from the CR group, with two cases of focal osteolysis. In the PS group, aseptic loosening was observed in three patients, with two cases of osteolysis (Fig. 3). However, one patient in the CR group refused to undergo implant revision. Of the two cases of osteolysis in the PS group, one was due to infection and one was isolated osteolysis. Additionally, one case of polyethylene wear and one case of quadriceps rupture were found in the PS group, with two cases of periprosthetic fracture in each group.
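For illustration, the survival comparison above can be expressed with the lifelines package; the data frame below is an invented placeholder with made-up follow-up times, not the study dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "years":   [12.0, 10.5, 15.0, 8.0, 11.2, 14.1],  # follow-up per knee
    "revised": [1, 0, 0, 0, 1, 0],                    # 1 = aseptic revision
    "group":   ["CR", "CR", "CR", "PS", "PS", "PS"],
})
cr, ps = df[df.group == "CR"], df[df.group == "PS"]

# Kaplan-Meier estimate per group (revision as the event, others censored).
kmf = KaplanMeierFitter()
kmf.fit(cr["years"], cr["revised"], label="CR")

# Log-rank test of the two survival curves, as in the comparison above.
result = logrank_test(cr["years"], ps["years"],
                      event_observed_A=cr["revised"],
                      event_observed_B=ps["revised"])
print(result.p_value)  # p >= 0.05 mirrors the reported non-significance
```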
Discussion
We found that the high-flexion CR and PS designs showed similar long-term outcomes in terms of ROM, HSS, KSS pain, KSS function, WOMAC score, and KSS radiographic outcomes. Moreover, the survival rate at 10 and 15 years showed no difference between the two designs.
While the CR prosthesis in TKA allows paradoxical anterior motion of the femur, reducing the ROM 10, the PS prosthesis creates posterior rollback of the femur during flexion, increasing the ROM 11. However, most studies, including our previous study 8,12, showed comparable ROM between the CR and PS designs in TKA. In the present study as well, the maximal flexion was similar between the two groups (CR: 131.1°, PS: 129.5°) after a 10-year follow-up and was comparable to previously reported values 8,12. In contrast, a few studies have demonstrated that the CR TKA design has a more significant functional advantage than the PS TKA design, especially in high flexion 13. This contradiction between biomechanical and functional outcomes suggests that the differences between the two groups may be either nonexistent or undetected by the current outcome measurements.
Although all clinical outcome parameters significantly improved after TKA with either the CR or PS prosthesis, the clinical outcomes, including the KSS, HSS score, and WOMAC score, did not significantly differ between the two groups either preoperatively or postoperatively, similar to most other studies comparing CR and PS TKA 14,15. In contrast, another study reported better clinical outcomes with PS TKA than with CR TKA after a 10-year follow-up 16; the authors attributed the worse outcomes of CR TKA to degenerative changes of the PCL over time. In recent studies on the clinical outcomes of TKA, the CR and PS designs were shown to achieve similar clinical results. Several studies have compared the function of standard and high-flexion TKA designs, including our previous study 17,18. Similar to the previous studies reporting no difference between standard CR and PS knees in TKA, a few studies have shown the same clinical outcomes between high-flexion CR and PS prostheses after short-term follow-up. Kim et al. found no significant differences in KSS, HSS score, and patient satisfaction between the two groups 19. Another study reported that high-flexion CR TKA showed slightly better results than high-flexion PS TKA, although the difference was not significant 13.
The design of high-flexion prostheses differs in several major aspects from that of standard prostheses. A major feature of high-flexion prostheses is the extended radius and thickness of the posterior condyle of the femoral component, which increases the contact area between the femoral and tibial articulation during high flexion. However, Han et al. reported a high incidence of early aseptic loosening (38%) of high-flexion PS knees in TKA at a mean of 23 months 4. Since then, early aseptic loosening has been investigated and reported as a major concern of high-flexion prostheses. They suggested that if deep knee flexion (ROM > 120°) is achieved and accompanied by migration of the femoral component, asymmetrical loading between the medial and lateral compartments of a total knee replacement may contribute to loosening and failure of the implant, which occurs at the implant-cement interface 4,24. In the present study, 11 (6.9%) knees in the CR group and 10 (11.5%) knees in the PS group showed a radiolucent line on the femoral side, and 10 (6.3%) knees in the CR group and 6 (6.4%) knees in the PS group showed a radiolucent line on the tibial side, with no significant differences between the two groups. These findings indicate that neither of the two high-flexion prostheses caused a higher occurrence of radiolucent lines or loosening in the long-term follow-up.
Studies on the survival rate of the NexGen high-flexion system are ongoing, and previous mid- to long-term follow-up studies showed satisfactory outcomes. Rhee et al. reported that the NexGen LPS-Flex showed satisfactory outcomes with a 99.2% implant survival rate after 5 years 25. Kim et al. also reported a 98.4% survival rate for the NexGen LPS-Flex with a mean follow-up of 15 years 23. Our study showed a similar implant survival rate. We documented successful results for total knee replacements with either a CR or a PS prosthesis, focusing on correct flexion and extension gaps, well-balanced ligaments, and good cementing technique for a high success rate at 10 years. However, our study had several limitations. First, this was not a randomized controlled study. Nevertheless, we adopted strict patient selection criteria and matched patients for body mass index, preoperative ROM, age, sex, and follow-up period (Table 1). Moreover, the same gap balance technique was used for the implantation of the CR and PS TKA components, except for retaining or substituting the PCL and the posterior tibial slope. Second, as severely deformed knees are an indication for PS TKA, their exclusion in this study might have affected the clinical results in the PS group. However, to compare the long-term follow-up between the two groups, we had to exclude patients with contraindications to CR TKA and minimize the effects of preoperative variables on the postoperative clinical results. Third, the analysis of patients with a specific implant system (NexGen) might not be generalizable to patients with other arthroplasty systems. Nevertheless, the present study is the first to compare the long-term outcomes of high-flexion CR and PS TKA prostheses.
"Medicine",
"Engineering"
] |
The influence of the number of concrete dowels on shear resistance based on push-out tests
To reduce the depth of floor-beam structures and to save the cost of headed shear studs, many types of shallow composite beam have been developed during the last few years. Among them, the shallow-hollow steel beam, whose web openings are infilled with in-situ concrete (forming so-called concrete dowels), has attracted increasing attention recently. In this new kind of structure, the concrete dowel plays an important role as the principal shear connector. This article presents an investigation of the shear-transfer mechanism and failure behavior of the trapezoidal concrete dowel. An experimental campaign of static push-out tests has been conducted with variation in the number of web openings (WOs). The results indicate that the mechanical behavior of the concrete dowel can be divided into crushing, compression, and tension zones and exhibits brittle behavior. The longitudinal shear resistance and the specimen's stiffness are strongly affected by the number of WOs considered.
INTRODUCTION
Due to the growing demand for shallow floor-beam structures, numerous slim floor systems have been developed [1], including Slimfoor [2], Asymmetric Slimflor Beam [3], Ultra Shallow Floor Beam [4][5][6], and DELTABEAM [7], which are used in commercial and residential buildings, hospitals, schools, etc. The composite action of these structures is the most important factor to consider. Unlike a conventional composite beam system, where the composite action is provided by shear connectors welded to the top flange of the steel beam and embedded in the concrete slab, the composite action of the new type of shallow floor is primarily provided by the concrete dowel shear connectors (concrete dowels) between the hollow steel beam and the concrete. This latter element ensures that the steel and concrete parts work simultaneously, so that they can be designed as a single unified structure. In recent years, the new shallow composite beam has been extensively studied [8][9][10][11][12][13]. The steel beam contains web openings infilled with in-situ concrete (the concrete dowels). In this innovative structure, the concrete dowel plays an important role as the principal shear-transfer connector. The behavior of the concrete dowel can be estimated either by theoretical calculation of the shear resistance or experimentally with push-out tests based on [14]. The shear behavior of concrete dowels has been addressed in many research works, including empirical/analytical, numerical, and experimental analyses. Huo and D'Mello [8] conducted a series of push-out tests and an analytical analysis on circular-shaped concrete dowels. Hosseinpour et al. [12] investigated the shear resistance of circular, square, or rectangular shapes by both experiment and numerical modeling. Limazie and Chen [15] used finite element analysis to investigate the shear resistance and failure behavior of the dowel shapes under direct longitudinal force, as well as the influence and geometrical requirements of parameters such as the size and spacing of the web opening on the behavior of the shallow cellular composite beam. So far, to the authors' best knowledge, the web opening shapes studied when designing the shallow composite beam have been limited to circular, square, or rectangular ones. In the current research, a new trapezoidal shape is proposed for the web opening, leading to the introduction of a new type of shallow-hollow composite beam. As the shear-transfer behavior of the concrete dowel is the essential element of the steel-concrete composite structure, this paper investigates the longitudinal shear resistance and failure behavior of trapezoidal concrete dowels through a series of static push-out tests.
NEW SHALLOW-HOLLOW COMPOSITE BEAM
In a conventional composite beam, a steel-concrete composite beam is formed by a concrete slab attached to the upper flange of an H (or I) steel beam by shear connectors. Normally, the composite action is achieved by means of headed shear studs welded through the deck to the upper flange. In order to save the cost of studs, avoid welding time on site, and reduce the depth of the floor-beam structure, the concrete dowel connector, which uses in-situ concrete to fill openings in the web of the steel beam, was proposed. The compound steel beam has three parts: bottom, middle plate, and top part. The bottom and top could be welded or folded shapes; in order to avoid honeycombs or voids in the concrete, the cross-section of the top part should be trapezoidal, and the bottom could be rectangular. Both sides of the beam have "wings" belonging to the middle plate to support the metal decking. With the aim of increasing the longitudinal shear resistance, web openings are created in both sides of the top part. The web opening shape could be circular, rectangular, or trapezoidal. To increase steel material savings and reduce manufacturing cost, the trapezoidal shape is proposed for the web opening of the composite beam. When the in-situ concrete at the web openings has hardened, the trapezoidal concrete dowels are created and connected on both sides of the steel web. In this research, no rebar was set through the WOs. The 3D configuration of the new shallow-hollow composite beam is illustrated in Fig. 1.
EXPERIMENTAL CAMPAIGN: MATERIALS, SPECIMENS, AND PUSH-OUT TESTS
To investigate the influence of concrete dowels on shear strength and failure mechanism, three test groups, T1G, T2G, and T3G, were defined for the push-out tests in the experimental campaign. The number in the name of the test group indicates the number of web openings (WOs) in the steel section. Each test group had 3 specimens, leading to a total of 9 test specimens. The dimensions of a single WO and the thickness of the steel section are the same in all specimens. Normal concrete was used to cast the specimens. To determine the compressive strength of the concrete, cylindrical samples with dimensions of 160 mm in diameter and 320 mm in height were prepared and subjected to a uniaxial compression test. The compressive and tensile strengths of the cylinder concrete samples are 33.8 MPa and 2.93 MPa, respectively (equivalent to C30/37 according to EN1992-1-1 [16]). All test specimens consisted of steel sections and concrete parts, as depicted in Figs. 2 and 3. The width of the concrete part is 230 mm, the same for all nine specimens, whereas the length of the specimen varies with the number of WOs considered. The names and dimensions of the specimen configurations are presented in Tab. 1. The tee section used steel grade S235 with a yield strength of 235 MPa. In all cases, debonding grease is applied onto the steel section to avoid friction and shear-bond effects.
Figure 3: Specimen preparation process: steel section greasing, formwork, and concrete casting.
All specimens were cast and cured under the same conditions at the Laboratory of the Hanoi University of Civil Engineering. The push-out tests were performed according to EN-1994-1-1 [14]. A hydraulic jack with a capacity of 1000 kN and a load cell were used to control the loading. The load-slip relationship is automatically recorded via a TDS530 data logger. The load is applied to the specimen through a 25 mm-thick end steel plate welded on top of the steel tee section. LVDTs (Linear Variable Differential Transformers) are positioned as depicted in Figs. 4 and 5 to measure the slip at the interface between the steel section and the concrete part, as well as to control the centricity of the test.
Load-slip relationship and failure mechanism
The applied load was measured by the load cell, and the relative slips between steel and concrete were determined through the LVDTs. The load-slip curves obtained from the experiments are displayed in Fig. 6(a-c). The load-slip behavior clearly shows that the shear resistance is affected by the number of WOs, i.e., concrete dowels, considered in the push-out tests. As the number of WOs increases, the ultimate load and the relative stiffness of the specimen increase as well. According to EN1994-1-1 [14], all the achieved slip values are smaller than 6 mm, so the load-slip relationships indicate that the concrete dowel connector exhibits brittle failure. The brittle behavior of the concrete dowel comes from the inherent material characteristics of concrete. The relative vertical displacement between the steel section and the concrete part (slip) is relatively small. The slip value corresponding to the ultimate load ranges from 0.2 mm to 0.7 mm, inversely proportional to the number of WOs. The failure shapes of test groups T1G, T2G, and T3G are presented in Fig. 7. This figure shows that the failure mechanisms of the concrete dowels in all test groups are quite similar. The top part of the concrete dowel is in the compression zone. In particular, at the acute angle of the trapezoidal opening section, the concrete was crushed by the longitudinal shear force, and the crushed section is almost smooth. Furthermore, Fig. 7 indicates that the lower part of the concrete in the trapezoidal section is convex (in the transverse direction of the web of the steel section), suggesting that tensile splitting failure occurred in the lower part of the concrete dowel. It can be seen that the shear connector's bearing capacity is composed primarily of the compressive resistance of the concrete part of the dowel in contact with the upper trapezoidal edge and the tensile splitting resistance of the lower concrete part in the transverse direction. These observations agree with the failure profiles shown experimentally and numerically by [8,13]. Based on the experimental observations from a total of nine static push-out tests, a schematic illustration of the failure mode of a single trapezoidal concrete dowel is given in Fig. 8. The failure mechanism suggests that the compression zone is not homogeneous on the upper edge of the trapezoidal opening, i.e., section 2-2 is different from section 3-3. A small zone, where the concrete is totally crushed, is observed. In the lower part of the section, the concrete is initially in a triaxial stress state. As the shear force increases, the shear-induced failure of the specimen is triggered by the splitting of the concrete, characterized by the tension zone as described in the figure.
Ultimate loads
The ultimate loads of all specimens and the mean values for each test group are presented in Tab. 2. The average ultimate loads for the T1G, T2G, and T3G test groups are 109.9 kN, 189.9 kN, and 264.8 kN, respectively, showing that the ultimate load increases with the number of concrete dowels. Relative to T1G (the reference test group), the increases for T2G and T3G are 72.8% and 140.9%, respectively. This finding suggests that the dowels in a specimen act simultaneously. In order to evaluate the load-slip relationships for all test groups, the mean load-slip relationship was determined for each test group and is displayed in Fig. 6(d). The mean curves reveal that both the slip stiffness (defined as the ratio of the ultimate load to the corresponding slip) and the ultimate load increase with the number of dowels. Moreover, the higher the number of WOs used, the smaller the observed relative slip. When considering the relationship between the slip and the number of WOs, a plateau seems to exist, as marked in Fig. 6(d).
Simultaneous behavior of concrete dowels
The values obtained by dividing the ultimate load of a specimen by its number of dowels are considered the shear capacity of a single dowel. The average shear capacity values for all test groups are summarized in Tab. 3. The results in Tab. 3 indicate that, as the number of dowels increases, the ultimate load increases due to the simultaneous behavior of the concrete dowels, while, on the contrary, the average shear strength per dowel decreases. This means that the simultaneous behavior of the shear connectors does not accumulate linearly; in other words, the ultimate load is not simply the product of the shear capacity of a single dowel and the number of dowels in the specimen. In such cases, the influence of the number of concrete dowels must be taken into account. The coefficient of simultaneous working between the dowels in a specimen is defined as the ratio of the single-dowel shear capacity of the T2G and T3G groups to that of the T1G group. The values of this coefficient indicate that, as the number of WOs increases, the coefficient of simultaneous behavior decreases. This confirms again that the influence of the number of concrete dowels on the shear resistance behavior is not negligible.
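A short recomputation of this logic from the mean ultimate loads reported above, assuming one, two, and three dowels per specimen as the group names indicate (variable names are ours):

```python
ultimate_load_kN = {"T1G": 109.9, "T2G": 189.9, "T3G": 264.8}
n_dowels = {"T1G": 1, "T2G": 2, "T3G": 3}

# Single-dowel shear capacity = group mean ultimate load / number of dowels.
per_dowel = {g: ultimate_load_kN[g] / n_dowels[g] for g in ultimate_load_kN}

# Coefficient of simultaneous working = capacity ratio against T1G.
coefficient = {g: per_dowel[g] / per_dowel["T1G"] for g in per_dowel}

for g in ("T1G", "T2G", "T3G"):
    print(f"{g}: {per_dowel[g]:.1f} kN per dowel, coefficient {coefficient[g]:.2f}")
# T2G -> ~95.0 kN (0.86); T3G -> ~88.3 kN (0.80): the coefficient drops as
# the number of WOs grows, i.e., dowel action does not accumulate linearly.
```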
Besides the shear capacity values, the slip stiffness of the dowel was considered. As presented in Tab. 4, the slip stiffness was influenced by the number of dowels in the specimens: the average slip stiffness of groups T2G and T3G is greater than that of group T1G by about 92% and 148%, respectively.
CONCLUSION
An experimental campaign has been conducted and presented in this paper to investigate the failure behavior of concrete dowels subjected to push-out tests. Nine specimens divided into three groups were considered, with identical conditions in all groups except for the number of WOs. Based on the observations, the following conclusions can be drawn:
-All specimens exhibited brittle shear failure, owing to the brittle behavior of the concrete.
-Post-failure analysis indicated that the failure mechanism of the dowel involves crushing in a small compression zone and tensile splitting of the concrete in the rest of the WO area.
-Compared to the T1G group, the ultimate load of the T2G and T3G groups increased by 72.8% and 140.9% when the number of dowels doubled and tripled, respectively.
-By evaluating the simultaneity of dowel behavior, it was found that the simultaneous behavior of the shear connectors does not accumulate, so the influence of the number of dowels is not negligible. The coefficient of simultaneous working of the dowels decreases as the dowel number increases.
-Finally, the slip stiffness of the shear connector was influenced by the number of dowels in the specimens: the slip stiffness increased with the number of dowels.
It should be noted that the present research is purely experimental. Mechanical analyses were based on the concrete behavior and compared to other push-out test results in the literature. In future work, more experiments supported by numerical modeling (finite element or combined finite-discrete element approaches), systematically varying the geometrical dimensions and concrete grade, will be performed. These will serve as ingredients for deriving an empirical equation to predict the shear resistance of concrete dowels.
"Engineering",
"Materials Science"
] |
Agri-Food Contexts in Mediterranean Regions: Contributions to Better Resources Management
The agri-food frameworks have specific characteristics (production units with small dimensions and in great number with implications in the respective markets) that call for adjusted approaches, even more so when they are considered in Mediterranean contexts (where global warming will have relevant impacts). In fact, the Mediterranean regions and countries have particular specificities (due to their climate conditions) that distinguish them from their neighbours. This is particularly true in Europe, for example, where the southern countries present socioeconomic dynamics (associated with the respective public debt) that are different from those identified in the northern regions. From this perspective, it seems pertinent to analyse the several dimensions of the agri-food systems in the Mediterranean area. To achieve these objectives, a search was carried out on 26 December 2020 on the scientific databases Web of Science Core Collection (WoS) and Scopus for the topics “agr*-food” and “Mediterranean”. These keywords were selected after a previous literature survey and to capture the agri-food contexts in Mediterranean regions. The keyword “agr*-food” was considered in this way to allow for a wider search (including “agri-food”, “agro-food”, etc.). Considering only articles (excluding proceeding papers, book chapters, and books, because in some cases it is difficult to access the entire content of the document), 100 and 117 documents were obtained from the WoS and Scopus, respectively. After removing the duplicated studies and taking into account the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) approach, 137 documents were surveyed through a literature review. As main insights, several dimensions embedded in the concept of agri-food were highlighted, from those related to heritage subjects to natural assets. On the other hand, the following subtopics were identified: agri-food dynamics and sustainability, agriculture and agri-food systems, agri-chains and food consumption, and food production and composition impact on agri-chains. Stressing the gaps in the scientific literature, related to the topics here addressed, there are possibilities to better explore the several dimensions and solutions offered by the new developments associated with smart agriculture and agriculture 4.0, specifically for the Mediterranean contexts and their challenges. Finally, to complement the PRISMA methodologies, an MB2MBA2 (Methodology Based on Benchmarking of Metadata, from scientific databases, and Bibliometric Assessment and Analysis) approach is suggested to carry out systematic literature reviews, based on bibliometric analysis.
Introduction
The several dimensions (economic, social, and environmental) of agri-food systems are interrelated with other domains, such as those associated with chains and territory, where, for example, the heritage, socioeconomic dynamics, and natural assets have their importance [1].
On the other hand, agri-food contexts, due to their specificities, are often subject to several public interventions, namely through agricultural policies. This is particularly relevant in European Union (EU) countries, due to the different processes of enlargement and the consequent diversity of realities amongst member-states and regions [2].
Agricultural policies operate alongside institutions. Amongst the agri-food organisations, cooperatives make a decisive contribution to overcoming the particularities of the sector [3]. The cooperatives are crucial to technically support the agri-food stakeholders and help them to concentrate and add value to farm production.
Other approaches to dealing with the characteristics of the sector are the alternative agri-food networks that have appeared over the last decades, across several countries, as an interesting substitute for the traditional normalised systems towards more sustainable and healthy food markets [4].
In addition to its internal particularities, the agri-food sector always deserves special attention because of its environmental externalities and contributions to global warming [5]. In fact, the impacts on the environment from farming activities are having a real influence on the air, soil, and water quality. Achieving a sustainable and healthy agri-food sector seems to be a concern for several stakeholders [6], as well as the interrelationships of this sustainability with rural development [7].
Material and Methods
Considering the motivations previously described, the main objective of this research is to analyse the several dimensions of the agri-food systems in Mediterranean contexts. For this purpose, 100 and 117 articles (excluding conference papers, book chapters, and books) were obtained from the Web of Science Core Collection [8] and Scopus [9], respectively, in a search carried out on 26 December 2020, without any restriction for the publication year. For this search, the topics "agr*-food" and "Mediterranean" were considered. To allow for a wider search, the "agr*-food" topic was also considered, following Türkeli et al. [10]. This expression allows for the consideration of documents for the several forms considered by the researchers to express the agricultural and food systems, such as agri-food, agro-food, etc. The topic "agr*-food" is represented in this study by the expression agri-food. This expression appears, in general, more frequently than agro-food in WoS and Scopus; nonetheless, specifically for the Mediterranean topic, there are no great differences. These 217 documents were assessed through the PRISMA approach [11], and with the support of the Zotero software [12], the duplicated studies were removed, leaving 137 that were surveyed through literature review. To aid in the organisation of the literature review in subsections, a previous bibliometric analysis was carried out with the VOSviewer software [13,14], considering keywords and terms as items.
The bibliometric analysis is a relevant support to better structure the literature review [15] and provide interesting findings to better understand the scientific trends [16]. In turn, systematic literature reviews are adjusted approaches to assess the state-of-art of the research associated with the topics addressed [17]. In addition, the agri-chains need to deal with new challenges [18] in the coming future [19].
This research follows the approach described before; nonetheless, there are other methodologies followed by other studies, such as, for instance, Sharma et al. [20], that are important to highlight here. In this study, an MB2MBA2 (Methodology Based on Benchmarking of Metadata, from scientific databases, and Bibliometric Assessment and Analysis) approach with the following phases is suggested (following, for example, Martinho [15] and Kent Baker et al. [21]):
-Selection of the most suitable scientific databases to work upon, considering the topics to be addressed;
-Removal of the duplicated and irrelevant documents;
-Assessment of the information obtained from the selected database(s) to identify the best methods to be considered in the bibliometric analysis;
-Survey, through a literature review, of the total documents or, in the case of a great number of studies, of the most representative ones as a sample of the total results obtained in the search.
Bibliometric Analysis
Figure 1 and Table 1, obtained through the VOSviewer software [14] with bibliographic data, consider co-occurrence as links and keywords as items. In the co-occurrence links, the relatedness of the keywords is based on the number of documents in which they appear together [13]. To obtain this figure and this table, 1 was considered the minimum number of occurrences (number of documents in which a keyword appears) of a keyword [13]. In this figure, the size of the circle associated with each keyword represents the number of occurrences, and the distance between items (keywords) is related to the level of relatedness.
The data presented in Figure 1 and Table 1 highlight the importance of sustainability and agriculture, in their several dimensions, for the agri-food contexts in the Mediterranean regions. On the other hand, the relevance of Europe is shown, namely through Spain and Italy, in the Mediterranean agri-food systems. Specifically, in Table 1, it is possible to identify five large clusters, in terms of diversity of keywords, among the top 50 items with the most occurrences. Cluster 1 displays keywords related to sustainability, where Italy and Spain appear with high occurrences and a recent average publication year, showing that these are current topics. Cluster 2 contains keywords associated with diet and consumption, Cluster 3 items such as Europe and waste, Cluster 4 agriculture, and Cluster 5 Mediterranean productions, such as olives and fruit. The remaining shorter clusters present keywords related to those described for the larger ones. Figure 1 shows all the clusters, and Table 1 shows only those associated with the top 50 most relevant items.
Considering text data and terms as items, Figure 2 and Table 2 were obtained. In this case, binary counting was considered, and the occurrences represent the number of documents in which a term appears at least once [13]. One was considered the minimum number of occurrences of a term. The size of the circles is related to the number of occurrences, and the distance between items is associated with their relatedness. Figure 2 and Table 2 also show the importance of sustainability and agriculture in the agri-food systems; nonetheless, they further highlight the relevance of the agri-chains' behaviours, namely in terms of food choice and consumption, food production and composition, and the associated dynamics. Exploring Table 2 in greater depth, some larger clusters deserve further analysis. For example, Cluster 3 highlights the importance of food production and agriculture, Cluster 5 shows the interrelationships between food production and biodegradability, Cluster 13 shows the role of universities and research for agri-food systems, and Cluster 46 reveals the interrelationships between food production and heritage.
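To make the counting rules concrete, the following minimal sketch reproduces the occurrence and co-occurrence (link) computation that VOSviewer performs, on invented placeholder documents reduced to keyword sets:

```python
from collections import Counter
from itertools import combinations

docs = [  # hypothetical keyword lists, one set per surveyed document
    {"sustainability", "agriculture", "mediterranean"},
    {"sustainability", "olive oil", "mediterranean diet"},
    {"agriculture", "mediterranean", "circular economy"},
]

# Occurrences: number of documents in which each keyword appears.
occurrences = Counter(kw for doc in docs for kw in doc)

# Co-occurrence links: number of documents in which a keyword pair co-appears.
links = Counter(pair for doc in docs for pair in combinations(sorted(doc), 2))

# Apply the minimum-occurrence threshold (1, as in the text above).
kept = {kw for kw, n in occurrences.items() if n >= 1}
print({p: n for p, n in links.items() if set(p) <= kept})
```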
Considering the information highlighted here and following, for example, Martinho [15,22,23], who carried out an organised literature review based on previous bibliometric analysis, the literature review will be carried out for the following subtopics: agri-food dynamics and sustainability; agriculture and agri-food systems; agri-chains and food consumption; and food production and composition impact on agri-chains.
Systematic Literature Review
Considering the bibliometric analysis previously performed and a prior general overview of the literature, this section will be organised into the following subsections: agri-food dynamics and sustainability; agriculture and agri-food systems; agri-chains and food consumption; food production and composition impact on agri-chains.
Agri-Food Dynamics and Sustainability
The agri-food sector has relative importance in some European Mediterranean regions, specifically in parts of Spain [24] and Italy [25]. The by-products of several agri-food activities pose serious management challenges [26], more so in some sectors [27], but may provide interesting alternatives for diverse activities from a sustainability perspective. The use of these by-products as feed for livestock production is one example [28]. The production of bioenergy is another [29], from the viewpoint of a circular economy [30], where, for instance, the residues from olive oil production have great potential for bioethanol extraction [31]. This use of olive oil production residues has environmental advantages compared to disposal into soil. Organic fertilisers may be obtained from plant residues, animal waste, and manure [46]. In general, agri-food residues may be considered to replace scarce resources, such as water [47], through wastewater treatments [48], to produce fuels [49] and heat [50], and to obtain bioproducts [51]. Sometimes, owing to the characteristics of the products obtained, there are advantages to jointly treating different residues through well-designed mixtures [52].
Sustainability has a multidimensional nature [53], and its several dimensions are impacted by various factors [54], some of them difficult to control [55]. Concerns about the agri-food sector in Mediterranean regions [56] and, specifically, about its sustainability are long-standing among international institutions [57] and researchers [58], including in forestry [59]. In this way, the mitigation of environmental impacts is a determinant of sustainable development [60], for which bilateral cooperation is sometimes fundamental [61], but the socioeconomic and financial domains also deserve specific attention. In fact, financial variables and their relationships with profitability and performance are relevant dimensions to consider and manage in the agri-food sector [62].
Agriculture and Agri-Food Systems
There are strong relationships between the agricultural sector and the agri-food systems, with agriculture as a sector upstream of these frameworks, in an interrelationship that involves agriculture, food, and the environment [63]. A sustainable agriculture includes land preservation [64], despite the increased use of agricultural land for non-food production [65]. This alternative use of land may provide an interesting approach to marginal land [66]. In turn, some agricultural crops, such as olive trees, also have a cultural dimension [67]. Agricultural and land policies, including the Common Agricultural Policy [68], have a great impact on these interlinkages [69], as do agricultural institutions, such as cooperatives [70]. For the design of adjusted agri-food policies, the involvement of stakeholders [71] and policymakers [72] is fundamental. Agricultural activities are complex systems involving several factors, some biological [73] and others interrelated with workforce characteristics [74], for example.
The combination of agricultural and forestry activities in a complementary way may, under certain specific conditions, bring interesting contributions to the role of the agroforestry sector in agri-food contexts [75]. The Mediterranean area is a global region of great biodiversity [76], which calls for special attention both to avoid its extinction [77] and to bring specificities and potentialities to the agri-food systems [78] in Europe [79]. Some of this biodiversity has spread to other continents [80], and some has come from outside the Mediterranean area [81]. There are great opportunities to improve forest management for a more integrated development, specifically in rural areas [82]. A more integrated rural development is a concern for several countries and institutions around the world [83] and calls for innovative approaches, namely where land abandonment is increasing [84]. The agricultural sector, in parallel with its contribution towards food security, has social, cultural, and environmental dimensions. Nonetheless, these contributions, in the EU, differ among Central, New Eastern, and Mediterranean countries [85].
Urban agriculture has appeared, supplying new perspectives for the agri-food systems within cities. However, its contribution is not limited to food produced locally. Indeed, urban agriculture also has social and ecological functions [86], in some cases different from those identified for the traditional farming sector. Beyond urban agriculture, local production and consumption is a trend that has grown in the last decades with advantages for the producers, consumers [87], and the whole of society, from the perspective of shorter value chains [88]. Certification is another approach that may support farmers in dealing with the agricultural market's specificities [89]. In any case, certification should be seen as a strategy that must be combined with others, namely those related to sustainability and heritage [90].
Agri-Chains and Food Consumption
The agri-food chains are becoming more competitive around the world, which calls for new firm strategies [91], which are not always socially fair. The agri-chains are complex systems involving several processes and operators. For example, olive oil production involves the following phases [92]: olive tree care, fruit harvest, fruit processing, and oil packaging. The supply agri-chains include various steps from cultivation to end consumption, passing through processing and distribution [93]. The agri-food sector is one of the most important in the world [94]. For agricultural products, the production phase has the greatest environmental impact [95]. This information about the environmental implications associated with each agri-food product should be communicated to consumers in a clear way to better inform their choices [96].
The great concerns in the agri-chains are health and the environment [97], namely the impacts from waste, water, and energy use, as well as greenhouse gas emissions [98]. However, in some cases, there is still antagonism between indicators related to these two dimensions (health and environment) [99]. The associated policies may bring relevant contributions towards more balanced agri-chains [100]. Unhealthy food has implications for human welfare [101], especially among children. For these cases, more adjusted agri-food policies are called for [102], as well as innovative solutions that promote healthy food consumption [103]. Another concern is associated with the losses and waste across the food chains [104], including those at the nutritional level [105].
The Mediterranean diet (MD), classified as Intangible Cultural Heritage by UNESCO in 2013 [106], is a food label [107]. It is, in general, considered a healthy eating habit that helps to prevent diabetes, cardiovascular diseases, and cancer [108]. Olive oil is one of the most important ingredients of the MD [109], considering its antioxidant properties [110]. The antioxidant properties are also present in other products, such as garlic [111]. However, this healthy diet has changed [112] over recent decades [113], and, in certain circumstances, has been compromised by the pressures from globalisation [114], which promotes the consumption of animal and processed foods [115]. In addition, the MD is more environmentally sustainable [116] and has a cultural dimension [117]. The MD combines tradition, sustainability, and fields of innovation [118] and may be considered a Mediterranean lifestyle (ML) [119].
The agri-chains became, in fact, more globalised over the last few decades, and the successive European Union enlargement contributed further to these processes, with benefits for some sectors and fewer advantages for others [120]. These enlargements increased the diversity of realities inside the EU [121]. In these processes of globalisation, the strategy is to potentiate competitive advantage [122]. In some contexts, this globalisation process is compromised by infrastructural constraints, such as those related to the conditions for transport [123]. Transportation conditions have a relevant influence on the agri-food sector's competitiveness [124]. In any case, the agri-chains are dynamic systems that have changed over the years [125] in adjustment processes towards these new circumstances [126].
There is a historical agri-food trade between the EU Mediterranean countries and their neighbours from North Africa and the Neighbouring East, in the Mediterranean basin [127]. The relative importance of countries outside the EU has increased over the last decades [128]. The Euro-Mediterranean (EUROMED) integration has had its implications in this scenario [129], but the lack of industrial diversification in non-EU Mediterranean countries may be a constraint [130]. The EUROMED partnership has opened up great opportunities but needs to move forward [131]. Nonetheless, the trade flow intensities of agri-food products vary as a function of the group of countries considered [132], due to their economic and structural specificities [133], special relationships [134], and international frameworks [135]. External shocks, such as the global financial crisis, have impacted the trade patterns between Mediterranean countries [136], as have agreements for tariff liberalisation in the region [137]. The chain organisation, transparency and security, infrastructure and logistics conditions, and transaction costs matter greatly within these frameworks [138].
Food Production and Composition Impacts on Agri-Chains
Processed food production generates several residues that have environmental impacts [139]. The major challenge for the future will be to reduce emissions without compromising the need for food [140]. In addition, food production requires great quantities of resources. One example is the requirement for energy, which may be provided by alternative and renewable sources largely available in Mediterranean countries, such as solar resources [141]. Another example is the need for water, with its associated costs [142], including those related to dam maintenance and conservation [143]. In these contexts, reuse and recycling must become the key buzzwords [144], albeit with the correct approach [145], in order to avoid risks to human health [146]. Other buzzwords are agroecology [147], efficiency [148], and optimisation [149] in the use of resources to reduce costs and the carbon footprint. The agroecology perspective within the agricultural sector may improve the circularity of the agrarian metabolism and reduce the metabolic rift [150]. New technologies [151], including those related to information and communication [152], and new approaches [153] in the diverse activities [154] may bring direct and indirect additional contributions to agri-food systems and their sustainability [155].
On the other hand, consumers are currently more interested in knowing about food production processes and the composition of foodstuffs [156]. To meet these consumer preferences, the food industry has searched for alternative products for its processes, such as hydrosols, which have been used as natural antimicrobials [157]. In addition, some foodstuffs are considered functional foods or nutraceuticals due to their composition and pharmacological characteristics [158]. These new tendencies have impacts on agri-food dynamics.
In every sector, including in agri-food contexts with their specificities [159], the cooperation between firms is crucial to better deal with market challenges; however, these alliances are often unstable and asymmetric, more so when they occur among production units of different sizes [160].
Main Insights from the Literature Review and Discussions
The main insights are presented in Table 3 and reveal the importance of adjusted management of by-products as a way to reduce environmental impacts and to find innovative and alternative uses from the perspective of the circular economy. Innovative approaches that deal with the increased carbon footprint and allow improvements in sustainability are determinants of a more balanced development.
Table 3. Public policies and production assets as main axes.
Documents | Main Insights
[26] | By-products bring about serious challenges to management
[61] | Bilateral cooperation is fundamental and may bring relevant contributions for the several dimensions of sustainability
[64] | A sustainable agriculture includes land preservation
[69] | The agricultural policies are important drivers of the agri-food contexts
[75] | The combination of agricultural and forestry activities may bring about interesting contributions
[83] | A more integrated rural development is a concern for several countries and institutions
[85] | The realities, in the EU, differ among Central, New Eastern, and Mediterranean countries
[86] | Urban agriculture also has social and ecological functions
[94] | The agri-food sector is one of the most important worldwide
[99] | There is antagonism between indicators related to the health and environment dimensions
[106] | The Mediterranean diet (MD) was classified as Intangible Cultural Heritage by UNESCO in 2013
[127] | There is a historical agri-food trade between the EU Mediterranean countries and their neighbours
[129] | The Euro-Mediterranean (EUROMED) integration has had implications in the respective countries
[141] | The requirement for energy may be met by alternative and renewable sources largely available in Mediterranean countries
In these contexts, the agricultural policies and institutions may bring relevant contributions and play a relevant role, namely to promote interrelationships between the agricultural and forestry sectors in a more integrated rural development. This is a great task considering the diversity of realities in the Mediterranean framework.
The Mediterranean diet, as a food label and lifestyle, together with Euro-Mediterranean integration, are good signs for deeper cooperation between the Mediterranean countries, with advantages for the respective agri-food sectors and contexts.
Conclusions
The bibliometric analysis, with bibliographic data for the co-occurrence links between keyword items, reveals that, among the top 50 documents with the most occurrences, the dimensions related to sustainability and European countries, specifically Spain and Italy, are pertinent to the agri-food systems in the Mediterranean area. The same occurs for other dimensions, such as the Mediterranean diet, food waste, Mediterranean production (olive oil, fruits, and wine), agri-chain dynamics, and the interrelationships with heritage. This bibliometric assessment highlights that an analysis of the agri-food frameworks in the Mediterranean region should consider the following groups of dimensions: agri-food dynamics and sustainability, agriculture and agri-food systems, agri-chains and food consumption, and food production and composition impacts on agri-chains.
The literature review was carried out considering these four aggregated dimensions. The interrelationships between agri-food dynamics and sustainability show the importance of good management of by-products, specifically to mitigate environmental impacts, reduce the carbon footprint, reduce costs, and find innovative solutions for the agri-food systems in dealing with the changes caused by climate change and global warming. The reuse of water is an interesting example of circular economy that mitigates environmental consequences and allows for the sourcing of an increasingly scarce resource. In the relationships among agriculture and agri-food systems, the scientific literature highlights the importance of the agricultural sector in agri-chains. The performance of agriculture is conditioned by its particularities, the institutional framework, and the agricultural policies in place. The agricultural policies that impact the farming sector are particularly relevant, namely, in the European Union, in the context of the Common Agricultural Policy. Another aspect is the complementarity between agriculture and other activities, principally forestry, for an integrated rural development and the preservation of biodiversity. Urban agriculture, shorter value chains, and quality certification also appear as crucial fields in the interrelations between agriculture and agri-food systems. Regarding agri-chains and food consumption, the findings reveal that agri-chains are becoming more competitive, and the concerns for environmental and health impacts seem to be increasing; nonetheless, there is a long way to go, which calls for more adjusted policies. The Mediterranean diet/lifestyle appears here as a food label representing healthy eating with a cultural dimension, where globalisation is seen simultaneously as a threat and as a process with great opportunities to potentiate competitive advantages. These opportunities from international trade could be better promoted by a deeper Euro-Mediterranean (EUROMED) integration, allowing for an increase in commercial exchanges between the European Union and its neighbours from North Africa and the Neighbouring East. Relative to food production and composition impacts on agri-chains, the buzzwords are reuse, recycle, agroecology, efficiency, and optimisation. In addition, modern consumer preferences and interests concerning the composition of foodstuffs lead the food industry to find alternative products for its processes, such as hydrosols, and to produce functional foods or nutraceuticals.
In terms of practical implications, there are several insights here that may be considered by the various stakeholders (farmers, industrial producers, retailers, and policymakers) to improve agri-food system performance in the Mediterranean region. In theoretical terms, an MB2MBA2 (Methodology Based on Benchmarking of Metadata, from scientific databases, and Bibliometric Assessment and Analysis) approach is suggested for carrying out a systematic literature review. On the other hand, a deeper analysis of the new opportunities created by smart agriculture practices for the Mediterranean frameworks is proposed, as highlighted, for example, by Lezoche et al. [161].
For future research, and considering the great diversity of realities inside the Mediterranean area, a meta-data analysis by country based on the findings highlighted in this study is suggested. In addition, more keywords, such as food and agriculture, could be considered. Other databases, such as Google Scholar, ScienceDirect, Emerald, and Taylor and Francis, may also be considered.
Funding: This work is funded by National Funds through the FCT-Foundation for Science and Technology, I.P., within the scope of the project Refª UIDB/00681/2020.
"Economics"
] |
Learners’ Critical Thinking About Learning Mathematics
ABSTRACT
INTRODUCTION
The word 'critical' can have different meanings depending on the frame of reference of its use (Ernest, 2016). First, a situation can be 'critical', or at a point of crisis, when an action can dramatically improve or deteriorate the conditions. Secondly, criticism or criticizing can be a form of conveying disapproval, disagreement or negative comments about an argument, situation, decision, etc. Additionally, 'critical' can be understood in terms of critique (being critical), as contrary to being 'uncritical'. In this sense, being critical includes analyzing the merits, demerits and consequences of any belief, judgement, choice, opinion, product, context, etc., be it socio-cultural, political or personal (Ernest, 2016). In this paper, the word 'critical' is used to mean the opposite of being 'uncritical', and refers to learners' knowledge of critically evaluating their personal beliefs, inferences, choices, etc. in order to take informed decisions and action(s) for their personal and societal betterment.
Adopting a critical stance while making choices in life is essential for learners. They should be aware of and capable of critically analyzing their viewpoints and situations in the cognitive, social and personal spheres of their life to survive and succeed in a complex society (Facione, 1990). In addition, learners are envisioned as future critical citizens of society, using their critical thinking potential to promote justice and democracy (Skovsmose, 1998). Consequently, the teaching and learning of critical thinking practices is often recommended in the education research literature (Bybee & Fuchs, 2006; Saavedra & Opfer, 2012). A number of reports issued by elected commissions (Ludvigsen et al., 2015; Partnership for 21st Century Skills (P21), 2009) and the educational policy documents of some countries also mention the development of critical thinking competences as one of the fundamental aims of education. For example, the Norwegian Education Act states that "[through education] students and apprentices must learn to think critically and act ethically and environmentally consciously. They must have co-responsibility and the right to co-operation" (Opplaeringsloven, 1998) (translated, italics added). An interpretation of this statement is that learners should learn to think critically through and about their education, take responsibility and have the right to co-operate in decisions regarding their education. Moreover, learning critical thinking abilities is also mentioned as an educational ideal (Siegel, 1980), and learners' moral right "because in the end students must choose for themselves; there is no escaping this truth" (Norris, 1985, p. 40). Therefore, the requirement and importance of acquiring critical thinking competence for learners to achieve both individual and societal good is well established.
Mathematics is strongly represented as a fundamental school subject in countries across the globe. Mathematics education, therefore, has a vital role in educating children to become critically thoughtful, responsible and co-operative beings acting in society. Accordingly, developing critical thinking competence among mathematics learners has been a concern for mathematics education for decades (Jansson, 1986; Kuntze et al., 2017). Evolving "critical thinking skills [through mathematics education] is an implicitly hoped-for outcome of using the NCTM's Standards" (Gutstein, 2003, p. 66). However, neither the NCTM Standards nor the mathematics education research literature explicitly specify or limit the aspect(s) of learning mathematics concerning which learners' critical thinking should be developed. Consequently, mathematics education researchers have understood and used the term critical thinking diversely in different contexts to discuss the development of learners' critical competences.
Learners' critical thinking in the mathematics education research literature has been discussed mainly as, first, a set of cognitive skills used to draw logical conclusions and take informed decisions while solving mathematical problems (Aizikovitsh-Udi & Cheng, 2015; Kuntze et al., 2017; O'Daffer & Thornquist, 1993); and, secondly, as an attitude to understand and reflect over the role(s) of mathematics and mathematics education in socio-political and cultural contexts to promote justice and democratic concerns in society (Gutstein et al., 1997; Skovsmose, 1994b, 1998). The first point emphasizes learning critical thinking in mathematics to acquire mathematical procedures for problem-solving and finding unbiased logical results. The second point roots critical thinking in the spirit of Critical Mathematics Education (CME) (Skovsmose, 1994a), inspired by critical pedagogy (Freire, 1972), to consider mathematics and mathematics education as objects of reflection and critique in society. This classification highlights the attention directed to imparting critical thinking among mathematics learners regarding the cognitive and social aspects of their mathematics learning process, respectively, whereas the personal aspect seems to be missing. Therefore, in this paper we focus on learners' practice with critical thinking regarding their personal beliefs about their mathematics learning process, and their potential to participate in this process.
Thinking critically in the social and cognitive aspects of one's life cannot be complete without, or compensate for, being critical in one's personal life. Moreover, applying critical thinking cognitively, in the process of solving mathematical tasks and understanding social complexities through learning mathematics, cannot be taken to be the same as thinking critically about learning mathematics in personal life. Young mathematics learners worldwide need to make a personal choice concerning their mathematics learning early in their educational pathway. In Norway, as in many other countries, 14-15-year-old learners decide whether, and in which specific direction (vocational or theoretical), they want to pursue mathematics in upper secondary school by the end of their compulsory school years (i.e., after tenth grade). Developing learners' critical thinking faculties through mathematics education can be helpful for making a well-reflected choice of this kind. Thus, developing learners' critical abilities through mathematics education is mentioned as a central aim in the recently revised Norwegian mathematics curriculum (specified under 'fagrelevans og sentrale verdier' [subject relevance and central values]). The Norwegian mathematics curriculum states that "critical thinking in mathematics involves critical evaluation of reasoning and argumentation, and can equip the learners [with the competence] to make their own choices and to address important questions concerning their own [personal] lives and the society" (Norwegian Directorate of Education and Training, 2020) (translated). Therefore, the scope of learning critical thinking through mathematics education can be seen as directly related to learners' personal lives and choices. The Norwegian mathematics curriculum emphasizes that mathematics education should evolve learners' critical attitude towards the decisions they make in their personal lives, and gives a personal dimension to learning critical thinking through mathematics education in addition to the cognitive and social dimensions. Consequently, developing a tendency to think critically about their personal beliefs concerning mathematics and its learning process may help learners to attain a meta-perspective of learning mathematics and to make informed personal choices about their mathematics education.
Statement of Problem and Research Question(s)
Learners' practice with thinking critically about their personal beliefs concerning their mathematics learning process is seldom investigated in the mathematics education research literature. This research paper addresses this research gap. By practice with thinking critically, we mean learners' tendency to critically observe their personal beliefs about mathematics and its learning process to gain a meta-perspective of it in their own lives. Therefore, in this paper learners' practice with thinking critically about, and giving suggestions concerning, their own mathematics learning process is explored based on their expressed beliefs regarding this process. The research question, "What can learners' expressed mathematics related beliefs reveal regarding their practice with thinking critically about and potential to give suggestions concerning their mathematics learning process?", is investigated in this paper. The scope of learners' mathematics related beliefs was narrowed down with the help of three sub-questions, namely: what subject content do learners find interesting to learn in mathematics? why, in their opinion, do they learn mathematics? and how may their mathematics teaching be improved?
Defining Beliefs
Learners' first-hand experiences, opinions and beliefs are used to explore their critical orientation while they talk about their mathematics learning process. Learners' beliefs about mathematics and mathematics education play an important role in learning of the subject and are well-researched (Leder, Pehkonen, & Törner, 2002; Maaß & Schlöglmann, 2009). Furinghetti and Pehkonen (2002) mention that it is difficult to point out a single universal definition of beliefs because of the different goal-orientations and contexts in which this term is used. However, Leder et al. (2002) cite Bar-Tal (1990), describing beliefs as "… units of cognition. They constitute the totality of an individual's knowledge including what people consider as facts, opinions, hypotheses, as well as faith. Accordingly, any content can be the subject of a belief." (p. 12). Beliefs can be formed through a variety of sources, be it direct personal experiences, inferences about a context or information provided by outside sources, etc. (Bar-Tal, 1990). The various types, classifications, structures, and assessment methods of beliefs discussed in the literature are beyond the scope of this paper. Therefore, adapting Bar-Tal's definition, learners' mathematics and mathematics education related beliefs are understood as their conscious or unconscious opinions, thoughts, ideas, perceptions or hypotheses about their mathematics learning process which they consider to be true.
Defining Critical Thinking
Analogous to the word critical, critical thinking is also defined differently in educational, socio-political and psychological research contexts. Critical thinking has acquired several definitions¹ over its research history. In 1988, the American Philosophical Association (APA) founded a panel of 46 experts (including educationalists, philosophers and psychologists) to develop a consensus definition and the fundamental skills comprising critical thinking for educational instruction and assessment purposes. In their concluding report, the panel presented a total of six core critical thinking cognitive skills with 16 sub-skills (Figure 1), as well as affective dispositions (not in focus in this paper) including the habits of mind of a good critical thinker. The consensus defined critical thinking as "...[a] purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based" (Facione, 1990, p. 3). Therefore, critical thinking evolved as an ability comprising core cognitive critical thinking skills and affective critical thinking dispositions (Facione, 1990, p. 12). Facione (1992) simplified APA's consensus definition of critical thinking as a process to "make a purposeful, reflective judgement about what to believe or what to do -precisely the kind of judgement which is the focus of critical thinking" (p. 17, italics added). Analogously, in this paper critical thinking is understood as an ability to reflect over one's beliefs, circumstances and actions for making purposeful, reflective judgements and choices about what to believe and how to act responsibly for improving one's life, without harming others. In the mathematics education context, besides being the ability to solve mathematical problems logically and to reflect over mathematics' role in society, critical thinking can be comprehended as a process tool for learners to consciously reflect upon and gain a meta-perspective about their own mathematics teaching-learning process. Therefore, learners' critical thinking about their own mathematics learning process was analyzed by using the self-examination and self-correction sub-skills of the sixth core cognitive critical thinking skill, self-regulation (Figure 1). The APA consensus' definitions of the sub-skills self-examination and self-correction were adapted to analyze learners' interview responses.
LITERATURE REVIEW
Learners' Beliefs and Mathematics Education Research
In mathematics education, beliefs have emerged from being a hidden variable in the mathematics learning process to being a thoroughly investigated topic of research (Goldin et al., 2009; Leder et al., 2002). The research concerning beliefs in mathematics education is rich and diverse, particularly if one also includes the studies using terms like ideas, conceptions, perceptions, attitudes, values, etc. in place of the term beliefs. Research regarding beliefs about mathematics and the mathematics learning process has involved both learners and teachers (both in-service and pre-service) as informants, and may concern either the cognitive domain (for example, a polygon having three sides is a triangle) or the affective domain (for example, mathematics is difficult) (McDonough & Sullivan, 2014). However, most of the studies concerning beliefs are placed under the umbrella of affective issues in mathematics education research (Leder & Grootenboer, 2005; Zan et al., 2006). In this study, learners' beliefs about mathematics are explored to get an insight into their practice with thinking critically about, and suggesting changes in, their mathematics learning process. Consequently, this study puts the learners' perspective in focus within the affective domain of enquiry related to their mathematics learning process.
Op't Eynde, De Corte, and Verschaffel (2002) define learners' mathematics-related beliefs as "… the implicitly or explicitly held subjective conceptions students hold to be true, that influence their mathematical learning and problem solving" (p. 16). Research studies on learners' mathematics-related beliefs conducted within the affective domain may be classified as focusing on either positive or negative affect towards mathematics (Hannula et al., 2016; Zan et al., 2006). On the one hand, regarding positive affect, learners' beliefs have paved the way to explore their interest, enjoyment, liking, etc. towards mathematics, which is usually seen to be correlated with increased motivation, engagement and achievement (Hannula et al., 2016). On the other hand, regarding negative affect, learners' beliefs have helped to explore their mathematics anxiety, stress, fear, etc., which are seen to be correlated with feelings of disaffection, failure and loss of interest in learning mathematics (Zan et al., 2006). Therefore, the literature regarding learners' mathematics-related beliefs includes a large spectrum of studies exploring their beliefs about topics such as the nature of mathematics (Young-Loveridge et al., 2006), attitudes towards learning mathematics (Grootenboer & Marshman, 2016), self-efficacy (Kele & Sharma, 2014), mathematics teaching (Mapolelo, 2009), the use-value of mathematics (Kollosche, 2017; Onion, 2004; Pais, 2013), their mathematical identities (Andersson et al., 2015; Bishop, 2012), emotional disaffection (Nardi & Steward, 2003), anxiety (Young et al., 2012) towards mathematics; and more.
Studies of learners' beliefs regarding the nature of mathematics report that learners seldom think about this topic (Kloosterman, 2002), and that they view mathematics as a useful and important (Grootenboer & Marshman, 2016), but also as a difficult and boring subject (Nardi & Steward, 2003). They usually hold a positive attitude towards learning the subject; however, only a few wish to become mathematicians (Grootenboer & Marshman, 2016). When it comes to the specific questions of what, why and how in the mathematics learning process, more studies have examined learners' beliefs concerning the why of learning mathematics, rather than what to learn in mathematics or how. In our literature search, Lindenskov (2010) emerged as the only study that took learners' perspective on what content they themselves are interested in learning in mathematics. The results suggest that, if given an opportunity, learners clearly mention what they do or do not understand while learning mathematics and devote their attention towards the related curriculum accordingly. Thus, Lindenskov (2010) suggests that, if given the opportunity, learners can critically evaluate their mathematics curriculum, and also draws implications of her study for the role of learners in CME research. Concerning the question of how, Mapolelo (2009) and Nardi and Steward (2003) reported that learners experience mathematics classes as tiring and lecture-oriented, and that the use of mathematical language seemed to be a barrier to learning the subject. However, the learners not only see the problems but, if allowed, can also point out effective instruction strategies to improve the learning outcome of their mathematics classes (Clare & Sue, 2013). Moving to the question of why, the investigations of Onion (2004), Sealey and Noyes (2010), Pais (2013), and Kollosche (2017) uncover that, though learners believe mathematics to be an important subject to learn, they struggle to respond when asked about where they use and why they need to learn advanced mathematics at school.
These studies indicate that the learners are not used to thinking over what it takes to learn mathematics or why knowing mathematics is important (Kloosterman, 2002), and find it difficult to answer the questions about the what, why and how of their mathematics learning process. In addition, they hold inconsistent beliefs about mathematics and mathematics education, indicating that thinking critically about personal mathematics learning experiences is not a part of a regular mathematics class and that, despite their potential, their voices are often not encouraged or heard (Clare & Sue, 2013). Though evident, the reasons for this inconsistency in learners' beliefs have not been the focus of research. Learners' beliefs opened a gateway to study their motivation, engagement, affection, etc. towards mathematics, but their contrary beliefs about mathematics and possible explanations for them are not explored in the research literature. In this paper, learners' beliefs are therefore studied in light of their critical thinking skills in order to look for the possible reasons for these logical inconsistencies in learners' beliefs about mathematics. The sub-skills self-examination and self-correction of the sixth critical thinking skill, self-regulation, from the critical thinking skills framework (Figure 1) are used to analyze learners' expressed beliefs about mathematics (details in the Data Analysis Framework section).
Learners' Critical Thinking and Mathematics Education Research
Analogous to the research concerning beliefs in mathematics education, the research related to learners' critical thinking is also wide and diverse. Critical thinking is discussed as ranging from a cognitive toolkit for solving mathematical problems in a logical and deductive manner (Jablonka, 2014; O'Daffer & Thornquist, 1993), to an attitude which learners, as future citizens of society, should be able to adopt in order to strive for a just and equal social structure (Gutstein, 2003; Skovsmose, 1998). This section elaborates these variations concerning critical thinking research in relation to mathematics education and suggests the connection between learners' beliefs and critical thinking in light of the research question asked in this study.
Critical thinking for mathematical problem-solving and its limitations
The "first wave" (McLaren, 1994, p. ix) of psychological and educational research concerning critical thinking presented it as the cognitive processes resulting in analyses of information objectively, argumentation, drawing conclusions, deductive and logical reasoning etc. (Ennis, 1964). The requirement of being objective, logical and rational while thinking critically had similarities with the absolutist view of mathematical knowledge (Ernest, 1985(Ernest, , 1991. This view of mathematics education in which being objective was essential for the knowledge to be seen as mathematical, provided convenient grounds for the partnership of critical thinking and mathematical problem-solving. Gutstein et al. (1997) also highlight this correlation by drawing on NCTM Standard (National Council of Teachers of Mathematics [NCTM], 1989) using mathematics as reasoning by citing "… students understanding and applying reasoning processes, creating and judging mathematical arguments, and validating their own thinking and answers" (p. 712).
This partnership led to testing learners' critical thinking skills quantitatively (using mathematical tasks as test items), usually in experimental settings (pre- and post-tests, control and treatment groups). Standardized tests, such as the Ennis-Weir Critical Thinking Essay Test (E-W), the California Critical Thinking Skills Test (CCTST) and the Cornell Level X Critical Thinking Test (CL-X)², were developed for this purpose. Likewise, the CCTST-N³ and James Madison University's Quantitative Reasoning (QR) Test⁴ were developed to assess learners' critical thinking skills (named QR, numeracy or quantitative literacy skills), specifically in mathematics. Since these tests measure cognitive critical thinking skills such as proposing hypotheses, analysis, evaluation, and drawing inferences and conclusions, most of the questions posed (usually multiple-choice) included graphs, tables, numbers and data to reach a decision. Mathematics education researchers have also employed these tests (Aizikovitsh-Udi & Cheng, 2015) and contextual mathematical problems (Firdaus et al., 2015; Palinussa, 2013) to assess learners' critical thinking skills.
Using critical thinking to solve tailored (contextual) mathematical problems (in classroom settings or critical thinking tests) can indicate learners' successful implementation of critical thinking within a cognitive domain, usually divorced from their social, political and cultural contexts and personal life-choices. Jablonka (2014, p. 122) mentions that even cognitive critical thinking "does not automatically emerge as a by-product of any specific mathematics curriculum …". Hence, learners' critical thinking ability may not transfer by itself from a particular area of the discipline to the social and personal spheres of their lives. To gain a meta-perspective of their mathematics learning process, and to understand and participate in improving it, learners should be acquainted with applying critical thinking to reflect on the role of mathematics and mathematics education in the social, political and cultural aspects of their lives, and in making their personal-life choices.
Critical thinking for understanding mathematics and mathematics education in socio-political and cultural contexts and its limitations
By introducing the "second wave" of critical thinking, critical pedagogy researchers reminded us that all critical thinking is carried out by someone in some social context (McLaren, 1994, p. x). They insisted that if one is to use critical thinking in one's life, a complete de-subjectification and de-contextualization of any daily life situation is not possible. Hence, achieving full objectivity while thinking critically, by discarding the thinker's social, political and cultural contexts and personal beliefs and/or prejudices, is illusory. Moreover, creativity and imagination may seem like opposites to critical thinking, and any creative solutions to a problem would be unacceptable if critical thinking could only propose logically deducible solutions.
Critical pedagogy, through education, aims to make learners conscious of thinking critically to reflect over their socio-political and cultural settings, and their personal beliefs, to understand and rectify hidden hegemonic ideas, power, privilege, injustices and inequalities in society (Freire, 1972). Thus, 'second wave' critical thinkers propose an alliance of critical thinking and critical pedagogy, in which critical thinking should enable learners to critically observe, reflect, understand and analyze their social contexts and personal beliefs from a distance, and take planned actions to promote social justice, equity and peace in society. This alliance also influenced mathematics education research. CME (Skovsmose, 1994a) and the socio-political turn (Gutiérrez, 2013) in mathematics education make it clear that imparting critical thinking competence among mathematics learners is important to promote social justice, equity and critical citizenship through mathematics education (Gutstein et al., 1997). The need to make mathematics learners critically reflect on and critique mathematics in the spirit of critical pedagogy is argued, though not particularly using the term critical thinking. Gutstein (2003, 2006) and Sriraman and Knott (2009) provide examples of using mathematics education with critical pedagogy to promote learners' socio-political and cultural consciousness about the role of mathematics and mathematics education in society, and of how it can help promote social justice, critical citizenship and equity. In this way, learners can be made aware of the role mathematics and mathematics education play in their socio-political and cultural surroundings and how it indeed shapes their thoughts, lives and society. Also, Skovsmose (1994b) seemingly supports the 'second wave' of critical thinking, rejecting the possibility to "… reduce 'reflection' and 'critical thinking' to 'logical awareness' ... informal logic and to criticism 'inside the disciplines' …" (p. 217). He recommends CME to develop mathemacy in learners, a competence parallel to literacy (Freire, 1972) gained by critical education, as advocated in critical pedagogy. Evolving learners' critical reflective skills for attaining mathematical, technological and reflective knowledge⁵ is favored to develop mathemacy (Skovsmose, 1994b). Learners are expected to critically reflect on the use of mathematics in society using the reflective knowledge about mathematics gained through CME. Similar to literacy, the aim of mathemacy is to empower learners to be critical citizens through mathematics education.
However, critical socio-political awareness alone is not sufficient to bring about social justice and equality, since to act consciously for bringing change in personal lives or society, people need self-awareness prior to socio-political awareness (Freire, 1972). In the mathematics education context, Gutstein (2003) says that being critical about the use of mathematics in socio-political contexts, such as statistical data and results, can help learners only read the world using mathematics. This reading can promote equity and justice theoretically, whereas to write (to act for changing) the world using mathematics, learners need to be active carriers of that change. Therefore, though learners may understand the role of mathematics and mathematics education in their socio-political and cultural contexts using critical thinking, it is not obvious that they can think critically about their own mathematics learning process for making reflected decisions in their personal lives.
Critical thinking for gaining a meta-perspective of learning mathematics in personal life and its potential
The previous sections describe the first and second waves of research concerning critical thinking. McLaren (1994, p. xii), however, describes a potential "third wave" of critical thinking aimed at making critical thinkers aware of the importance of being critical of their personal beliefs. Such critical thinkers observe their own beliefs, choices, decisions and actions in their personal lives and societal circumstances critically, to reflect on how they themselves influence and are affected by inequalities and injustice prevalent in society. Simplified, the 'third wave' of critical thinking can be seen as being a critical thinker in the personal sphere of one's life.
In mathematics education, research regarding critical thinking is not precisely delimited. Different perspectives highlight learning and applying critical thinking in the cognitive and social spheres of one's life through mathematics education (see Introduction). However, learners' potential for using critical thinking to make reflected choices and take informed, well-reasoned decisions about learning mathematics in their personal lives does not seem to be carefully researched. This focus on researching learners' application of critical thinking skills only in the cognitive and social spheres of their life while learning mathematics has failed to consider the opportunity to find out if and how they adopt a critical attitude towards learning mathematics in their personal lives. Consequently, the potential and practice of learners in applying critical thinking skills to their personal mathematics learning process remains unexplored. Learners are expected to make choices in mathematics education, especially at the transition points of their educational pathway (at the end of their compulsory school years), where they take important decisions regarding education for their future lives. Such choices should not be made uncritically. Therefore, McLaren's third wave of critical thinking applies to mathematics education as well. Learners should use critical thinking to gain a meta-perspective of learning mathematics in their personal life, so that they not only decide what to believe, but can also figure out how to act to improve their mathematics learning process. These meta-perspectives about their mathematics learning process can be different. A call for learners to achieve such meta-perspectives is also scattered around the literature concerning CME. We outline three types of non-exhaustive and overlapping meta-perspectives below.
Firstly, learners' critical thinking about personal beliefs, choices and decisions concerning mathematics learning in their lives may involve critically understanding what content they are or are not interested in learning, how mathematics is being taught to them, why they are or are not learning mathematics, and the purpose it serves in their present and future lives. Learners choose whether they want to study mathematics or not and, eventually, the direction(s) they wish to pursue in further mathematics education. Such decisions involve critical thinking about personal reasons, desires and ways of learning mathematics or not. This meta-perspective would allow them to not only "grasp a meaning but also to have the possibility of negotiating a meaning about the content of their [mathematics] education" (Skovsmose, 1994b, p. 93) (parentheses and italics added). Acquiring such a meta-perspective can give learners the possibility to co-operate with teachers to influence and evaluate their teaching strategies and take decisions to make mathematics classes meaningful for themselves. In this way, learners can develop a meta-language to have a say, take charge, start acting and participate (Skovsmose, 1994b, 2001) in their mathematics classrooms. Similar concerns are found in previous studies encouraging the extension of the focus of CME to involve students more profoundly (Clare & Sue, 2013; Powell & Brantlinger, 2008), to give them freedom to decide their own curriculum in mathematics (Lindenskov, 2010) and to know its relevance in their personal lives (Kollosche, 2017).
The second type of meta-perspective involves learners thinking critically about and understanding their strategies for learning mathematics and the ways to improve them. For instance, while doing mathematics, they can critically reflect upon their patterns of study and learning strategies, and regulate the personal practicing methods, studying structures, schedules, etc. that work for them. By gaining such a meta-perspective, learners can improve their learning strategies, leading them to feel successful in learning mathematics and to enhance their mathematics learning experience on their own (Malmivuori, 2006).
In the third type of meta-perspective, learners become aware of critically understanding their personal beliefs about mathematics and mathematics education and the reasons for holding them. The first and second meta-perspectives allow learners to understand how they can influence their mathematics learning, while the third meta-perspective encourages learners to think critically about how learning mathematics influences them and society. Several papers within CME research raise concerns about learners acquiring an inferiority complex while struggling to learn and succeed in academic mathematics. Greer and Mukhopadhyay (2012) raise the issue of the hegemony of mathematics education in society and the "intellectual violence" (p. 236) exerted on learners by the dominant societal status and teaching structure of mathematics. D'Ambrosio (2010) also pointed out the responsibility of mathematics and mathematics educators to address the problem of survival with dignity for all individuals in society. The need to critically consider the interplay between their personal beliefs about mathematics, and how these beliefs in turn influence and are reflected in the dominant structures of society through mathematics education, is addressed for both struggling and successful learners of academic mathematics. This self-awareness can make learners conscious enough to reflect over their role⁶ in hegemonic structures, and able to act for changing the status quo of dominant ideologies and traditions of mathematics education in society and to respect people holding different views.
Gaining these meta-perspectives through critical thinking can provide learners with the potential to improve their mathematics learning process for the betterment of their personal lives and society. They can figure out better ways to learn mathematics, co-operate with their teachers to improve their mathematics teaching, and prioritize their choices while learning mathematics in their present and future lives. In addition, the aim of the third wave of critical thinking, to make the thinker aware of his/her own ideological situatedness in society and work for emancipation (not adaptation), can be realized through this potential. However, in this study, learners' meta-perspective of the first type is explored, since their expressed beliefs regarding the what, why and how of their mathematics learning process are analyzed to identify their practice with thinking critically about this process.
METHOD
A qualitative approach is adopted to investigate learners' personal beliefs and reasons for learning mathematics (Creswell, 2014). The data presented here were collected as part of the research project Local Culture for Understanding Mathematics and Science (LOCUMS, 2016), which aims to explore the role of practical tasks rooted in students' own interests and local culture in learning mathematics and science. Here, we analyze parts of the data collected under a sub-project of LOCUMS carried out in mid-Norway. Two schools which included learners from diverse cultural backgrounds were chosen as the sites of data collection, in accordance with the research aims of LOCUMS. Four classroom interventions were planned, which took place in the 8th (two interventions) and 9th (two interventions) grades of two schools located in mid-Norway. Each intervention included three steps of data collection. In the first step, learners responded to a paper-and-pen questionnaire; the second step included learners working to solve practical group tasks (4-5 learners in each group); and the final step included face-to-face individual interviews with selected learners. This paper is based upon the data collected in the questionnaires and interviews. In total, 74 learners from these two schools participated in the questionnaires and interventions, and 20 learners were selected for semi-structured interviews. Both the questionnaires and interviews were conducted in Norwegian, audio-recorded, and later transcribed for analyses.
The questionnaire was designed by drawing inspiration from the ROSE survey (Schreiner & Sjøberg, 2004). Different parts of the questionnaire were devoted to the different themes which LOCUMS focused on. The questionnaire started, for example, by asking learners to introduce themselves, and moved on to their personal life interests, leisure time activities, thoughts about culture, their multicultural classrooms, connection with mathematics and science, future education and job perspectives, school environment, social participation, and a particular section for immigrant learners. Each of these parts included both five-point Likert-scale statements and some open-ended questions where learners could write down their opinions freely. Since LOCUMS was directed at both mathematics and science subjects, the statements in the questionnaire concern both these subjects. This paper discusses the results from learners' responses to the section "your connection with mathematics and science" in the questionnaire. This section included 24 Likert-scale statements concerning the learner's relation with mathematics and science subjects. Out of these 24 statements, 12 statements were about both mathematics and science, eight concerned only science, and four only mathematics. Learners had to respond to the extent they agreed with each given statement on a scale from 'strongly disagree' to 'strongly agree' (see Figure 2).
Due to the focus of LOCUMS on both mathematics and science education, the statements in this section of the questionnaire were about both these subjects. In addition, the statements carried both positive and negative connotations; for example, some statements dealing with mathematics were 'I like to learn mathematics'; 'I think what I learn in mathematics is a waste of time'; 'I am simply not good at doing mathematics'; etc. Since statements dealing with both mathematics and science made it difficult to separate learners' beliefs about mathematics from those about science, their questionnaire responses were used as a starting point for designing the interviews. Any incoherence noticed in their questionnaire responses laid the foundations for preparing interview questions directly related to their beliefs regarding mathematics and its learning process. Consequently, the questionnaire responses are presented here only as a precursor to the interview analyses, which are the main source of information for this study.
Figure 2. An excerpt of statements from the questionnaire.
An interview guide was developed in accordance with the selected learner's responses to the questionnaire statements about mathematics and science, and their performance and engagement levels in the intervention. Each interview lasted for about one to one and a half hours, and the questions were divided into different sections. These sections started with an individual introduction and moved on to enquire into the learner's general views about education, school environment, culture and cultural differences⁷. Further, learners' beliefs about their learning process in mathematics and science were explored in depth, separately for each subject; and the concluding section enquired about their experiences with the interventions. Since the research question deals with learners' practice with thinking critically about, and giving suggestions concerning, their personal mathematics learning experience, the interview questions that focused on their choices, decisions and personal experiences in the mathematics classroom are presented here. Specific issues, such as what they are or are not interested in learning, why they learn mathematics and how they are taught it, were discussed with the learners. In this way, learners were made to self-examine their opinions and suggest ways that could enrich their mathematics learning experience.
Our informants include learners studying in the 8th and 9th grades (13-14 years old) who will soon reach a transition stage in their educational pathway (i.e., in the 10th grade), where they prioritize and take important decisions concerning their further mathematics education. The whole data set consists of 74 completed paper-and-pen surveys, video recordings of learners solving group tasks, and audio recordings from 20 individual (10 boys, 10 girls) interviews with selected learners. All 74 learners received and answered the questionnaires (no sampling), whereas the 20 informants for the in-depth semi-structured interviews were selected by following the principle of maximum variation. Based on a learner's questionnaire responses and his/her performance and participation level in the classroom intervention, a representative selection of five learners (one from each group in a classroom) was made after each intervention. The selected learners included students who did or did not like mathematics, who were highly, moderately or not much interested in learning it, and who were highly, moderately or not so enthusiastic while participating in the intervention. Attention was also paid to interviewing learners from different cultural backgrounds. The majority (15) of the interviewed learners were Norwegian, but some of them had a different mother tongue (Sámi (2), Eritrean (1), Arabic (1) and Pashto (1)) and diverse cultural backgrounds. All of them except one (interviewed in Pashto) had been in Norway for more than three years and understood Norwegian well during the interviews despite having a different mother tongue. However, to investigate their critical thinking in depth, this study focuses on learners' responses, not on the variation in their culture and language.
DATA ANALYSIS FRAMEWORK
Both questionnaire and interview data were analyzed to explore learners' critical thinking about their personal choices and decisions concerning learning mathematics in their lives. Preliminary analysis of the questionnaires highlighted learners' inconsistent responses to statements concerning their mathematics learning. For instance, a learner strongly agreeing that she/he likes learning mathematics but strongly disagreeing that she/he is interested in what she/he learns in mathematics was interpreted as an incoherent response. These conflicting responses were followed up in the interviews. Both the questionnaire statements revealing learners' incoherent responses and selected interview excerpts are presented here. Questionnaire responses are analyzed in MS Excel, and interview excerpts are analyzed descriptively using the sub-skills self-examination and self-correction of the sixth core cognitive critical thinking skill, self-regulation (Figure 1).
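To make this incoherence rule concrete, a minimal sketch follows of how such response pairs could be flagged programmatically. This is an illustration only, not the authors' actual MS Excel workflow; the column names, the 1-5 Likert coding, and the agree/disagree cut-offs are assumptions introduced here.

# Minimal sketch (not the authors' MS Excel workflow): flagging "incoherent"
# pairs of Likert responses. Assumes a 1-5 coding where 1-2 = disagree and
# 4-5 = agree; the column names are hypothetical.
import pandas as pd

def incoherent(a: pd.Series, b: pd.Series, b_negative: bool = False) -> pd.Series:
    # Flag respondents who agree with one statement but disagree with the
    # other. A negatively worded statement b (e.g. 'waste of time') is
    # reverse-coded first, so that agreeing with both statements is flagged.
    if b_negative:
        b = 6 - b  # reverse-code the negatively worded statement
    return ((a >= 4) & (b <= 2)) | ((a <= 2) & (b >= 4))

# Hypothetical responses: 1 = strongly disagree ... 5 = strongly agree
df = pd.DataFrame({
    "like_math":     [5, 2, 4, 1],
    "interested":    [1, 2, 5, 5],
    "waste_of_time": [5, 4, 1, 1],
})

pair1 = incoherent(df["like_math"], df["interested"])
pair2 = incoherent(df["like_math"], df["waste_of_time"], b_negative=True)
print(f"like vs interested: {pair1.mean():.0%} incoherent")
print(f"like vs waste of time: {pair2.mean():.0%} incoherent")

On this toy data, each pair flags two of the four hypothetical respondents; applied to all five statements over the 74 questionnaires, this pairwise comparison yields the proportions of incoherent answers reported below.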
While holding certain beliefs or making specific choices or decisions in their lives, people often have personal reasons and justifications for doing so while being unconscious of these reasons. Therefore, when asked to explain their beliefs, choices and decisions, the process of clarifying can make them conscious of those underlying reasons, assumptions, biases, etc. Critical thinking, as defined, presumes the existence of beliefs, hypotheses or opinions, but being critical involves questioning those beliefs and ideas logically. Beliefs are the objects of reflection for critical thinking, and the self-regulation critical thinking skill provides the opportunity to analyze our personal beliefs rationally (Facione, 1990). Self-regulation focuses on the process of self-consciously questioning and evaluating reasons for the judgements and decisions made by oneself (i.e., self-examination), and correcting one's reasoning or beliefs in case it reveals any bias or erroneous assumptions (i.e., self-correction). Since the incoherence of learners' beliefs concerning their mathematics learning process is the focus of this study, the skills of self-examination and self-correction are employed to analyze learners' interview responses. Learners' beliefs are enquired into along with the reasons and justifications for holding those beliefs, to explore if this inconsistency may be related to their practice with thinking critically. The definitions of self-examination and self-correction were operationalized as follows:
• self-examination is understood as "… an objective and thoughtful meta-cognitive self-assessment of one's opinions and reasons for holding them'; 'judging the extent to which one's thinking is influenced by deficiencies in one's knowledge, or by stereotypes, prejudices, emotions or any other factors constraining one's objectivity or rationality." (Facione, 1990, p. 19), and
• self-correction is understood as, "where self-examination reveals errors or deficiencies, to design reasonable procedures to remedy or correct, if possible, those mistakes in one's opinions and their causes" (ibid.).
Our aim in these interviews was to explore learners' practice with critical thinking about learning mathematics. These definitions allowed us to explore learners' beliefs, assumptions, thoughts, opinions etc. presented in the interview conversations to highlight their practice with critical thinking skills of self-examination and self-correction about learning mathematics.
Ethical Considerations
This project was reported to the Norwegian Centre for Research Data (Norsk senter for forskningsdata, NSD), and permission to collect data was obtained with clearance number 50556/3/AMS. Both the young learners and their parents and/or guardians were informed in writing about the project, and their written consent was obtained to collect data involving their children. They were made aware that all data would be treated confidentially. It was also conveyed that their participation was voluntary and that they could withdraw their consent whenever they wished, without giving any reason.
Questionnaire Analyses
This section presents analyses of learners' responses to the statements in the section "your connection with mathematics and science" of the questionnaire. There were 24 statements in this section, out of which eight statements dealt only with science and 16 involved mathematics (alone or together with science). In 11 of these 16 statements, learners were asked about their achievement levels in mathematics and science, and the remaining five statements enquired about their interest in, liking of, and the importance of learning these subjects for further studies and career opportunities. Since learners' achievement level is not the priority of this paper, analyses of learners' responses to only these five statements are presented here. These statements were: 'I like to learn mathematics'; 'I am interested in what I learn in mathematics and science'; 'To learn mathematics and science is important for me because it will improve my career opportunities'; 'Mathematics and science are important subjects for me because I need them when I will study further'; and 'I think that what I learn in mathematics is waste of time'.
On comparing learners' responses to these statements, we observed an inconsistency in their answers. A comparative analysis showed that the opinions of several learners were incoherent and unsure. For example, if a learner strongly agreed/agreed with both of the statements 'I like to learn mathematics' and 'I think what I learn in mathematics is waste of time', or vice-versa⁸, it was interpreted as an incoherent response. Another example is a learner strongly agreeing/agreeing that 'I like to learn mathematics', but strongly disagreeing/disagreeing that 'I am interested in what I learn in mathematics and science', or the opposite⁹. Therefore, we collected learners' responses to these five statements from all 74 questionnaires and compared them with each other to find the proportion of incoherent answers for each pair of statements. Table 1 shows these statement pairs and the number of incoherent responses from the learners. This analysis promoted our interest in enquiring whether practice with thinking critically can be a possible explanation of the 'mismatch' between learners' responses, which formed the basis of our research question for this paper. Table 1 is presented as a forerunner leading towards the interview process and research question.
As depicted in Table 1, 18% (13 out of 74) of learners stated that they like to learn mathematics but are not interested in learning it, or vice-versa. 32% hold divergent views about liking to learn mathematics and the importance of learning mathematics and science for future career opportunities. Simultaneously, the responses of 31% of learners are incoherent when it comes to liking to learn mathematics and the importance of learning mathematics and science for further studies. 28% responded that what they learn in mathematics is a waste of time even though they like to learn mathematics, or they do not think that learning mathematics is a waste of time, yet they do not like it. Further, 22% give disconnected responses about being interested in learning mathematics and science in relation to its importance for future careers. 23% have paradoxical views regarding their interest in learning mathematics and science and the need for learning these subjects for further studies, whereas 20% reported that though they are interested, what they learn in mathematics seems to be a waste of time. The results for the last three pairs of statements in Table 1 seem to be more intelligible. Only 5% of students have diverging views regarding the importance of learning mathematics and science for future careers and the need for these disciplines for further studies. Just 11% state that learning mathematics is a waste of time while also holding that learning mathematics and science is important for their future career. Learning mathematics seemed to be a waste of time for only 5% of learners even as they report that they need to learn mathematics and science to study further.
This preliminary analysis of questionnaire responses brought the confusion apparent in learners' beliefs to our notice, which provoked our curiosity to know whether learners consciously and critically examine and correct their personal beliefs about learning mathematics. Though the majority of the learners (over 50% for all the statement pairs) provide coherent answers, a lack of harmony and indecisiveness seems evident for about 30-40% of learners. The questionnaire responses, however, did not help us to understand why learners were indecisive or held divergent opinions. Therefore, semi-structured interviews were conducted to delve deeper into learners' application of critical thinking while interacting about their mathematics learning process. In the interviews, learners were asked to follow up on and justify any incoherent answers in depth. Their responses were then analyzed using self-examination and self-correction to explore their use of critical thinking for making choices and decisions about learning mathematics in their personal lives.
Learners' Interview Responses about What, Why and How of Learning Mathematics
This section presents selected interview excerpts in which learners were asked about what they think is interesting to learn in mathematics, why they think it is important to learn mathematics, and how to improve mathematics teaching. Being semi-structured, the interviews were conducted like informal conversations, and the questions asked of different learners were similar but not identical. In order to explore learners' critical thinking in terms of self-examination and self-correction, they were often asked to provide reasons for their responses. In the following text, each learner's response (learners are indicated by a number subscript under the letter 'L' in the transcripts below) to the same question is used as the unit of analysis. A descriptive analysis of learners' interview responses is presented below. Learners' responses starting with phrases such as "I think", "I view", "I believe", "I use" etc. have been chosen to evaluate their application of self-examination and self-correction skills. First, representative interview responses concerning what is interesting and what seems to be a waste of time for learners to learn in mathematics are presented and discussed. Later, learners' responses concerning why they learn mathematics and how they suggest improving its teaching are explored. An interview excerpt concerning the interest in learning mathematics follows:

L4: maybe not everything but …it is smart to be able to do that …

Here, the learner quickly expresses what content of mathematics she likes to learn; however, it seems difficult for her to say whether there is any content in mathematics which seems useless for her to learn. This was common for most of the interviewees. 16 out of 20 learners promptly mentioned what specific content in mathematics they are interested in learning. Moreover, when justifying their choices, 12 out of 20 were able to provide reasons for their interest in learning that specific content, be it just for fun, further studies or a future job. These numbers give an impression coherent with Lindenskov's (2010) interpretation that learners usually self-examine and can justify their personal interests in learning specific mathematics content. However, learners seemed to be confused and unsure when asked to name topics which may not be useful for them. In the interview excerpt presented above, the learner quickly concludes that she does not find any topic in mathematics useless to learn. The paradox, however, is that though the learner thinks that not everything is useful to learn, she could not mention any topic in mathematics that is useless to learn. This failure to critically evaluate both perspectives indicates a neglect of self-examining one's personal opinions in depth. This dilemma was also clear in other interviews, for instance:
L6: I don't just find any use of it… I don't know…
In this extract, the learner easily picks out what is especially interesting for her to learn in mathematics, but when asked about the content she does not find useful, the words 'I don't know' appear frequently. Only when urged and given time to reflect did she come up with a topic which did not seem useful to her. Only eight out of 20 could name topics they think are a waste of time for them to learn, and just seven could give a reason why learning that specific topic in mathematics is in vain for them.
Learners' responses and frequent use of the words 'I don't know' indicate their self-doubt, unfamiliarity and lack of critical self-examination of their personal interests in learning mathematics. Even if learning everything they are taught does not appear correct, possible or useful, not trying to do so is experienced as negative, since mastering mathematics is taken to signify being clever and smart. Therefore, an unconscious resistance to gaining an overall critical meta-perspective of one's own mathematics learning also surfaces. These contrasting responses reveal that learners are not used to thinking critically about questions related to the mathematical content they do not find interesting to learn. However, when asked to be critical and clarify their own opinions mentioned earlier in the interview, they do mention the content which does not seem personally beneficial for them. Considering both what content is and is not interesting for them is important for learners to make a reflected personal choice regarding what field of mathematics they want to pursue in the future, and to opt out of the content which is boring for them.
Being able to use and apply acquired knowledge in practical life and professions legitimates to a great extent why that knowledge is gained. Therefore, learners were asked to justify why they learn mathematics by providing examples of using mathematics in their lives. While mentioning these uses, 17 of 20 learners named only applications of elementary mathematics in real life (i.e. basic arithmetic calculations while shopping, cooking etc.), whereas only three of them presented examples of using the mathematics they learn in their current grade. For instance:

I: but like in general? …, is it something then you can use mathematics for?

L8: eh… not for a lot of things but maybe … or at least little bit… if you have to calculate something so…eh…mass density or you should find out … or if you have to make a bit like makeshift or MacGyver [a TV hero]… it's like to make something out of the things you find and then it becomes something…you can calculate how things work out like if you had to for example make an electric cycle from only the things you find in a garage… so you need to calculate things and know how the things work out … such that it could work…

L8's argument of being able to do machine makeovers represents a situation where one may need 8th and 9th grade mathematics in a real-life situation; however, the learner imagines a fictional character (i.e., MacGyver), not himself, using mathematics in his own real life. Two more learners provided such examples, including: reading geometrical and numerical data using maps and a compass; and applying statistics to collected data, though specifying that he uses statistics only at school. While noting that they have not yet used mathematics in this way in their own everyday lives, these three learners display some potential to self-examine their personal beliefs about why they learn mathematics. The uses of elementary mathematics provided by the remaining 17 learners include examples such as:

L6: yes… I use it like when I am at a shop so I use to pay by cash and to find out how much I'm going to get back and different things like this and then I use it when I make food …then one needs to know like liters and deciliters and such things then…

L6's response associates "the relevance of mathematics with learning skills in elementary mathematics" (Kollosche, 2017, p. 637) (italics in original), without considering why and where this learner needs advanced mathematics such as algebra or geometry in his/her life. Estimating prices using basic arithmetic operations belongs to the primary school curriculum, not to what they learn in 8th and 9th grade mathematics. A lack of self-examining and self-correcting one's beliefs about why one learns mathematics is also visible in the following response:

L5: it's not the most important thing but …like… eh… it is no…I just mean that people should know about it and be able to do it but I don't know why …we learn it at the school so one thinks that one will use it…

L5 is evidently influenced by the widespread belief that mathematics is a useful subject to learn and that everyone is going to need it later in their lives. The phrase 'we learn it at the school so one thinks that one will use it…' illustrates the learner's trust in the mathematics curriculum adopted in their classroom, to the extent that she does not feel the need to critically examine the information received from teachers or other adults. Consequently, she is not conscious of demanding reasons, justifications and real-life applications of the mathematics learnt in school.
Moreover, though she could not mention any uses of advanced mathematics in her life, she denied that learning it may not be so useful for her. This instance illustrates not only the lack of self-examination of one's personal beliefs about learning mathematics, but also the refusal to self-correct those beliefs if found unjustified, which again can hinder her in developing a meta-perspective of the first type about learning mathematics. Similar seems to be the case for the other 17 learners, who could only provide elementary uses of mathematics despite being in 8th or 9th grade, but still believed that learning advanced mathematics is and will be useful for them. This unconsciousness in self-examining and self-correcting (if needed) their beliefs can inhibit the development of self-regulatory critical thinking skills among learners and sustain the incoherence of their beliefs regarding their own mathematics learning process.
Finally, the interview questions focused on how learners are taught and what changes they would suggest to improve their mathematics lessons. Several learners mentioned that mathematics lessons are at times difficult, boring or merely obligatory for them. This general expression of dissatisfaction encouraged questioning the learners about their personal experiences of mathematics teaching and asking them to suggest changes to improve it. Learners were not acquainted with answering such questions, and it was not easy for them to spontaneously suggest changes in their mathematics lessons. However, when persuaded, 15 out of 20 learners proposed changes in their mathematics teaching.
L4: maybe a bit more [practical] activities in mathematics but otherwise I don't think so of anything…
Learner L12 replied that she has never thought about bringing any changes into their mathematics teaching, signalling an unawareness of thinking critically about and self-examining their mathematics teaching. Similarly, L4 gave the impression of being 'used to having taught like this'; thus thinking of an approach other than the usual one seemed difficult for her. However, when asked to reconsider, she could recommend more practical mathematics lessons, since much sitting and getting instructions from the teachers was mentioned as being boring (Nardi & Steward, 2003). The changes proposed by these 15 learners ranged from including more practical activities to teachers using more real-life based tasks, alternative explanations, and giving more time for slow learners to catch up in the lessons. However, the lack of practical activities in mathematics lessons was a recurring concern and also frustrating for some learners, as illustrated by L8, who suggests including more practical activities in order to get over the monotony of mathematics lessons. The changes mentioned above include general suggestions, but one of the learners even managed to sketch a complete lesson plan for teaching a topic she suggested in the interview. She expressed that learning to make a budget for a family in an Excel worksheet could be useful, instead of using one to two weeks to cram formulas for calculating volumes and areas of geometrical figures, which are easily accessible on their smartphones. Her reply follows:

L5: … [NOK] is worth … and such…

L5's reply presents an apt instance of using critical thinking to evaluate and examine her beliefs in both the personal and social domains of learning mathematics in her life. This critical outlook indicates the learner's potential to acquire a meta-perspective of the first type about her own mathematics learning, not only to make choices, decisions and suggest improvements in her mathematics learning experiences, but also to critically observe mathematics' role in society. These extracts show that though learners possess the capability to think critically and suggest changes in mathematics lessons, their potential is hidden and they are unsure about doing so due to a lack of training and experience. Further, it seems unlikely that learners themselves will initiate any such discussion and communicate their ideas to mathematics teachers unless asked. These examples indicate that learners are likely to gain a meta-perspective regarding what, why and how to improve their mathematics lessons by evaluating and suggesting improvements in these lessons, provided their self-examination and self-correction critical thinking skills are encouraged and facilitated. Our data, however, does not indicate that learners' views differ or appear to be influenced by variations in their cultural backgrounds.
The lack of such a critical attitude towards learning mathematics in their personal lives makes it difficult for learners to gain the meta-perspective of the first type needed to make personal life choices and decisions concerning their mathematics education for further studies or a career. This inexperience in decision-making indicates learners' uncritical obedience to the pre-decided mathematics curriculum and the traditional or dominant ways of teaching and learning mathematics at school. These speculations about learners' unconsciousness can be grounded in their contrasting questionnaire responses, combined with interview answers such as 'I don't know' and 'I'm used to having it like this so…I have done it all these years so, I think it's a fine way to learn…'. However, though inexperienced in thinking critically, the learners do not lack the potential to do so. When given time, encouraged to clarify their responses, and asked to reflect over their choices, they seem willing to explain their thoughts in depth and can also investigate and understand the reasons for their beliefs. Some of them have an advanced potential to contribute to improving their mathematics teaching-learning process, but they are not likely to initiate or communicate these thoughts to their teachers unless encouraged to do so.
The learners investigated in this study are young, so it is not surprising that they are unsure and 'don't know' their choices exactly, but it can be problematic if they have 'never thought about it' until now. They will soon reach a transition stage in their educational pathway (i.e., starting higher secondary school after 10th grade), where choosing the right direction (theoretical or vocational) for further mathematics learning will be important for their lives. Our study suggests that these learners lack practice in thinking critically about their mathematics learning process and struggle to state their personal interests, reasons and favored strategies for learning mathematics. These findings indicate a mismatch between what is expected of learners at this age in terms of deciding the direction of their future mathematics education, and what prerequisites they have acquired in terms of critical thinking to assess the available options and make a well-informed and reasoned choice concerning mathematics education in their personal lives. Regardless of their achievement levels in mathematics, and whether or not they decide to study mathematics further, mathematics education should be experienced as meaningful, not as causing an inferiority complex or a feeling of unworthiness among these learners. Therefore, more awareness and attention needs to be devoted to inspiring them to be critical towards their mathematics-related beliefs, so that they can question the widespread claims prevalent in society about mathematics instead of simply accepting them or leaving them unquestioned.
TRUSTWORTHINESS AND LIMITATIONS
The trustworthiness of this study is addressed using the criteria of credibility, transferability, dependability and confirmability (Bryman, 2016). Credibility is achieved by triangulating the two research methods: questionnaires and semi-structured interviews. The paradox in learners' responses is evident in both the questionnaires and the interviews. Moreover, the findings of the study cohere with the previous studies reviewed in the paper. The criterion of transferability is taken care of by providing a thick description of the context of this study, details of the informants, and an elaborated account of the data analysis procedure. Dependability is assured by keeping a log and by regular meetings of the research team through all phases of the research process. A language and quality check of the questionnaire, intervention design and semi-structured interview guide was done by a fellow researcher with experience in conducting similar studies. In addition, the analyses of questionnaire and interview responses were discussed with the research team to review the findings. Furthermore, though complete objectivity cannot be attained, confirmability of the study is ensured by keeping an objective outlook during the phases of data collection and analysis. The researchers neither had any involvement in learners' mathematics and science lessons on a daily basis, nor any control over their achievement in these disciplines, before or after this project. In this way, exerting any influence on the learners to falsify or withhold information while answering the questionnaire and interview questions was avoided. The researchers' subjective values or preferences can therefore be assumed to have had little influence on the conduct of the research and the findings derived from it.
Despite adopting measures to establish the trustworthiness of the findings, this study has limitations, such as the lack of similar previous research and the constraints on time and resources for designing instruments to explore other possible reasons for the incoherence in learners' beliefs. Employing the lens of the critical thinking faculties of self-examination and self-correction is not a familiar approach for understanding the incoherence observed in learners' mathematics-related beliefs. Therefore, the scarcity of similar previous research can be considered a limitation for the way this study was designed and conducted. In addition, the overarching focus of LOCUMS on both mathematics and science subjects restricted the amount of time and resources available. Consequently, it was difficult to develop professionally advised questionnaires to assess learners' self-examination and self-correction critical thinking skills specifically for their mathematics learning experiences. Though investigating mathematics and science beliefs simultaneously can be considered a multidisciplinary approach, the study's accountability and reliability within mathematics education might have been increased by adopting such subject-specific measures. Nevertheless, these limitations open up possibilities for further research on developing such instruments and interview guides to advance this field of investigation.
SUMMARY AND CONCLUSION
Learners' mathematics-related beliefs are investigated within the domain of affect research in mathematics education. Most studies concerning learners' beliefs explore their motivation, engagement, anxiety, stress etc. towards learning mathematics through their beliefs, but few studies enquire into their beliefs regarding the what, why and how of their mathematics learning process. These investigations illustrate that learners hold incoherent beliefs about mathematics and mathematics education; however, this incoherence has not been the center of attention for further research in mathematics education. We take this incoherence in learners' mathematics-related beliefs as a starting point to explore their practice with thinking critically about, and their potential to suggest changes in, their mathematics learning process. Specifically, the critical thinking skills of self-examination and self-correction are analyzed. Therefore, along with their beliefs, learners were also asked to provide the justifications and reasons for holding them.
Our literature review highlighted that research concerning critical thinking in mathematics education mainly discusses it as a tool for learning the cognitive practices of inference, drawing conclusions etc. within the discipline, and sometimes for highlighting socio-political power imbalances. However, the APA's consensus definition of critical thinking¹⁰, critical pedagogy research's argument for a second and third wave of critical thinking, and recent mathematics education research all challenge this limited view of critical thinking. The importance of developing and exploring learners' application of critical thinking in the personal domain of their lives, along with the cognitive and socio-political domains, is recognized in this paper. We propose that learners should acquire the habit of critical thinking to gain a meta-perspective on their personal mathematics learning process, so that they can make choices and decisions about learning mathematics in their future lives. In this way, they may become aware of how to participate and take actions towards making the mathematics learning process meaningful for them, in accordance with the central values of the Norwegian mathematics curriculum.
Consequently, in this study learners' practice with thinking critically and their potential to suggest changes in their mathematics learning process are explored by analyzing their expressed mathematics-related beliefs. The incoherence evident in the questionnaire responses of 8th and 9th grade learners is presented, and the self-examination and self-correction skills (Figure 1) are used to interpret their interview responses and explore their application of critical thinking with respect to their personal mathematics learning process. Interview excerpts including incidents where learners do and do not apply self-examination and self-correction skills to their mathematics-related beliefs are presented. Based on our interpretations of the questionnaire and interview data, we conclude that the inconsistency evident in learners' mathematics-related beliefs can be related to the lack of practice in thinking critically about their mathematics learning process. Both questionnaire and interview responses indicate that these learners have limited or no experience in thinking critically about learning mathematics, and specifically in applying the skills of self-examination and self-correction to their mathematics-related beliefs. This leaves the impression that exercising critical thinking in learning mathematics is common in classrooms, whereas critical thinking about learning mathematics does not seem to get much attention. Moreover, learners do not seem aware of their right to observe their mathematics learning process critically in order to take co-responsibility and co-operate in improving their own mathematics learning experience, as per the Norwegian Act of Education. Thus, learners' potential to cooperate with their teachers to influence their mathematics education remains suppressed, and hence is hardly visible or utilized.
A focus on critical thinking in mathematics concerning just the cognitive and social aspects does not provide a holistic picture of the interaction between mathematics education, the learner, and society. Therefore, it is important to make mathematics learners aware, from a young age, of being critical towards their personal beliefs about mathematics, their mathematics learning process, and the role mathematics education plays in society, so that they reflect on and consciously take decisions regarding mathematics in their personal and social lives. We therefore encourage that learners learn to argue for and justify the legitimacy of their personal choices about their own mathematics learning process in mathematics classrooms while learning mathematics. Such critical thinking can stop learners from uncritically feeling obliged to follow a pre-decided mathematics curriculum, and let them use their potential to influence their mathematics learning process for the best of themselves and society. We recommend that young mathematics learners be encouraged and trained in thinking critically about their mathematics learning process, alongside gaining critical thinking skills for problem-solving in mathematics. Such training can equip them with a meta-perspective on their mathematics education, in order to gain a holistic view of it and make reflected choices concerning mathematics in their personal lives. More research in this area is welcome and needed.
"Education",
"Mathematics"
] |
Quantum many-body spin ratchets
Introducing a class of SU(2) invariant quantum unitary circuits generating chiral transport, we examine the role of broken space-reflection and time-reversal symmetries on spin transport properties. Upon adjusting parameters of local unitary gates, the dynamics can be either chaotic or integrable. The latter corresponds to a generalization of the space-time discretized (Trotterized) higher-spin quantum Heisenberg chain. We demonstrate that breaking of space-reflection symmetry results in a drift in the dynamical spin susceptibility. Remarkably, we find a universal drift velocity given by a simple formula which, at zero average magnetization, depends only on the values of SU(2) Casimir invariants associated with local spins. In the integrable case, the drift velocity formula is confirmed analytically based on the exact solution of thermodynamic Bethe ansatz equations. Finally, by inspecting the large fluctuations of the time-integrated current between two halves of the system in stationary maximum-entropy states, we demonstrate violation of the Gallavotti-Cohen symmetry, implying that such states cannot be regarded as equilibrium ones. We show that the scaled cumulant generating function of the time-integrated current instead obeys a generalized fluctuation relation.
The study of anomalous transport has been at the forefront of theoretical interest in recent years. Among the most emblematic examples is the discovery [41] of universal superdiffusive transport of Noether charges in integrable models with nonabelian symmetries [42]. The precise determination of the dynamical universality class remains an open question: despite the dynamical two-point function of the charge density coinciding with the scaling function of the Kardar-Parisi-Zhang (KPZ) equation at late times [33,[42][43][44][45][46][47][48][49][50]], it has been shown that the full probability distribution of net charge transfer is not compatible with the behavior of fluctuations predicted by the KPZ equation [51]. This discrepancy has been supported by a recent experiment using superconducting quantum processors [52].
Reliable extraction of transport coefficients and statistical properties of macroscopic fluctuating observables is in practice hindered by the complexity of simulating strongly interacting quantum dynamics on classical computers. This difficulty can fortunately be overcome in integrable models, where the underlying quasiparticle structure often permits the derivation of exact results valid on hydrodynamic scales. Central to this endeavour are the tools of generalized hydrodynamics (GHD) [53,54] and ballistic (macroscopic) fluctuation theory [55,56].
The studies so far have largely investigated anomalous properties of spin or charge transport in interacting many-body systems with unbroken space-time symmetry, i.e., with microscopic dynamics invariant under the time-reversal (T) and space-reflection (P) symmetries. Instead, we aim to systematically examine the properties of intrinsically chiral microscopic dynamics, in which both P and T symmetries are explicitly broken. Such dynamics is prototypical of quantum ratchets [57][58][59][60]. Our goal here is to devise a many-body analogue of a ratchet, and to investigate how the absence of P and T symmetries impacts charge transport. To this end, we introduce a class of unitary circuits with a brickwork design in the form of a staggered lattice consisting of two alternating spins s1 and s2, schematically presented in Fig. 1. Crucially, in the quantum ratchet circuits under consideration the space-reflection symmetry is broken at the level of local unitary gates, rather than by the initial conditions [53,54,[61][62][63][64][65][66][67]] or by transport-inducing nonunitary boundary processes [41,[68][69][70][71][72][73][74]] (see also reviews [32,75] and references therein). More specifically, the circuits consist of two-site unitary gates which lack the space-reflection symmetry as a direct consequence of the nearest-neighbor spin exchange. Mostly for reasons of simplicity, we devote this work to many-body quantum ratchets built out of rotationally symmetric (i.e., SU(2)-invariant) local unitary gates, including both generic (i.e., ergodic) dynamics and exactly solvable (i.e., integrable) instances. As an application, we then characterize spin transport on the ballistic (Euler) scale at a finite magnetization density.

[Fig. 1 caption: Brickwork circuit of local gates, Eq. (3), representing a quantum many-body spin ratchet with time flowing upwards. Each two-body local unitary gate involves a permutation and thus swaps the adjacent spin spaces, yielding chiral dynamics of spin species: spins s1 propagate east (i.e., towards the right) and spins s2 west (i.e., towards the left). The (Floquet) propagator U given by Eq. (1) corresponds to two layers of the circuit.]
As a consequence of the broken space-reflection and time-reversal symmetries, quantum many-body spin ratchets display two notable universal features. Most remarkably, we demonstrate that the dynamical spin susceptibility exhibits a universal non-zero drift velocity, depending only on the size of the spins and the average background magnetization density.
We support this statement by deriving a closed-form expression for the drift velocity using GHD in the integrable ratchets, and by extensive numerical simulations in the nonintegrable ones. In addition, we demonstrate that the drift velocity arises purely from the spin-exchange part of the local unitary gate, enabling us to obtain a general analytic form as a function of both spins and the chemical potential. By numerically studying the spreading of spin fluctuations in the co-moving drift frame, we generically find the anticipated superdiffusive scaling with dynamical exponent z = 3/2, as observed also in several other models which possess continuous nonabelian symmetries and exhibit dynamical criticality [33,[41][42][43][44][45][46][47][48][49][50][51][76][77][78]].
Our second main result concerns the breaking of time-reversal symmetry. We find that in quantum many-body spin ratchets, the Gallavotti-Cohen relation associated with macroscopic current fluctuations is violated in Gibbs states.
While we rigorously demonstrate this violation only in the integrable unitary circuits, we conjecture it to remain a general feature of nonintegrable many-body spin ratchets as well. In the integrable ratchet specifically, we compute the third scaled cumulant associated with the time-integrated current density using the results of ballistic (macroscopic) fluctuation theory [55,56]. In Gibbs states away from half-filling, we obtain a non-zero value. In contrast to time-reversal invariant dynamical systems, which always obey the Gallavotti-Cohen relation [56,79], this fluctuation symmetry is no longer satisfied in quantum spin ratchets. To put it simply, the probability of measuring a large value of the current depends on the direction of the flow. This consequently means that Gibbs states are in fact not equilibrium states of quantum ratchets. We instead deduce a generalized fluctuation relation, connecting large current fluctuations in one direction to fluctuations in the opposite direction in a spatially-reflected system.
The remainder of the paper is structured as follows. In Sec. II we introduce quantum many-body ratchets. This is followed by Sec. III, where we introduce an integrable quantum ratchet and describe the associated integrable structure. Having discussed the setup, we focus on dynamical properties of the model, which are presented in Sec. IV. First, in Sec. IV A we define the relevant observables and related continuity equations. Second, in Sec. IV B we compute the first moment of the dynamical structure factor and its drift velocity. Furthermore, we obtain the same expression from a permuting unitary circuit and verify it against tensor network simulations. Lastly, in Sec. IV C we discuss large-scale fluctuations as an alternative probe of transport. We conclude with a discussion of our results and open questions in Sec. V. Several appendices at the end present the details of the calculations.
II. QUANTUM SPIN RATCHET CIRCUITS
We consider an inhomogeneous quantum spin chain of length L ∈ 2N, made out of two spins of not necessarily equal sizes s1 and s2 (s1, s2 ∈ N/2), arranged in an alternating fashion. With each spin we associate a local Hilbert space H_{s_k} ≅ C^{2s_k+1}. We study a discrete-time two-step unitary evolution of quantum states generated by the Floquet propagator U = U_e U_o [Eq. (1)], where U_e and U_o denote the even-step and odd-step propagators and t ∈ N is time. The one-step propagators are composed from local two-site unitary maps U : H_{s1} ⊗ H_{s2} → H_{s2} ⊗ H_{s1}, each acting on two adjacent lattice sites, where the subscript indices refer to the pair of sites on which U acts nontrivially, and periodic boundary conditions have been adopted by identifying 1 ≡ L+1. Combined together, the unitary maps are arranged in a brickwork architecture, as shown in Fig. 1. We are particularly interested in local quantum unitary maps of the form U = P_{s1,s2} V [Eq. (3)], where P_{s1,s2} denotes the permutation of two spins, while V ∈ End(H_{s1} ⊗ H_{s2}) is an arbitrary unitary gate which preserves the ordering of the local degrees of freedom, and which may even differ between the pairs of sites. Such circuits can be viewed as quantum many-body spin ratchets [57][58][59][60]: due to the permutations, the two species of spin get propagated in opposite directions, resulting in a dynamics that breaks space-reflection symmetry. As described in Appendix A, such dynamics can be experimentally realized using local quantum unitary gates acting on several copies of identical qubits or qudits. For example, in the case s1 = 1 and s2 = 1/2, one can realize U as a quantum unitary gate acting on three qubits (or spins 1/2).
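As a concrete illustration (our own minimal numpy sketch, not code accompanying the paper), the following builds the two-site permutation P_{s1,s2} for s1 = 1, s2 = 1/2 as an explicit matrix and checks that it is unitary; composing it with any SU(2)-symmetric unitary V would yield a gate of the form U = P_{s1,s2} V:

```python
import numpy as np

def permutation_gate(d1, d2):
    """Two-site swap P: H_{s1} (x) H_{s2} -> H_{s2} (x) H_{s1},
    acting on product basis states as P |a, b> = |b, a>."""
    P = np.zeros((d1 * d2, d1 * d2))
    for a in range(d1):
        for b in range(d2):
            # input index a*d2 + b in H_{s1} (x) H_{s2};
            # output index b*d1 + a in H_{s2} (x) H_{s1}
            P[b * d1 + a, a * d2 + b] = 1.0
    return P

# s1 = 1 (dimension 3) and s2 = 1/2 (dimension 2): a 6-dimensional two-site space
P = permutation_gate(3, 2)
assert np.allclose(P @ P.T, np.eye(6))  # P is real orthogonal, hence unitary
```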
A. Isotropic spin ratchets
In this paper we specialize to quantum ratchets composed of SU(2)-symmetric gates. This choice is primarily motivated by the recent discovery of anomalous transport properties in integrable models invariant under a continuous nonabelian symmetry [33,42,44]. All of the models explicitly considered so far, however, exhibit both space-reflection (P) and time-reversal (T) symmetry. Our aim is thus to investigate whether the absence of P and T symmetries has any profound effect on spin transport properties. Additionally, there exists a class of integrable SU(2)-symmetric quantum circuits [21,24] (and generalizations thereof to other symmetries [80][81][82][83]) which, owing to their particularly simple structure, permit analytical calculations of correlation functions, transport coefficients, and quench dynamics. As outlined below, we will focus on a simple one-parameter family of gates V that, depending on the choice of parameters, encompasses both integrable and ergodic circuits. As such, it is particularly convenient for examining the effects of integrability breaking.
Specifically, global SU(2) symmetry is ensured by realizing the unitary gates V in terms of an operator-valued function of the spin magnitude J ∈ End(H_{s1} ⊗ H_{s2}), which is defined through the Casimir invariant, J(J + 1) = (S_1 + S_2)². Note that the eigenvalues j ∈ {|s1 − s2|, . . ., s1 + s2} of the operator J determine the dimensions 2j + 1 of the irreducible components (spin multiplets) in the Clebsch-Gordan decomposition of H_{s1} ⊗ H_{s2}. To enable analytical calculations, we choose a one-parameter family of unitary gates V = R_{s1,s2}(λ) with λ ∈ R, where R_{s1,s2}(λ) is given by Eq. (5) and obeys the Yang-Baxter equation discussed in Section III.
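The multiplet structure underlying the gate can be checked directly. The sketch below (our own illustration under the stated definition of J, not code from the paper) builds the spin operators for s1 = 1 and s2 = 1/2, diagonalizes the Casimir (S_1 + S_2)², and recovers the eigenvalues j ∈ {1/2, 3/2} with multiplicities 2j + 1:

```python
import numpy as np

def spin_ops(s):
    """Spin-s operators (Sx, Sy, Sz) in the standard |s, m> basis, m = s, ..., -s."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    Sz = np.diag(m)
    # raising operator: S+ |s, m> = sqrt(s(s+1) - m(m+1)) |s, m+1>
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    Sx = (sp + sp.T) / 2
    Sy = (sp - sp.T) / 2j
    return Sx, Sy, Sz

s1, s2 = 1.0, 0.5
d1, d2 = int(2 * s1 + 1), int(2 * s2 + 1)
S1, S2 = spin_ops(s1), spin_ops(s2)
# Casimir (S1 + S2)^2 on H_{s1} (x) H_{s2}; its eigenvalues are j(j + 1)
C = sum((np.kron(a, np.eye(d2)) + np.kron(np.eye(d1), b)) @
        (np.kron(a, np.eye(d2)) + np.kron(np.eye(d1), b))
        for a, b in zip(S1, S2))
j = (-1 + np.sqrt(1 + 4 * np.linalg.eigvalsh(C))) / 2
print(np.round(j, 10))  # -> [0.5 0.5 1.5 1.5 1.5 1.5], i.e. j in {|s1-s2|, ..., s1+s2}
```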
Here, Γ denotes the Euler Γ-function and j_max = s1 + s2 is the maximal eigenvalue of J. The unitary gate (5) is symmetric and satisfies the normalization conditions of Eq. (7). In particular, the second property in Eq. (7) implies that, for large λ, the unitary map U = P_{s1,s2} R_{s1,s2}(λ) reduces to the permutation operator P_{s1,s2}. We will consider three instances of quantum ratchet circuits constructed from the R-matrix gate (5): 1. Setting λ = τ uniformly in all local gates V = R_{s1,s2}(τ), the dynamics is integrable. Indeed, V then corresponds precisely to the fused quantum R-matrix associated with an alternating spin chain [84][85][86], whereas the full propagator (1) belongs to an infinite hierarchy of commuting transfer matrices (as detailed out in Section III below).
2. For a non-uniform choice of λ staggered in time, i.e., with λ = ±τ alternating between the adjacent horizontal layers of the lattice shown in Fig. 1, the propagator (1) consists of two different, noncommuting transfer matrices. Hence, the resulting dynamics is not integrable. 3. For a disordered choice of λ, varying randomly from gate to gate, the dynamics is likewise nonintegrable; we refer to this instance as the disordered ratchet.
B. PT symmetry
The full quantum many-body spin ratchet depicted in Fig. 1 manifestly lacks symmetry under the spatial reflection P, ℓ → L − ℓ + 1 (with the exception of the homogeneous lattice with equal spins s1 = s2, which is not of our interest). Under the action of P the two spins get interchanged, s1 ↔ s2, and consequently P(U) ≠ U in the general case s1 ≠ s2. Similarly, the spin ratchet circuit also breaks the time-reversal symmetry T. The latter is understood as an adjoint mapping T(U) ≡ S K U K S^{-1}, combining a unitary operator S and an antiunitary conjugation K, subject to the condition in Eq. (8). We will show that the integrable version of the ratchet circuit with identical gates nonetheless obeys the joint PT symmetry. More generally, this is true for any brickwork ratchet circuit composed of identical gates (3) in which the unitary V is symmetric, e.g., see Eq. (6). The breaking of the space-reflection symmetry then naturally implies the breaking of the time-reversal symmetry.
To demonstrate that the integrable ratchet circuit is invariant under the PT symmetry, we first note that, for large τ, the local unitary map U becomes a permutation, for which the exchange of spins coincides with a matrix transposition, i.e., P(P_{s1,s2}) = P_{s1,s2}^T [Eq. (9)]. As it turns out, the same property still holds for any finite value of τ and, in general, for any symmetric unitary gate V in Eq. (3); see Appendix B. To find the inverse of the full propagator U = U_e U_o, one can thus make use of the following sequence of transformations: 1. the spatial reflection P (acting as a transposition, see Eq. (9)) at the level of local unitary gates; 2. a conjugation with an antiunitary matrix K, K U K = U*, which, in combination with P, inverts the unitary gates, namely K P(U) K = U^{-1}; 3. a one-site lattice shift, exchanging the order of the two consecutive time steps, U_o ↔ U_e. The composition of the last two transformations constitutes an antiunitary map which may be regarded as the time-reversal transformation T. In summary, ratchet circuits composed of identical unitary gates (3) with V^T = V, a particular example being the integrable one, represent PT-symmetric systems that lack both P and T symmetries (see Appendix B for a detailed proof of the PT symmetry).
III. INTEGRABLE QUANTUM RATCHET
In this section we detail out the structure of integrable quantum ratchets. To this end, we fix all free unitary gate parameters λ to the same value τ ∈ R. Integrability then follows from the fact that the R-matrix (5) satisfies the Yang-Baxter equation (10) over a three-fold product space H_{s1} ⊗ H_{s2} ⊗ H_{s3}, for an arbitrary triple of integer or half-integer spins {s_k}_{k=1}^3 and for any two complex parameters λ, µ ∈ C [84][85][86]. As shown in Appendix C, the Yang-Baxter equation leads to a family of commuting transfer matrices which includes the full time-step propagator (1) as a particular instance. Moreover, such transfer matrices can be simultaneously diagonalized using the algebraic Bethe ansatz [85]. Leaving the technical details of this procedure to Appendix C, we here outline a simple way of establishing the existence of commuting transfer matrices for the integrable ratchet circuit depicted in Fig. 1. As our starting point, we consider the Yang-Baxter equation (10) multiplied from the left-hand side by P^{s1,s2}_{1,2}. Then, by introducing λ± = λ ± τ/2 and recognizing the quantum gate (3) acting on sites 1 and 2, we arrive at the identity (11), or pictorially, in terms of diagrams, Eq. (12), where τ is a free parameter of the unitary map U and λ± are the spectral parameters of the two R-matrices. Using this diagram twice, applying it from below in Fig. 1, one can straightforwardly verify that the entire ratchet circuit commutes with a row transfer matrix of the form (13), where the horizontal red line in the diagram encloses a loop, indicating the partial trace over an (auxiliary) space H_{s3} of spin s3.
We have thus established the commutativity of the propagator (1) with a staggered transfer matrix, graphically represented in the diagram (13). The arrow direction on top of the product specifies the spatial ordering of the matrix product. In our convention, the physical (lower) indices of the R-matrices increase from the left-hand towards the right-hand side.
A. Magnon dispersion relation
A distinguished role is played by the transfer matrices in which the auxiliary space corresponds to either of the two alternating spins in the chain, that is s3 ∈ {s1, s2}. In particular, in Appendix C we show that these transfer matrices reduce to lattice shift operators, where we have defined a one-site lattice shift in the backward direction, Π_{s1,s2} : (H_{s1} ⊗ H_{s2})^{⊗L/2} → (H_{s2} ⊗ H_{s1})^{⊗L/2}, reading Π_{s1,s2} = P^{s1,s2}_{1,2} P^{s2,s1}_{1,3} P^{s1,s2}_{1,4} P^{s2,s1}_{1,5} ⋯. In analogy with the light-cone lattice discretizations of certain integrable quantum field theories [85,[87][88][89]], Eqs. (16) can be interpreted as elementary lattice shift operators along the light-cone directions, i.e., the north-west and south-west directions, respectively. In this view, the propagator U realizes a two-site lattice shift in the "time" (i.e., north) direction, and similarly we introduce T as a two-site lattice shift in the backward (i.e., west) spatial direction. Expressing the lattice shifts in terms of transfer matrices allows us to infer their eigenvalues using the algebraic Bethe ansatz. In particular, the unimodular eigenvalues of T and U correspond to quasimomentum and quasienergy, respectively. Eigenstates of integrable quantum spin chains can be described in terms of elementary spin-wave excitations called magnons. An eigenstate involving N magnons is parametrized by a set of rapidities {λ_j}_{j=1}^N. To ensure periodicity of the wavefunction, the rapidities have to obey the Bethe equations (19), which impose the condition that the total phase acquired by a quasiparticle upon scattering with other quasiparticles, while traversing the spin chain, is trivial. Here S(λ − µ) = (λ − µ + i)/(λ − µ − i) denotes the scattering amplitude associated with a two-magnon scattering and p(λ) is the single-magnon quasimomentum. In terms of the single-magnon quasimomentum p^{(2s)} of a homogeneous Heisenberg spin-s chain, which (for notational convenience) we label by the integer 2s ∈ N, the single-magnon quasimomentum p(λ) in a ratchet circuit decomposes as the sum p(λ) = p^{(2s1)}(λ) + p^{(2s2)}(λ) [Eq. (21)]. This implies that the two-site lattice shift in the backward spatial direction acts on the eigenstates as in Eq. (22), where we have singled out a factor of 2 in the eigenvalue in order to associate the total quasimomentum Σ_{j=1}^N p(λ_j) with a one-site lattice shift. For the propagator in the temporal direction we obtain a similar expression, where ε(λ) denotes the single-magnon quasienergy (see Appendix C). Analogously to the single-magnon quasimomentum (21), we find ε(λ) to be the difference of the single-magnon quasimomenta of the homogeneous chains with different spins, ε(λ) = p^{(2s1)}(λ) − p^{(2s2)}(λ) [Eq. (24)].
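For real rapidities, the two-magnon amplitude S(λ − µ) = (λ − µ + i)/(λ − µ − i) quoted above is a pure phase, reflecting elastic scattering; a one-line numerical check (our own illustration):

```python
import numpy as np

S = lambda u: (u + 1j) / (u - 1j)        # two-magnon scattering amplitude S(lambda - mu)
u = np.linspace(-5.0, 5.0, 101)
assert np.allclose(np.abs(S(u)), 1.0)    # pure phase for real rapidity differences
```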
B. Homogeneous chain and integrable Trotterization
In the case of a homogeneous chain with s1 = s2 = s, the R-matrix obeys the additional property R_{s,s}(0) = P_{s,s}. For small values of τ, we can then expand the quantum gate U = P_{s,s} R_{s,s}(τ) around the identity. In the scaling limit τ → 0 and t → ∞, with the product tτ fixed, the Floquet dynamics with the propagator (1) therefore yields a continuous time evolution with a time parameter proportional to tτ. Such a configuration of quantum gates is a quantum circuit corresponding to an integrable Trotterization of the continuous-time dynamics generated by the Heisenberg spin-s model [21,80]. This highlights that our approach can be seen as a nontrivial generalization of unitary circuits used in state-of-the-art quantum computation experiments [30,52,90].
C. Semiclassical limit
The integrable quantum many-body spin ratchet admits a semiclassical limit. Instead of local unitary gates U, the classical version of the many-body spin ratchet is made out of SO(3)-symmetric symplectic two-body maps Φ_τ that depend on a time-step parameter τ ∈ R+. To ensure integrability, we introduce a classical Lax operator L(S, λ) as a matrix-valued function, expressed in terms of the Pauli matrices σ = (σ^x, σ^y, σ^z), on the local phase space S_r (a 2-sphere of radius r) of a classical spin S of length r. Upon replacing the R-matrices in Eq. (11) with classical Lax operators, setting s_{1,2} = r_{1,2} s, and subsequently taking the s → ∞ limit, conjugation with the gate U becomes equivalent to the symplectic map Φ_τ : S_{r1} × S_{r2} → S_{r2} × S_{r1}, whose explicit action is given in Eq. (27). The Yang-Baxter equation (11) thus becomes equivalent to a discrete zero-curvature condition. For spins of equal length r_{1,2} = 1, the semiclassical limit was derived in Ref. [91], and the resulting dynamics has been investigated in Ref. [77] (see also Refs. [92,93]).
IV. TRANSPORT PROPERTIES AND HYDRODYNAMICS
Chiral spin dynamics at the microscopic level profoundly affects transport properties on the macroscopic scale. To set the ground for their investigation, we first introduce the conserved U(1) charge (i.e., the total magnetization) and the associated current. We then proceed by exploring the consequences of broken space-reflection symmetry using the tools of the thermodynamic Bethe ansatz. By employing generalized hydrodynamics [53,54], we examine the properties of the dynamical two-point correlation function of the magnetization density. Afterwards we investigate the structure of large-scale current fluctuations [56,79] in stationary maximum-entropy ensembles and discuss the impact of the time-reversal symmetry breaking on the associated large-deviation rate function.
A. Magnetization density and current
The magnitude of the total spin J, entering the unitary gate (3) through the R-matrix (5), commutes with the projection of the total two-site magnetization S^z_1 + S^z_2 onto the z-axis. Accounting for the additional permutation, we thus have the relation of Eq. (29), where the extra lower indices ℓ − 1 and ℓ designate the lattice sites on which the local spin densities act nontrivially. For equal spins, s1 = s2, Eq. (29) implies conservation of the local magnetization along the z-axis at the level of individual quantum gates. In the general case with s1 ≠ s2, however, only the total magnetization remains globally conserved, representing a global U(1) conserved charge of the ratchet circuit. Owing to the staggered structure of the circuit, there are in fact two independent local continuity equations associated with it [80]. In particular, defining the magnetization densities q^{(o)} and q^{(e)} on odd and even bonds, respectively [Eqs. (30) and (31)], the associated magnetization current densities j^{(o)} and j^{(e)} satisfy the corresponding continuity equations. Using Eq. (29), the first continuity equation (32) determines the odd-bond current, while the second one determines the even-bond current. Explicit expressions for the spin current densities are rather cumbersome and we do not report them here.
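The Eq. (29)-type relation for the pure permutation part of the gate can be verified explicitly. The sketch below (our own check for s1 = 1, s2 = 1/2, with hypothetical variable names) confirms that the swap merely carries the local magnetizations across the bond, so the two-site total S^z is preserved by U = P V for any V commuting with it, as an SU(2)-symmetric V does:

```python
import numpy as np

# S^z for spin 1 (basis m = 1, 0, -1) and spin 1/2 (basis m = 1/2, -1/2)
Sz1 = np.diag([1.0, 0.0, -1.0])
Sz2 = np.diag([0.5, -0.5])

# permutation P: H_1 (x) H_{1/2} -> H_{1/2} (x) H_1, P |a, b> = |b, a>
P = np.zeros((6, 6))
for a in range(3):
    for b in range(2):
        P[b * 3 + a, a * 2 + b] = 1.0

# the swap exchanges the local magnetizations, conserving the two-site total S^z
lhs = P @ (np.kron(Sz1, np.eye(2)) + np.kron(np.eye(3), Sz2)) @ P.T
rhs = np.kron(Sz2, np.eye(3)) + np.kron(np.eye(2), Sz1)
assert np.allclose(lhs, rhs)
```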
B. Dynamical structure factor and drift velocity
To characterize spin transport, we investigate the hydrodynamic relaxation of the dynamical spin susceptibility (structure factor) [94], S(x, t) ≡ ⟨q(x, t) q(0, 0)⟩_c, where ⟨•⟩_c denotes the connected part of the correlation function and q(x, t) denotes the time-evolved density of the total conserved magnetization Q = ∫ dx q(x). For convenience, we have passed from lattice to continuous space-time variables (omitting the precise identification for the time being). The local continuity equation therefore reads ∂_t q(x, t) + ∂_x j(x, t) = 0, where j(x, t) is the current density associated with q(x, t). By assuming that the ensemble averages of q(x, t) vary slowly on large spatio-temporal scales, the late-time relaxation of S(x, t) can be computed with the aid of hydrodynamics. In what follows, we will consider Gibbs states at a finite magnetization density set by the chemical potential µ. Note that, since ratchet circuits are Floquet-driven systems in which temperature is not defined, such states are of the form ϱ_µ = e^{µQ}/Tr(e^{µQ}) [Eq. (36)], the corresponding ensemble average being ⟨•⟩ = Tr(ϱ_µ •). In a generic ratchet circuit, which lacks additional conservation laws, ϱ_µ is in fact the most general form of a Gibbs ensemble. Owing to the lack of P symmetry in a general quantum many-body spin ratchet with s1 ≠ s2, the dynamical structure factor S(x, t) is not symmetric under spatial reflection, i.e., S(x, t) ≠ S(−x, t). Consequently, S(x, t) will feature a finite hydrodynamic drift with velocity v_d = lim_{t→∞} (χ t)^{-1} ∫ dx x S(x, t) [Eq. (37)], normalized by the time-independent static spin susceptibility χ = ∫ dx S(x, t) [Eq. (38)]. To fully quantify the asymmetry, we will consider the asymptotic time-scaled centered moments of the dynamical structure factor, defined as S^{(n)} = lim_{t→∞} (χ t^n)^{-1} ∫ dx (x − v_d t)^n S(x, t) [Eq. (39)]. Note that the first centered moment trivially vanishes by definition, i.e., S^{(1)} = 0. The second centered moment, however, is the Drude weight, S^{(2)} = D ≥ 0. It quantifies the ballistic spreading of local density disturbances in the frame moving with the drift velocity v_d. Higher centered moments S^{(n)} quantify deviations from Gaussianity. We do not consider them explicitly herein.
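Given a numerically sampled profile S(x, t) (e.g., from tensor-network data), the drift velocity and scaled centered moments can be estimated along the lines of Eqs. (37)-(39); the following is a minimal sketch with our own helper names and normalization conventions, tested here on a synthetic ballistically spreading Gaussian:

```python
import numpy as np

def drift_and_moments(x, S, t, n_max=2):
    """Estimate the drift velocity and time-scaled centered moments of a
    sampled structure-factor profile S(x, t) on a uniform grid x."""
    dx = x[1] - x[0]
    chi = (S * dx).sum()                      # static susceptibility
    v_d = (x * S * dx).sum() / (chi * t)      # drift velocity
    moments = {n: ((x - v_d * t) ** n * S * dx).sum() / (chi * t ** n)
               for n in range(2, n_max + 1)}
    return v_d, moments

# synthetic check: a ballistic Gaussian drifting at v = 5/11 with width 0.3 * t
t, v = 200.0, 5.0 / 11.0
x = np.linspace(-300.0, 300.0, 6001)
S = np.exp(-(x - v * t) ** 2 / (2 * (0.3 * t) ** 2))
v_d, m = drift_and_moments(x, S, t)
print(v_d, m[2])  # ~0.4545 and ~0.09 (= 0.3^2), the Drude-weight analogue
```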
Hydrodynamics in an integrable ratchet
On large space-time scales, the dynamical structure factor of the integrable ratchet can be accurately described by means of generalized hydrodynamics [53,54]. Specifically, on the ballistic (Euler) hydrodynamic scale, characterized by large x and t with the ratio x/t fixed, the dynamical structure factor S(x, t) admits a mode resolution in terms of quasiparticle excitations. The latter are accessible within the thermodynamic Bethe ansatz (TBA), which describes the thermodynamic eigenstates of the model with a finite set of state functions per mode; see Appendix D for information concerning the thermodynamic limit of the Bethe ansatz equations (19). Similarly to other integrable spin chains, the integrable spin ratchet features quasiparticles (magnons) that undergo elastic scattering. Due to the attractive interaction, magnons can form bound states carrying m quanta of magnetization, with bare charge q_m(λ) ≡ q_m = m [95]. Bare quasimomenta of such bound states are parametrized by the rapidity λ ∈ R. Owing to the mutual interaction, bare quantities of quasiparticles undergo a nontrivial renormalization called dressing. Dressing corresponds to a linear transformation depending on both the state occupation functions and the scattering data. For instance, q^{dr}_m denotes the dressed magnetization of a bound state, as detailed out in Appendix D 2.
On the ballistic hydrodynamic scale, the mode resolution of the structure factor takes the form of a weighted sum of delta peaks [94,96], propagating with the effective mode velocities [97] v^{eff}_m(λ) = (ε′_m)^{dr}(λ)/(p′_m)^{dr}(λ) [Eq. (41)], where f′(λ) ≡ ∂_λ f(λ) denotes the derivative with respect to the rapidity. In Eq. (40) we have introduced the mode susceptibilities χ_m(λ), in which ρ^{tot}_m(λ) denote the total densities of available states in rapidity space, while n_m(λ) ≡ ρ_m(λ)/ρ^{tot}_m(λ) are the occupation fractions. The drift velocity (37) now becomes a mode-resolved average of the effective velocities [Eq. (42)], where the static spin susceptibility (38) admits the mode decomposition (43). In a system with unbroken P symmetry, the occupancies n_m(λ) and the total state densities ρ^{tot}_m(λ) entering the mode susceptibilities χ_m(λ) are even functions of the rapidity λ. On the other hand, v^{eff}_m(λ) is an odd function of λ and, as a result, the drift velocity vanishes. This is the case in homogeneous circuits with s1 = s2. Instead, in ratchets with s1 ≠ s2, P symmetry is absent, and it is thus not surprising that we find a finite v_d. Strikingly, however, we observe that v_d is universal. To corroborate this statement, we compute v_d explicitly in integrable spin ratchets, using the mode decomposition (42). In the Gibbs state (36), the mode occupation functions n_m become flat, i.e., they lose their dependence on the rapidity λ, in turn simplifying the calculation. The latter involves an explicit solution of the infinite-temperature TBA equations of the spin-s Heisenberg model which, to the best of our knowledge, has not been obtained before (see Appendix D 3). The solution simplifies at half-filling, i.e., at µ = 0, where the right-hand side of Eq. (42) can be evaluated analytically, yielding the remarkably simple expression v_d(µ = 0) = [s1(s1 + 1) − s2(s2 + 1)] / [s1(s1 + 1) + s2(s2 + 1)] [Eq. (44)], which notably depends only on the SU(2) Casimir invariants s(s + 1) of the local spin degrees of freedom. Note, moreover, that the above form of v_d reflects that of the effective velocity (41), determined by the quasiparticles' quasienergies and quasimomenta. The latter are, respectively, a difference and a sum of the quasimomenta in the homogeneous Heisenberg chains with spins s1 and s2, similarly to the eigenvalues (24) and (21) of the time-shift and space-shift operators. In the semiclassical limit s1, s2 → ∞ with fixed spin lengths r_{1,2}, the drift velocity of the symplectic ratchet (27) reduces to a function of the spin lengths, v_d = (r1² − r2²)/(r1² + r2²), which we have verified by direct numerical simulations.
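The half-filling formula is elementary to evaluate. The snippet below (our own illustration, assuming the Casimir form of Eq. (44) as reconstructed above) reproduces the value v_d = 5/11 quoted below for s1 = 1, s2 = 1/2:

```python
from fractions import Fraction

def v_drift_half_filling(s1, s2):
    """Drift velocity at mu = 0, assuming the Casimir form
    v_d = (c1 - c2) / (c1 + c2) with c_i = s_i (s_i + 1)."""
    c1 = Fraction(s1) * (Fraction(s1) + 1)
    c2 = Fraction(s2) * (Fraction(s2) + 1)
    return (c1 - c2) / (c1 + c2)

print(v_drift_half_filling(1, Fraction(1, 2)))   # 5/11
print(v_drift_half_filling(Fraction(3, 2), 1))   # 7/23
# semiclassical limit s_i = r_i * s, s -> infinity: v_d -> (r1^2 - r2^2)/(r1^2 + r2^2)
```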
Exact drift velocity in the noninteracting limit
Recall that, as τ → ∞, the integrable ratchet reduces to a simple brickwork circuit composed of permutation gates. In this limit we can compute the drift velocity (44) exactly by evaluating the dynamical structure factor at coarse-grained integer coordinates ℓ ∈ Z, associated with pairs of lattice sites (2ℓ, 2ℓ+1), and at a discrete time t = 1. Specifically, let us consider the structure factor at half-filling (µ = 0), where we have taken q(ℓ) ≡ q^(e)_{2ℓ}/2; see Eq. (31). Since the time evolution is represented by a sequence of permutations, we observe that U q(0) U^{-1} = (S^z_{1;3} + S^z_{2;-2})/2, and hence the drift velocity can be read off directly. Using that, at zero magnetization density, one has ⟨(S^z)²⟩ = s(s+1)/3, we find precisely the Casimir-dependent drift velocity (44). In fact, the simplicity of the circuit composed of permutation gates allows us to generalize this computation away from half-filling, i.e., to a general chemical potential µ ≠ 0. There, linear response relates the gradient of the coarse-grained magnetization profile q(ℓ, t) = U^{-t} (q^(e)_{2ℓ}/2) U^{t} to the dynamical charge susceptibility, S(ℓ, t) ∝ lim_{δµ→0} (δµ)^{-1} ⟨q(ℓ+1, t) − q(ℓ, t)⟩_{δµ} [45]. Here, ⟨•⟩_{δµ} denotes the expectation value in a bipartite state with chemical potentials µ_{L/R} = µ ± δµ on the left/right-hand side of the system. Exploiting this relation, we obtain
$v_{\mathrm{d}}(\mu) = \frac{\chi_{s_1}(\mu) - \chi_{s_2}(\mu)}{\chi_{s_1}(\mu) + \chi_{s_2}(\mu)}$, (47)
where
$\chi_{s}(\mu) = \langle (S^z)^2 \rangle_{\mu} - \langle S^z \rangle_{\mu}^2$ (48)
is the static susceptibility of a single spin-s degree of freedom in the Gibbs ensemble (36). This result remains valid for any value of τ > 0, which is corroborated by a numerical evaluation of the exact hydrodynamic formula (42) away from half-filling. For instance, setting s_1 = 1, s_2 = 1/2, and µ = 5/4, with arbitrary τ, we obtain v_d ≈ 0.326338 from Eq. (47) and v_d ≈ 0.32633 from the GHD expression (42). In the numerical evaluation of the latter, we have integrated over rapidities λ ∈ [−8×10³, 8×10³] and truncated the sum over modes to m_max = 500 terms.
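As a cross-check of Eqs. (47) and (48), the short sketch below evaluates the single-site susceptibilities in the Gibbs ensemble. The single-site weight exp(µ m) is an assumption about the normalization of µ in the ensemble (36), fixed here by matching the quoted value v_d ≈ 0.326338.

```python
import numpy as np

def single_spin_susceptibility(s, mu):
    """Eq. (48): chi_s(mu) = <(S^z)^2> - <S^z>^2 for one spin-s site.

    Assumes the single-site Gibbs weight exp(mu * m), m = -s, ..., s
    (the normalization of mu is an assumption, matched to the quoted number)."""
    m = np.arange(-s, s + 1)
    w = np.exp(mu * m)
    w /= w.sum()
    mean = np.dot(w, m)
    return np.dot(w, m**2) - mean**2

def drift_velocity(s1, s2, mu):
    """Eq. (47): drift velocity of the (s1, s2) ratchet at chemical potential mu."""
    x1 = single_spin_susceptibility(s1, mu)
    x2 = single_spin_susceptibility(s2, mu)
    return (x1 - x2) / (x1 + x2)

print(drift_velocity(1, 0.5, 0.0))   # reduces to Eq. (44): 5/11 ~ 0.4545
print(drift_velocity(1, 0.5, 1.25))  # ~ 0.326338, the value quoted in the text
```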
The fact that the drift velocity takes the simple universal form (44) [or Eq. (47) for µ ≠ 0] suggests that it might result purely from the spin-exchange operator entering the local unitary map U of a quantum many-body spin ratchet. As shown in the following, the drift velocity formulae (44) and (47) indeed continue to hold even in generic (i.e., chaotic) spin ratchets. We stress again that the drift in the ratchet circuit is a direct consequence of the broken parity symmetry. Curiously, a similar effect has been reported in the context of entanglement spreading in cellular automata defined on staggered lattices made out of local Hilbert spaces of different dimensions [98]. There, a background velocity depending only on the logarithms of the local Hilbert space dimensions was observed.
Universality of drift velocity and spreading of correlations
To verify the conjectured universal formula for the drift velocity, we performed large-scale tensor-network simulations of integrable and disordered (nonintegrable) ratchet circuits. The dynamical charge susceptibility S(x, t), shown in Fig. 2a for the integrable ratchet with s_1 = 1, s_2 = 1/2, τ = 1, and µ = 0, features a nonzero first moment corresponding to a drift velocity v_d(µ = 0) = 5/11. The drift velocities extracted from numerical simulations, shown in Fig. 2b for different integrable and nonintegrable instances, all coincide with the GHD result of Eq. (44). Likewise, the drift velocities away from half-filling (cf. Fig. 2c) are in excellent agreement with the analytic formula derived in the noninteracting limit, reported in Eqs. (47) and (48).
Finally, to extract the algebraic dynamical exponent z, defined via the asymptotic growth of the second moment of S(x, t), σ²(t) ∼ t^{2/z}, we also investigate the dynamics in the moving (i.e., center-of-mass) frame. In integrable ratchets we generically expect ballistic growth with exponent z = 1. This is, however, the case only in generic states, i.e., away from half-filling. At µ = 0, the SU(2) symmetry of the state is restored, affecting the type of spin transport in a profound way. Specifically, for τ > 0 we observe the anticipated fractional dynamical exponent z = 3/2, in line with the general predictions for integrable models invariant under nonabelian symmetries; see Fig. 3. The special point τ = 0 is, however, exceptional and exhibits a qualitatively different behavior. This fact can already be recognized at the level of the bare quasiparticle dispersion relations. Accordingly, by repeating the scaling analysis of Refs. [99,100], we infer an anomalous type of diffusion, with a singular diffusion constant D diverging logarithmically with time, D ∼ log t. Such a law has already been observed in integrable spin models [100]. We note that such a mild divergence cannot be reliably resolved with accessible numerics (see the inset in Fig. 3), which instead hint at normal diffusive scaling (z = 2).

FIG. 2. Drift velocities estimated from tensor-network simulations of an integrable ratchet with τ = 0 (blue circles), τ = 1 (orange crosses), and of a disordered system (green pluses), compared against the analytical prediction (47). We observe excellent agreement for all µ in both integrable and nonintegrable circuits.
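A minimal sketch of this exponent extraction, using the definition σ²(t) ∼ t^{2/z}: given structure-factor profiles (here synthetic Gaussian data with a superdiffusively growing width, standing in for the tensor-network output), fit the log-log slope of the second moment in the frame co-moving with v_d.

```python
import numpy as np

def dynamical_exponent(x, profiles, ts, v_d):
    """Estimate z from the growth of the second moment of S(x, t), computed
    in the frame co-moving with the drift velocity v_d.

    profiles[i] is the structure-factor profile over grid x at time ts[i]."""
    var = np.array([np.sum(S * (x - v_d * t) ** 2) / np.sum(S)
                    for S, t in zip(profiles, ts)])
    # sigma^2(t) ~ t^(2/z)  =>  slope of log(var) vs log(t) equals 2/z
    slope = np.polyfit(np.log(ts), np.log(var), 1)[0]
    return 2.0 / slope

# Synthetic example: a drifting Gaussian whose variance grows as t^(4/3) (z = 3/2).
x = np.arange(-4000, 4001, dtype=float)
ts = np.arange(20, 201, 20, dtype=float)
profiles = [np.exp(-((x - (5 / 11) * t) ** 2) / (2 * t ** (4 / 3))) for t in ts]
print(dynamical_exponent(x, profiles, ts, v_d=5 / 11))  # ~ 1.5
```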
C. Large-scale current fluctuations
As an alternative dynamical probe of spin transport, we now consider the scaling of fluctuations of the time-integrated current density [see Eq. (35)] across a site (say, at x = 0) in the middle of an extended system, denoted J_t in Eq. (50), from which we have subtracted the finite average value of the background current inherent to our ratchet systems (⟨•⟩ denotes the ensemble average) [101]. We again study transport in maximum-entropy stationary states described by the density matrix (36). For concreteness, we focus on integrable ratchet circuits, where large-scale fluctuations can be characterized exactly.
The time-integrated current J_t is a macroscopic fluctuating variable, whose values J are distributed according to a probability distribution P(J|t). Typical values of J are of the order J ∼ O(t^{1/2z}), where z is the dynamical exponent governing the decay of the density and current two-point functions. Generically, the distribution of typical values tends to a Gaussian at late times, i.e., it complies with central-limit behavior. On the other hand, in certain systems featuring dynamical criticality [51,102], it instead converges to a universal non-Gaussian asymptotic distribution. In the integrable ratchets, such critical behavior is expected in the unbiased ensemble at µ = 0.
Here, we consider µ > 0 and instead examine the structure of large fluctuations. In particular, we are interested in the fluctuations of the time-integrated current on the largest ballistic scale, with J ≃ jt at large t (for 0 < j < ∞). For asymptotically large times t, rare events are expected to obey a large deviation principle,
$\mathbb{P}(J \simeq j\,t\,|\,t) \asymp e^{-t\, I(j)}$, (51)
where ≍ signifies asymptotic logarithmic equivalence at large t and I(j) is the large-deviation rate function. If the rate function I(j) is differentiable, the Gärtner-Ellis theorem states that its Legendre-Fenchel transform F(ζ) ≡ max_j [ζj − I(j)] is the scaled cumulant generating function (SCGF) of the time-integrated current (50) [103],
$F(\zeta) = \lim_{t \to \infty} t^{-1} \log \langle e^{\zeta J_t} \rangle$. (52)
Provided that a certain regularity condition is satisfied, the derivatives of the SCGF at ζ = 0 correspond to the scaled cumulants of the time-integrated current [104], i.e., $c^{(\mathrm{sc})}_n = \mathrm{d}^n F(\zeta)/\mathrm{d}\zeta^n |_{\zeta=0}$, where
$c^{(\mathrm{sc})}_n \equiv \lim_{t \to \infty} t^{-1} c_n(t), \qquad c_n(t) = \langle J_t^n \rangle_c$. (53)
We stress that such regular behaviour of the SCGF is not guaranteed. A notable counterexample is provided by models exhibiting dynamical criticality [51,52,105]. The latter are characterized by cumulants c_n(t) = ⟨J_t^n⟩_c that do not all scale with the same power of t. In such a case, the corresponding scaled cumulants (53) are ill-defined.
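To make the Legendre-Fenchel step and Eq. (53) concrete, the sketch below transforms a hypothetical rate function I(j) into the SCGF on a grid and reads off the second and third scaled cumulants by finite differences. The quadratic-plus-quartic form of I(j) and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rate_function(j):
    """Hypothetical convex rate function I(j) (illustrative only)."""
    return 2.0 * j**2 + 0.5 * j**4

# Legendre-Fenchel transform on a grid: F(zeta) = max_j [zeta * j - I(j)].
j = np.linspace(-5, 5, 200001)
def scgf(zeta):
    return np.max(zeta * j - rate_function(j))

# Scaled cumulants as derivatives of F at zeta = 0 (central finite differences).
h = 1e-2
F = np.array([scgf(k * h) for k in (-2, -1, 0, 1, 2)])
c2 = (F[3] - 2 * F[2] + F[1]) / h**2                     # second scaled cumulant
c3 = (F[4] - 2 * F[3] + 2 * F[1] - F[0]) / (2 * h**3)    # third scaled cumulant
print(c2, c3)  # ~ 0.25 and ~ 0 for this even, symmetric I(j)
```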
The integrable spin ratchet circuit considered here exhibits dynamical criticality at µ = 0. In the following, we instead specialize to the regular regime µ ≠ 0, where all scaled cumulants (53) exist and are given by the derivatives of the SCGF at ζ = 0. In this case, the c^(sc)_n are accessible within the GHD. In particular, the first scaled cumulant is trivially zero, c^(sc)_1 = 0. This is simply due to the subtracted average current ⟨j⟩ in Eq. (50), which only changes the linear slope of the SCGF (52) (thereby removing the effect of the broken P symmetry), whereas all higher scaled cumulants remain unaffected. The second scaled cumulant, also known in the literature as the Drude self-weight [96] and denoted by D_self, corresponds to the growth rate of the first absolute moment of the dynamical structure factor (54). It admits the mode resolution
$c^{(\mathrm{sc})}_2 = D_{\mathrm{self}} = \sum_m \int \mathrm{d}\lambda\, \chi_m(\lambda)\, |v^{\mathrm{eff}}_m(\lambda)|$, (55)
which can be evaluated numerically. We have also compared it to a direct numerical calculation of the absolute moment (54) using our tensor-network (TN) simulations. For example, setting s_1 = 1, s_2 = 1/2, µ = 1, and τ = 1, the GHD formula (55) yields c^(sc)_2 ≈ 0.1407 (obtained with the cutoff m_max = 20 and using integration over the compact rapidity domain λ ∈ [−5×10², 5×10²]). On the other hand, the TN simulation results in c^(sc)_2 ≈ 0.14(0). Similarly, in the limit τ → 0 and with other parameters unchanged, we have c^(sc)_2 ≈ 0.1262 from the GHD and c^(sc)_2 ≈ 0.126(1) from the TN simulation. The hydrodynamic mode expansion of higher cumulants can be systematically derived from the ballistic fluctuation theory [55,56], or by using diagrammatic techniques [106]. For example, the third scaled cumulant can be computed as a sum of two terms (see, e.g., Refs. [55,56,106]) with the hydrodynamic mode resolutions given in Eqs. (56) and (57). Here, σ_m(λ) ≡ sgn[v^eff_m(λ)] is the sign of the effective velocity, and the statistical weights entering these expressions involve the screened charge, where the screening operation is defined as f^scr(λ) ≡ f(λ) − f^dr(λ) (see Appendix D 2).
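If the mode data χ_m and v^eff_m are tabulated on a rapidity grid, the mode resolution (55) is a plain weighted sum. A minimal sketch with placeholder arrays (the real inputs would come from solving the TBA equations of Appendix D; the arrays below are purely illustrative):

```python
import numpy as np

def drude_self_weight(lam, chi, v_eff):
    """Mode resolution (55): D_self = sum_m \int dlam chi_m(lam) |v_eff_m(lam)|.

    chi[m] and v_eff[m] are arrays over the rapidity grid `lam` for each
    quasiparticle mode m (placeholders standing in for actual TBA data)."""
    dlam = lam[1] - lam[0]
    return sum(np.sum(chi_m * np.abs(v_m)) * dlam
               for chi_m, v_m in zip(chi, v_eff))

# Illustrative placeholder data: two modes, truncated in rapidity and mode number.
lam = np.linspace(-500, 500, 20001)
chi = [0.1 / (1 + (lam / (m + 1)) ** 2) for m in range(2)]
v_eff = [np.tanh(lam) / (m + 1) for m in range(2)]
print(drude_self_weight(lam, chi, v_eff))
```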
Breakdown of the Gallavotti-Cohen relation
In dynamical systems with time-reversal symmetry, the scaled cumulant generating function obeys the Gallavotti-Cohen relation (GCR) [56,79]. In the absence of thermodynamic forces it reads F(ζ) = F(−ζ), indicating a lack of directionality: the probability of observing a large time-integrated current J ∼ O(t) does not depend on its direction. As established in Ref. [56], the time-reversal symmetry of the Euler-scale hydrodynamics is sufficient for the validity of the GCR. It nonetheless remains an open question whether it is also a necessary condition. Our aim here is to address this question in the present context of quantum spin ratchets which, while lacking T symmetry, obey the PT symmetry. Specifically, we take a look at the third scaled cumulant and compute it using Eqs. (56) and (57). A nonzero value of c^(sc)_3 implies the violation of the GCR. By numerically evaluating the hydrodynamic mode expansions in Eq. (57), we find that c^(sc)_3 is indeed nonzero. For example, setting s_1 = 3/2, s_2 = 1/2, with τ = 1 and µ = 1, we obtain finite values of both contributions in Eq. (57) (evaluated with the cutoff m_max = 20); combining both terms yields a nonzero third scaled cumulant. Violation of the GCR in a (grand-canonical) Gibbs ensemble (36) indicates that, in ratchets, such stationary maximum-entropy ensembles do not describe thermodynamic equilibrium.
Generalized fluctuation symmetry
It turns out that integrable ratchet circuits nevertheless exhibit a generalized fluctuation symmetry relation. Owing to the absence of P symmetry, we can establish a relation between the current fluctuations in the ratchet and its spatially reflected counterpart, in which the alternating spins s_1 and s_2 are exchanged. Specifically, denoting the SCGF of the ratchet circuit with alternating spins s_1 and s_2 by F^(s1,s2)(ζ), we have
$F^{(s_1,s_2)}(\zeta) = F^{(s_2,s_1)}(-\zeta)$. (61)
This can be inferred from the expressions for the scaled cumulants, which have all been conjectured to admit a diagrammatic expansion [106]. In those expressions, the only functions that depend on the spins s_1 and s_2 are the dressed derivatives of the quasiparticle dispersion relations ε_m(λ) ≡ ε^(s1,s2)_m(λ) and their signs σ_m(λ) ≡ σ^(s1,s2)_m(λ). Here, we have introduced the upper index to denote the alternating spins in the ratchet circuit. The derivatives of the bare quasienergies satisfy the symmetry
$\partial_\lambda \varepsilon^{(s_1,s_2)}_m(\lambda) = -\partial_\lambda \varepsilon^{(s_2,s_1)}_m(-\lambda)$, (62)
which can be verified already for the derivative of the single-magnon quasienergy (24), the latter written in terms of the single-magnon quasimomenta (20). Since the dressing operation commutes with the rapidity reflection λ → −λ and does not itself depend on the spins s_1 and s_2 (see Appendix D 2), it preserves the relation in Eq. (62). The generalized fluctuation symmetry (61) then readily follows from the following observations:
1. The hydrodynamic formulae for all scaled cumulants involve one dressed derivative of the dispersion under the integral over the rapidity λ.
2. Changing the integration variable as (−λ) → λ does not affect the hydrodynamic formulae for the scaled cumulants.
V. DISCUSSION
With the aim of characterizing the dynamical effects of broken space-time symmetries, we have introduced and studied a family of quantum circuits made out of SU(2)-symmetric unitary gates with an adjustable free parameter, acting on spins of unequal sizes. While the constructed models explicitly break the space-reflection and time-reversal symmetries, they preserve the combined PT symmetry. Depending on the choice of parameters, the circuit can be made ergodic or integrable. The latter, in particular, generalizes the integrable Trotterization of the isotropic Heisenberg spin-1/2 chain [21]. The breaking of the P symmetry induces chiral spin dynamics, due to which we can view our circuit as a many-body analogue of a quantum ratchet. We outline how such ratchets can be experimentally implemented by encoding higher spins using multilevel trapped ions [107].
Quantum spin ratchets have two defining dynamical properties. Firstly, they exhibit a drift in the dynamical structure factor, which we have quantified by analytically deriving a simple universal formula for the drift velocity. The latter depends only on the sizes of the spins and on the magnetization density in the initial Gibbs ensemble, but not on the microscopic details of the local unitary gates. In the integrable circuits, we have retrieved the formula in a fully analytic manner, within the scope of generalized hydrodynamics, as the result of a nontrivial resummation over the spectrum of quasiparticle excitations. This required a closed-form analytic solution of the thermodynamic Bethe ansatz equations of the Heisenberg spin-s chain which, as far as we are aware, has not been reported previously.
The second key property of quantum many-body spin ratchets concerns the anomalous nature of macroscopic current fluctuations in Gibbs states, which is manifested at the level of the full counting statistics of the time-integrated spin current density. In the absence of thermodynamic forces, probabilities of large current fluctuations in time-reversal symmetric systems do not depend on the direction of the flow, as encapsulated by the Gallavotti-Cohen fluctuation symmetry. In stark contrast, in quantum many-body spin ratchets the fluctuation symmetry no longer holds in Gibbs states. This consequently indicates that such states do not represent thermodynamic equilibrium states of many-body ratchets. Instead, we demonstrate that in quantum many-body ratchets the Gallavotti-Cohen relation is superseded by a generalized fluctuation symmetry: large current fluctuations in one direction are connected to fluctuations in the opposite direction in a spatially reflected system.
Our study of the dynamics in quantum many-body spin ratchets opens up several interesting research directions. In particular, several questions remain concerning the universality of the drift velocity: -As demonstrated, the drift velocity is insensitive to integrability, insofar as the functional form of the local unitary gate is preserved. We recall that the breaking of integrability has been achieved through the choice of gate parameters. It remains unclear, however, whether relaxing the functional form of the unitary gates (while preserving the SU(2) symmetry) can have any impact. We leave this aspect to future studies.
-A similar type of drift has been reported to underlie the coarse-grained entanglement dynamics in a staggered circuit studied in Ref. [98], albeit the precise form of the drift velocity therein differs from ours. For a more comprehensive understanding, it would be worthwhile to investigate various other realizations of unitary circuits with a spatially staggered structure. An example is the recently proposed circuit whose unitary gates encode the two-body scattering of particles with different masses in a supersymmetric quantum field theory [108].
The second part of our work stimulates fundamental questions pertaining to the role of space-time symmetries in macroscopic dynamical phenomena and, more specifically, the properties of atypical current fluctuations in maximum-entropy stationary states: -It is important to determine whether the established generalized fluctuation relation (61) requires both PT and charge-conjugation symmetry C independently, or whether there is perhaps a more general fluctuation relation hinging only on CPT invariance.
-While it appears plausible that T symmetry is not only sufficient but also necessary for the validity of the Gallavotti-Cohen relation, this remains formally unresolved. Investigation of spontaneous current fluctuations [109-116] could provide valuable insight into this question.
Since the remaining factor acts as an identity on H_{s_j}, we deduce that V_emb is unitary. Finally, we require a permutation P^(n1↔n2) that swaps the first n_1 qubits with the last n_2 ones, and which can be efficiently encoded as a sequence of pairwise SWAP gates. Applying the permutation after V_emb, we obtain a unitary gate U_emb = P^(n1↔n2) V_emb, which encodes the local gate U : H_{s1} ⊗ H_{s2} → H_{s2} ⊗ H_{s1} of a quantum ratchet as an operator on H. To form the circuit, the unitary gates U_emb must be arranged in a brickwork fashion, with each consecutive layer shifted n_2 sites to the right relative to the previous one.
Example 1. To exemplify the construction, we explicitly work out the simplest case with s_1 = 1 and s_2 = 1/2, by embedding U as an operator acting on the Hilbert space H^⊗3_{1/2} of three qubits. In particular, we embed the spin-1 Hilbert space into the triplet subspace of the first two qubits using an isometry Ω_1 that maps the three spin-1 basis states onto the triplet states of the two qubits. The second spin, s_2 = 1/2, is instead identified with the third qubit via the corresponding (trivial) embedding. Note that, since Ω_1 Ω_1^† projects onto the triplet subspace of H^⊗2_{1/2}, its complement projects onto the one-dimensional subspace associated with the spin singlet. For a unitary operator W commuting with the projector (A5), there exists a unitary 2 × 2 matrix Z in terms of which the embedded gate (A2) can be written. For simplicity, we will set Z = 1_{H_{1/2}} in the following.
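The isometry Ω_1 can be checked mechanically. The sketch below assumes one standard Clebsch-Gordan form of Ω_1 (the paper's explicit matrix is not reproduced here) and verifies the two properties used above: Ω_1†Ω_1 = 1 on the spin-1 space and Ω_1Ω_1† = 1 − |singlet⟩⟨singlet| on the two-qubit space.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Isometry Omega_1: C^3 (spin-1) -> triplet subspace of two qubits,
# |1,1> -> |00>, |1,0> -> (|01>+|10>)/sqrt(2), |1,-1> -> |11>
# (a standard Clebsch-Gordan choice; conventions may differ from the paper).
Omega1 = np.array([[1, 0, 0],
                   [0, s, 0],
                   [0, s, 0],
                   [0, 0, 1]], dtype=float)

# Isometry check: Omega1^dag Omega1 = identity on the spin-1 space.
assert np.allclose(Omega1.T @ Omega1, np.eye(3))

# Omega1 Omega1^dag projects onto the triplet subspace, i.e., it equals
# 1 - |singlet><singlet| with |singlet> = (|01>-|10>)/sqrt(2).
singlet = np.array([0, s, -s, 0])
assert np.allclose(Omega1 @ Omega1.T, np.eye(4) - np.outer(singlet, singlet))
print("embedding checks passed")
```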
Lastly, the permutation P_{1,1/2} is embedded as the qubit permutation interchanging the first two qubits with the last one. In the computational basis of the three-qubit Hilbert space, the embedded unitary gate then follows. We note that, instead of using an embedding into a multi-qubit space, one could alternatively have employed a two-fold copy of the larger spin space, namely an embedding H_{s1} ⊗ H_{s2} → H_{smax} ⊗ H_{smax}, where s_max = max(s_1, s_2). Such an embedding could, for instance, be utilized in a quantum processor based on trapped ions [107].

... of spin-s_1 (resp. spin-s_2) spaces, mainly to help distinguish between the two different spins.
To see this, we first note that the one-site lattice shift Π_{s1,s2} in the backward direction, defined in Eq. (17), acts as a one-site translation of the alternating chain. On the same lattice, composed of consecutive spins s_1, s_2, s_1, ..., s_2, the shift in the opposite direction is Π^{-1}_{s2,s1}, and it notably differs from Π^{-1}_{s1,s2} when s_1 ≠ s_2 (it acts on a chain with a different ordering of spins). Since all of the unitary gates in the integrable ratchet are identical, we can take into account that the order in which the opposite lattice shifts are applied does not matter. Using this freedom of choice, we can then rewrite the propagator in the form of Eq. (B8), where K denotes the antiunitary complex conjugation. We have used the unitarity of the evolution in passing to the second line, and Eq. (9) in passing to the third line. Note that P acts simultaneously on all local unitary gates and, in general, cannot be written as the adjoint action of a product of local operators. Finally, defining the antiunitary time reversal T as the adjoint action of Π_{s2,s1} K, Eq. (B8) demonstrates the PT symmetry of a brickwork ratchet composed of unitary gates U = P_{s1,s2} V with V^T = V.
Appendix C: Integrability

Here, we show that the ratchet circuit composed of unitary gates (3), with V = R_{s1,s2}(τ) given in Eq. (5), originates in the integrable family of transfer matrices (15). Specifically, we will prove the transfer-matrix shift properties (16) which, combined, yield the propagator U and the two-site lattice shift T in the backward (i.e., west) direction; see Eq. (18). The explicit form of the two-site lattice shift follows by combining Eqs. (16) and (B7). It is formally an endomorphism, i.e., T ∈ End[(H_{s1} ⊗ H_{s2})^⊗L/2], and Eq. (18) implies the two-site shift invariance of the circuit, [U, T] = 0. This follows from the commutation of the transfer matrices for any pair of auxiliary spins, which is a consequence of the Yang-Baxter equation (10).
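As a minimal illustration of the Yang-Baxter structure invoked here, the sketch below verifies the equation numerically for the fundamental (spin-1/2) case, assuming the standard rational R-matrix R(λ) = (λ·1 + iP)/(λ + i) with P the two-qubit permutation; the paper's normalization (7) may differ by an overall scalar, which drops out of the check.

```python
import numpy as np

P = np.eye(4)[[0, 2, 1, 3]]  # two-qubit permutation (SWAP) matrix

def R(lam):
    """Fundamental rational R-matrix of the XXX model (assumed normalization)."""
    return (lam * np.eye(4) + 1j * P) / (lam + 1j)

def lift(M, sites):
    """Embed a two-site operator M into the three-qubit space on `sites`."""
    if sites == (0, 1):
        return np.kron(M, np.eye(2))
    if sites == (1, 2):
        return np.kron(np.eye(2), M)
    # sites == (0, 2): conjugate the (0, 1) embedding by the SWAP of qubits 1, 2
    S12 = np.kron(np.eye(2), P)
    return S12 @ np.kron(M, np.eye(2)) @ S12

u, v = 0.37, -0.82  # arbitrary spectral parameters
lhs = lift(R(u - v), (0, 1)) @ lift(R(u), (0, 2)) @ lift(R(v), (1, 2))
rhs = lift(R(v), (1, 2)) @ lift(R(u), (0, 2)) @ lift(R(u - v), (0, 1))
print(np.allclose(lhs, rhs))  # True: the Yang-Baxter equation holds
```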
The eigenvalues of U and T are, respectively, of the form exp(i2ε_tot) and exp(−2ip_tot), where ε_tot is the total quasienergy and p_tot the total quasimomentum. After demonstrating the validity of the shift properties (16) in Appendix C 1, we review the algebraic Bethe ansatz diagonalization of the transfer matrices T_s(λ) in Appendix C 2. We show that the total quasienergy and quasimomentum, ε_tot and p_tot respectively, are extensive: they can be obtained as sums of single-magnon contributions (24) and (21). Finally, the explicit form of the Bethe equations is reported in Appendix C 3.
Propagator from transfer matrices
In this section we derive the transfer-matrix shift properties (16) using the exchange relations (C3)-(C5) and Eq. (B3). Note that these relations also hold when U is substituted by its inverse U^{-1}, or by P_{s1,s2} (recall that U becomes a permutation when τ → ∞). The relations are proven using a basis decomposition similar to the one in Eq. (B2), with the first, second, and third index in the bra-ket notation corresponding to the spaces a, b, and c, respectively; Eq. (C3) is verified in this way, and similarly for the other relations.
a. North-west light-cone lattice shift. First, we derive the lattice shift in the north-west direction. To this end, we consider the transfer matrix with a spin-s_2 auxiliary space, evaluated at λ = τ/2. Using R_{s2,s2}(0) = P_{s2,s2}, R_{s1,s2}(τ) = P_{s2,s1} U_{s1,s2}, and the identification of sites 0 and L due to periodic boundary conditions, we arrive at T_{s2}(τ/2) = U_e Π_{s1,s2}, where Π_{s1,s2} is the one-site lattice shift in the negative (i.e., west) direction. Above each step we have indicated which of the exchange relations (C3), (C4), (B3), and (C5) has to be used on a given pair of matrices in order to exchange them. In passing from the fourth to the fifth line we have also used that Tr_a P^{s2,s2}_{L,a} = 1. This concludes the derivation of the north-west lattice shift reported on the left-hand side of Eq. (16).
b. South-west light-cone lattice shift. We now consider the south-west lattice shift, which we derive from the transfer matrix with auxiliary spin s_1, evaluated at λ = −τ/2. Equation (3), together with V = R_{s1,s2}(τ) and the normalization (7), implies an analogous factorization, where the fact that the quantum gate contains a permutation has been used in the last equality. With this in mind, a computation along the same lines as before, repeatedly applying the exchange relation (C3) to the permutations P_{s2,s1} and P_{s1,s2}, yields the result. In the fourth line we have again used Tr_a P_{s1,s1} = 1. This yields the lattice shift in the south-west direction, reported on the right-hand side of Eq. (16).
Eigenvalues of transfer matrices and lattice shifts
Here, we review the algebraic Bethe ansatz diagonalization of the integrable ratchet circuit. Following Refs. [118,119], which discuss the higher-spin isotropic Heisenberg model, we write the transfer matrix (15) of the integrable quantum ratchet as the trace of a monodromy matrix, T_s(λ) = Tr_a M^(s)_a(λ), where M^(s)_a(λ) is a (2s+1) × (2s+1) monodromy matrix on the spin-s auxiliary space labelled by a (note that the auxiliary spin s is not necessarily equal to either of the two physical spins s_1, s_2). Its entries M^(s)_{m,n}(λ), with m, n ∈ {−s, −s+1, ..., s}, are operators acting on the full Hilbert space (H_{s1} ⊗ H_{s2})^⊗L/2. The central role in diagonalizing the family of transfer matrices T_s(λ) is played by the fundamental transfer matrix, associated with the auxiliary spin s = 1/2. It is obtained by tracing out the auxiliary space of the monodromy matrix in which the spectral parameter has been shifted [120].
In particular, an N-magnon eigenvector of T_s(λ) is obtained (see, e.g., Ref. [85]) by applying commuting creation operators B(λ_j), with j = 1, ..., N, to the vacuum (highest-weight) state, where λ_j denote the rapidities satisfying the nonlinear Bethe equations (19). Following Refs. [85,118,119], one now requires two ingredients in order to obtain the eigenvalues of T_s(λ): 1. commutation relations that allow us to move M^(s)_{n,n}(λ) past the sequence of operators B(λ_j) in Eq. (C13), so that we can apply it to the vacuum state (C14); 2. the vacuum eigenvalues of M^(s)_{n,n}(λ).
In the remainder of Section C 2 we first specify these two ingredients and then use them to obtain the eigenvalues of the transfer matrices, of the propagator U, and of the lattice-shift operator T. From them, we then determine the quasienergies and quasimomenta.
a. Ingredients
Firstly, we describe the commutation relations between M^(s)_{n,n}(λ) and B(µ). They are obtained by considering the appropriate matrix elements of the relation (C15), which holds by virtue of the Yang-Baxter equation (10).
Here, we have temporarily attached lower indices a and b to the monodromy matrices. They denote the auxiliary spaces of respective spins 1/2 and s. Equating the matrix elements on both sides of Eq. (C15) yields relations between the entries of the two monodromy matrices. The coefficients in these relations are the elements of the R-matrix R^{1/2,s}_{a,b}(λ − µ + i/2). Crucially, they do not depend on the inhomogeneity parameter τ in the monodromy matrix (C11).
The relation we are interested in involves M^(s)_{n,n}(λ) and B(µ). It was reported in Refs. [118,119]; its τ-independent coefficients are given in Eq. (C17). Note that exchanging M^(s)_{n,n}(λ) with a magnon creation operator B(µ) produces a factor given by the function c^(s)_0(λ − µ + i/2; n). The latter will "dress" the vacuum eigenvalue of M^(s)_{n,n}(λ). We now consider the vacuum eigenvalues of the monodromy's diagonal entries M^(s)_{n,n}(λ). For our purposes it suffices to consider s ∈ {s_1, s_2}, but the discussion below remains valid for other values of the auxiliary spin s. Following Ref. [85], the vacuum eigenvalues are obtained by noting that the R-matrix becomes upper-triangular when applied to the vacuum state |vac⟩. Specifically, denoting the physical spin by s_p (s_p = s_1 for odd site indices j and s_p = s_2 for even site indices j), the matrix R^{s,s_p}_{a,j}(λ) acts on |vac⟩ in the upper-triangular form of Eq. (C18) [122]. Here α_n(λ; s_p), with n ∈ {−s, −s+1, ..., s}, are functions which can be computed explicitly and which satisfy useful properties, in particular α^(s)_s(λ; s_p) = 1 for any s_p, s, and λ. When acting on the vacuum state (C14), the monodromy M^(s)(λ) becomes a product of R-matrices of the form (C18). Since the latter are upper-triangular, the diagonal entries of M^(s)(λ) applied to the vacuum reduce to products of the functions α_n evaluated at the shifted arguments, where we use the shorthand notation λ_± = λ ± τ/2.
c. Eigenvalues of the propagator and the lattice shift. The propagator and the lattice-shift operator can be written in terms of transfer matrices; cf. Eq. (18). We exploit this to obtain their eigenvalues. Since α^(s)_{n≠s}(0; s) = 0 and α^(s)_s(λ; s_p) = 1 for any s_p, many terms in the eigenvalue (C21) disappear, leaving a single product of α-functions. The quasienergy is extensive, i.e., ε_tot = Σ_{j=1}^{N} ε(λ_j), and the single-magnon quasienergies (24) are obtained from this product. The eigenvalue of the two-site lattice-shift operator is obtained similarly [see Eq. (18)], and the total quasimomentum is extensive as well: p_tot = Σ_{j=1}^{N} p(λ_j). From it we can identify the single-magnon quasimomentum (21).
Bethe equations
The entries of the fundamental monodromy matrix (C12) satisfy the commutation relations (C16), specialized to the case λ → λ − i/2 and s = 1/2. Following Ref. [85], these relations allow us to identify the two-magnon scattering amplitude. Using Eqs. (C27) and (C29) in the quantization condition (19) for the single-magnon quasimomenta, we rewrite the Bethe equations in the explicit form (C30).

Appendix D: Bethe ansatz in the thermodynamic limit

This appendix describes the Bethe ansatz and its solution in the thermodynamic (TD) limit N, L → ∞ at a fixed ratio N/L, where N is the particle number and L the system size. In Appendix D 1 we describe the Bethe ansatz equations in the thermodynamic limit (the so-called Bethe-Yang equation). In Appendix D 2 we define the dressing operation, i.e., a renormalization of the bare charge carried by the quasiparticles, necessitated by their interaction. Lastly, Appendix D 3 describes a novel solution of the Bethe-Yang equation, with particular emphasis on the computation of the drift velocity.
Bethe-Yang equation
For large L, the solutions of the Bethe ansatz equations organize into so-called "m-strings", i.e., complexes of m bound magnons carrying m quanta of magnetization, referred to as the bare charge q_m = m. Let there be M_m m-strings in a particular solution of the Bethe ansatz equations. Up to corrections exponentially small in L, the corresponding rapidities read
$\lambda^{(m,j)}_{\alpha} = \lambda^{m}_{\alpha} + \tfrac{i}{2}(m + 1 - 2j), \qquad j = 1, \ldots, m$, (D1)
where λ^m_α ∈ R, for α = 1, ..., M_m, are the string centers, which become densely distributed in the TD limit. Their distributions satisfy an appropriate TD limit of the Bethe ansatz equations, which we will refer to as the Bethe-Yang equation.
In a chain of alternating spins, the quasienergies (24) and quasimomenta (21) are sums of contributions from the sublattices of spins s_1 and s_2. As will become clear later on, the same holds for the distribution of the string centers, which is why we first consider the Bethe-Yang equation of a homogeneous spin-s Heisenberg chain. The latter's Bethe equations are obtained by setting s_1 = s_2 = s and τ = 0 in Eq. (C30). Inserting the string form (D1) and multiplying the equations for j = 1, ..., m, one obtains the equations for the string centres λ^m_α (see Ref. [118], and also Section 8.2 in Ref. [123] for an analogous procedure in the spin-1/2 Heisenberg chain). Taking the logarithm then yields the Bethe-Yang equation in the TD limit.
Specifically, let b ≡ 2s and let ρ^tot(b)_m(λ) be the total state densities, defined so that L ρ^tot(b)_m(λ) dλ counts the available m-string states with centers in the interval [λ, λ + dλ]. The resulting Bethe-Yang equation (D2) takes a compact matrix form, in which the matrix product involves a summation over the discrete mode index m and an integration over the continuous rapidity λ (i.e., a convolution).
To determine both ρ^tot(b) and n, we require another set of equations, obtained by maximizing the thermodynamic free energy. A crucial simplification occurs in the thermodynamic state described by the density matrix (36). There, n is determined from thermodynamic equations that do not depend on the spin s, and we can therefore use the n computed in the homogeneous spin-1/2 Heisenberg model. Moreover, in such a state the occupancy ratio does not depend on the rapidity either: it is given by a ratio of su(2) characters X_m (see, e.g., Section 8.4 in Ref. [123] or the Supplemental Material of Ref. [44]).
Crucially, the Bethe-Yang equation (D2) is linear in ρ^tot(b), and n does not depend on the spin. Hence, since the bare quasimomentum in the integrable ratchet is a sum of two contributions, the total state densities likewise decompose into a sum of the corresponding solutions for the homogeneous spin-s_1 and spin-s_2 chains, evaluated at the shifted rapidities λ_± = λ ± τ/2.
Dressing and screening operations
The Bethe-Yang equation (D2) is a specific example of the dressing equation [53],
$(1 + Kn)\, q^{\mathrm{dr}} = q \;\Rightarrow\; q^{\mathrm{dr}} = (1 + Kn)^{-1} q$, (D8)
which encodes the effect of the interactions between the quasiparticles on the charge carried by them. In particular, according to Eq. (D2), the total state density is associated with a dressed derivative of the quasimomentum:
$2\pi \rho^{\mathrm{tot}(b)} \equiv (p^{(b)\prime})^{\mathrm{dr}}$. (D9)
Example 1. The dressing equation for the quasiparticle magnetization q_m = m is solved explicitly in terms of the occupation functions n_m given in Eq. (D6). We now note that, since K^dr n is an operator series in Kn, it commutes with 1 + Kn, and the dressing operator is then simply found to be (1 + Kn)^{-1} = 1 − K^dr n. It then follows that
$q^{\mathrm{scr}} \equiv q - q^{\mathrm{dr}} = K^{\mathrm{dr}} n\, q$, (D12)
which may be understood as a screened charge; it enters the expression (57) for the third scaled cumulant. We now return to a chain of alternating spins s_1 and s_2. The single-magnon quasimomentum (21) and quasienergy (24) in the integrable ratchet imply that the quasienergies and quasimomenta of the quasiparticles (i.e., bound states of magnons) can be obtained from the corresponding quantities of the homogeneous chains; the lower labels ± again refer to the shift λ_± = λ ± τ/2 in the rapidity, i.e., in the continuous row index of a vector. From here, the effective velocity (41) of a quasiparticle in an alternating spin chain can be obtained, expressed in terms of the total state densities whose Fourier transforms are given in Eq. (D21). We now assume that the result at half-filling (µ = 0) can be obtained by considering only the leading order in µ of all involved expressions, i.e., separately in the numerator and denominator of v_d. To leading order, the su(2) characters X_m are independent of µ, while the dressed magnetization is proportional to the chemical potential, q^dr_m = (1/6) µ (m+1)² + O(µ²). We then arrive at an expression equivalent to Eq. (44).
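On a discretized rapidity grid, the dressing equation (D8) reduces to a linear solve. In the sketch below the kernel, occupation function, and truncations are placeholder assumptions standing in for the actual TBA data:

```python
import numpy as np

def dress(q, K, n, dlam):
    """Solve the dressing equation (D8), (1 + K n) q_dr = q, on a grid.

    q: bare charge over the rapidity grid; K: kernel matrix (rapidity
    convolution discretized with measure dlam); n: occupation function
    acting as a diagonal weight. All inputs here are placeholders."""
    A = np.eye(len(q)) + K * (n * dlam)  # right multiplication by diag(n) dlam
    return np.linalg.solve(A, q)

# Placeholder data: one mode on a rapidity grid with a Lorentzian-type kernel.
lam = np.linspace(-20, 20, 801)
dlam = lam[1] - lam[0]
K = 1.0 / np.pi / (1.0 + np.subtract.outer(lam, lam) ** 2)
n = np.full_like(lam, 0.3)   # flat occupation, as in the Gibbs state (36)
q = np.ones_like(lam)        # bare charge q_1 = 1
q_dr = dress(q, K, n, dlam)
print(q_dr[400])             # dressed charge at lambda = 0
```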
FIG. 1. A brickwork configuration of quantum unitaries U [see Eq. (3)], representing a quantum many-body spin ratchet with time flowing upwards. Each two-body local unitary gate involves a permutation and thus swaps the adjacent spin spaces, yielding chiral dynamics of the spin species: spins s_1 propagate east (i.e., towards the right) and spins s_2 west (i.e., towards the left). The (Floquet) propagator U given by Eq. (1) corresponds to two layers of the circuit.
FIG. 3. Main figure: dynamical exponent z for various unitary-gate parameters τ at half-filling (µ = 0), extracted from tensor-network simulations with bond dimension χ. Inset: convergence of z with time t. The results indicate superdiffusive scaling compatible with z = 3/2 for all τ > 0, in line with expectations for an integrable model with SU(2) symmetry. At the exceptional point τ = 0, the diffusion constant diverges logarithmically with time t, while the numerical results indicate approximately diffusive scaling on accessible time scales.
Example 2. According to Eq. (D5), the kernel K_{m,ℓ} is a sum of bare quasimomenta and can therefore be dressed as well. We have K^dr = (1 + Kn)^{-1} K, or the corresponding element-wise relation for K^dr_{m,ℓ}.
We can evaluate them using the solutions of the Bethe-Yang equations (D21) at half-filling [i.e., together with Eq. (D23)]. The resulting integrals are independent of ν, so the drift velocity cannot depend on the parameter τ of the unitary gate. Splitting the sums in the drift velocity v_d = (Σ_m a_m)/(Σ_m b_m) into those over the intervals m < max(b_1, b_2) and m ≥ max(b_1, b_2), we see that the latter diverge while the former remain negligible in comparison. Keeping only the terms with m ≥ max(b_1, b_2) finally leads us to
$v_{\mathrm{d}}(\mu = 0) = \frac{b_1(b_1+2) - b_2(b_2+2)}{b_1(b_1+2) + b_2(b_2+2)}$. (D30)
"Physics"
] |
On the value of Burmese amber for understanding insect evolution: Insights from †Heterobathmilla – an exceptional stem group genus of Strepsiptera (Insecta)
Burmese amber and ambers from other periods and regions have become a rich source of new extinct insect species and have yielded important insights into insect evolution in the dimension of time. Amber fossils have contributed to the understanding of the phylogeny, biology, and biogeography of insects and other groups, and have also gained great importance for dating molecular trees. Another major potential is the documentation of faunal, floral, and climatic shifts. Evolutionary transitions can be well documented in amber fossils, which can reveal anatomical transformations and the age of appearance of structural features. Here, using a new stem group species of Strepsiptera from Burmite, we evaluate this potential of amber insect fossils to assess the current phylogeny of Strepsiptera, with the main emphasis on the early splitting events in the stem group. Amber fossils have greatly contributed to the understanding of the evolution of Strepsiptera in the late Mesozoic and the Cenozoic. †Heterobathmilla kakopoios Pohl and Beutel gen. et sp. n., described here, is placed in the stem group of the order, in a clade with †Kinzelbachilla (†Kinzelbachillidae) and †Phthanoxenos (†Phthanoxenidae). †Phthanoxenidae has priority over †Kinzelbachillidae, and the latter is synonymised. The superb details available from this new fossil allowed us to explore unique features of the antennae, mouthparts, and male copulatory apparatus, and to provide a phylogenetic hypothesis for the order. The younger †Protoxenos from Eocene Baltic amber was confirmed as sister to all remaining extinct and extant groups of Strepsiptera, whereas the position of the Cretaceous †Cretostylops in the stem group remains ambivalent. While the value of Burmite and ambers from other periods has a recognized impact on our knowledge of evolution in various lineages, this new fossil does not fundamentally change our picture of the phylogeny and evolution of early Strepsiptera. However, it offers new insights into the morphological diversity in the early evolution of the group.
Occasionally, amber fossils provide insights into features linked to life habits, for instance feeding, prey capture, or reproduction. Cretaceous amber fossils of †Archipsyllidae (†Permopsocida) with well-preserved skeletal features helped to bridge the gap between chewing and sucking feeding habits in Paraneoptera (Yoshizawa and Lienhard, 2016).
Long-proboscid scorpionflies from Cretaceous amber give indirect evidence of the uptake of liquid from gymnosperm and angiosperm reproductive organs, and the detection of pollen grains adjacent to the fossils supports pollinator activity of these species (Lin et al., 2019). Direct evidence of angiosperm pollinivory in Coleoptera has recently been recorded for two families, Mordellidae (Bao et al., 2019) and Kateretidae, while two records of stem Anthophila (Poinar and Danforth, 2006; Danforth and Poinar, 2011; Poinar, 2020) and an unplaced aculeate, †Prosphex, provide evidence for increasing angiosperm dependence in Mesozoic Hymenoptera. Locomotion and climbing on broad leaves, such as those of angiosperms, or on wet branches is indicated by the specialized and enlarged tarsal pads of Timematodea (Phasmatodea) from Cretaceous Burmese amber. Social behaviours are also observed, as in brood care by a 100-million-year-old scale insect documented by Wang et al. (2015), and possible inter-colonial combat in the stem ant genus †Gerontoformica by Barden and Grimaldi (2016). Specialized predatory habits are amply evinced by stem Formicidae (e.g. Perrichot, 2014; Perrichot et al., 2020), and even by the dictyopteroid †Alienoptera, for which a unique type of cephalo-prothoracic prey-catching mechanism was recently described (Kočárek, 2019). Debris-carrying camouflage among predaceous larvae is reported from Cretaceous Burmese, French, and Lebanese ambers, including Chrysopoidea, Myrmeleontoidea, and Reduviidae (Wang et al., 2016a; see also Pérez de la Fuente et al., 2012). Moreover, ectoparasitism has been confirmed via the preservation of a rhopalosomatid larva on its host gryllid (Lohrmann and Engel, 2017), and feather-feeding has been suggested by the preservation of partially damaged dinosaur feathers with fossil nymphs of †Mesophthiridae. Endoparasitism is indicated by the presence of a minute strepsipteran primary larva in Burmite, which is extremely similar to extant first-instar larvae.
Amber inclusions (and impression fossils) from different periods and regions have gained great importance for dating molecular trees (e.g. Ronquist et al., 2012; Wahlberg et al., 2013; Misof et al., 2014; Espeland et al., 2018; McKenna et al., 2019; Matsumura et al., 2019). The importance of a reliable identification of extinct species has recently been underlined in a study on a Triassic impression fossil, †Leehermania prorova Chatzimanolis, Grimaldi and Engel, which was originally placed in the polyphagan beetle family Staphylinidae (Chatzimanolis et al., 2012). It was transferred to the suborder Myxophaga based on in-depth phylogenetic analyses with cladistic and Bayesian methods (Fikáček et al., 2020). This transfer has a strong impact on future dating of the megadiverse beetle suborder Polyphaga and the series Staphyliniformia (see e.g. Misof et al., 2014). The "peril of dating beetles" was also emphasized by Toussaint et al. (2017).
Amber fossils can provide crucial insights into the emergence of biogeographic patterns (e.g. Rust et al., 2010; Abellán et al., 2011; Jałoszyński, 2012; Barden and Ware, 2017; Cai et al., 2017; Poinar, 2018; Gimmel et al., 2019; Mashimo et al., 2019). For example, a re-evaluation of taxonomic assumptions for fossils attributed to or near the ant genus Leptomyrmex resolved decades of speculation about the biogeographic origin of the Australasian radiation of the genus, clearly demonstrating a Neotropical origin for the clade with probable Oligocene trans-Antarctic dispersal (Boudinot et al., 2016; Barden et al., 2017). More recently, a Baeomorpha (Hymenoptera, Rotoitidae) from mid-Cretaceous Burmese amber extended the distribution of fossil Rotoitidae from northern Laurasia to the Southern Hemisphere, where the two extant genera are restricted to Chile and New Zealand (Huber et al., 2019). A Gondwanan origin of Zoraptera was suggested based on fossil material embedded in Burmese amber (Mashimo et al., 2019). Whereas Myanmar was traditionally considered part of Laurasia (Mitchell, 1993; Boucot et al., 2013), it is now assumed that the West Burma Block, which includes the early Cretaceous fossil site of Burmese amber, was originally attached to Gondwana (e.g. Metcalfe, 2017; Poinar, 2018).
A major potential of Mesozoic and Tertiary amber fossils is the documentation of faunal, floral, and climatic shifts. A brief comparison of the late Cretaceous flora and fauna preserved in Tilin amber (central Myanmar) with early Cretaceous and Eocene biota was presented by Zheng et al. (2018), describing for instance a pre-Cenozoic transition from stem group to crown group Formicidae. A snapshot of Middle Eocene northern European staphylinine rove beetle diversity was recently presented by Brunke et al. (2019) and compared to the extant fauna of this area. However, comprehensive comparisons between biota documented by amber fossils of different world regions and geological periods are still missing, although taxonomically comprehensive work has been published (e.g. Rasnitsyn and Quicke, 2002). Systematic reviews are accumulating for different groups, such as the ants (e.g. LaPolla et al., 2013; Barden, 2017) and various beetle taxa (e.g. Krell, 2006; Abellán et al., 2011; Legalov, 2012, 2020; Peris, 2020), as are studies on the palaeobiology of predators, parasitoids and parasites, plant-arthropod associations, and the diversification of insects based on the disparity of mouthparts (e.g. Labandeira, 2002, 2006a; Labandeira and Currano, 2013; Ponomarenko and Prokin, 2015; Nel et al., 2018). Because of the rapid accumulation of new fossil taxa in recent years (e.g. Ross, 2019a, b, 2020), the need for comprehensive reviews is acute. Anatomical insights can in principle be gained from amber fossils. However, the preservation of internal soft parts of fossil insects is very rare, and Cretaceous amber fossils with well-preserved internal organs have not been described yet. Exceptions are mummified tissues in amber documented by Grimaldi et al. (1994), and indirect flight and leg muscles observed in some aculeate fossils in Burmite (B. Boudinot, pers. obs.). Specimens of Tertiary insect fossils with preserved internal organs are known but also extremely rare. Soft parts of cantharid and nitidulid beetles embedded in Miocene Dominican amber were reported by Henwood (1992a), and the same author found exceptionally well-preserved flight muscles in fossil dipterans, extending even to cell ultrastructure (Henwood, 1992b). The almost complete anatomical reconstruction of the Eocene stem group strepsipteran †Mengea tertiara (Menge) (Pohl et al., 2010; Hünefeld et al., 2011) remains a unique exception. It was shown that the internal structures of this species, as documented with µ-computed tomography (µ-CT), do not differ distinctly from those of extant strepsipteran males of Mengenillidae. If turbidity is present in the amber, e.g. in the case of the larva of †Mengea, or if important syninclusions prohibit grinding of the amber for better examination of the fossils, µ-CT provides an opportunity to examine amber fossils. This was emphasized by Soriano et al. (2010), although the potentially destructive effects of the procedure were also noted in that study. Very good results were repeatedly obtained at DESY (Deutsches Elektronen-Synchrotron, Hamburg, Germany) (e.g. Pohl et al., 2010, 2019) and at other facilities (e.g. Bai et al., 2016, 2018), usually without recognizable negative effects on the specimens. However, a discoloration such as a darker stripe in the amber can sometimes be caused by X-radiation during scanning. Methods for preparing small-sized 3D amber samples were discussed in detail by Sidorchuk and Vorontsov (2018).
These techniques distinctly facilitate and improve the study of very small insects or other minute organisms.
In the case of the species described here, the use of µ-CT was not necessary because the amber was very clear, and all morphological details of the new fossil were visible without µ-CT scanning.
Burmese amber has provided new insights into a number of different areas, and our ability to describe morphological details of the fossils is constantly improving. This raises the question of whether accumulating information necessarily improves our phylogenetic understanding of the investigated group, such as Strepsiptera, which is increasingly well represented in Cretaceous amber (e.g. Pohl and Beutel, 2016). The phylogenetic placement of this small and highly specialized holometabolous order (c. 600 spp.) (e.g. Beutel, 2008, 2013) was strongly disputed for a long time (e.g. Pohl and Beutel, 2013) but was reliably settled recently. A systematic position in a clade Coleopterida, together with the megadiverse Coleoptera, is supported by large morphological and molecular data sets, including transcriptomes and genomes (Wiegmann et al., 2009; Niehuis et al., 2012; Boussau et al., 2014; Misof et al., 2014; Peters et al., 2014). Strepsiptera is mainly characterized by the endoparasitic ecology of the larvae and usually also of the females (Stylopidia, ca. 97% of all species). Linked with this specialized lifestyle are the extremely miniaturized first-instar larvae and the extreme sexual dimorphism (e.g. Pohl and Beutel, 2008).
†Protoxenos janzeni Pohl, Beutel et Kinzelbach, also embedded in Baltic amber, was identified as the sister group of all remaining Strepsiptera (Pohl and Beutel, 2016), even though it is much younger than †Cretostylops engeli Grimaldi and Kathirithamby from Burmese amber.
The new fossil strepsipteran is described and its morphology documented. Its phylogenetic placement is evaluated by adding the observed structural features to previously published data matrices (Pohl and Beutel, 2005, 2016; Bravo et al., 2009). The character evolution linked with the early splitting events in the order is discussed, with special emphasis on the unique condition of the copulatory apparatus.
Material
The single specimen (holotype) of †Heterobathmilla kakopoios Pohl and Beutel, gen. et sp. n. in Burmese amber is from the well-known deposit at the southwest corner of the Hukawng Valley (26°20′N, 96°36′E) in Myanmar, dated to 98.79 ± 0.62 Ma (e.g. Shi et al., 2012; Poinar, 2018). It is part of the collection of the Zoologisches Forschungsmuseum Alexander Koenig in Bonn, Germany (accession number ZFMK-STR-00000104). In order to document the head of the fossil in frontal view, the amber was trimmed with a razor blade and then polished with emery papers and mud chalk.
Specimen imaging
The piece of amber was temporarily mounted on a glass microscope slide using glycerine and covered with a glass coverslip. A Leica MZ 12.5 stereomicroscope with magnifications up to 100× was used for observations (Leica Microsystems GmbH). Images of the entire specimen were taken with a Canon EOS 7D equipped with a Canon MP-E 65 mm macro lens fitted to a StackShot macro rail (Cognisys). Images of the head and the terminal segments were taken with a Canon EOS 7D equipped with a Nikon M Plan 20 ELWD microscope lens and an adjustable extension bellows, illuminated with two flashlights. An Axio Zoom.V16 with a Plan NeoFluar Z 1.0× objective (Carl Zeiss Microscopy GmbH) was used for the tarsi, and the images were saved as CZI files. An Axiovert S100 inverted fluorescence microscope, equipped with a Spot CCD camera (Visitron Systems GmbH), was used for autofluorescence images of the terminal segments of the fossil.
Stacks of several partially focused images were recorded to overcome the limited depth of field. Single images from the Axio Zoom.V16 were exported with ZEN 2.3 lite. All images were stacked with Zerene Stacker (Zerene Systems LLC). They were processed using Adobe Photoshop CS6 (Adobe Systems Incorporated) and arranged as plates. Adobe Illustrator CS6 (Adobe Systems Incorporated) was used for lettering. Drawings are based on the micrographs and on observations made with the Leica MZ 12.5 stereomicroscope.
Phylogenetic analyses
The characters of †Heterobathmilla were added to the previously published data matrix using Mesquite (Maddison and Maddison, 2018). In addition, the characters of more recently described extant and fossil taxa, such as Bahiaxenos, †Kinzelbachilla, †Phthanoxenos, and the larvae of †Mengea and †Eocenoxenos, were added to the matrix (Henderickx et al., 2013; Engel et al., 2016; Pohl and Beutel, 2016; Pohl et al., 2019). Parsimony analyses were carried out with NONA version 2.0 (ratchet, 1000 replicates) (Goloboff, 1999) and TNT (Goloboff et al., 2008).

Diagnosis. Male. Distinguished from all other strepsipteran families on the basis of the following features: size less than 6 mm; antennal foramen distinctly widened, about twice as wide as the diameter of the scapus; labrum free, with paired anterolateral processes; galea present.
Nomenclature. In their original descriptions, †Kinzelbachilla and †Phthanoxenos were placed in separate families, †Kinzelbachillidae and †Phthanoxenidae, by Pohl and Beutel (2016) and Engel et al. (2016), respectively. Considering the combination of a unique apomorphy (dorsal antennal processes) with a unique plesiomorphy (presence of parameres), it appears tempting to erect a new family for †Heterobathmilla. The monophyly of other groups would not be affected by this taxonomic rank. However, as the results of the analysis (see below) suggest a very close relationship of †Heterobathmilla with †Kinzelbachilla and †Phthanoxenos, we prefer to place the three described species in a single family.
The question of priority for †Kinzelbachillidae and †Phthanoxenidae was complicated by online-early article distribution compounded by a misleading publication date. Specifically, the name †Phthanoxenidae was provided in the online advance copy of Engel et al., with the date given as "Available online 13 November 2015". However, this version cannot be considered published because the work does not contain evidence of registration in ZooBank (Articles 8.5.3 and 9.9, International Commission on Zoological Nomenclature, 2012). Consequently, as specified by Article 21.9 of the Code, the nomenclatural acts of Engel et al. are to be considered valid and available based on the publication date of the print version, which is recorded as March 2016 in the text of Volume 58 of Cretaceous Research. Although this would imply priority for †Kinzelbachillidae, validly published on 4 January 2016 in an online-early version registered in ZooBank, the managing editor of Cretaceous Research confirmed that Volume 58 was mailed on 1 December 2015, conferring availability of †Phthanoxenos at that time under Article 21.4, which addresses incorrect dates. For these reasons, we recognize †Phthanoxenidae as the senior synonym of †Kinzelbachillidae syn. n.
Key to adult males with main focus on fossil families

Etymology. The name is derived from "Heterobathmie", used by W. Hennig (1974) to characterize a mosaic pattern of plesiomorphic and apomorphic features ("Heterobathmie der Merkmale"). The ending -illa is used in several generic names in the order, for instance in Mengenilla or Bohartilla.
Diagnosis. Differs from all other known extant or extinct Strepsiptera by the bipectinate antenna and the presence of parameres, the latter subdivided into a proximal and a distal portion.
Compound eyes with small ommatidia not separated by chitinous bars, without microtrichia between them. Labrum well developed and free, with distinct paired distolateral processes. Mouthfield sclerite not developed. Anterolateral antennal foramina large, almost one third as wide as head anterior to compound eyes. Eight-segmented antenna bipectinate; antennomeres 3-8 with lateral flabellae, and 4-8 with additional dorsal processes. Mandibles strongly developed, with distinct primary and secondary articulation; apically pointed distal part almost forming a right angle with the robust base. Maxilla not divided into cardo and stipes; lacinia absent; galea distinct; maxillary palp one-segmented, with an oviform sensillum. No individual labial elements recognizable; labial palp absent.
Anterior pronotal border strongly convex (Fig. 2). Mesonotum with a distinct posterolateral process; anterior and posterior borders concave (Fig. 2). Discrimen of metaventrite reaching about half the length of the sclerite (Fig. 4); distinct anteriorly converging furrows reach beyond the anterior end of the discrimen; additional oblique lines originate from the furrow and extend anterolaterally towards the lateral edge of the ventrite. Metatrochanter with oblique distal edge and apical excavation for the insertion of the metafemoral base (Fig. 4); femora with slightly convex edges; tibiae slightly curved; tarsi very slender, with diameter decreasing very slightly distally; basitarsomere of fore- and midlegs about twice as long as the following segment; tarsomeres 2 and 3 about equally long; tarsomere 4 shortest; tarsomere 5 about twice as long as 4 (Fig. 4). R1 reaching the distal three-quarters of the wing and then merging with the anterior margin (Fig. 2); r2 short, restricted to the distal region, nearly reaching the tip of the wing; r3 short; r4 longer; r5 reaching the posterior margin, strongly pigmented basally; ma1 and ma2 long, almost reaching the wing margin; ma3 extending from the base to the hind margin, or at least ending very close to it; cua1 extending from the base to the hind margin.
Abdomen slightly less than half as long as the entire body, subparallel (Figs 3, 4). Sternites I-VII between 3 and 3.5 times as wide as long. Sternum VIII rounded posteriorly. Penis with a very slender, apically pointed distal part (Figs 8A, B, 9); parameres appearing divided into a proximal and a distal portion by a transverse membranous zone, apically strongly sclerotized and pointed (Fig. 9).
Trees with the same number of steps were obtained with TNT (traditional search). The strict consensus tree of both analyses was identical (Fig. 10).
Discussion
As in several other groups, our knowledge of the past diversity of Strepsiptera has been distinctly improved by amber fossils. A total of 41 fossil strepsipteran species have been described to date (see Kogan and Poinar [2019] for a current checklist). Of these, 20 species are from Miocene Dominican amber, 13 species from Eocene Baltic amber, two are compression fossils from the Eocene Green River Formation (USA), one species is from Eocene Fushun amber (China), one from the Eocene brown coal of the Geisel Valley (Germany), and one from Colombian copal (Holocene-Pleistocene). It is noteworthy that all known strepsipterans from the Cretaceous belong to the stem group. Members of extant groups, such as representatives of Stylopidia, are completely absent from late Mesozoic deposits. In contrast, various representatives of this large strepsipteran subunit, which have endoparasitic adult females, are recorded from the Eocene, including Corioxenidae, Elenchidae, Myrmecolacidae and Stylopidae (see below). This suggests that the transition of the females to permanent endoparasitism did not occur before the Paleogene.
Usually, adult males are discovered as amber inclusions, for instance †Mengea tertiara (Menge, 1866; see also Kinzelbach and Pohl, 1994) and †Protoxenos from Baltic amber, or †Cretostylops Grimaldi et Kathirithamby, †Kinzelbachilla and †Phthanoxenos Engel and Huang from Burmese amber (Kathirithamby and Engel, 2014; Pohl and Beutel, 2016; Engel et al., 2016), or a species of the extant family Myrmecolacidae from Eocene amber from north-eastern China (Wang et al., 2016b). Amber fossils of adult females of the basal groups of Strepsiptera remain unknown (in contrast to Kathirithamby [2018]). A tiny primary larva from Burmese amber was discovered recently, as was a free-living late instar, very likely an immature stage of †Mengea tertiara, from Baltic amber.
Amber fossils have greatly contributed to the understanding of the evolution of Strepsiptera in the late Mesozoic and the Cenozoic. In our analyses, the relatively young †Protoxenos (†Protoxenidae) from Eocene Baltic amber was clearly confirmed as sister to all remaining extinct and extant groups. A slender distal mandibular part (char. 42.1) and a broad fan-shaped hind wing (char. 62.1) are unambiguous apomorphies of Strepsiptera excl. †Protoxenidae. An additional potential apomorphy of this clade is the distinctly reduced size and less robust body. A large, free and bi-lobed labrum (chars. 38.0, 39.1), very robust mandibles (char. 42.0), eight-segmented antennae (char. 21.1) with lateral flabellae on antennomeres 3-7, maxillae with a galea (char. 47.0), and hind wings longer than wide (char. 62.0) are strepsipteran groundplan features preserved in †Protoxenos. It is conceivable that males of †Protoxenos were still taking up food, in contrast to extant strepsipterans. However, gut contents could not be reliably identified in the Eocene species. Despite the clarified first split in Strepsiptera, the fossil evidence concerning the area of origin remains ambiguous.
†Protoxenos was discovered in Baltic amber, whereas the oldest known species are preserved in Cretaceous Burmese amber.
representatives of Strepsiptera. †Heterobathmilla is unambiguously placed in a clade with †Kinzelbachilla and †Phthanoxenos, supported by a distinctly widened antennal foramen (char. 19.0). Another potential synapomorphy is the position of the oviform sensillum near the tip of the maxillary palp (char. 56.1). However, this feature, like many others, is not recognizable in †Phthanoxenos and is thus ambiguous at this node. Another shared feature is the presence of an anterolateral labral process (char. 39.1). However, this structure is also present in †Protoxenos, and is thus likely a groundplan feature of the order. The monophyly of †Phthanoxenidae and the absence of "parameres" in all hitherto known fossil or extant members of the order (including †Protoxenos) raise the question of the evolutionary background of these lateral lobes of the penis ("aedeagus").
The term "paramere" has been contentious for generations (e.g. Snodgrass, 1935Snodgrass, , 1957Crampton, 1938), and its evolutionary identity as a distinct structure without homologs in hemimetabolous groups is further called into question by †Heterobathmilla. Two anatomical observations support the interpretation that these paired structures are gonopods, or genital appendages of other Hexapoda (Boudinot, 2018). First, the "parameres" of †Heterobathmilla correspond positionally to the gonopods of other Holometabolaincluding Hymenoptera, Raphidioptera, and Mecopterida. Second, the apparent transverse line of the structure matches the coxa-stylus pattern of abdominal appendages IX across the winged insects (Boudinot, 2018), a pattern also consistent among Paleozoic Pterygota Prokop et al., 2020). The primary point of uncertainty is that the penis hinges on abdominal sternum IX, similar to other Strepsiptera, rather than within the gonopods as observed in those Holometabola that retain these appendages (Boudinot, 2018). Despite this, it remains plausible that the apparent "parameres" of †Heterobathmilla and "lateral lobes" of Coleoptera are at least partially homologous with gonopods, albeit integrated developmentally. Although re-expression of genital appendages in the case of †Heterobathmilla would be more parsimonious, multiple independent losses cannot be ruled out considering the presently known stem group taxa. Moreover, parallel losses of structures across the phylogeny, such as wing vein abscissae and muscles, are known to occur based on rigorous statistical modeling (e.g. Klopfstein et al., 2015). Further resolution on this issue of genital homologies and functional morphology may be provided by µ-CT scanning, as in Pohl et al. (2010), if the specimen is fine enough to have preserved muscular tissue. The position of †Cretostylops in the stem group remains uncertain. The genus is placed in a trichotomy with the †Phthanoxenos - †Heterobathmilla - †Kinzelbachillaclade and the monophylum comprising †Mengea and extant Strepsiptera. Presently no features are available to solve this phylogenetic ambiguity. The placement of the extant Bahiaxenos from Brazil (known only from the male holotype) in an unresolved trichotomy with †Mengea and crown group Strepsiptera, is likely due to the lack of any characters of immature stages or females. As a whole, fossils have not fundamentally changed our picture of the phylogeny and evolution of Strepsiptera but have provided tantalizing clues to ancient morphological transformations. Relationships in crown group Strepsiptera are not affected at all by extinct taxa (e.g. Henderickx et al., 2013;see also Fig. 10 and Pohl and Beutel, 2005: Fig. 28).
The age of origin of Strepsiptera, the late Carboniferous (or earliest Permian), can be assessed based on the well-established sister group relationship with Coleoptera and the fossil record of beetles (e.g. Ponomarenko, 1969; Niehuis et al., 2012; Boussau et al., 2014; Beutel et al., 2018). The splitting event was dated as ca. 300 Mya in Misof et al. (2014) and as ca. 350 Mya in McKenna et al. (2019). Unambiguous beetle fossils from the Lower Permian are usually comparatively large and well-sclerotized species (e.g. Ponomarenko, 1969; Kirejtshuk et al., 2014). In contrast, strepsipterans, except for the minute primary larvae (Pohl, 2000, 2002; Pohl et al., 2018), the cephalothorax of endoparasitic females (e.g. Richter et al., 2017) and male puparia, are weakly sclerotized and fragile. Consequently, the chances of being preserved as impression fossils are low. The vast majority of the fossils are amber inclusions, even though specimens in Eocene oil shale or Eocene limestone are also known (Kinzelbach and Pohl, 1994; Antell and Kathirithamby, 2016), and a single primary larva was found in Eocene brown coal from the Geisel valley in Germany (Kinzelbach and Lutz, 1985; Pohl, 2009). As already pointed out in Pohl and Beutel (2016), the early evolutionary history of the order in the late Palaeozoic and earlier Mesozoic remains completely in the dark. The Carboniferous genus †Stephanastus, represented by an incompletely preserved holotype, was assigned to a new extinct order †Skleroptera by Kirejtshuk and Nel (2013). This taxon was interpreted as a subgroup of Coleopterida, supposedly the sister group of Strepsiptera + Coleoptera (Kirejtshuk and Nel, 2013). However, this hypothesized placement of the fossil was rejected as unfounded and very unlikely by Beutel et al. (2019). A recently discovered Cretaceous primary larva is nearly identical with recent first instars. It underlines a remarkable evolutionary stasis in the order over ca. 100 million years, and it discards earlier alleged findings of strepsipteran immatures, which very likely belong to the beetle family Ripiphoridae (Batelka et al., 2018). As far as adults are concerned, the oldest known fossils differ only in details from the extant forms (e.g., Pohl and Beutel, 2016). Unless well-preserved impression fossils from the early Mesozoic or Permian are discovered, the morphological gap between Strepsiptera and Coleoptera will remain large.
Burmese Cretaceous amber has turned out to be a rich source of new extinct species of Strepsiptera and other insects. Since the description of the first fossil strepsipteran from Burmese amber in 2005, three new species have been discovered. However, except for one primary larva, only males have been found. The minute 1st instar larva has revealed a Mesozoic origin of endoparasitism. Moreover, the amber fossils have contributed greatly to the understanding of the evolution and morphological diversity of the group in the late Mesozoic and Cenozoic. Even with the clarified position of †Protoxenos from Eocene Baltic amber as sister to all remaining strepsipteran groups, the area of origin of Strepsiptera remains ambiguous.
Fig. 9. Terminal segments with external genitalia, dorsal view. Abbreviations: dpa, distal portion of paramere; ppa, proximal portion of paramere; pe, penis; pr, process; sVII-sIX, sternites VII-IX; tVII-tIX, tergites VII-IX; tl, transverse line; X, abdominal segment X. Scale bar 100 µm.
Fig. 10. Strepsipteran phylogeny, strict consensus tree of 39 minimum-length trees obtained with NONA; Bremer indices are given as bold numbers beneath branches; (1) Baltic amber (minimum age approx. 42-49 Ma, probable maximum age 54 Ma [Odin and Luterbacher, 1992; Ritzkowski, 1997]), (2) Burmese amber (98.79 ± 0.62 Ma [Shi et al., 2012; Poinar, 2018]), (3) Dominican amber (15-20 Ma [Iturralde-Vinent and MacPhee, 1996; Iturralde-Vinent, 2001]).
"Biology",
"Environmental Science"
] |
A Jitter Injection Signal Generation and Extraction System for Embedded Test of High-Speed Data I/O
An instrument for on-chip measurement of transceiver transmission capability is described that is fully realizable in CMOS technology and embeddable within an SoC. The instrument can be used to inject and extract the timing and voltage information associated with signals in high-speed transceiver circuits that are commonly found in data communication applications. At the core of this work is the use of ΣΔ amplitude- and phase-encoding techniques to generate both the voltage and timing (phase) references, or strobes, used for high-speed sampling. The same technique is also used for generating the test stimulus for the device under test.
Introduction
Transceiver circuits find a multitude of uses in inexpensive, widely held data communications applications such as smart phones and tablets. Section IV describes the working principle of the PTI and its two main modes of instrument operation. Section V simulates some practical examples using the PTI. Experimental results for both discrete and CMOS IC implementations are provided in Section VI. Finally, conclusions are drawn in Section VII.
System overview
A probabilistic test instrument is proposed and implemented in this paper for transceiver testing. It consists of two main parts: (1) a multi-channel programmable bit-stream generator, and (2) a probability extraction unit. Figure 1 shows the system architecture of this test instrument. On the top left-hand side of this figure are the data signal generation blocks used for stimulating a DUT with a pre-defined test pattern, such as a pseudo-random bit sequence (PRBS). The data pattern is held in a 1-bit circular memory and routed to a single flip-flop. This signal generator will be denoted as channel 1. Right below this, in channel 2, is another set of blocks that are used to define the timing of the edges of the incoming data stream, denoted T_Data. This allows for arbitrary test signals (e.g., jitter) to be introduced on the edges of the incoming data stream. The timing of the edge T_Data is produced through a phase-modulating encoding process performed in software using a ΣΔ modulator and a digital-to-time converter (DTC), followed by storage in a 1-bit circular memory. The output of this memory is then passed to a time-mode filter (TMF) realized using either a PLL [4], a DLL [6] or a time-mode SC reconstruction filter [1]; more on this below. In a similar vein, both the level crossing of the sampling process and its timing edge, denoted by parameters V_TH and T_STRB, are realized using the bottom two sets of sub-blocks in channels 3 and 4. While T_STRB is realized using a phase-modulated ΣΔ encoding process followed by a TMF, V_TH is created using a voltage-sensitive ΣΔ encoding process followed by an amplitude-mode filter (AMF), which could be RC in nature. Through the appropriate encoding, both T_STRB and V_TH can be independently controlled, as illustrated in Fig. 2, to establish the sampling point in the voltage vs. time space using a voltage-based comparator [11]. The second part of this test instrument is the probability extraction unit (PEU). The PEU is used to extract the statistical information associated with the DUT output through the application of two counters, N and N_Total.
ΔΣ-Encoding Signal Generation Methods
Sigma-delta modulation is used to encode a bandwidth-limited signal into a 1-bit digital sequence. Although the schematics show a sigma-delta modulated sequence being applied directly to the input of an analog circuit, each pulse of this sequence must be reconstructed in both time and amplitude. In other words, each pulse goes through a zero-order hold operation. This zero-order hold operation is the role of the 1-bit digital-to-analog converter (DAC) or digital-to-time converter (DTC) block that follows the sigma-delta encoding process. As a 1-bit signal is a poor approximation of any real-valued signal with almost-infinite precision, a quantization error results. To minimize the effect of this quantization error, ΔΣ encoding shifts the quantization error to an out-of-band frequency region where it can be filtered out using a frequency-sensitive filter, recovering most of the original signal. An important aspect of ΔΣ encoding is that it is not limited to voltage or amplitude levels, as seen along the y-axis in Fig. 2, but can also be used to encode the signal information along the time axis. These two insights are used as the basis of the signal generator used in the instrument proposed here. Let us consider these two aspects.
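To make the encoding step concrete, the following Python sketch (our illustration, not the authors' implementation) applies a first-order ΣΔ loop to a band-limited tone; the sample rate, tone frequency and loop order are assumptions chosen for a high oversampling ratio.

```python
import numpy as np

def sigma_delta_1bit(x):
    """Encode samples in [-1, 1] into a +/-1 bit-stream (first-order loop)."""
    acc = 0.0                        # integrator state
    prev = 0.0                       # previous quantizer output (feedback)
    bits = np.empty(len(x))
    for i, s in enumerate(x):
        acc += s - prev              # integrate the quantization error
        bits[i] = 1.0 if acc >= 0 else -1.0   # 1-bit quantizer
        prev = bits[i]               # feed the quantized output back
    return bits

fs, f0 = 1_000_000, 1_000            # 1 MHz sampling of a 1 kHz tone (assumed)
t = np.arange(4096) / fs
bits = sigma_delta_1bit(0.5 * np.sin(2 * np.pi * f0 * t))
```

In-band, the bit-stream's local average tracks the tone; the quantization error is pushed to high frequencies, which is what the reconstruction filter later removes.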
Continuous-Time Voltage Signal Generation
N bits of a band-limited digital signal that is ΔΣ encoded can be indefinitely rotated, converted into corresponding voltage levels (e.g., V_SS and V_DD) and applied to a voltage-sensitive analog frequency-selective reconstruction filter, as depicted in Fig. 3. The filter removes the out-of-band quantization noise and converts the digital signal into a corresponding continuous-time voltage signal. As this method of signal generation has been well studied and published widely, we refer our readers to [9] for further details.
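Continuing the encoder sketch above, the reconstruction step can be illustrated by low-pass filtering the circulated 1-bit stream; the filter order and cutoff are assumptions, and np.tile stands in for rotating the circular memory a few times.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# 'fs' and 'bits' come from the first-order sigma-delta sketch above.
b, a = butter(4, 5_000 / (fs / 2))            # 4th-order low-pass, 5 kHz cutoff
recovered = filtfilt(b, a, np.tile(bits, 4))  # filter the rotated bit-stream
# 'recovered' approximates the original tone: the out-of-band quantization
# noise has been removed by the frequency-selective filter.
```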
Continuous-Time Phase Signal Generation
N bits of a band-limited digital signal that is ΔΣ encoded can be indefinitely rotated, converted into corresponding phase levels (e.g., ϕ_MIN and ϕ_MAX) and applied to a phase-sensitive frequency-selective reconstruction filter, as depicted in Fig. 4. The filter removes the out-of-band quantization noise and converts the digital signal into a corresponding continuous-time phase signal. These types of filters are less familiar to most, so we will spend some time discussing them here. At this time, the authors know of three different ways to implement a phase-sensitive frequency-selective filter circuit: (i) a phase-locked loop, (ii) a delay-locked loop and (iii) a time-mode filter circuit. The genesis of this circuit principle was first described in [16, 17].
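Continuing the same sketch, the digital-to-time conversion step can be illustrated by mapping each bit of the ΣΔ stream to one of two phase levels; the clock rate and phase alphabet are assumptions, not the paper's actual values.

```python
import numpy as np

# 'bits' comes from the sigma-delta sketch above.
f_clk = 50e6                                   # reference clock rate (assumed)
phi_min, phi_max = 0.0, np.pi / 2              # two-level phase alphabet (assumed)
phase = np.where(bits > 0, phi_max, phi_min)   # 1-bit phase encoding of the stream
edge_times = (np.arange(len(bits)) + phase / (2 * np.pi)) / f_clk
# Low-pass filtering 'phase' in the time domain (the TMF's role) yields the
# smooth, continuous-time phase signal carried on the clock edges.
```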
(i). Phase-Locked Loop (PLL): A PLL is a basic element of electronics and is used in many different ways, such as jitter suppression, clock synchronization, etc. A PLL can also be used as a phase-sensitive frequency-selective reconstruction filter [17]. A block diagram of a PLL is shown in Fig. 5. In the feedforward path of the PLL are a phase detector, a loop filter with transfer function F(s) and a voltage-controlled oscillator. In the feedback path, a divide-by-N_PLL frequency divider block can be seen. By selecting the appropriate loop filter F(s) and frequency-divider ratio N_PLL, the input-output phase transfer function Θo/Θi can be adjusted so that the desired frequency response is achieved. This principle was first used to generate test signals for evaluating the frequency response behavior of PLLs [17] and was further expanded to high-order PLL design in [5].
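As a hedged illustration of why a PLL can serve this role, the sketch below evaluates the closed-loop phase transfer of an assumed second-order type-II loop (not the authors' design); the natural frequency and damping values are illustrative.

```python
import numpy as np
from scipy.signal import TransferFunction, bode

wn, zeta = 2 * np.pi * 100e3, 0.9   # natural frequency and damping (assumed)
# Standard second-order closed-loop phase transfer:
#   H(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2)
H = TransferFunction([2 * zeta * wn, wn**2], [1, 2 * zeta * wn, wn**2])
w, mag_db, _ = bode(H, w=np.logspace(4, 8, 200))
# mag_db is ~0 dB in-band and rolls off above ~100 kHz: phase modulation in
# the signal band passes through while out-of-band quantization noise is cut.
```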
(ii). Delay-Locked Loop (DLL): In much the same manner as the PLL, a type-II DLL, shown in Fig. 6, can also be used as a time-mode reconstruction filter circuit. Here the DLL consists of a phase-detector front-end, a loop filter with transfer function F(s) and a delay line driven by a reference clock signal. The first application of a 1st-order DLL as a reconstruction filter can be traced back to [13]. In [6], the application of high-order DLLs as adjustable frequency-selective reconstruction filters was introduced. (iii). Time-Mode Filter (TMF): Another method that one could use to realize a phase-sensitive frequency-selective filter is the time-mode resonator circuit shown in Fig. 7. This design is very different from a PLL or DLL implementation as it relies on the manipulation of several phase signals using several feedback loops [1]. This realization uses several voltage-controlled delay lines and several switched-capacitor phase-to-voltage conversion circuits. This particular area of filter design is rapidly developing and one can expect more area-efficient realizations in the very near future.
In general, the IC area required to implement the TMF may be considered excessive in many cases. However, as PLLs and DLLs are blocks commonly found on an IC, a better approach is to time-multiplex these resources as part of the TMF. Alternatively, the TMF could be implemented on the load board of the device interface board of the ATE. While this degrades the fully integrated approach, it does provide greater flexibility in the selection of the TMF characteristics, thereby optimizing the signal generator performance levels.
Jitter Injection System/Timing Strobes
The right-hand side of Fig. 1 shows the architecture of the multi-channel programmable bit-stream generator (BSG). It consists mainly of three modules: a software-programmable bit generator, a pattern generator and a hardware implementation of a filter bank consisting of voltage-sensitive and phase-sensitive (time-mode) filters. A multiplexing circuit consisting of a D-type flip-flop is also included for re-timing purposes.
Collectively, these circuits realize a pattern signal generator whose phase is controlled by an independent source as depicted in Fig. 8(a).
The BSG has four channels. Channel 1 of the programmable BSG is used for stimulating a DUT with a pre-defined test pattern such as a pseudo-random bit sequence (PRBS). The data pattern is held in a 1-bit circular memory and routed to the D input of the flip-flop. Right below this is channel 2, where another set of blocks is used to define the timing of the edges of the incoming data stream, denoted as T_Data. This allows for arbitrary test signals (e.g., jitter) to be introduced on the edges of the incoming data stream. The timing of the edge T_Data is produced through a phase-modulating encoding process performed in software using a ΣΔ modulator and a digital-to-time converter (DTC), followed by storage in a 1-bit circular memory. The output of this memory is then passed to a time-mode filter (TMF) realized using either a PLL, DLL or time-mode reconstruction filter, as described in Section III. The phase-encoded signal is then passed to the clock input of the D-type flip-flop. Together, channels 1 and 2 produce an arbitrary digital pattern with various edge timings for exercising the DUT.
Channel 3 generates a threshold voltage (V_TH) whose level is programmable in software. It consists of a ΣΔ modulator and a 1-bit DAC followed by a 1-bit circular memory. The output is passed through a voltage-mode filter (VMF), which is implemented by some active-RC circuit. Similarly, channel 4 generates the timing strobes (T_STRB) associated with a sampling process. Both V_TH and T_STRB are depicted by the horizontal and vertical dashed lines shown in Fig. 2 for the sampling process. The T_STRB is generated using the ΣΔ phase signal generation technique, in much the same way as described for the jittered clock signal of channel 2. These two strobes (V_TH and T_STRB) will be used by the probability extraction unit to extract the CDF associated with the DUT output. Through the appropriate encoding, both T_STRB and V_TH can be independently controlled to establish the sampling point in the voltage vs. time space using a voltage-based comparator.
Fig. 4. Synthesizing a phase-modulated signal using a circular ΣΔ-encoded bit-stream, a digital-to-time converter and a phase-sensitive frequency-selective filter circuit.
Probability Extraction Unit (PEU)
A block diagram of the PEU developed in this work is illustrated in Fig. 9. It is based on the work found in [8] but has much earlier roots, such as that related to multichannel analyzers [10]. The PEU consists of four main blocks: (1) a voltage-sensitive comparator, (2) a bank of two counters, (3) a voltage threshold generator, and (4) an edge timing strobe generator. The comparator compares the DUT output edge with respect to the threshold voltage value and sets the output to a logic "1" state if the DUT output is above the threshold or a logic "0" state otherwise. The second block in the PEU is the counter block. Two 16-bit counters count the logic value presented to them at the time instant set by the timing strobe generator. One counter, attached to the output of the comparator, counts the number of rising edge transitions from the DUT that are ahead of the timing strobe T_STRB (whose count value is denoted by N), and the second counter counts the total number of timing strobe rising edge transitions (whose count value is denoted as N_Total). Using the output values of these two counters over a range of timing strobe locations, it is possible to extract the DUT output cumulative distribution function (CDF) as a function of the strobe time T_STRB, i.e., CDF(T_STRB) = N/N_Total (1). The third block is a digitally controllable voltage threshold generator. This circuit is used to set the threshold level V_TH for the comparator. The fourth block is a digitally controllable timing strobe generator used to set the sampling instant of each counter, T_STRB. A simple drawing to capture the nature of the extraction unit is shown in Fig. 8(b).
The operation of the PEU can be seen in Fig. 10. Here an incoming signal is compared against two different threshold levels at the moment of the rising edge of the timing strobe signal. The comparator output is then presented to a 16-bit counter, which increments its stored value N at the moment the rising edge of the timing strobe occurs. On completion of N_Total iterations per timing strobe, the CDF value for that edge time can be deduced according to Eq. (1).
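The counting procedure can be illustrated with a small Monte-Carlo sketch (our illustration, with an assumed Gaussian edge-jitter model): for each strobe position, the fraction of DUT edges that arrive before the strobe is exactly N/N_Total from Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(0)
n_total = 4096                                  # strobes per CDF point (N_Total)
edges = rng.normal(0.0, 160e-12, n_total)       # jittered DUT edge times, 160 ps RMS
strobes = np.linspace(-500e-12, 500e-12, 41)    # swept strobe positions T_STRB
cdf = np.array([(edges <= t).mean() for t in strobes])  # N / N_Total per strobe
# Differentiating 'cdf' over the strobe axis recovers the jitter PDF.
```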
Circuit Details for Critical Decision-Making Elements
In this section, the critical decision-making circuits are described. Other than the time-mode reconstruction filter circuit, the only other elements that make up the PTI are D-type flip-flops and a comparator circuit.
D-Type Flip-Flops
The performance of the PEU is highly dependent on the performance of the D-type flip-flop. It is desirable to have a flip-flop with a very low setup time. The rising-edge-triggered D-type flip-flop can be implemented in various ways; one implementation is an edge-triggered flip-flop using two level-sensitive latches in cascade [12], as shown in Fig. 11. The first stage is often called the master stage and the second stage is called the slave stage.
When the clock, denoted by Clk, is low, transmission gates T1 and T4 are on and T2, T3 are off. The D input flows through to point A, with its complement appearing at point B. When the clock goes high, T1 and T4 shut off, while T2 and T3 turn on. This causes the value of the B node to be passed to the Q output in inverted form. At the same time, the B node value is passed back onto itself, thereby holding its value constant. When the clock goes back low, T1 and T4 turn on. The value of D on the Q output is then circulated around the two inverters of the slave section back onto itself. When the clock returns to the high state, the Q output can then take on a new logic value, depending on the D input. The complementary output of Q is also available at the output of the inverter section of the slave stage.
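A behavioral sketch of this master-slave action (our illustration, not a circuit-level model of Fig. 11) shows that Q updates only on the rising clock edge:

```python
def dff_steps(d_seq, clk_seq):
    """Master-slave D flip-flop: master transparent when Clk=0, slave when Clk=1."""
    master = slave = 0
    out = []
    for d, clk in zip(d_seq, clk_seq):
        if clk == 0:
            master = d        # master follows D; slave holds the old value
        else:
            slave = master    # rising edge: slave captures what the master held
        out.append(slave)
    return out

# Q changes only where clk steps 0 -> 1, sampling D just before the edge.
print(dff_steps([1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1]))  # -> [0, 1, 1, 0, 0, 1]
```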
Comparator
The comparator used in the PEU compares the DUT output signal with a reference threshold voltage V_TH, which is varied with time during a measurement cycle. At the appropriate sampling instant, the DUT output signal is compared to the reference threshold V_TH; if greater, the output is set to a logic high level. Conversely, when the DUT output voltage is lower than V_TH, the output is set low. An op-amp with a latch can be used as a comparator but is limited to low-frequency applications. The most important parameters for a comparator design are its input sensitivity (i.e., the minimum input signal with which the comparator can make a decision) and its propagation delay [12]. It is desirable for our application to have a comparator with a small propagation delay but a high sensitivity. A comparator generally consists of a preamplifier and a decision circuit involving a positive feedback loop. The preamplifier is used to amplify the input signal to improve the comparator sensitivity. It also isolates the input of the comparator from switching noise, often called kickback noise, coming from the positive feedback stage when the decision circuit switches state [12]. The decision circuit determines which of the input signals is greater. An implementation of the comparator is shown in Fig. 12.
Working Instrument Principle
The proposed test instrument has two main modes of operation. The first mode of operation is for measuring the jitter distribution of a DUT, such as a DLL, PLL or clock and data recovery (CDR) unit, subject to a known level of input jitter. Such a test setup is used to perform a jitter tolerance test. The second mode of operation is for measuring the input-output excess phase response of a time-sensitive channel of a DUT, commonly referred to as a jitter transfer function test. The working principles for these two modes of operation are discussed below.
Jitter Tolerance Measurement
In the first mode of operation we can measure the jitter distribution of a DUT such as a PLL or DLL subject to some input phase-modulated test stimulus, i.e., perform a jitter-tolerance-type test. Both the multi-channel programmable BSG and the PEU are used to perform this measurement. In this mode of operation, all four channels of the multi-channel programmable BSG shown in Fig. 1 are used to generate a phase-modulated test signal with a specific PRBS test pattern superimposed. Figure 13(a) illustrates the arrangement of channels 1 and 2 of the multi-channel programmable BSG, together with the input software setup for generating the Gaussian test signal. A Gaussian noise source is low-pass filtered to band-limit the wideband noise from the random number generator (RNG). This signal (consisting of real numbers) is then encoded into a one-bit sequence using a ΣΔ modulator. Subsequently, the output of the ΣΔ modulator is phase modulated using the DTC and low-pass filtered in the time domain using a TMF. The output of the TMF is a Gaussian phase-modulated clock signal denoted by T_Data.
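A minimal sketch of this software front-end (our illustration; the sample rate, filter order and scaling are assumptions) band-limits wideband Gaussian noise before ΣΔ encoding, as in Fig. 13(a):

```python
import numpy as np
from scipy.signal import cheby1, lfilter

fs = 50e6                                  # software sample rate (assumed)
rng = np.random.default_rng(1)
wideband = rng.normal(0.0, 1.0, 1 << 16)   # raw RNG output
b, a = cheby1(5, 1, 60e3 / (fs / 2))       # Chebyshev low-pass, ~60 kHz band
jitter = lfilter(b, a, wideband)           # band-limited Gaussian noise
jitter *= 160e-12 / jitter.std()           # scale to 160 ps RMS as in the text
# 'jitter' would then be sigma-delta encoded and applied through the DTC/TMF
# chain to produce the phase-modulated test clock T_Data.
```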
Once the DUT is stimulated by the phase-modulated signal, its output is passed to the PEU for the output signal characterization as described in Section V. The threshold voltage V TH and T STRB inputs to the PEU are set by channels 3 and 4 of the multi-channel programmable BSG. For a jitter tolerance test, V TH is held constant throughout the entire test.
Input-Output Phase Transfer Measurement
In the second mode of operation, the input-output phase transfer function of a time-sensitive channel of a DUT can be measured. This is performed by exciting the DUT with a 0-1 cyclic bit pattern whose clock is phase-modulated by a sine wave of a specific amplitude, phase and frequency, and measuring the amplitude of the output sinusoidal phase response of the DUT. The ratio of the output phase amplitude to the input phase amplitude provides a measure of the system gain at a specific frequency. Repeating this test over a wide range of frequencies provides a measure of the input-output phase transfer for this DUT. In this test mode, all channels of the multi-channel programmable BSG of Fig. 1 are used. Further, a sine wave of known amplitude, phase and frequency is synthesized through channel 2, as opposed to a Gaussian noise signal in the previous test setup. Figure 13(b) illustrates the test setup for a phase transfer function test. As shown in this figure, a sine wave of known amplitude, phase and frequency is encoded using the ΣΔ modulator and then phase modulated by a DTC. The phase-modulated signal is then low-pass filtered using a TMF. The output of the time-mode filter is essentially a clock signal with a sine wave modulating its phase, as depicted in Fig. 14, and it is used to modulate the 0-1 cyclic pattern from the PRBS generator, forming the test stimulus for the DUT. The output of the DUT is then sent to the PEU for signal characterization. Both the threshold voltage V_TH and T_STRB inputs to the PEU are set by channels 3 and 4 of the multi-channel programmable BSG, identical to that described in the previous test setup.
Fig. 13. System setup for performing a jitter tolerance test. A PRBS sequence whose clock signal is: (a) phase-modulated with a Gaussian distribution signal for a jitter tolerance test, and (b) phase-modulated with a sinusoidal signal for characterizing the phase response of a time-sensitive channel.
Once the CDF is derived using the PEU, the amplitude of the output phase signal is determined from the upper and lower limits of the CDF. Normalizing the output amplitude by the input amplitude provides a measure of the gain of the DUT at a particular input frequency. Repeating for different input frequencies provides a measure of the phase transfer as a function of frequency.
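A small sketch of this post-processing step (our illustration; the function name and the 1%/99% support thresholds are assumptions) estimates the gain at one test frequency from the span of the measured CDF:

```python
import numpy as np

def phase_gain(strobes, cdf, input_amplitude):
    """Estimate |H| at one frequency from a measured CDF of the output phase."""
    cdf = np.asarray(cdf)
    lo = strobes[np.argmax(cdf > 0.01)]   # lower limit of the CDF support
    hi = strobes[np.argmax(cdf > 0.99)]   # upper limit of the CDF support
    # The CDF support spans the peak-to-peak output phase excursion.
    return ((hi - lo) / 2) / input_amplitude

# Repeating phase_gain over a set of modulation frequencies traces out the
# jitter transfer function of the DUT.
```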
Test Time
A simple estimate of the test time to complete a full characterization of a transceiver operating at a bit rate F_B would be given by Test Time = N_Total × ((ϕ_MAX − ϕ_MIN)/Δϕ) × (1/F_B) + Data Transfer Time, where N_Total is the number of points collected for a single CDF point and (ϕ_MAX − ϕ_MIN)/Δϕ is the total number of CDF points collected during the test. Here ϕ_MAX − ϕ_MIN represents the input full-scale range of the transceiver and Δϕ is the desired resolution of the test. As some time is necessary to transfer each point of the CDF to the ATE, this is included in the test time as the Data Transfer Time parameter.
For a production test, such as one intended to verify that the noise level is less than some threshold, two CDF points are all that is required, one at the 25 % quartile and the other at the 75 % quartile, from which the robust mean and standard deviation [15] can be derived. Thus, the test time reduces to Test Time = 2 × N_Total × (1/F_B) + Data Transfer Time.
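The two-point idea can be checked numerically: for a Gaussian, the 25 % and 75 % quartiles give a robust mean and standard deviation via σ = (x75 − x25)/(2 · 0.6745). The two strobe values below are illustrative assumptions.

```python
from scipy.stats import norm

t25, t75 = -108e-12, 108e-12             # strobe times where the CDF hits 0.25/0.75
robust_mean = 0.5 * (t25 + t75)
robust_sigma = (t75 - t25) / (2 * norm.ppf(0.75))  # norm.ppf(0.75) ~= 0.6745
print(robust_mean, robust_sigma)          # 0.0, ~160 ps for this example
```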
System Calibration
Any instrument added into the test sequence must undergo a focused calibration process. The approach most familiar to the test engineer is to make use of the digitizer in the ATE to characterize the phase-modulated signal produced by the sigma-delta signal generator. This characterization signature would then be compared to that produced by the PEU, from which a set of calibration factors would be generated. This process would be repeated over frequency so that a set of frequency-dependent calibration factors could be derived. A more detailed description of this process is outlined in [5].
Simulation Results
In this section we present the MATLAB/Simulink simulated results for the two modes of operation of the proposed probabilistic test instrument (PTI) presented earlier. The DUT for this simulation is modeled as an ideal 6th-order PLL with a bandwidth of 100 kHz [5]. The DUT is assumed noiseless.
Jitter Tolerance Measurement
A known Gaussian jitter is injected into a DUT and its output is measured using the probabilistic test instrument. Figure 13(a) shows the test setup for this measurement. The time-mode filter (TMF) was implemented using another PLL. A known Gaussian noise was band-limited using a low-pass Chebyshev filter with a bandwidth of 60 kHz. The band-limited Gaussian noise with a standard deviation of 160 ps was then encoded into a one-bit stream using a 5th-order ΣΔ modulator [3], followed by a 90-degree phase encoding with 50 % duty cycle. The phase-encoded signal was then injected into the DUT. The output of the DUT was then passed to the PEU. The threshold voltage V_TH was fixed at 2.5 V as the output of the DUT swings from 0 V to 5 V. By varying T_STRB from 0 to 45 degrees in steps of 1 degree (corresponding to a time change of 1000 ps), we can generate the CDF of the DUT output with the help of the PEU, as shown in Fig. 15(a). Differentiating the CDF, the PDF of the DUT output can be derived and compared to the pre-programmed PDF of the input signal, as shown in Fig. 15(b). We can see from this figure that the standard deviation of the DUT output measured using the PTI is approximately 160 ps, which is essentially the same as the input level. This is not surprising as the input noise bandwidth was well within the bandwidth of the PLL, and was left unaffected by the magnitude roll-off of the PLL.
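The CDF-to-PDF step can be sketched numerically (our illustration, with an ideal Gaussian CDF standing in for measured data; the strobe range is an assumption):

```python
import numpy as np
from scipy.stats import norm

strobes = np.linspace(-500e-12, 500e-12, 46)   # swept strobe times
cdf = norm.cdf(strobes, scale=160e-12)         # stand-in for a measured CDF
pdf = np.gradient(cdf, strobes)                # differentiate: CDF -> PDF
mean = np.trapz(strobes * pdf, strobes)
rms = np.sqrt(np.trapz((strobes - mean) ** 2 * pdf, strobes))
print(rms)   # ~160 ps, recovering the injected jitter level
```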
Phase-Frequency Response of a Time-Sensitive Channel
The phase response of the DUT was simulated using the test setup shown in Fig. 13(b). The time-mode filter (TMF) was again implemented using a PLL. A sine wave with an amplitude of 0.2 and a 0.45 LSB DC offset, having frequencies ranging from 10 kHz to 1 MHz, was phase encoded using the phase signal generation technique. For each sine wave, a 90° phase encoding with a 50 % duty cycle is used. The phase-encoded bit stream is then used to stimulate the DUT and its output is passed to the PEU. The reference threshold voltage V_TH of the PEU was fixed at 2.5 V. The timing reference T_STRB was varied from 0 to 45 degrees in steps of 1 degree. Taking the ratio of the amplitude of the output to input signals for different input frequencies of the sine wave results in the excess phase response of the DUT shown in Fig. 16. When compared to the expected PLL response, we see that they are in very good agreement.
Experimental Results
Two separate implementations of the PTI were constructed. One version was implemented using all discrete components and another involved the use of a PEU constructed in a 130 nm CMOS process. The two implementations provide separate confirmation of the proposed approach.
Discrete-Implementation of the PTI
The test instrument was prototyped with discrete components using the configuration shown in Fig. 17. Part (a) illustrates the general block diagram of the test setup and part (b) illustrates the arrangement of the instruments together with the PCB prototype. Here the circular scan-chain registers are loaded with bits derived from a 3rd-order ΣΔ modulator with a noise transfer function having a Butterworth filter characteristic. The bandwidth of the ΣΔ modulator was designed to be 781 kHz with a sampling frequency of 50 MHz. The resulting oversampling ratio (OSR) is 32. The DTC was constructed using a 90° phase step with a 50 % duty cycle, so that the bit map replaces every 0 bit from the ΣΔ modulator with a 1100 code and every 1 bit with a 0110 code [2]. The DUT was implemented in hardware using a 6th-order PLL designed to approximate a Gaussian filter response. The bandwidth of the PLL was selected to be 300 kHz. The counters were implemented in software. A second-order DLL was used to extract the timing strobe signal from the second circular scan-chain register.
The experimental setup consists of five parts: (1) an HP 81130A pattern generator with two channel outputs, (2) a 6th-order PLL with a 100 kHz bandwidth acting as the DUT, (3) a PCB containing the PEU and a second-order DLL acting as the time-mode reconstruction filter (TMF), (4) an oscilloscope, and (5) a computer for some signal processing.
The DUT was subject to a repeating 1-0 pattern with a clock jitter set to 160 ps. The corresponding PDF of the DUT output was captured using the PEU and is displayed in Fig. 18. It is also compared to the simulated result found using MATLAB/Simulink. As is evident, the results are quite similar. The RMS jitter had a standard deviation of 170.1 ps as compared to the 160 ps found from simulation.
The phase response of the DUT was also measured using the PTI and an external phase-modulated clock source, as shown in Fig. 19. We can see from the plot that the frequency response of the DUT measured using our instrument has a gain offset. This is because of the limited resolution of the phase signal generator, T_STRB. After adding a gain offset, we can see from this plot that both tests give very similar frequency responses. We can see that the bandwidth of the DUT is approximately 100 kHz. For comparison, the expected DUT response derived from MATLAB simulation is also provided in the plot of Fig. 19.
CMOS IC Implementation of the PTI
The PEU portion of the PTI was designed and implemented in IBM 130 nm CMOS technology. The PEU was designed to operate at a clock rate of 114.5 MHz. A block diagram of the experimental chip, following the architecture previously described, is shown in Fig. 20. The voltage-sensitive comparator was implemented as a standard two-stage amplifier and the counters were implemented with ripple counters. When the test is completed, the results are stored in the two counters and are serially shifted out and stored in separate off-chip logic registers. In addition, the CMOS chip has a jitter-injector circuit included to act as an on-chip DUT. It is essentially a voltage-controlled delay line, as depicted in the top left-hand corner of Fig. 20.
A microphotograph of the chip is not shown here, as it is covered with a solid layer of metal. Instead, a plot of the Cadence layout is shown in Fig. 21. Here the outline of the various blocks can be seen. The size of the PEU and the timing strobe control circuit is a combined 0.24 mm². It should be noted that this particular layout used very area-inefficient D-type flip-flops as part of the circular memory banks. A better approach would be to use a memory array, but these were not available to us at the time. On return from fabrication, the chip was mounted on a custom-built PCB and interfaced to the test equipment for testing purposes, in much the same way as illustrated in Fig. 17.
With the DUT implemented in silicon alongside the PEU, the input bit-sequences for the voltage and timing strobe circuits were generated with the HP 81130A pattern generator. The reference clock of 114.5 MHz was derived from an external crystal oscillator. Each input bit pattern was a sigma-delta modulated version of the digital inputs, D_Threshold and D_Time. The digital code D_Threshold was kept constant, while D_Time was varied over its full range. This resulted in the timing strobe being moved across a 90-degree phase range. For each test, the output values of the counters were captured off-chip in external digital registers located in an FPGA. The count cycle begins by setting the first two bits of each counter output high. By lining up these two bits, one can deduce the start of the count value beginning with the MSB.
Fig. 19. The measured frequency response behavior of the input phase-sensitive DUT as extracted by the discrete implementation of the PTI.
As the timing strobe is moved across the DUT eye diagram, the edge locations of the DUT output signal can be accurately determined. Figure 22 illustrates the extracted CDF (dashed green line) corresponding to the DUT output signal when subjected to a series of different timing strobe locations varied across 90 degrees. Mathematically, the PDF corresponding to this signal can be computed by taking the derivative of the CDF signal, resulting in the solid blue line in the plot of Fig. 22. It is important to note that this PDF is an ideal best-fit curve derived from the 4-point CDF and not the actual PDF, as this is not known.
This same measurement was made using an Agilent 9000B oscilloscope, which is capable of displaying a histogram or PDF of a random signal. These results are superimposed on the plot shown in Fig. 22 as a dotted red line. While the signal paths to each measuring instrument are slightly different, i.e., additional buffers are included to drive off-chip signals, the results show similar behavior (same shape), albeit with the on-chip PEU having about twice the RMS noise variation (specifically, the on-chip noise standard deviation was found to be 5.4 degrees, or equivalently 524 ps, while the off-chip noise variation was about 226 ps). The extra noise suggests the PEU is slightly noisier than the high-performance oscilloscope. The exact reason for this error is yet to be determined. One possible reason may be that the comparator is noisier than first thought. Nonetheless, the results do confirm that the PEU is tracking the underlying CDF/PDF of the incoming signal with a 99.7 % confidence interval accuracy of ±1.5 ns. The signal resolution is much finer than 1 ns but, on account of the comparator noise level, is not usable below 1.5 ns.
Fig. 20. Block diagram of the experimental CMOS chip and the external signal sources. Note that an on-chip jitter generation circuit is included on the IC to act as an on-chip DUT.
Fig. 21. Screen capture of the Cadence layout of the PEU implemented in a 130 nm CMOS process. The active area of the PEU is 0.24 mm².
Fig. 22. A comparison of the measured on-chip PDF and the PDF captured by an external high-performance oscilloscope. Also shown is the on-chip measured CDF.
Conclusion
The design and implementation of a probabilistic test instrument used for injecting and extracting timing information associated with many of today's communication systems was presented. With a simple change in software code, we showed how to change the sampling edge T_STRB, the threshold voltage reference level V_TH and the edge of the data stimulus T_Data associated with some arbitrary input bit sequence using various ΣΔ encoding techniques. These signals were also shown to enable extraction of the CDF/PDF of the DUT output. Two implementations were used to demonstrate the feasibility of the proposed approach. One involved a fully discrete implementation of the PTI and the other involved the realization of a fully integrated PEU in a 130 nm CMOS process. While the performance of the discrete implementation showed excellent correlation with the bench setup, the CMOS implementation suffered from higher-than-expected noise levels, a result that we believe is related to the performance of the on-chip comparator. Future implementations will pay closer attention to the design of the CMOS comparator.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"Computer Science",
"Engineering"
] |
Assessing Pedestrian Environment: A Review on Pedestrian Facilities in Rajshahi City Corporation Area
Traffic congestion and a lack of public pedestrian space affect most urban areas in Bangladesh. The provision of pedestrian facilities has not kept pace with population growth. Consequently, pedestrians are at greater risk for their safety, especially in the commercial zones of large cities. In Rajshahi, walkways have been constructed recently, but the facilities are not sufficient to provide a friendly environment. Demand for the improvement of pedestrian facilities has risen for reasons such as difficulties in crossing busy intersections, conflicts between pedestrians and vehicles, physical barriers, low visibility, improper design of handicapped-accessible ramps, and so on. The purpose of the study is to assess the pedestrian environment and show to what extent the pedestrian facilities are friendly for walking. The pedestrian-friendly environment is evaluated under five criteria: Connection, Convenience, Convivial, Comfortable and Conspicuousness. Rajshahi City Corporation area is divided into four zones for a comparative view of the existing pedestrian planning condition: a residential zone, a commercial zone, a recreational zone and an industrial zone. To fulfil the objectives, a questionnaire survey was conducted on user perception of the pedestrian environment. The sample size was 267, based on the population of the RCC area in 2017, with a 95% confidence level. Sampling was random. This study reveals that the pedestrian environment is not friendly everywhere and is worst in the commercial zone compared with the other three zones. Among the five criteria, the main deficiency of the pedestrian-friendly environment is Conviviality. Pedestrians' recommendations are also included. It is expected that the results will be highly useful for the improvement of walkways for efficient, adequate and safer movement of pedestrians.
Introduction
Walking is undoubtedly the most common and natural travel mode in cities. It is the primary mode of transportation, as other transportation modes require walking to support them, such as accessing public transport and changing modes of transportation [1]. The promotion of walking reduces the carbon footprint of society and minimizes energy consumption, especially the use of non-renewable energy sources [2]. Many illnesses and health issues faced by people in modern society, such as stress, can also be relieved through walking [3]. Encouraging walking can also generate economic activities and promote tourism, which increases a city's competitiveness [4]. Though walking has many benefits, people are discouraged from using footpaths. The pedestrian environment is a vital factor influencing walking. The factor that most affects pedestrians is the high speed of motorized vehicles; most car drivers ignore their obligation to give priority to pedestrians at unprotected pedestrian crossings [5]. A friendly environment can encourage pedestrians. The issue of a pedestrian-friendly environment has been of increasing importance lately in urban planning and design for reasons of social life, experiential quality, sustainability, economy and health [6]. In terms of the health benefits of walking, changes in the built environment may help people reach their physical activity goals, in addition to individually oriented behavior-change interventions, by promoting sustainable and healthy lifestyle choices. Walking environments or pedestrian environments are spaces that are supposed to facilitate walking or pedestrian movement and activities [7]. Pedestrians in the roadside environment are subjected to several factors significantly affecting perceptions of safety, comfort and convenience [8]. A pedestrian movement network can be created through reorganizing the existing streets and pavements and the construction of new walkways [9]. Recently, Rajshahi City Corporation has constructed new walkways.
In the last decade, Rajshahi has reached a high growth rate, which has led to a massive increase in traffic [10]. In Rajshahi city, walking is encouraged through the construction of a sufficient number of walkways. Pedestrians' behavior toward using walkways depends on their physical environment. A friendly environment for pedestrians is necessary to attract people. Given the benefits and advantages of promoting walking, it is worthwhile to investigate whether the city is conducive to walking. The study aims to evaluate the quality of the pedestrian environment in Rajshahi city in terms of pedestrians' perceptions and the relevant urban design qualities. Furthermore, it is interesting to examine and recommend planning and design guidelines and policies for the future pedestrian-friendly environment development of the Rajshahi City area and to create walkable communities for Rajshahi city.
Methodology
Rajshahi City Corporation area is selected for this study. Rajshahi is one of the divisional centers of Bangladesh [11] and Rajshahi City Corporation is the major divisional headquarters [12]. For the study purposes, the study area is divided into four categories, namely a residential zone, a commercial zone, an industrial zone and a recreational zone, so that a comparative view of the four zones can be presented. A questionnaire survey was conducted to capture user perception. Both primary and secondary data were collected for the study. A questionnaire was prepared for data collection. Primary data were collected through site reconnaissance, the questionnaire survey and in-depth interviews. In this study, a random sampling technique was used, as a non-random approach might not fulfill the purpose. An empirical method was used for population prediction. The total sample needed to conduct the study is about 267, with a 95% confidence level and a confidence interval of 6, calculated using a sample size calculator.
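The stated sample size can be reproduced with Cochran's formula for an unknown population proportion; with z = 1.96 (95% confidence), p = 0.5 (maximum variability) and a 6% margin of error matching the stated confidence interval, the result rounds to the 267 respondents used here:

```python
# Cochran's sample size formula: n = z^2 * p * (1 - p) / e^2
z, p, e = 1.96, 0.5, 0.06
n = z**2 * p * (1 - p) / e**2
print(round(n))   # 267 (266.8 before rounding)
```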
Secondary data were collected from online sources and various development authorities. Different government organizations (RDA, RCC, etc.) associated with the development and management of footpaths or walkways were visited to collect various documents and data related to footpath or walkway development in Rajshahi City. Data collected from the questionnaire survey and the related authorities were analyzed using Excel and SPSS software. Findings were noted down and represented through charts, figures, tables, etc.
Recommendations on pedestrian facilities were also collected.
Data Analysis and Result
This section investigates how the pedestrian environment can be made more pedestrian-friendly and the criteria used in pedestrian planning. Various qualities can contribute to a pedestrian-friendly environment, and a range of research carried out on this topic is reviewed below. Sarkar has proposed that an 'ideal' pedestrian environment would enable different activities to happen without causing any conflicts amongst various users such as cyclists, pedestrians and drivers [13]. According to Sarkar, a successful pedestrian environment depends on three criteria: i. a 'user-friendly environment' which can offer various amenities to pedestrians, in which designers should aim to make the streetscape user-friendly not only for pedestrians but also to cater to the needs of vehicles; ii. a 'unique environment' which has high 'legibility'; iii. a 'visually stimulating environment' in which stimulating elements should be incorporated into the design of walkways. According to the Urban Design Compendium (2000), the environment of pedestrian routes can be assessed by five criteria: connection, convenience, convivial, comfortable and conspicuousness [14]. The assessment under these criteria is discussed below.
Connection
In order to assess the connectivity of the pedestrian routes, respondents were asked the following question: Do the pedestrian routes connect the places where they want to go?
The answer varies from zone to zone. The following charts (figure 1) show the results for the connectivity of footpaths in different zones of the RCC area. In the residential zone, 81% of respondents said that the pedestrian routes connect the places where they want to go. Only 19% of respondents said that the pedestrian routes do not connect the places where they want to go. This indicates that the existing connectivity of the pedestrian routes in residential areas is quite good. In the commercial zone, 73% of respondents responded with a positive answer and only 27% with a negative answer. This also denotes that the connection among the pedestrian routes in commercial areas is quite good.
In the recreational zone, 80% of respondents gave a positive answer and 20% a negative answer, denoting that the connection rate in the recreational areas is also good. The maximum positive percentage is found in the industrial zone (96%); therefore, the rate of connection among the pedestrian routes there is very good. Comparing the four zones, the respondents' answers indicate that the connection rate among the pedestrian routes is better in the industrial zone than in the other three zones.
Convenience
The convenience of walkways is evaluated on the basis of three criteria: the straightness of pedestrian routes, the presence of broken-down places along the routes, and long waiting times to cross the road. Respondents were asked the following questions: Are routes direct and are crossings easy to use? Do pedestrians have to wait more than 10 seconds to cross roads? The answer varies from zone to zone. The following charts show the results for different zones of the RCC area. According to figures 2 and 3, most pedestrian routes are straight in all four zones. However, the scenario for the other two criteria is different. Broken-down places are found in all four zones; the percentage is higher in the residential (40%) and commercial (38%) zones. On the other hand, waiting time is high in the commercial zone only. Overall, the convenience of pedestrian routes in the industrial zone is better than in the other three.
Convivial
The conviviality of the pedestrian routes is assessed on the basis of four criteria: the attractiveness of the footpaths; livable, endurable and tolerable feelings; the security of the area; and the satisfaction level with existing amenities. In order to assess the conviviality of the pedestrian routes, respondents were asked the following questions: Are routes attractive, well lit and safe, and is there variety along the street?
The answer varies from zone to zone. The following charts show the results. According to figures 4 and 5, most walkways are attractive, and the percentage is higher in the industrial (96%), recreational (84%) and residential (77%) zones. However, the walkways fail to give livable, endurable and tolerable feelings in most of the zones. Security of the area is quite good except in the commercial zone.
On the other hand, an above-50% satisfaction level with the existing amenities of footpaths is found in the recreational and industrial zones. Therefore, the conviviality of walkways is better in the industrial zone than in the others.
Comfortable
The comfort of the walkways is assessed on the basis of four criteria: the satisfaction level with surface quality, comfort with the width of the footpath while walking, a feeling of security with the width of the footpath, and the presence of any obstacles during walking. In order to gauge pedestrians' comfort, respondents were asked the following questions: What is the quality and width of the footway, and what obstructions are there?
The following charts show the results. In the residential zone (figure 6), about 79% of pedestrians are satisfied with the surface quality, 62% of respondents said that they feel comfortable with the width of the footpath while walking, and 59% of respondents feel secure with the width while walking. In contrast, the situation regarding obstacles is very bad: about 84% of respondents said that they face obstacles.
In the commercial zone (figure 7), footpaths do not provide a pleasant environment for walking compared with the others. Though satisfaction with surface quality and comfort of the footpath is above 50%, the percentage for security is below 50%. Moreover, most pedestrians (80%) face obstacles while walking. In the industrial areas (figure 9), surface quality is very good (96%), but the other three criteria do not score as well; security regarding width and the presence of obstacles are not up to the mark. Overall, the recreational zone provides more comfort than the other three.
Conspicuousness
The conspicuousness of the walkways is also evaluated on the basis of four criteria: the ease of finding and following the pedestrian routes, proper lighting at night to see the footpath, a clear view of approaching vehicles when crossing or turning, and the satisfaction level with the provision of signage. In order to assess conspicuousness, respondents were asked the following questions: How easy is it to find and follow a route? Are there surface treatments and signs to guide pedestrians?
The following charts show the results (figures 10 and 11).
Most criteria indicate a friendly environment for walking, except for the provision of signage. The overall percentages for ease of finding and following the pedestrian routes, proper lighting at night to see the footpath, and a clear view of approaching vehicles when crossing or turning are good in the RCC area. The percentage for a clear view of approaching vehicles when crossing or turning is higher in the recreational zone, and the percentage for ease of finding and following the pedestrian routes is higher in the industrial zone. Besides, more than 70% of respondents report proper lighting to see the footpath at night in three zones, the exception being the recreational zone.
Pedestrian's Recommendation for Better Environment
The second portion of the questionnaire concerns the recommendations of respondents. Pedestrians are asked to recommend facilities that are necessary for the betterment of the walking environment, such as natural shade, facilities for differently abled people, drinking water facilities, emergency facilities, dustbins, seating arrangements, lighting facilities and others. Multiple responses are allowed. The chart (figure 12) shows that the largest percentage (23%) is for dustbin facilities. The second most requested facility is natural shade, with 19%. The demand for drinking water facilities (18%) is slightly lower than for natural shade. Seating arrangements are also recommended by 15% of respondents. The shares for emergency facilities and lighting facilities are 3% and 5% respectively, and the percentage for other facilities is 8%. According to the respondents, other facilities mean railing facilities and public toilet facilities.
Conclusion and Recommendation
According to the questionnaire survey results, the condition of the commercial zone is worse than that of the other three. Pedestrian schemes implemented within the study areas are connected but weak in conviviality, conspicuousness, comfort and convenience. Pedestrian planning in Rajshahi city seems to focus only on getting people from one place to another, neglecting the quality of walking journeys and the promotion of walking as an alternative transportation choice. Planning in Rajshahi City has not fully considered pedestrian planning as a form of transport planning when compared to railways and motor vehicles. No specific design standards are followed. Although the Rajshahi City Corporation authority has started to recognize the importance of pedestrian planning, only piecemeal projects have been launched to alleviate the congested pedestrian environment in urban areas.
A common shortcoming found in the whole study area is the lack of separation of the footpath from the road by railing or a buffer zone. There are many problems such as lack of facilities, lack of proper maintenance, street vendors, obstacles etc. This study recommends the following steps:
i. A feasible improvement and enhancement plan for the current physical pedestrian environment is highly recommended.
ii. Rather than very small-scale provisions, pedestrian linkages with the necessary facilities should be taken into account for better walkability.
iii. Walkways should be provided according to the necessity of the area. An area-wise approach should be taken so that walking can be promoted for all types of trips.
iv. Social mobility impacts should also be considered, and further modification should be made according to different local contexts.
v. Obstructions such as wastage, street vendors, illegal structures etc must be removed.
vi. Railing or a buffer zone should be provided for segregation of the footpath from the road.
vii. Rules and regulations should also be introduced to regulate parking on footpaths, street vendors, and users' behaviour on walkways.
viii. Maintenance of footpaths is very important; proper maintenance can enhance both walkability and the pedestrian environment.
This study will help to adopt a Proactive Action Plan for Comprehensive Pedestrian Planning in Rajshahi.
"Economics"
] |
Performance analysis of ethylene-propylene diene monomer sound-absorbing materials based on image processing recognition
In order to study the performance of rubber sound-absorbing materials, this study processed the surface of ethylene-propylene diene monomer (EPDM) rubber sound-absorbing materials based on image recognition. Simultaneously, in this study, microscopic images were obtained from macroscopic rubber materials, and the images were processed to become standard images with certain characteristics. In addition, this study combines image processing to obtain pictures related to sound absorption performance. In the identification of rubber sound-absorbing materials, this study used EPDM rubber as the material, and studied the influence of various factors on the sound absorption performance of rubber sound-absorbing materials from the technical point of view and obtained the corresponding processed images. Through research, it is found that the sound-absorbing materials of this study have good sound-absorbing effects based on the control of relevant process conditions, and the image recognition and processing functions can be applied to the research of rubber sound-absorbing materials.
Introduction
Noise can have a negative impact on human health. In lighter cases, noise affects people's quality of life, for example causing fatigue, discomfort and irritability, reducing labor productivity, and degrading communication quality. In more severe cases, it damages the human auditory organs, causes deafness, and has adverse effects on the nervous and cardiovascular systems. Particularly strong noise can even cause mental illness and death. It is therefore necessary to reduce noise at the source or to use sound insulation materials to reduce the impact of noise on people. Among these measures, rubber sound-absorbing materials are a very common choice [1].
The rubber-based polymer is composed of macromolecules with a chain structure. Generally, such a polymer chain has a diameter of several angstroms but a length of several thousand, tens of thousands, or even hundreds of thousands of angstroms, so the long-chain molecules easily curl into random coils, which gives the polymer its flexibility. Due to the long-chain structure, the molecular weight is not only high but also polydisperse. In addition, the chains can carry different side groups, plus branching, cross-linking, crystallization, orientation, copolymerization, etc., giving the polymer a multiplicity of motion units [2]. Therefore, compared with small molecules, the molecular motion of the polymer can be roughly divided into two sizes of motion units, that is, large-sized units and small-sized units. The multi-level structure of the rubber polymer and the kinematic characteristics of the flexible molecular chain give the material a special high viscoelasticity, high internal damping, and molecular designability [3]. As a sound-absorbing material, it is characterized by a wide relaxation time spectrum, broad sound-absorption bandwidth, good sound absorption performance, and good in-service performance. What is more remarkable is that when a longitudinal acoustic wave in air is introduced into the rubber elastic body, the vibration direction of the mass points deviates from the direction of propagation of the sound wave due to the shear deformation caused by the vibration of the polymer elastomer material. This shearing elastic force can change the direction in which the sound propagates, generate transverse waves, and increase the propagation path, which is beneficial to the improvement of the sound absorption coefficient [4]. In addition, compared with other sound-absorbing materials, polymer materials are easier to process by foaming, pressing, and extrusion into designed sound-absorbing structures, which facilitates the simultaneous introduction of damping and other sound-absorbing mechanisms into sound-absorbing materials for sound-absorbing performance design [5].
At present, the design of the sound absorption properties of polymers mainly focuses on the structural design of the polymer chain, such as the soft-segment to hard-segment ratio of the molecular chain, the molecular chain configuration, and the entanglement mode. The main purpose is to enhance the damping properties and intrinsic sound absorption properties of polymer materials. Then, by modifying the polymer material, for example by adding a bubble-forming filler or applying another foaming process, the polymer material acquires a certain amount of microcavities, and the sound absorption performance of the material is improved by friction [6]. The rubbers used to make sound-absorbing materials are styrene butadiene rubber, neoprene rubber, natural rubber, polyurethane, and the like. Because of the poor damping properties (energy conversion properties) of these rubbers, they rely mainly on the cavities to attenuate the acoustic signal, while the material itself contributes less to the attenuation of the underwater acoustic signal. This type of material is particularly suitable for underwater acoustic absorption, and it has the advantages of light weight and easy molding compared with materials filled with metal powder. The shortcoming is that its performance as a universal sound-absorbing material in air is not high, and the elastomer rubber itself has disadvantages such as poor flame retardancy and susceptibility to aging [7].
The above research can provide new ideas for developing new sound-absorbing materials; therefore, this research has important significance for solving the problem of noise pollution. Combined with the above review, it can be found that the conductive phase/piezoelectric phase/elastomer piezoelectric damping sound-absorbing composite is a new technology, which is very promising because of its high internal friction, high damping and sound absorption coefficient, and excellent comprehensive performance. But there are still some problems. First, the overall damping and sound absorption performance needs to be further improved. Second, due to the large number of phases and the long preparation process, the influence of the phases on the composite properties is not clear. This study analyzes rubber sound-absorbing materials based on image analysis to address the shortcomings of traditional rubber sound-absorbing materials.
Test methods
In this study, a moderately sized deketoximine type room temperature vulcanized silicone rubber matrix 107 was used as a prepolymerized monomer to prepare a matrix for piezoelectric damping sound-absorbing composites. Studies have shown that the greater the elasticity of the matrix polymer, the better the sound absorption performance of the matrix and the composite, while the excessive elasticity causes other properties to decrease. Therefore, when designing a piezoelectrically damped sound-absorbing composite material, it is desirable that the matrix has a higher elasticity to facilitate the sound-absorbing damping performance, and it is not desirable to reduce other mechanical properties due to excessive elasticity [8]. In this section, the prepared RTV substrate is modified to achieve a comprehensive improvement of flame retardancy, heat resistance, electrical resistivity, and mechanical properties, and is an alternative substrate for the preparation of piezoelectric damping sound-absorbing composite materials.
The powder material was dried in a vacuum oven at 105°C for 5 h to constant weight and then weighed on the analytical balance. The drum containing the 107 base glue prepolymer was opened, and the calculated amount of glue was weighed out according to Table 1 by the reduction method. It was poured into a three-necked flask and treated under agitation at 220°C for 6 h. After the water was sufficiently removed, the heating was stopped, and 120# gasoline was added at 2-3 times the volume of the glue, with stirring continued until uniform. The desired auxiliary was added, stirring was continued for 1-2 h, the mixture was poured out, and the bottle was sealed. When needed, the bottle is opened, the mold is injected, and a solidified RTV blank sample is obtained by standing at room temperature for 1-2 days. The above experimental procedure was repeated to obtain a series of RTV samples [9].
Firstly, all pre-added powders (nano-SiO2, D4-nano-SiO2, DBDPE/Sb2O3, DBDPO, Al(OH)3) are dried in a vacuum oven at 105°C for 5 h to constant weight and weighed on the analytical balance. After that, the calculated amounts are put into the grinder. Then, the RTV base prepolymer is shaken, the cap is opened, and the prepolymer is injected into the grinder. The grinder is started, the temperature is set at 80-120°C, and the powder and base rubber are dispersed and kneaded for 30-60 min. The mixture is then transferred into a three-necked flask and, under stirring, 120# gasoline is added at 2-3 times the colloid volume. After the dispersion is uniform, the mixture is decanted, some of the solvent is dried off, and the colloid is placed in the grinder again. Calculated amounts of a crosslinking agent, a coupling agent, a catalyst, and the like are added to the ground colloid, and the mixture is ground for 60-90 min to fully crosslink the base rubber before the bottle is sealed. A piezoelectric ceramic composite having a PZT volume fraction of 20% to 60% is prepared by using an elastomeric RTV as a matrix and a conventional PZT as a piezoelectric phase. The content of the conductive phase NG is fixed to study the influence of the PZT-to-RTV ratio on the comprehensive performance of the piezoelectric damping composites. All powder materials are dried at 95°C for 2-4 h. The ingredients follow the ratio NG:piezoelectric phase:RTV prepolymer = 3:x:(100-x), with x = 0, 20, 30, 40, 50, 60 wt.%, respectively. The raw materials are ball milled on a planetary ball mill at a speed of 350 r/min for 6-8 h, and the solvent is dried off after discharge. The elastic prepolymerization mixture is further extruded and mixed on a three-roll mill, and then a wafer-shaped sample with a thickness of 10 cm is pressed with a self-made tableting mold on a powder tableting machine. After a multimeter test confirmed non-conduction, the upper and lower surfaces were coated with electrodes and polarized in silicone oil at a high voltage of 8-10 kV for 15 min to obtain a sample [10]. Table 2 shows the piezoelectric properties of a series of 3 wt.% NG/PZT/RTV composites with different PZT contents before and after secondary high-voltage polarization. The added piezoelectric powder had d33 = 434 pC/N [11].
It can be seen from the table that, even before the secondary high-voltage polarization, the series of 3 wt.% NG/PZT/RTV composites with different PZT contents have partial piezoelectric properties, although lower than those of the same systems after polarization. Meanwhile, the piezoelectric properties increase with increasing PZT content, reaching a maximum value of 68 pC/N at a PZT content of 40 wt.% [12]. Thereafter, as the PZT content continues to increase, the piezoelectric properties fall sharply. After the secondary high-voltage polarization, the piezoelectric properties of the composites are higher than before polarization, and the variation with PZT content is consistent with that before polarization. The maximum piezoelectric performance also occurs at a PZT content of 40 wt.%, at 74 pC/N. The reason why the 3 wt.% NG/PZT/RTV system has piezoelectric properties before the secondary high-voltage polarization is that the added PZT itself has a high piezoelectric coefficient, and the piezoelectric properties of a piezoelectric material do not change with the shape of the material [13]. It is speculated that the piezoelectric performance improves after the secondary high-voltage polarization because the secondary polarization voltage is much higher than the initial polarization voltage of the PZT, so the remaining domains that are not easily deflected are sufficiently polarized under high voltage, while the insulating RTV prevents breakdown [14].
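For readers reproducing the formulation series above, the fixed-NG ratio maps directly to component masses. The following is a minimal sketch (the function name and the 100 g batch size are our own illustrative choices, not from the paper) that computes the NG, PZT, and RTV masses for each value of x:

```python
# Component masses for the NG:PZT:RTV = 3:x:(100 - x) formulation series.
# The batch size and function name are illustrative assumptions, not from the paper.

def batch_masses(x_pzt_wtpct, batch_g=100.0):
    """Return component masses in grams for one batch.

    The paper fixes the conductive phase NG at 3 parts while PZT and the
    RTV prepolymer share the remaining 100 parts as x and (100 - x).
    """
    parts = {"NG": 3.0, "PZT": float(x_pzt_wtpct), "RTV": 100.0 - x_pzt_wtpct}
    total = sum(parts.values())  # 103 parts in every formulation
    return {k: batch_g * v / total for k, v in parts.items()}

for x in (0, 20, 30, 40, 50, 60):
    masses = batch_masses(x)
    print(f"x = {x:2d} wt.%: " + ", ".join(f"{k} {m:6.2f} g" for k, m in masses.items()))
```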
The surface analysis of rubber sound-absorbing materials by image processing examines the microscopic topography of a series of 3 wt.% NG/PZT/RTV composites with different raw materials and different PZT contents. Figure 1 shows an example of image processing of a rubber sound-absorbing material surface. Fig. 1a is the raw material of the rubber sound-absorbing material, and Fig. 1b is the part of the rubber sound-absorbing material selected as the research object. After material property analysis by image processing, the microscopic state of the rubber material is shown in Fig. 1c; that is, the sound absorption performance analysis can be performed in combination with image analysis.
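The paper does not spell out its processing pipeline, but the standardization step it describes (grayscale conversion, thresholding, and extraction of pore features) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic input image, Otsu thresholding, and connected-component labelling are our assumed choices for segmenting cells from the matrix; in practice the SEM micrograph would be loaded with `skimage.io.imread`.

```python
# Minimal sketch of a cell/pore segmentation pipeline for a micrograph of
# foamed rubber. The synthetic image and Otsu thresholding are assumptions,
# not taken from the paper.
import numpy as np
from skimage import filters, measure, morphology

# Synthetic stand-in for an SEM image (smooth random texture in [0, 1]).
rng = np.random.default_rng(1)
img = filters.gaussian(rng.random((256, 256)), sigma=4)

# Binarize: cells (dark pores) vs rubber matrix, via a global Otsu threshold.
thresh = filters.threshold_otsu(img)
pores = img < thresh

# Remove speckle noise so that only genuine cells are counted.
pores = morphology.remove_small_objects(pores, min_size=50)

# Label connected regions and summarize the cell structure.
labels = measure.label(pores)
props = measure.regionprops(labels)
areas = np.array([p.area for p in props])

print(f"cell count:        {len(props)}")
print(f"porosity fraction: {pores.mean():.3f}")
if len(areas):
    print(f"mean cell area:    {areas.mean():.1f} px")
```

Statistics of this kind (cell count, porosity, mean cell size) are exactly the quantities the discussion below relates to blowing-agent dosage and damping performance.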
Results
First of all, different doses of foaming agent affect the performance of rubber sound-absorbing materials during preparation. SEM photographs of 2-mm-thick rubber prepared with different foaming agent dosages are shown in Fig. 2. The analysis of Fig. 2 shows the influence of the amount of foaming agent on the microstructure of the rubber, which is an important part of rubber sound absorption research.
The vulcanization characteristic curve of the foamed EPDM rubber is shown in Fig. 3 when the vulcanization temperature is 150°C to 180°C for 25 min. Figure 4 is a microscopic topography of a series of 3 wt.% NG/PZT/RTV composites with added raw materials and different PZT contents.
The effect of different vulcanization times on the damping properties of the foamed EPDM rubber is shown in Fig. 5, where the vulcanization temperature was 165°C.
The effect of different vulcanization temperatures on the damping properties of the foamed EPDM rubber is shown in Fig. 6, where the vulcanization time is 30 min.
The effect of carbon black on the SEM image of the foamed EPDM rubber is shown in Fig. 7.
The effect of filler type on the damping properties of carbon black reinforcing foamed EPDM rubber is shown in Fig. 8. Among them, the amount of carbon black is 10 phr, and the amount of filler is 20 phr.
The effect of the type of filler on the sound absorption properties of carbon black reinforced foamed EPDM rubber is shown in Fig. 9. At this time, the amount of carbon black is 10 phr.
Discussion and analysis
It can be seen from Fig. 2 that when the amount of the foaming agent is 3 phr, the number of cells is small, and most of the cells are closed cells. As the blowing agent continues to increase, the number of cells continues to increase and the structure gradually changes from closed to open. When the amount of blowing agent is 9 phr, most of the cells are mixed open/closed pore structures. When the amount of foaming agent is 12 phr, a serious parallel-pore phenomenon has appeared, the cells collapse easily, and larger open pore structures are formed. When the amount of the foaming agent is 15 phr, the cells are dense, the cell morphology is not uniform, and pore defects occur [15]. These different cell structures all arise from the difference in the amount of blowing agent.
It can be seen from Fig. 3 that the vulcanization temperature and the vulcanization time have a great influence on the torque of the foamed EPDM rubber. When the vulcanization temperature is 150°C, the temperature is low, the scorch time is extremely short, and the rubber begins to vulcanize before foaming has started. By the time the foaming agent starts to decompose, the degree of crosslinking is already high and it is difficult to form cells, so the rubber cannot be vulcanized at this temperature. When the vulcanization temperature is 180°C, the compound becomes over-vulcanized in less than 10 min. At this point the vulcanization rate drops rapidly while the foaming agent decomposes in large amounts, so the generated gas substantially destroys the EPDM rubber matrix, causing the cells to collapse, and the resulting foamed EPDM rubber has major defects. Therefore, it cannot be vulcanized at this temperature either. Observing the curves, it can be found that when the vulcanization time is 15 min, the vulcanization plateau is essentially reached and the vulcanization rate is fast [16]. Therefore, the effect of vulcanization at temperatures of 155°C to 175°C for 15 min to 35 min on the damping and sound absorption properties of the foamed EPDM rubber will be discussed. It can be seen from Fig. 4 that PZT has a high degree of crystallization, a complete lattice, clear grain boundaries, and full grains. The pure RTV elastomer has a smooth, flat section with no defects, holes, or cracks, presenting the cross-section characteristics of an elastic material. For the series of 3 wt.% NG/PZT/RTV composites, with the increase of PZT content, the interface compatibility of 3wt.%NG/30wt.%PZT/RTV and 3wt.%NG/40wt.%PZT/RTV is the best, followed by 3wt.%NG/20wt.%PZT/RTV and 3wt.%NG/50wt.%PZT/RTV.
The effect of different vulcanization times on the damping properties of foamed EPDM rubber is shown in Fig. 5, where the vulcanization temperature is 165°C. It can be seen from Fig. 5a that the foamed EPDM rubber has the largest storage modulus when the vulcanization time is 15 min; at the other curing times, the storage modulus does not differ much. This is because when the vulcanization time reaches 15 min, the vulcanization rate is already fast and the degree of crosslinking is already high, but the foaming is still in its early stage, and the gas pressure generated by the decomposition of the blowing agent is mostly insufficient to open the crosslinked EPDM rubber matrix, so cells are difficult to form. At this time, the cells inside the foamed EPDM rubber are small and few. When the vulcanization time is 20 min, the vulcanization rate of the foamed EPDM rubber is close to that at 15 min, but the foaming agent decomposes in large amounts, and the internal pressure generated is so large that it breaks the crosslinks and finally forms many cells. Thereafter, when the vulcanization time is 25 min, 30 min, or 35 min, the vulcanization rate and the foaming speed are both fast, foaming and vulcanization reach a dynamic balance, and a large number of cells with uniform pore diameter are produced [17]. It can be seen from Fig. 5b that the vulcanization time has a certain influence on the damping properties of the foamed EPDM rubber. The loss factor reaches its maximum at about -25°C, the effective damping temperature range is -65°C to 20°C, and there is a peak at -45°C, a liquid-liquid transition peak unique to EPDM rubber. Under different curing times, the effective damping temperature ranges do not differ much, but the loss factors do. The vulcanization times corresponding to the loss factor, from large to small, are 30 min, 35 min, 25 min, 20 min, and 15 min. When the vulcanization time is 15 min, the loss factor of the foamed EPDM rubber is the lowest; at 20 min, it is slightly higher; at 25 min and 35 min it is higher still; and at 30 min there is a maximum loss factor with a value of 1.44. The reason for this phenomenon is similar to that in 3.2(A): when the energy passes through small and dense cells, there is greater energy loss. When the vulcanization time is 15 min, the generated cells are too small and the energy loss is small, so the damping performance is poor. When the vulcanization time is 20 min, the cells are too large and the energy loss is again small. However, when the vulcanization time is 30 min, the generated cells are small and dense, and when energy passes through, large internal friction is generated by friction with the cell walls, waveform conversion, relaxation, etc., so the damping performance is improved. It can be seen from Fig. 6a that the foamed EPDM rubber has the highest storage modulus when the vulcanization temperature is 160°C; the storage modulus is lower at 170°C and 155°C, and lowest at 165°C and 175°C. According to the analysis of the EPDM rubber vulcanization characteristic curve, when the vulcanization time is 30 min and the vulcanization temperature is in the range of 155°C to 175°C, the vulcanization rate does not differ much.
However, the foaming speed increases greatly with the vulcanization temperature, which has a great influence on the damping properties of the foamed EPDM rubber. As shown in Fig. 6b, the highest loss factor is 1.491 when the vulcanization temperature is 160°C. However, over the whole effective damping temperature range, the loss factor at 160°C is smaller than that at 165°C; the maximum loss factor of the foamed EPDM rubber at a vulcanization temperature of 165°C is 1.44, slightly smaller than at 160°C. Therefore, when the vulcanization temperature is 165°C, the foamed EPDM rubber has the best damping performance overall. This phenomenon occurs because, at vulcanization temperatures of 170°C and 175°C, the foaming agent decomposes extensively and the generated gas pressure is too large, destroying most of the crosslinking bonds. At the same time, a large number of cells collapse, a parallel-pore phenomenon occurs, and many defective cells are formed, which further reduces the damping performance. When the vulcanization temperature is 155°C, the decomposition rate of the foaming agent is too low, the gas pressure is insufficient to break the crosslinking bonds, fewer cells are generated, and the energy loss is small. When the vulcanization temperature is 160°C or 165°C, the decomposition rate of the foaming agent reaches a suitable level and balances dynamically with the vulcanization rate of the EPDM rubber. At this point, more cells with more uniform pore diameter are generated; when energy passes through, the loss is large, and the damping performance is further improved.
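Throughout this discussion, "storage modulus" and "loss factor" are used in their standard dynamic mechanical analysis (DMA) sense; for clarity, the standard relations (not specific to this paper) are:

```latex
% Standard DMA relations: complex modulus, storage/loss moduli, loss factor
E^{*} = E' + i\,E'' , \qquad \tan\delta = \frac{E''}{E'}
```

Here $E'$ (storage modulus) measures the elastic energy stored per cycle, $E''$ (loss modulus) the energy dissipated as heat, and the dimensionless loss factor $\tan\delta$ is the damping measure plotted against temperature in Figs. 5b, 6b and 8b.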
The effect of the amount of carbon black on the SEM photographs of the foamed EPDM rubber is shown in Fig. 7. It can be seen from the figure that when carbon black is used to reinforce foamed EPDM rubber, the cells are larger; when the amount of carbon black is 10 phr, more cells are formed and the cell walls are thicker, though uneven. As the amount of carbon black increases, the cell structure begins to deteriorate and a parallel-pore phenomenon gradually forms. When the amount of carbon black exceeds 30 phr, the cells collapse, most of the cells show the parallel-pore phenomenon with major defects, and the cell walls become thinner and thinner, which eventually leaves only a small amount of closed-cell foam inside the foamed EPDM rubber. At the same time, a large number of parallel-pore foams allow gas to escape, and the damping and sound absorption performance is degraded.
The effect of filler type on the damping properties of carbon black reinforced foamed EPDM rubber is shown in Fig. 8, where the amount of carbon black is 10 phr and the amount of filler is 20 phr. It can be seen from Fig. 8a that after the addition of rare earth oxide fillers, the storage modulus of the foamed EPDM rubber decreases rapidly, with the fastest decline for the Gd2O3-filled foamed EPDM rubber. When Y2O3, CeO2, or La2O3 fills the foamed EPDM rubber, the differences in storage modulus are small, and the addition of the inorganic filler CaCO3 has little effect on the storage modulus of the foamed EPDM rubber. It can be seen from Fig. 8b that the Gd2O3-filled foamed EPDM rubber has the highest loss peak, with a maximum loss factor of 1.5 at -20°C and an effective damping temperature range of -65°C to 20°C. It is followed by Y2O3-, CeO2-, and CaCO3-filled foamed EPDM rubber, while the La2O3-filled rubber has the worst damping effect. This phenomenon may arise because the rare earth fillers are rigid particles; the energy generated by vibration produces more heat in the foamed EPDM rubber through friction with these rigid particles, further increasing internal friction. The poor damping performance of La2O3 may be due to its poor compatibility with the EPDM rubber matrix, resulting in extensive agglomeration and hence reduced damping performance.
The damping property of the rare earth oxide Gd2O3-filled foamed EPDM rubber is best owing to the magnetic properties of Gd2O3. After Gd2O3 is filled into the rubber, external vibration passing through the rubber causes a certain deformation, which restrains the movement of the Gd2O3 magnetic powder and increases the relative displacement. At the same time, the total magnetization of the rubber changes rapidly, eventually causing macroscopic eddy current losses in the rubber. In addition, the reversible movement of the domain walls, the microscopic eddy currents generated by the rotation of the magnetization vector, and the magnetic hysteresis damping caused by the irreversible movement of the domain walls can transform vibration energy into dissipated heat, finally achieving a good damping effect.
The effect of the type of filler on the sound absorption properties of carbon black reinforced foamed EPDM rubber is shown in Fig. 9, where the amount of carbon black is 10 phr. It can be seen from Fig. 9 that the sound absorption performance of the foamed EPDM rubber is obviously improved after filler is added, with the CaCO3-filled foamed EPDM rubber showing the weakest improvement. Gd2O3-filled foamed EPDM rubber has the best sound absorption performance, with a maximum sound absorption coefficient of 0.39 at 400 Hz, 0.2 higher than without filler; it is followed by Y2O3-, CeO2-, and La2O3-filled foamed EPDM rubber. Filler is one of the factors that affect the sound absorption performance of rubber materials. Different rare earth fillers have different effects on the sound absorption properties of foamed EPDM rubber, which is related to the structure and properties of the rare earths themselves. Yttrium and gadolinium have close-packed hexagonal structures, cerium has a face-centered cubic structure, and lanthanum has a double hexagonal structure. Since yttrium and gadolinium are magnetic, their magnetic damping increases the dissipation of acoustic energy in the foamed EPDM rubber, thereby improving the sound absorbing performance. Secondly, the compatibility of the different rare earths with the EPDM rubber matrix also differs; the compatibility of lanthanum with the EPDM rubber matrix is the worst, and that of the other rare earths is not good either. Therefore, better modification methods are needed to improve the compatibility of the rare earths with the EPDM rubber matrix.
Conclusion
In order to study the properties of rubber sound-absorbing materials, this study analyzed the production process of rubber materials and obtained the following results. This study investigated the effect of carbon black on the damping properties of foamed EPDM rubber. The results show that only a small amount of carbon black increases the damping performance, while the addition of carbon black reduces the sound absorption performance; the more carbon black added, the more pronounced the decrease in sound absorption. At the same time, this study discussed the effect of filler types on the damping properties of carbon black reinforced foamed EPDM rubber. It was found that Gd2O3-filled carbon black reinforced foamed EPDM rubber has the best damping and sound absorption performance; Y2O3-, CeO2-, and La2O3-filled foamed EPDM rubber has poor sound absorption performance; and the inorganic filler CaCO3-filled foamed EPDM rubber has the worst sound absorption performance. This study also investigated the effect of filler loading on the sound absorption properties of carbon black reinforced foamed EPDM rubber. The results show that, in general, the more filler is used, the worse the sound absorption effect, with the best sound absorption obtained at a filler dosage of 10 phr. In addition, this study investigated the effect of material thickness on the sound absorption properties of carbon black reinforced foamed EPDM rubber. The results show that the thicker the foamed EPDM rubber, the better its sound absorption performance. When the thickness of the foamed EPDM rubber is 60 mm, the maximum sound absorption coefficient is 0.66 at 400 Hz, and the effective sound absorption band is 250 to 2000 Hz.
"Materials Science"
] |
A criterion for measuring the degree of connectedness in linear models of genetic evaluation
Summary - A criterion for measuring the degree of connectedness between factors arising in linear models of genetic evaluation is derived on theoretical grounds. Under normality and in the case of 2 fixed factors ($\theta$, $\phi$), this criterion is defined as the Kullback-Leibler distance between the joint distribution of the maximum likelihood (ML) estimators of contrasts among $\theta$ and $\phi$ levels respectively and the product of their marginal distributions. This measure is extended to random effects and mixed linear models. The procedure is illustrated with an example of genetic evaluation based on an animal model with phantom groups.
INTRODUCTION
The development of artificial insemination in livestock and the potential for using sophisticated statistical BLUP methodology (Henderson, 1984, 1988) gave new impetus for across-herd or station genetic evaluation and selection procedures, eg reference sire systems in beef cattle (Foulley et al, 1983; Baker and Parratt, 1988) or sheep (Miraei Ashtiani and James, 1990) and animal model evaluation procedures in swine (Bichard, 1987; Kennedy, 1987; Webb, 1987).
In this context, concern about genetic ties among herds or stations is becoming increasingly important although, from a theoretical point of view, complete disconnectedness among random effects can never occur, as explained in detail by Foulley et al (1990). Petersen (1978) introduced a test for connectedness among sires based on the properties of the "sire x sire" information matrix after absorption of herd-year-season equations. Fernando et al (1983) proposed an algorithm to search for connected groups in a herd-year-season by sire layout which was based on the physical approach of connection developed by Weeks and Williams (1964). This view was also taken up by Tosh and Wilton (1990) to define an index of degree of connectedness for a factor in an N-way cross classification. Foulley et al (1984, 1990) reviewed the definition and problems relevant to this concept. They offered a method for determining the level of connectedness between 2 levels of a factor by relating the sampling variance of the corresponding contrast under the full model to its value under a model reduced by the factors responsible for unbalancedness.
The purpose of this paper is 2-fold: i) to extend this procedure defined for a specific contrast to a global measure of connectedness among levels of a factor; ii) to set up a theoretical framework to justify such a measure on mathematically rigorous grounds.
METHODOLOGY
Our starting point is the following basic property: if observations in each level of some factor (ie $\theta$) are equally distributed across levels of another factor (ie $\phi$), BLUE estimators of the contrasts $\theta_i - \theta_{i'}$ and $\phi_j - \phi_{j'}$ are orthogonal under an additive fixed linear model with independent and homoscedastic errors. This property is lost under an unbalanced distribution, up to an ultimate stage consisting of what is called disconnectedness or confounding between the 2 factors. This suggests the idea of measuring the degree of connectedness by some distance between the current status of the layout and the "orthonormal" one, following the terminology of Calinski (1977) and Gupta (1987). The Kullback-Leibler distance $I_{12} = \int p_1(x) \ln[p_1(x)/p_2(x)]\,dx$ between 2 probability densities $p_1(x)$ and $p_2(x)$ turns out to be a natural candidate for measuring such a distance (Kullback, 1968, 1983). The model assumed is the linear model $y = X_\theta\theta + X_\phi\phi + X_\lambda\lambda + e$ [1], with additive fixed effects and NIID (normally, identically and independently distributed) residuals $e \sim N(0, \sigma^2 I_N)$, where $y$ is an $N \times 1$ data vector, $\theta$, $\phi$ and $\lambda$ are vectors of fixed effects and $X_\theta$, $X_\phi$ and $X_\lambda$ are the corresponding incidence matrices. Without loss of generality, we will assume a full rank parameterization in the vectors $\theta$ and $\phi$ pertaining to factors $\theta$ and $\phi$, resulting in contrasts such as $\theta_i - \theta_1$ and $\phi_j - \phi_1$, where $m_\theta$ and $m_\phi$ are the numbers of levels for the factors $\theta$ and $\phi$ respectively.
The vector $\lambda$ in [1] designates the remaining effects of the model. In a 2-way cross-classified design (eg mean $\mu$, "treatment" and "block"), one has $\lambda = c\mathbf{1}_N$ with $c = \mu + \theta_1 + \phi_1$, but this parameterization turns out to be more general and may include one or several extra factors.
The degree of connectedness is assessed through the Kullback-Leibler distance between the joint density $f(\hat\theta, \hat\phi)$ of the ML (maximum likelihood) estimators and the product $f(\hat\theta)f(\hat\phi)$ of their marginal densities. Several remarks are worth mentioning at this stage: 1) As shown by formulae [10], [13], [14] and [15], one may talk equivalently about connectedness between $\theta$ and $\phi$, as well as about connectedness of $\theta$ (or among $\theta$ levels) due to the incidence of $\phi$ (or connectedness of $\phi$ due to the incidence of $\theta$) in a model including $\theta$, $\phi$ and $\lambda$, using the terminology of Foulley et al (1984, 1990). This terminology is also in agreement with that taken up by statisticians (Shah and Yadolah, 1977).
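The displayed derivations leading to [10]-[15] are not reproduced here, but for jointly Gaussian estimators this distance has the standard closed form below (our restatement of the Gaussian identity, consistent with the determinant ratio discussed in the remarks that follow):

```latex
% KL distance between the joint Gaussian density and the product of marginals
D(\theta,\phi)
  = \tfrac{1}{2}\,\ln\frac{|C_{\theta\theta}|\,|C_{\phi\phi}|}{|C|}
  = \tfrac{1}{2}\,\ln\frac{|C_{\theta\theta}|}{|C_{\theta\theta\cdot\phi}|},
\qquad
C=\begin{pmatrix} C_{\theta\theta} & C_{\theta\phi}\\ C_{\phi\theta} & C_{\phi\phi}\end{pmatrix},
\quad
C_{\theta\theta\cdot\phi}=C_{\theta\theta}-C_{\theta\phi}C_{\phi\phi}^{-1}C_{\phi\theta}.
```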
2) It is interesting to notice that the variance $C_{\theta\theta\cdot\phi}$ of the conditional distribution of $\hat\theta$ given $\hat\phi$ is also the variance of the marginal distribution of $\hat\theta$ under the reduced model $(\theta, \lambda)$. This leads to viewing the ratio of determinants in [13] in the same way as Foulley et al (1990), ie using their notation $\gamma = |C_F^{-1}C_R|$, where $C_R$ and $C_F$ are the $C$ matrices pertaining to $\theta$ under the full (F) model in [1] and the reduced model (R) without $\phi$ respectively. Moreover, the $\gamma$ coefficient so defined generalizes the $\gamma_{ii'}$ coefficient of connectedness introduced by Foulley et al (1990) for the contrast $\theta_i - \theta_{i'}$; it varies similarly from $\gamma = 0$ (or $D = +\infty$) in the case of complete disconnection to $\gamma = 1$ (or $D = 0$) in the case of perfect connection (ie orthogonality).
3) Let us consider the characteristic equation $|C_{\theta\theta\cdot\phi} - k\,C_{\theta\theta}| = 0$ [18]. The roots $k_i$ of [18] are the eigenvalues of $C_{\theta\theta}^{-1}C_{\theta\theta\cdot\phi}$ or $C_F^{-1}C_R$, so that $\gamma = \prod_i k_i = k_g^{r_\theta}$, where $k_g$ is the geometric mean of the $k_i$s and $r_\theta = \dim(C_{\theta\theta})$. Hence $\ln\gamma = r_\theta \ln k_g$, which is the justification to standardize $D$ and $\gamma$ to $D^* = D/r_\theta$ and $\gamma^* = \gamma^{1/r_\theta}$, so as to take into account the numbers of elements in $\theta$ to be estimated when comparing degrees of connectedness of factors differing in number of levels. This standardization procedure is analogous to that proposed by Sölkner and James (1990) for comparing statistical efficiency of crossbreeding experiments involving different numbers of parameters. In that respect $\gamma^*$ can be interpreted as a kind of average measure of connectedness for $(\theta_i, \theta_j)$ among all pairs of levels of the factor $\theta$ due to the incidence of the nuisance factor $\phi$ for a fixed effect model (see the Appendix). Since $\gamma$ is equal to both $|C_{\theta\theta}^{-1}C_{\theta\theta\cdot\phi}|$ and $|C_{\phi\phi}^{-1}C_{\phi\phi\cdot\theta}|$, one can standardize with respect to $r_\theta$ or to $r_\phi$ depending on the factor in which we are interested.
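As a numerical companion to remarks 2) and 3), the criterion can be computed directly from the two variance matrices. The sketch below is our own illustration; the 2x2 matrices are invented numbers, not taken from the paper's example:

```python
# Sketch: degree-of-connectedness criterion from full- and reduced-model
# variance matrices of the theta contrasts. Example matrices are invented.
import numpy as np

def connectedness(C_full, C_reduced):
    """Return (D, gamma, gamma_star) with gamma = |C_F^{-1} C_R|.

    C_full:    Var of theta-contrast estimators under the full model (with phi)
    C_reduced: same under the reduced model (phi deleted)
    """
    r = C_full.shape[0]
    _, logdet_f = np.linalg.slogdet(C_full)
    _, logdet_r = np.linalg.slogdet(C_reduced)
    log_gamma = logdet_r - logdet_f          # ln|C_R| - ln|C_F|, <= 0
    D = -0.5 * log_gamma                      # Kullback-Leibler distance
    gamma = np.exp(log_gamma)                 # in (0, 1]; 1 = orthogonality
    gamma_star = np.exp(log_gamma / r)        # standardized by r_theta
    return D, gamma, gamma_star

# Invented example: unbalancedness inflates the full-model variances.
C_R = np.array([[1.0, 0.3], [0.3, 1.2]])
C_F = np.array([[1.6, 0.7], [0.7, 2.0]])
print(connectedness(C_F, C_R))
```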
4) An alternative form to [18] is $|C_{\theta\phi}C_{\phi\phi}^{-1}C_{\phi\theta} - \rho^2\,C_{\theta\theta}| = 0$ [21],
the roots of which, $\rho_i^2 = 1 - k_i$, turn out to be the squared canonical sampling correlations between $\hat\theta$ and $\hat\phi$. Since the (non zero) roots of [21] are also the (non zero) roots of $|C_{\phi\theta}C_{\theta\theta}^{-1}C_{\theta\phi} - \rho^2 C_{\phi\phi}| = 0$, they satisfy the equation $|C_{\theta\theta\cdot\phi} - (1 - \rho^2)C_{\theta\theta}| = 0$. Thus $\gamma$ can be expressed as $\gamma = \prod_i (1 - \rho_i^2)$, with $\rho_i = 0$ (ie $k_i = 1 - \rho_i^2 = 1$) for $i = r_\theta + 1, r_\theta + 2, \ldots, r_\phi$ if $r_\theta < r_\phi$, or for $i = r_\phi + 1, r_\phi + 2, \ldots, r_\theta$ if $r_\phi < r_\theta$. 5) The presentation was restricted to 2 factors $\theta$ and $\phi$. It can be extended to more than 2 classifications. For instance, with 3 factors $\theta$, $\phi$, $\psi$, one can consider the Kullback-Leibler distance between $f(\hat\theta, \hat\phi, \hat\psi)$ and $f(\hat\theta)f(\hat\phi, \hat\psi)$. The resulting $D$ coefficient can be expressed as $D = \tfrac{1}{2}\ln(|C_{\theta\theta\cdot\lambda}|/|C_{\theta\theta\cdot\phi\psi\lambda}|)$ and interpreted as the degree of connectedness of $\theta$ due to fitting $\phi$ and $\psi$ in the complete model $(\theta, \phi, \psi, \lambda)$. 6) This approach developed for models with fixed effects can be extended to mixed models as well. A first obvious extension consists of taking $\lambda$ in [1] (or part of it) as a vector of random effects. The only change to implement in computing the matrix in [7] is to carry out an absorption of the $\lambda$ equations which takes into account the appropriate structure of this vector. Actually this can be easily done using the mixed model equations of Henderson (1984).
In more general mixed models, one has to keep in mind that, from a statistical point of view, connectedness is an issue only for factors considered as fixed (Foulley et al, 1990). In other words, in a model without group effects, BLUPs of sire transmitting abilities or individual genetic merits always have solutions whatever the distribution of records across herd-year-seasons and other fixed effects.
Nevertheless, the phenomenon of non-orthogonality between the estimation of a contrast of fixed effects and the error of prediction in some level of a random effect still exists and may be addressed in the same way as outlined previously. For instance, to measure the degree of connectedness between one random factor $u = \{u_i\}$, $i = 1, 2, \ldots, m_u$ (eg sire) and one fixed factor $\phi$ (eg herd), it suffices to consider its error of prediction from BLUP, ie to replace $\hat\theta$ by the prediction errors $\{\hat u_i - u_i\}$. All the above formulae apply since the derivation of [10] or [16] requires $\mathrm{tr}(QC) = 0$ (see [9c]), which results from general properties of the $Z$ and $C$ matrices ([8], [9a] and [9b]) that do not refer to any particular structure (fixed or random) of the vectors of parameters. Again, the only computational adjustment to make is to view the corresponding information matrices as coefficient matrices of Henderson's mixed model equations (Henderson, 1984) after absorption of the equations in $\lambda$. In fact, this extension fully agrees with the role played by $|C|$ in the theory of Bayes D-optimality (see eg DasGupta and Studden, 1991).
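To make the computational route concrete, a small sketch of the mixed-model-equations step is given below. It is our own illustration of Henderson's coefficient-matrix construction, with invented dimensions and variance ratio; it shows how the C matrix feeding the connectedness criterion is obtained by inverting the MME coefficient matrix:

```python
# Sketch: coefficient matrix of Henderson's mixed model equations for
# y = X b + Z u + e, with Var(u) = A * sigma_u^2 and Var(e) = I * sigma_e^2.
# X, Z, A and the variance ratio are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 12, 3, 4                    # records, fixed effects, random levels
X = rng.integers(0, 2, (n, p)).astype(float)
X[:, 0] = 1.0                         # intercept column
Z = np.kron(np.eye(q), np.ones((n // q, 1)))   # each record in one u level
A = np.eye(q)                         # relationship matrix (identity here)
k = 2.0                               # sigma_e^2 / sigma_u^2

# Henderson's MME coefficient matrix:
# [ X'X   X'Z           ] [ b ]   [ X'y ]
# [ Z'X   Z'Z + A^{-1}k ] [ u ] = [ Z'y ]
top = np.hstack([X.T @ X, X.T @ Z])
bot = np.hstack([Z.T @ X, Z.T @ Z + np.linalg.inv(A) * k])
M = np.vstack([top, bot])

# The (generalized) inverse, times sigma_e^2, gives sampling and
# prediction-error (co)variances; its u-block feeds D and gamma.
C = np.linalg.pinv(M)
C_uu = C[p:, p:]                      # PEV block for the random factor
print(C_uu.round(3))
```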
NUMERICAL EXAMPLE
A small hypothetical data set is employed to illustrate the procedure.
The layout (table I) consists of a pedigree of 8 individuals (A to H) with performance records on 7 of them (B to H), varying according to sex ($s_i$; $i = 1, 2$), year ($a_j$; $j = 1, 2, 3$) and herd ($h_k$; $k = 1, 2$). Unknown base parents (a to h) were assigned to 3 levels of a group factor ($g_l$; $l = 1, 2, 3$). Data of this layout are analyzed according to an individual (or "animal") genetic model (Quaas and Pollak, 1980) accommodated to the so-called accumulated grouping procedure of Thompson (1979), Quaas and Pollak (1982), Westell (1984) and Robinson (1986) (see Quaas, 1988 for a synthetic approach to this procedure). Using classical notations, this model can be written as $y = X\beta + Zu + e$, where $y$ is the data vector, $\beta$ is the vector of fixed effects (sex, year, herd), $u$ is the random vector of breeding values, and $X$ and $Z$ are the corresponding incidence matrices. The vector $u$ of breeding values has expectation $Qg$ and variance $A\sigma_a^2$, where $Q$, defined as in Quaas (1988), assigns proportions of genes from the 3 levels of group (vector $g$) to the 8 identified individuals, $A$ is the so-called numerator relationship matrix among those individuals and $\sigma_a^2$ is the additive genetic variance. Using Quaas' notations, $u$ can alternatively be written as $u = Qg + u^*$, with $u^* \sim N(0, A\sigma_a^2)$ being the random vector of the within-group breeding values.
The (full rank) parameterization chosen here is:
The grouping strategy for base animals is an issue of great concern for animal breeders due to possible confounding or poor connectedness with other fixed effects in the model (Quaas, 1988). Therefore, it is of interest to look at the degree of connectedness between this group factor and the other fixed effects, or equivalently at the degree of connectedness among group levels due to the incidence of the other fixed effects. In this example, 3 fixed factors (in addition to group) were considered, namely sex (S), year (A) and herd (H), and their incidence on connectedness of groups can be assessed separately (S, A, H) or jointly (S + A, A + H, H + S, S + A + H). With the notations in [1], the degree of connectedness of G due to A is based on the corresponding information matrix, obtained from the coefficient matrix derived by Quaas (1988) for a mixed model having the structure described in [23a], [23b] and [23c]. Letting the vector of unknowns be $(\beta', g', u')'$, this coefficient matrix is given in [26]. Elements in the first column of $Q$ within brackets are deleted in the computations due to the parameterization chosen in [24a] and [24b]. $A^{-1}$ is half-stored with its non-zero elements; $A^*$ may also be calculated directly from Quaas' rule (Quaas, 1988). Connectedness between groups due to the incidence of the other fixed effects was assessed under the full model using Quaas' system in [26], and also for a $u^*$-deleted model ($y = X\beta + ZQg + e$), then using the ordinary least squares equations. Numerical results are given in table II. In this example, the main sources of disconnectedness are, in decreasing order: herd, year and sex, the first factor being by far the most important one since the $\gamma^*$ values associated with herd are 0.312, 0.247, 0.272 and 0.239 when this factor is considered alone, with year, with sex, and with year plus sex respectively. Actually, this result is not surprising on account of the grouping procedure, which is based on parents in groups 2 and 3 coming from different herds. One may also notice that D values for combinations of factors exceed the sum of D values for single factors. For instance, D is equal to 1.433 for S + A + H vs $\Sigma D = 1.316$ for the factors taken separately. Results for the purely fixed model ($u^*$ deleted) are in close agreement with those of the full model. This procedure of ignoring $u^*$ effects when investigating linkage among groups was first advocated by Smith et al (1988) due to its relative ease of computation in large field data sets.
The extension of the theory to the measure of degree of connectedness of random factors is illustrated in this example by calculations of D and $\gamma^*$ for breeding values (table II). Sources of unbalancedness rank as previously, but the average level of connectedness ($\gamma^* = 0.574$) for breeding values is higher than for groups ($\gamma^* = 0.239$) due to prior information (Foulley et al, 1990).
The theory also applies to specific contrasts among effects, as originally proposed by Foulley et al (1984, 1990). The degree of connectedness for pair comparisons among breeding values then reduces simply to the ratio of the prediction error variance of the pair comparison under a reduced model (R), with some effects deleted (in table III, all fixed effects except mean and group), to its value under the full model (F), where the comparison is $\delta_{ii'} = u_i - u_{i'}$. The figures shown reflect great heterogeneity in the pattern of degree of connectedness. This diversity can usually be well explained by looking at the levels of factors which differ or are shared by the individuals compared. For instance, B and F are closely connected ($\gamma^* = 0.840$ and 0.808 in I and II respectively) because they are in the same herd and share close proportions of genes from the 3 groups of base parents (0.5, 0 and 0.5 from groups 1, 2 and 3 respectively in B vs 0.375, 0.125 and 0.5 in F). On the contrary, D and G, who come from different herds and for whom 3/4 of the genes originate from different groups (groups 2 and 3 respectively), are poorly connected ($\gamma^* = 0.047$ and 0.064 in I and II respectively). Moreover, $\gamma^*$ values computed according to both procedures (exact or approximate definition) are in good agreement in this example, although it is difficult to draw general conclusions from such a limited example.
DISCUSSION AND CONCLUSION
This paper provides a theoretical framework for the definition of an objective criterion measuring the degree of connectedness between factors involved in Gaussian linear models of genetic evaluation. The procedure proposed herein is based upon the assessment of non-orthogonality between estimators of contrasts (or errors of prediction for random effects) via the Kullback-Leibler distance.
This measure offers great flexibility since it can be employed for a particular comparison among levels of some factor or for a global evaluation of their degree of connectedness. Applications of these criteria to the degree of connectedness among sires in a reference sire system based on planned artificial inseminations with link bulls have already been made in France (Foulley et al, 1990; Hanocq et al, 1992; Laloe et al, 1992). The criterion can be written as in [16], where $C_R$ and $C_F$ are the same as before; in this form it also appears in statistical inference on variance-covariance matrices as the so-called Stein loss function (Anderson, 1984; Loh, 1991). Here, it can be interpreted as the Kullback-Leibler distance between the marginal density $f(\hat\theta)$ of $\hat\theta$ and its conditional density $f(\hat\theta \mid \hat\phi)$ given the value of $\hat\phi$.
The feasibility of our procedure is determined by the ability to compute the logarithm of the determinant of a coefficient matrix after possible absorption of some factors, as required by other statistical procedures based on the likelihood function. In the current context of genetic evaluation with the animal model, an application of this procedure to phantom groups might be feasible using, at least, the model ignoring $u^*$ as a first approximation.
In that respect, it has also been suggested (Kennedy and Trus, 1991) to look at the elements of the coefficient matrix $X'ZQ$, whose relative values in row $k$ provide the expected proportions of genes from the different levels of groups contributing to the corresponding level of the $k$th fixed effect. In our example, these values show a more unbalanced distribution across herd and/or year than across sex levels. Notice that this matrix gives the distribution of data according to groups for each factor separately; no account is taken of the joint distribution of data between those factors. In this model, this means that the factors sex and group are not perfectly connected, due to the slightly unbalanced proportions observed. As a matter of fact, $g_2 - g_1$ and $g_3 - g_1$ are correlated with $s_2 - s_1$ in the "sex + group" model whereas they are uncorrelated in the full model (see table II). The $\gamma^*$ criterion applied to breeding values measures how the C matrix of variances of prediction errors is reshaped due to the incidence of an unbalanced distribution of data across the nuisance factors. This change in C implies a related change in the variance-covariance matrix of estimated breeding values, which influences the selection differential. Accuracy of selection is also expected to be altered. In this respect, insufficient connectedness can be compared, to some extent, to a non-optimum selection procedure which ignores, or does not weight properly, some sources of information, eg within-family selection vs index selection. More research is needed in this field to quantify the amount of genetic progress which may be lost due to a reduction in the degree of connectedness.
For fixed effects, connectedness is directly related to the unbiasedness requirement. This is especially true for group effects in the animal model, for which much concern has been raised (Smith et al, 1988; Quaas, 1988; Canon et al, 1992). The criterion developed here may help to check whether differences between groups in a particular model can reasonably be captured by the data structure. If not, one will have to reconsider the grouping procedure, or one may be tempted to put prior information on group effects, ie to treat them as random, as suggested by Foulley et al (1990). In any case, one will have to compare different models, and there are now specific statistical procedures available to do that in animal breeding (Wada and Kashiwagi, 1990).

APPENDIX

Consider the expectation, with respect to the distribution of $\hat\theta_1$, of the conditional expectation of $\ln R(\hat\theta_2, \hat\phi \mid \hat\theta_1)$ taken with respect to the distribution of $(\hat\theta_2, \hat\phi)$ given $\hat\theta_1$. This conditional expectation is by definition a D-measure, noted $D(\theta_2, \phi \mid \theta_1)$; because this is again a constant (see [10]), it does not depend upon $\hat\theta_1$ [A.3b]. Similarly, $\gamma(\theta, \phi) = \exp[-2D(\theta, \phi)]$ can be expressed as a product of such conditional coefficients, and equivalently after permutation of $\theta_1$ and $\theta_2$. Thus, letting $F(\theta_j, \phi)$ be defined accordingly, $\gamma^*(\theta, \phi) = [\gamma(\theta, \phi)]^{1/2}$ can be interpreted either as the geometric mean of the $F(\theta_j, \phi)$ coefficients in [A.6], or as the geometric mean of all possible $\gamma(\theta_i, \phi \mid \theta_j)$ coefficients (including the unconditional ones). For three elements $\theta_i$, $\theta_j$, $\theta_k$ in $\theta$, one proceeds analogously, and similarly for 4 elements $\theta_i$, $\theta_j$, $\theta_k$, $\theta_l$. These formulae can easily be extended to any number of elements $r_\theta$ in $\theta$. Formula [A.7] applies, and the coefficient of the power pertaining to the $\gamma(\theta_i, \phi \mid \ldots)$ term given $k$ conditioning variables $\theta_j$ in $F(\theta_i, \phi)$ is then $1/\binom{r_\theta - 1}{k}$, ie the inverse of the coefficient for the $k$th power in the binomial expansion of order $r_\theta - 1$.
"Biology",
"Mathematics"
] |
Holographic Properties of Irgacure 784/PMMA Photopolymer Doped with SiO2 Nanoparticles
To enhance the holographic properties, one of the main methods is to increase the solubility of the photosensitizer and modify the components so as to improve the refractive index modulation in the photopolymer. This study provides evidence, through the introduction of a mutual diffusion model, that the incorporation of SiO2 nanoparticles in photopolymers can effectively enhance the degree of refractive index modulation, consequently achieving the objective of improving the holographic performance of the materials. Different concentrations of SiO2 nanoparticles have been introduced into poly-methyl methacrylate doped with the highly soluble photosensitizer Irgacure 784 (solubility up to 10wt%) (Irgacure 784/PMMA photopolymers). Holographic measurement experiments have been performed on the prepared samples, and the experiments have demonstrated that the Irgacure 784/PMMA photopolymer doped with 1.0 × 10−3 wt% SiO2 nanoparticles exhibits the highest diffraction efficiency (74.5%), representing an approximately 30% increase in diffraction efficiency compared to the undoped photopolymer. Finally, we have successfully achieved the recording of real objects on SiO2/Irgacure 784/PMMA photopolymers, demonstrating that the material prepared in this study exhibits promising characteristics for holographic storage applications. The strategy of doping nanoparticles (NPs) into Irgacure 784/PMMA photopolymers also provides a new approach for achieving high-capacity holographic storage in the future.
Introduction
Holographic storage uses the interference principle of light to record information in the form of holograms on storage materials [1][2][3]. The stored information can be retrieved by reconstructing the holograms [4]. Photopolymers incorporating poly-methyl methacrylate (PMMA) have emerged as a predominant choice in the burgeoning field of holographic storage, which is a focal point of contemporary research, attributed to their affordability, minimal shrinkage, ample storage capacity, adjustable thickness, and heightened stability and security [5][6][7][8][9]. The information storage capacity of photopolymers is closely related to their holographic properties. Incorporating nanoparticles with refractive indices differing from that of the monomer has been proven to effectively enhance the refractive index modulation, thus improving the holographic properties of photopolymers [10][11][12].
The purpose of doping nanoparticles into photopolymer materials is to enhance the refractive index modulation of the material by using high-refractive-index nanoparticles as the refractive index modulation component [13]. This increases the refractive index difference between the bright areas (filled with photoproducts) and the dark areas (filled with nanoparticles) during the exposure process, thereby achieving the goal of improving the holographic performance of the material [14]. This technique, pursued by numerous researchers over the years, has seen the exploration and utilization of various nanoparticles to this end, as delineated in several key studies [15][16][17][18][19][20][21]. In 2008, Luo et al. [18] produced a material with a diffraction efficiency of 49.3% by doping a 1.6 mm thick PQ/PMMA photopolymer with SiO2 nanoparticles. In 2013, Lee et al. [19] introduced gold nanoparticles into a PQ/PMMA photopolymer with a thickness of 1.5 mm. The localized surface plasmon resonance (LSPR) effect of the gold nanoparticles was utilized to enhance the photopolymerization effect, resulting in increased grating strength and a doubled diffraction efficiency (from 23.3% to 47.1%); moreover, the shrinkage rate of the material was only 0.17%. In 2019, Zhu et al. [20] introduced graphene oxide (GO) into PQ/PMMA photopolymers and confirmed the phase modulation capability of GO photopolymers. They successfully recorded a fork grating on the material and reconstructed a vortex beam. In 2022, Hu et al. [21] incorporated fullerene into a PQ/PMMA system, increasing the intensity diffraction efficiency of the material to 72%.
While numerous nanoparticle doping strategies have facilitated advancements in PQ/PMMA photopolymer materials, the potential of these materials remains constrained by the limited solubility of the photosensitizer PQ [22,23]. Specifically, its solubility in MMA stands at a mere 0.7 wt%, thereby establishing an upper threshold for the efficacy of PQ/PMMA photopolymer materials [24]. As the photosensitizer is a crucial component of the photochemical reaction of photopolymer materials, its increased solubility directly correlates with enhanced performance of the photopolymer. Subsequent studies attempted to improve the solubility of PQ by adding solvents such as BZMA [25] and THFMA [26] (up to 1.3 wt% solubility), but the results were not substantial. In recent years, researchers have focused on a highly soluble photosensitizer, Irgacure 784 (TI). Compared to PQ, Irgacure 784 can reach a maximum solubility of 10 wt% in MMA [27]. It has been demonstrated that Irgacure 784/PMMA photopolymer materials perform better than PQ/PMMA photopolymer materials in terms of sensitivity, response time, and diffraction efficiency under the same experimental conditions [28][29][30]. It is apparent that Irgacure 784 holds promise as the favored photosensitizer for a new generation of photopolymer materials. Building upon existing research on the influence of nanoparticle doping on photopolymer materials, we anticipate that the diffraction efficiency of photopolymer materials can be improved by introducing nanoparticle components that amplify the refractive index modulation into the Irgacure 784/PMMA system. However, research in this area is currently rather limited. Therefore, we aim to provide new research directions and improve the holographic performance of photopolymer materials through our study.
Doping nanoparticles is an effective technique for enhancing the refractive index modulation of photopolymers and is widely used in their preparation; the studies cited above demonstrate its effectiveness in enhancing holographic properties. Even without nanoparticle doping, the Irgacure 784 + PMMA combination already has good holographic properties, but research on nanoparticle doping in this system is severely restricted. The purpose of this investigation is to explore experimentally the holographic storage potential of the Irgacure 784/PMMA system with SiO2 nanoparticles. SiO2 nanoparticles were chosen because they have excellent physical properties, such as high hardness and high transparency, and they do not participate in the photochemical reaction of the material during the exposure process. In this study, we establish a mutual diffusion model for the photopolymer material system and conduct numerical simulations to analyze the impact of the added nanoparticle component on the refractive index modulation of the material. On this basis, SiO2 nanoparticles of varying concentrations have been dispersed in Irgacure 784/PMMA photopolymer material. By measuring the diffraction efficiency of Irgacure 784/PMMA photopolymers doped with different concentrations of SiO2 nanoparticles, it has been demonstrated that SiO2 doping can enhance the diffraction efficiency of the photopolymer. In this study, the material doped with 1.0 × 10−3 wt% SiO2 nanoparticles exhibits the highest diffraction efficiency, almost double that of the undoped material. Finally, we perform holographic recording of real objects on the photopolymer material doped with 1.0 × 10−3 wt% SiO2 nanoparticles and obtain clear and stable holographic reconstructed images. This proves that the high-performance holographic photopolymer material developed in this study has the potential to meet the demand for large-capacity volume holographic recording. It also provides a new approach for improving the performance of holographic storage materials in the future.
Mutual Diffusion Model
Introducing nanoparticles into photopolymer materials as doping agents leverages their ability to serve as refractive-index-modulating elements within the substrate. Nanoparticles occupying the dark regions induce more photosensitizer to diffuse from the dark to the bright regions, resulting in a higher consumption of photosensitizer in the bright regions; this achieves the goal of increasing the photoproduct yield. (The photoproducts are the result of the photoinitiator undergoing a polymerization reaction with the material substrate under the influence of light. The generation of photoproducts is the primary cause of refractive index grating formation and constitutes the main factor influencing the degree of refractive index modulation in the material [22].) During the exposure process, the photosensitizer Irgacure 784 is converted into excited molecules by light and photopolymerizes with the monomer MMA in the bright regions to form the photoproducts. As the consumption rate of the photosensitizer molecules in the bright regions is faster than that in the dark regions, a concentration gradient is generated between the bright and dark regions. The photosensitizer molecules in the dark regions diffuse to the bright regions due to this concentration difference, which increases the chemical potential energy in the bright regions. Subsequently, the nanoparticles in the bright regions are driven by the high chemical potential to diffuse from the bright regions to the dark regions, resulting in an increase in the concentration of nanoparticles in the dark regions. Due to the mutual diffusion effect between the monomer and the nanoparticles, the concentration of the photosensitizer in the bright regions is higher after full exposure, resulting in more photoproducts, while the dark regions are filled with a large number of nanoparticles whose refractive index differs strongly from that of the monomer, forming a steady refractive index modulation. At the completion of exposure, the photoproducts and unconsumed photosensitizer molecules are mostly concentrated in the bright regions, while the dark regions are filled by a large number of nanoparticles [14,31,32].
To investigate the impact of introducing silicon dioxide nanoparticles on photopolymer materials, we employ the photopolymer material mutual diffusion model [33][34][35][36] to analyze the physical mechanisms of spatial transfer between the different components within the material. In Equation (1), the photosensitizer used is Irgacure 784 (TI), and the dopant is SiO2 nanoparticles, with the photoproduct (P) as the output. C[.](x, t) is the spatiotemporal distribution of the concentration of the corresponding component, i.e., TI, SiO2, or P; D is the diffusion coefficient of the system; and F(x, t) is the spatiotemporal distribution of the polymerization rate.
In Equation (2), F(x, t) = k·I(x, t) with k = Φεd, where Φ is the quantum yield, ε is the molar absorption coefficient, and d is the thickness of the material. Finally, I(x, t) is the distribution of the incident light field.
To investigate the impact of introducing SiO2 nanoparticles into the photopolymer material, we conducted numerical simulations by substituting the following initial conditions into Equation (1). In this paper, a transmission grating recorded in the photopolymer material is used, and the angle between the two incident beams is 25°. The incident light field is expressed as I(x, t) = I0[1 + V·cos(K_g·x)] [37]. Since the irradiation intensity I(x, t) used in this article is constant, I(x, t) is a constant function of time; similarly, the polymerization rate F(x, t) does not change with time, meaning that the generation rate of photoproducts is constant. Because all C[.](x, t) in Equation (1) are functions of time t, I and F are written with the t variable simply for consistency with C[.](x, t).
The exposing fringe visibility V is unity, and the grating constant is K_g = 2π/λ_g, with the grating period being λ_g = 1.03 µm. The initial light intensity is set to 5 mW/cm². The initial concentration conditions of each component are set as C_TI(x, 0) = 1.2 × 10 mol/cm³ and C_SiO2(x, 0) = 3 × 10 mol/cm³, and the diffusion coefficient is D = 3.2 × 10 cm²/s. This paper mainly analyzes the influence of introducing SiO2 nanoparticles on the whole material by comparing the spatial distribution of the TI molecule concentration in the doped and undoped systems under the same conditions. The concentrations of TI, photoproducts, and SiO2 in the photopolymer vary with time t; however, since the accumulated exposure energy over time is the direct cause of this change, the axis below panels (a, b, c) in Figure 1 is in J/cm², which describes the energy. Figure 1 depicts the spatial distribution of the component concentrations (C[.](x, t)) within the photopolymer system as a function of exposure energy, with the exposure energy being directly proportional to the exposure time.
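The explicit form of Equation (1) is not preserved in this extract, so the following Python sketch is only an illustration of how such a mutual diffusion model can be integrated numerically. It assumes a simple two-component form in which TI diffuses and is consumed at the local rate F(x)·C_TI while the photoproduct accumulates in place; the SiO2 counter-diffusion term is omitted, and the rate constant k and the exponent of the diffusion coefficient (lost in extraction) are placeholders.

```python
import numpy as np

# Illustrative finite-difference integration of a mutual-diffusion-type model.
# Assumed form (the paper's exact Eq. (1) is not preserved in this extract):
#   dC_TI/dt = D * d2C_TI/dx2 - F(x) * C_TI,    dC_P/dt = F(x) * C_TI
lam_g = 1.03e-4                 # grating period, cm (1.03 um)
K_g = 2 * np.pi / lam_g         # grating constant, 1/cm
I0, V = 5e-3, 1.0               # 5 mW/cm^2 intensity, unit fringe visibility
k = 1.0                         # lumped rate constant k = Phi*eps*d (placeholder)
D = 3.2e-11                     # diffusion coefficient, cm^2/s (placeholder exponent)

nx = 200
dx = lam_g / nx
x = np.arange(nx) * dx
F = k * I0 * (1 + V * np.cos(K_g * x))   # polymerization rate, constant in time

c_ti = np.ones(nx)              # normalised photosensitizer concentration
c_p = np.zeros(nx)              # photoproduct concentration

dt = 0.2 * dx**2 / D            # explicit-scheme stability limit
for _ in range(50_000):
    lap = (np.roll(c_ti, 1) - 2 * c_ti + np.roll(c_ti, -1)) / dx**2  # periodic Laplacian
    consumed = F * c_ti                     # local consumption rate
    c_ti += dt * (D * lap - consumed)       # TI: diffusion plus photoconsumption
    c_p += dt * consumed                    # photoproduct accumulates in place

print(f"TI concentration, bright vs dark: {c_ti.min():.3f} / {c_ti.max():.3f}")
```

The resulting bright/dark contrast in C_TI is the concentration gradient that, in the full model, drives the SiO2 counter-diffusion into the dark regions.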
As exposure time increases, the concentration of the photosensitizer TI in the bright regions decreases (appearing in dark color); correspondingly, the concentration of photoproducts generated by the photo-induced polymerization reaction in the bright regions increases with time. The photosensitizer TI in the dark regions is influenced by the concentration gradient and thus flows towards the bright regions, driving the SiO2 nanoparticles in the bright regions to diffuse in the opposite direction into the dark regions. After the grating reaches a steady state, the photoproducts in the bright regions and the nanoparticles in the dark regions form a steady refractive index modulation.
Figure 2 illustrates the spatial distribution of the photosensitizer (TI) in the photopolymer materials with and without nanoparticle doping at the end of the exposure, while keeping the kinetic parameters used in the simulation constant. It is readily apparent that the introduction of SiO2 nanoparticles significantly enhances the consumption of the photosensitizer TI molecules within the illuminated region. This effect indicates a higher diffusion rate of photosensitizer TI molecules from the dark regions to the illuminated regions, resulting in a further increase in the quantity of photoproducts within the bright areas.
As a result, this facilitates the targeted augmentation of refractive index modulation, consequently elevating the holographic performance of the material.
Preparation Process
The components of the photopolymer materials are shown in Table 1. To make a SiO2/Irgacure 784/PMMA photopolymer sample, we adopted the optimized thermal polymerization method [38]. SiO2 nanoparticles were dispersed in methyl isobutyl ketone (MIBK) solution and shaken in an ultrasonic water bath for 3–5 h. The well-dispersed SiO2/MIBK solution and MMA monomer were added to a reagent bottle. Then, Irgacure 784 photosensitizer (4 wt%) and 2,2′-azobisisobutyronitrile (AIBN) thermal initiator (1.2 wt%) were dissolved in the mixed solution and ultrasonically shaken in a water bath at 60 °C for 20 min to ensure uniform mixing. The composition of the mixture was kept at MMA:AIBN:TI = 100:1.2:4. The reagent bottle was then continuously stirred on a magnetic stirrer and kept at a constant temperature of 60 °C until the solution was viscous. The mixture was poured into a glass mold with a thickness of 1.5 mm and placed horizontally in an oven at 60 °C for 48 h to solidify. The finished product is shown in Figure 3. The material prepared using this process is suitable for holographic storage.
Material Characteristics
Spectra Measurements
The photosensitivity of photopolymers determines the holographic properties of the materials. The materials prepared in this research were analyzed utilizing a TU-1901 dual-beam UV-Vis spectrophotometer. The absorption spectra of SiO2 nanoparticle-doped and undoped materials at the same photosensitizer concentration (4 wt%) were obtained, as shown in Figure 4.
Figure 4 reveals that incorporating SiO2 nanoparticles has a negligible impact on the material's absorption coefficient within the 530–550 nm wavelength range. However, around the 514 nm band there is a significant surge in the absorption coefficient, denoting high susceptibility to laser interactions in this region. Lasers operating near this wavelength are likely to induce substantial holographic scattering and consequent material loss, making them unsuitable for holographic recording. Therefore, a 532 nm green DPSS laser was selected as the recording light for the material.
Diffraction Efficiency
To measure the optical properties of the material, we built the optical system shown in Figure 5. A 532 nm laser beam was split into two equal-power beams incident on the material, with an angle of 22° between the two beams. A laser power of 5 mW was used to irradiate the photopolymer for recording. The interference spot of the two beams on the sample was circular, with a radius of about 2 cm.
During the measurement of diffraction efficiency, the first shutter (K1) remained open while the second shutter (K2) was opened for 49 s and then closed for 1 s; this process was repeated 20 times. The light intensity was recorded through laser power meter probe 1 (D1) and laser power meter probe 2 (D2) for the transmitted (I0) and diffracted (I+1) light intensities, respectively. These recorded values were then used to calculate the diffraction efficiency of the material. The material is thick (1.5 mm) and has a certain absorption and reflection of the recording light (532 nm). If we ignore the absorption and the Fresnel reflection, the diffraction efficiency (η) can be defined in terms of I0 and I+1, the intensities of the transmitted and first-order diffracted beams. The recording sensitivity (S) of the material can be expressed in terms of the recording intensity I, the photopolymer material thickness d, and the diffraction efficiency η [21]. In this study, seven groups of experiments (A1~A7) were designed and compared based on the different concentrations of doped nanoparticles. The mass fractions of each component among the groups are shown in Table 2.
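The explicit expressions for η and S did not survive extraction. The sketch below therefore assumes the common conventions η = I+1/(I0 + I+1) (absorption and Fresnel reflection neglected, as stated above) and S = (∂√η/∂t)/(I·d); both formulas are assumptions rather than the paper's verbatim definitions, and the example data are synthetic.

```python
import numpy as np

def diffraction_efficiency(I_t, I_d):
    """eta from transmitted (I0) and first-order diffracted (I+1) intensities.
    Assumed convention: eta = I+1 / (I0 + I+1)."""
    return I_d / (I_t + I_d)

def recording_sensitivity(eta, t, intensity, thickness):
    """Assumed convention: S = max d(sqrt(eta))/dt / (I * d), in cm/J when
    intensity is in W/cm^2, thickness in cm and time in s."""
    slope = np.gradient(np.sqrt(eta), t)
    return slope.max() / (intensity * thickness)

# Example with synthetic data: eta rising toward a 74% saturation value
t = np.linspace(0.0, 1000.0, 201)            # s
eta = 0.74 * (1.0 - np.exp(-t / 250.0))
S = recording_sensitivity(eta, t, intensity=5e-3, thickness=0.15)
print(f"estimated sensitivity ~ {S:.1f} cm/J")
```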
For SiO2/Irgacure 784/PMMA photopolymers doped with SiO2 nanoparticles of different concentrations (0.0 × 10−3 wt%, 0.5 × 10−3 wt%, 0.8 × 10−3 wt%, 1.0 × 10−3 wt%, 1.2 × 10−3 wt%, 2.0 × 10−3 wt%, and 5.0 × 10−3 wt%), the diffraction efficiency curves are shown in Figure 6. From Figure 6, it is apparent that the introduction of nanoparticles at various concentrations induces varying degrees of enhancement in the diffraction efficiency of Irgacure 784/PMMA photopolymer materials. Group A4 (doped with 1.0 × 10−3 wt% SiO2 nanoparticles) exhibits the highest diffraction efficiency (74%), with a time to reach saturation diffraction efficiency of 850 s. Group A6 (doped with 2.0 × 10−3 wt% SiO2 nanoparticles) attains saturation diffraction efficiency fastest (at 450 s), with a maximum diffraction efficiency of 63%. Figure 7 presents data on the maximum diffraction efficiency and sensitivity of photopolymers doped with varying concentrations of SiO2 nanoparticles. Doping with SiO2 nanoparticles improves the diffraction efficiency of the materials and also leads to a significant increase in their sensitivity; of the materials observed, the photopolymer doped with 1.0 × 10−3 wt% SiO2 nanoparticles has a sensitivity about 2.25 times that of the undoped Irgacure 784/PMMA photopolymer. An analytical comparison of the experimental data presented in this study with findings from other researchers (Table 3) confirms the positive effect of doping SiO2 nanoparticles on the maximum diffraction efficiency and sensitivity of Irgacure 784/PMMA photopolymer materials. Thus, the introduction of SiO2 nanoparticles of varying concentrations as a new component can improve the holographic properties of the material.
Recording and Reconstruction of Object Volume Hologram
To authenticate the capacity of the materials to record volume holograms, the experimental arrangement used is depicted in Figure 8.
A volume hologram of an object was taken, as shown in Figure 8, and the A2 group of the sample with the highest diffraction efficiency was selected for the measurement. The sample was placed at a designated position, and the shutters K1 and K2 were opened so that the reference light and the object light illuminated the material at the same time. When the set time was reached, both K1 and K2 were closed and the hologram recording was completed.
Then, K1 was opened and K2 was closed for the hologram reconstruction; the original object and the reconstruction of the hologram are shown in Figure 9.
Figure 9 presents the holographic reconstructed images of the undoped nanoparticle photopolymer material and the nanoparticle-doped photopolymer material.It is evident that the clarity of the holographic reconstructed images has been significantly improved with the SiO 2 /Irgacure 784/PMMA photopolymer.
Conclusions
SiO2 nanoparticles with different concentrations were doped into Irgacure 784/PMMA photopolymer, and a 1.5 mm thick SiO2/Irgacure 784/PMMA photopolymer was prepared. By employing a mutual diffusion model, the concentration changes in the different components of the photopolymer material during the exposure process were numerically simulated. This study provides evidence that the introduction of SiO2 nanoparticles enables the formation of a steady-state refractive index modulation within the photopolymer. Furthermore, during the diffusion process, the presence of SiO2 nanoparticles induces the influx of a greater number of photosensitizer molecules into the illuminated region, leading to increased consumption and subsequent generation of photoproducts. Ultimately, this approach facilitates the anticipated increase in the degree of refractive index modulation in the photopolymer. Subsequently, optical property measurements were performed on photopolymer materials doped with nanoparticles of varying concentrations. The material doped with 1.0 × 10−3 wt% SiO2 nanoparticles exhibits the highest diffraction efficiency, greatly improved compared to the material without nanoparticles (from 47.8% to 74.5%). The photopolymer material doped with 1.0 × 10−3 wt% SiO2 nanoparticles was utilized for holographic recording and reconstruction, exhibiting definitive and stable recording outcomes. The results of our experiments indicate that the holographic properties are improved when the Irgacure 784/PMMA photopolymer is doped with SiO2 nanoparticles. Further investigation is recommended to determine the effectiveness of doping with other nanoparticles, and this study can serve as a reference for enhancing holographic properties using the polymer system studied. The synthesized material demonstrates excellent holographic properties and great potential for application in holographic recording and storage.
Figure 1.
Figure 1a–c depicts the spatial distribution of the various components during the exposure process. Figure 1 displays varying concentrations through a color gradient, ranging from light to dark shades: lighter hues indicate higher concentrations, while darker hues indicate lower concentrations.
Figure 9
Figure 9 presents the holographic reconstructed images of the undoped nanoparticle photopolymer material and the nanoparticle-doped photopolymer material.It is evident that the clarity of the holographic reconstructed images has been significantly improved with the SiO2/Irgacure 784/PMMA photopolymer.
Table 1.
Main components of photopolymer materials.
Table 2.
Experimental grouping of doped SiO2 nanoparticles with different concentrations.
Table 3.
Comparison of diffraction efficiency and sensitivity of Irgacure 784/PMMA photopolymer materials with different ratios. | 8,435 | 2023-11-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Diaquabis(8-chloro-1,3-dimethyl-2,6-dioxo-1,2,3,6-tetrahydro-7H-purinato-κN7)copper(II) dihydrate
The title mononuclear copper(II) complex, [Cu(C7H6ClN4O2)2(H2O)2]·2H2O, based on 8-chlorotheophylline (HCt), has the Cu atom at a center of symmetry in a slightly distorted trans square-planar geometry coordinated by two N atoms of two deprotonated HCt ligands and two O atoms of water molecules. The crystal packing is stabilized by hydrogen bonds involving deprotonated HCt ligands, coordinated water molecules and uncoordinated solvent water molecules.
The structure of (I) is shown in Fig. 1. It is composed of a mononuclear entity [Cu(Ct)2(H2O)2], together with two water molecules of crystallization; the copper(II) atom, lying on a center of symmetry, is bonded to the nitrogen atoms of two individual 8-Ct molecules and to oxygen atoms from two water molecules (Table 1), forming a trans square-planar arrangement. It should be noted that the ligand is in its anionic form (8-Ct−) in order to achieve charge balance.
Selected bond distances and bond angles are given in Table 1. The Cu-N and Cu-O bond lengths and the bond angles at Cu1 are similar to those reported for some tetra-coordinated copper complexes (Zhao et al., 2003; Okabe et al., 1993).
The 8-Ct molecule deviates slightly from planarity, and the dihedral angle between the least-squares planes of the pyrimidine and imidazole rings is 1.2 (1)°.
The structure presents O-H···O and O-H···N intermolecular hydrogen bonds (Table 2) between 8-Ct ligands and water molecules. The coordinated water molecule is a donor towards the pyrimidine O2 and the uncoordinated water O4, thus linking the complex units into a two-dimensional structure along the b axis. Besides, the lattice water molecule acts as a donor towards the pyrimidine O1 and the imidazole N4. These two hydrogen bonds serve to link the 2-D structures into a 3-D array along the c axis. Hydrogen atoms attached to carbon atoms were positioned geometrically and treated as riding, with C-H = 0.96 Å and Uiso(H) = 1.2Ueq(C). The water H atoms were located in a difference Fourier map and were refined with distance restraints of O-H = 0.82-0.84 Å and Uiso(H) = 1.5Ueq(O). The crystals are unstable outside the parental solution, for which reason the quality of the diffraction data was poor. This led to unrealistic displacement parameters for four C and one O atoms, which were accordingly restrained to be nearly isotropic. Fig. 1. The structure of (I), showing 30% probability displacement ellipsoids and the atom-labeling scheme. Diaquabis(8-chloro-1,3-dimethyl-2,6-dioxo-1,2,3,6-tetrahydro-7H-purinato-κN7)copper(II) dihydrate Crystal data [Cu(C7
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F², conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
"Chemistry"
] |
Black Holes or objects without event horizon?
In this paper, we investigate the question of whether the generally accepted interpretation of the supermassive object in M87, investigated by the EHT collaboration, is the only possible one. There are grounds for this, in particular, due to the detection of a magnetic field in its vicinity. For this purpose, the stability of self-gravitating objects is investigated based on the correct definition of the energy of gravitation in the framework of the bi-metric approach to the theory.
Introduction
In work [1], bi-metric equations of gravitation were considered, according to which the gravitation of a point mass does not have a singularity. The force of gravity acting on a test particle at rest is F(r), where r_g = 2GM/c² is the Schwarzschild radius of the mass M, r is the distance from the center, and f = (r_g³ + r³)^(1/3). The singularity is absent in this model: the gravitational force decreases within the Schwarzschild radius and tends to zero together with the distance from the center.
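The explicit force law did not survive extraction. Purely to illustrate the stated limiting behaviour (Newtonian for r ≫ r_g, vanishing at the center), the sketch below evaluates one assumed form, F = (GMm/f²)(1 − r_g/f) with f = (r_g³ + r³)^(1/3); this expression is a guess consistent with the text, not the paper's equation.

```python
import numpy as np

# One assumed non-singular force profile (not the paper's equation):
#   F(r) = G*M*m / f(r)**2 * (1 - r_g / f(r)),  f = (r_g**3 + r**3)**(1/3)
# chosen only to reproduce the stated limits: Newtonian far away, zero at r = 0.
G, c = 6.674e-8, 2.998e10        # CGS units
M = 1.989e33                     # g, one solar mass (illustrative)
m = 1.0                          # g, test particle
r_g = 2 * G * M / c**2           # Schwarzschild radius, cm

r = np.logspace(-2, 2, 9) * r_g  # from 0.01 r_g to 100 r_g
f = (r_g**3 + r**3) ** (1.0 / 3.0)
F = G * M * m / f**2 * (1.0 - r_g / f)
F_newton = G * M * m / r**2

for ri, ratio in zip(r / r_g, F / F_newton):
    print(f"r/r_g = {ri:8.2f}   F/F_Newton = {ratio:.3e}")
```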
Typically, the Schwarzschild radius is small compared with the dimensions of celestial objects. That is, the condition r ≫ r_g almost always holds, and therefore F coincides with Newton's law.
However, this solution leads to the possibility of the existence of supermassive objects without an event horizon. This possibility was considered even before such an object was discovered in the center of our galaxy [2]. In [3] it was proposed as an alternative to black holes. Later, this possibility was studied in more detail in [2,13,14]. However, the nature of such objects remained unclear.
Here a new result is presented in view of the EHT collaboration results [4,5].
2 Energy of gravitation
A correct expression for the density of the gravitational field in the bi-metric model can be obtained directly from the expression for the force (1) in Minkowski space.
We define the potential of the force F as U(r) = ∫₀ʳ F dr′. As a result, we obtain that the function U(r) satisfies a differential equation in which the mass density is ρ = t₀₀/c².
This equation has a clear physical meaning. It is the Poisson equation for a spherically symmetric field, in which ρ is the mass density of the gravitational field created by the mass M, since in the absence of other matter only the density of the gravitational field can make the potential differ from zero.
We can also write it with κ = 4πG/c⁴, which gives, when integrating over the entire space, an expression in which E_k is the internal energy of the mass M₀ formed by N protons in the volume V = (4/3)πR³. The second term, V·t₀₀, is the gravitational binding energy in the volume V. Here r_g = 2GM₀/c². These graphs are shown in Fig. 2 for several values of the mass M₀. In equation (4), E_k is given by the expression of [10], where m_e and m_n are the masses of the electron and the neutron, respectively.
These graphs show that all configurations of self-gravitating objects are stable. The graphs also show that the mass of the completely degenerate Fermi gas does not exceed 10³⁹ g. Apparently, objects with larger masses have a more complex structure, and their outer layers have a low density.
It is also impossible to exclude gross errors in determining the masses of such objects [17]. However, in any case, one might think that objects with masses less than 10³⁹ g are a viable alternative to black holes.
Light deflection
Let us find the radius of the ring of light from the object in M87. This can be done in two ways. First, the deflection angle, defined as the angle between the asymptotic incoming and outgoing trajectories, was obtained in [6] for an arbitrary Lagrangian like the one used in [15], based on a method of Weinberg.
Here r is the distance from the object center, B = f², r_m is the closest approach distance of the light, and u is the impact parameter, defined as the distance between the black hole and each of the asymptotic photon trajectories, related to r_m through B(r_m). Second, we can use Nemiroff's simple and elegant method [7], suitable for flat space. In the resulting equation, R is the radius of the object, r_g is its Schwarzschild radius, and D is the distance from an observer.
Both methods give the same result, 0.22 µas, which is consistent with observation [4]. However, a quantitative comparison is not possible for the result of [5] due to the strong blurring of the inner boundary of the photon ring.
In addition, it should be noted that the existence of a magnetic field in the vicinity of the object is difficult to reconcile with the generally accepted interpretation of the M87 object as a Black Hole.
Thus, in fact, we do not have a quantitative test for a confident conclusion about the nature of M87.
There is, however, another method for this, which is discussed in the following sections.
5 Schwarzschild radius of the visible Universe
Motion of a fluid
The expanding Universe can be viewed as an isentropic fluid. It is shown in [15] that the macroscopically small elements (particles) of such a spherically symmetric fluid are described, in exact accordance with the relativistic Euler equations, by a Lagrangian in which G_αβ = χ²g_αβ, η_αβ is a solution of our vacuum equations of gravitation in Minkowski space, and χ = w/ρc², where w is the enthalpy per unit volume, ρ is the density, t₀₀ is the density of gravitational binding energy for the mass M, and p is the pressure.
The integral of motion is the relativistic Bernoulli equation in Minkowski space.
The constant has the meaning of the energy of a particle and, therefore, this equation can be read in terms of the constant Ē = E/mc², the dimensionless energy E of the fluid particles, with v = ẋ. At present, the pressure is negligible and therefore χ = 1 + /ρc², and for small distances from an observer and at Ē = 1, the velocity v(r) is proportional to the distance r from the observer.
Acceleration of the Universe as a test of gravity properties
The Lagrange equation obtained from (7) leads to the behaviour shown in Fig. 3. This graph shows that the acceleration of distant particles rapidly tends to zero at distances to observed objects in excess of the value R_g = c(3/(8πGρ))^(1/2), where ρ is the density of the observed Universe: R_g = 1.6 · 10²⁸ cm at ρ = 6 · 10⁻³⁰ g/cm³. The same should be said about the force of gravity, which is the acceleration of a unit mass.
Therefore, this value R g can be called the gravitational radius of the visible Universe.
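As a quick numerical check of the quoted value, evaluating R_g = c·(3/(8πGρ))^(1/2) in CGS units indeed reproduces the stated 1.6 × 10²⁸ cm:

```python
import math

G = 6.674e-8     # cm^3 g^-1 s^-2
c = 2.998e10     # cm/s
rho = 6e-30      # g/cm^3, density of the observed Universe (value from the text)

R_g = c * math.sqrt(3.0 / (8.0 * math.pi * G * rho))
print(f"R_g = {R_g:.2e} cm")   # ~1.6e28 cm, matching the quoted value
```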
At any point in the infinite Universe, only gravity within the radius R_g affects the observed matter.
The graph shows that the accelerated expansion of the Universe, which general relativity cannot explain, is a consequence of the behaviour of gravitation near the Schwarzschild radius R_g of the Universe.
The reason for the acceleration of the expansion of the Universe is that its observable size approaches the Schwarzschild radius R_g of the Universe. This is a measurable quantity, and it is this that makes it possible to test the conclusions about the existence of black holes and of a singularity.
For example, one can find the deceleration parameter; its computed behaviour gives some confidence in the above result.
The observation gives the present-day value of this parameter q 0 as approximately −0.55. Graph 4 shows q as a function of distance from the observer in the theory used here.
Graph 4 shows that the observed present-day value q₀ ≈ −0.55 corresponds to a geometrical distance of the remote objects of about 10²⁸ cm. This is consistent with Fig. 3, which shows the acceleration of distant objects.
Appendix
Physical meaning of the bimetricity
In paper [1], the opinion, going back to Rosen [8], that the theory of gravitation should be bi-metric was substantiated. However, this assumption needs serious justification. The book [11] is devoted to this, and below is a brief justification of the description of gravitation in Minkowski space.
In Einstein's classical theory, coordinates play a twofold and difficult-to-reconcile role. On the one hand, coordinates are only a way to parameterize events, that is, points in space-time. From this point of view, they are completely arbitrary. On the other hand, they play the role of gauge transformations.
Rosen [8] was the first to recognize the need for introducing Minkowski space into the theory. The possibility of considering Einstein's equations in flat space was also considered by some authors after the paper of Thirring [9].
The physical meaning of the bimetrism used in the present paper is based on ideas going back to Poincaré, who realized that there is a strange situation: in order to characterize the properties of the geometry of space, we must know the properties of measuring instruments, and in order to characterize the properties of instruments, we must know the geometric properties of space. In the modern interpretation, this can be summarized as follows: only the aggregate "space-time geometry + properties of measuring instruments" has physical, operational sense [11]. In this form, this fact has never been realized in physics.
However, a step in its implementation can be made if we notice that the reference frame used by the observer is the measuring tool necessary to establish the geometric properties of space-time. Therefore, it should be assumed that the following statement is true: only the combination of the space-time geometry and the properties of the reference frame used has physical meaning.
Of course, by reference frame we mean here not a coordinate system, but a physical device consisting of a reference body and a clock attached to it.
Remembering all the above, we now consider a classical field F in an inertial reference frame (IRF), where space-time, according to experience, is Minkowski space. The world lines of the particles of mass m moving under the action of the field F form the reference body of a non-inertial reference frame, which can be named the proper reference frame (PRF) of the field F.
If an observer in a PRF of the field F is at rest, his world line coincides with a world line of the reference body, which satisfies Du^α/dσ = 0.
(We mean that an arbitrary coordinate system is used.) The same should occur in the PRF used. That is, if the line element of space-time in the PRF is denoted by ds, the 4-velocity vector ζ^α = dx^α/ds of the point masses forming the reference body of the PRF should satisfy the equation Dζ^α/ds = 0 (10). For example, if the field F is electromagnetic, then the space-time in such reference frames is Finslerian [1,11]. And the space-time in the reference frames comoving with an isentropic ideal fluid is conformal to the Minkowski space. 1 We use notations and definitions following the Landau and Lifshitz book [16].
In the case of gravity, we proceed from Thirring's assumption that gravity is described by a tensor field ψ_αβ(x) and that the Lagrangian describing the motion of test particles has the form L = −mc[g_αβ(ψ)ẋ^α ẋ^β]^(1/2).
Then it is obvious that space-time in the PRF is Riemannian, with a line element of the form ds = [g_αβ(x) dx^α dx^β]^(1/2).
"Physics"
] |
Modelling atomic layer deposition overcoating formation on a porous heterogeneous catalyst
Atomic layer deposition (ALD) was used to deposit a protective overcoating (Al2O3) on an industrially relevant Co-based Fischer–Tropsch catalyst. A trimethylaluminium/water (TMA/H2O) ALD process was used to prepare ≈0.7–2.2 nm overcoatings on an incipient wetness impregnated Co–Pt/TiO2 catalyst. A diffusion–reaction differential equation model was used to predict precursor transport and the resulting deposited overcoating surface coverage inside a catalyst particle. The model was validated against transmission electron (TEM) and scanning electron (SEM) microscopy studies. The prepared model utilised catalyst physical properties and ALD process parameters to estimate the achieved overcoating thickness for 20 and 30 deposition cycles (1.36 and 2.04 nm, respectively). The TEM analysis supported these estimates, with 1.29 ± 0.16 and 2.15 ± 0.29 nm average layer thicknesses. In addition to layer thickness estimation, the model was used to predict overcoating penetration into the porous catalyst. The model estimated a penetration depth of ≈19 µm, and cross-sectional scanning electron microscopy supported the prediction with a deepest penetration of 15–18 µm. The model successfully estimated the deepest penetration; however, the microscopy study showed penetration depth fluctuation between 0–18 µm, with an average of 9.6 µm.
Introduction
Atomic layer deposition (ALD) presents an interesting path to modify heterogeneous catalysts with active or inert inorganic compounds [1]. ALD overcoating provides a versatile toolbox for catalyst active site modification and protection [3][4]. Several publications give examples of ALD overcoatings preventing catalyst deactivation through leaching and sintering [4][5][6], where catalysts are modified through repeated self-limiting reaction cycles to achieve desired overcoating thicknesses with atomic-level precision. Due to these properties, the ALD process is also relevant for very high aspect ratio materials and nanoporous structures, such as catalysts [7,8]. After the deposition, overcoating structures can be modified to specific needs with temperature [1,9,10] or reactive treatments [11]. Although ALD gives a wide variety of tools to modify different materials, precursor transport into porous substrates is no trivial question.
Many of the successful examples [9,10,12] present ALD overcoating deposition on a catalyst surface and overcoating formation around a specific active metal site. However, less information is available on precursor penetration into a porous heterogeneous catalyst. Therefore, it is important to understand the effect of the ALD process and precursor parameters on the achieved penetration depth. In the literature, this has been addressed with models to predict a single parameter, such as precursor sticking probability [13] or growth modes (growth-per-cycle behaviour) [14][15][16], as well as with experimental methods to select ALD process conditions (temperature, pressure and exposure time) [17]. These examples [22][23][24] present a foundation for modelling precursor transport and the exposure time required to achieve the desired precursor penetration through the porous substrate. However, the previous modelling studies have not been extensively connected to microscopical examination and the resulting catalytic performance. Our work presents a coupled diffusion–reaction differential equation model to estimate ALD overcoating penetration into a Co–Pt/TiO2 porous catalyst, similar to commercial
catalysts listed by Rytter and Holmen [25]. The model is used to estimate deposition thicknesses with a given set of catalyst structural parameters and precursor parameters. The presented model is compared against microscopy (TEM and SEM) measurement data.
Atomic layer deposition precursor diffusion model
To address atomic layer deposition precursor transport into the selected TiO2 catalyst structure, a diffusion–reaction partial differential equation model was prepared based on the work of Yanguas-Gil and Elam [7,18,22], Keuter et al. [23] and Ylilammi et al. [19]. In the presented model, the precursor was considered an ideal gas with homogeneous concentration over the catalyst surface. The porous substrate (catalyst) was characterised by mean porosity, mean tortuosity and mean pore size, resulting in the utilisation of a mean diffusion coefficient (the Knudsen diffusion coefficient, D_ki). As the precursor diffuses through the porous material, the precursor density decreases because of interaction with surface reactive sites, following second-order reaction kinetics. A practical simplification was to assume that the reactive sites are uniformly distributed on the porous substrate, each having equal reaction probability with a precursor molecule. After the surface reaction, the precursor molecule has no ability for desorption or surface migration.
The prepared model consists of two coupled diffusion-reaction differential equations [7, 23]:

$$\frac{\partial n_P(t,z)}{\partial t} = D_{ei}\,\frac{\partial^2 n_P(t,z)}{\partial z^2} - s\,\frac{1}{4}\,v_{th}\,\beta_0\; n_P(t,z)\,Y(t,z) \quad (1)$$

Eqn (1) gives the volumetric precursor density (n_P) as a function of time (t) and depth (z) inside the porous catalyst particle. Because the extent of the surface reaction depends on both the precursor density and the available reaction sites, the consumed precursor density is related to the ratio (s) of surface area (A_O) to pore volume (V_P). The sticking probability β₀ of the precursor on the coated surface is a simplified estimate based on the Langmuir equation [4, 9, 24]. The second coupled differential equation (eqn (2)) considers the fraction of available reaction sites Y(t, z),

$$\frac{\partial Y(t,z)}{\partial t} = -\,s_0\,\frac{1}{4}\,v_{th}\,\beta_0\; n_P(t,z)\,Y(t,z) \quad (2)$$

where s₀ is the average surface area of adsorption sites, which also accounts for the shielding effect of ligands (steric hindrance) on the precursor molecule surface coverage [8]. The degree of surface coverage is determined as 1 − Y(t, z). The gas-phase diffusion coefficient in porous structures can be divided into two parts, i.e., molecular diffusion and Knudsen diffusion (eqn (3)),
where the gas-phase diffusion coefficient is given by D_i, the molecular diffusion coefficient by D_mi and the Knudsen diffusion coefficient by D_ki:

$$\frac{1}{D_i} = \frac{1}{D_{mi}} + \frac{1}{D_{ki}} \quad (3)$$

Due to the catalyst pore dimensions and the TMA partial pressure (1.3 kPa at 293 K), the transport of precursor molecules is dictated by Knudsen diffusion and, therefore, the D_mi⁻¹ term in eqn (3) can be neglected: the estimated TMA mean free path (~18 μm) was significantly larger than the catalyst average pore radius of 28 nm. In the case of porous materials, it should also be considered that the whole volume of a catalyst particle is not available for diffusion. To take this into account, the voidage of a porous particle is expressed with the porosity value ε, and the tortuosity (labyrinth factor) is expressed by the value τ. The Knudsen diffusion coefficient considers molecular movement and interaction with pore walls in a perfectly straight, cylindrical structure. Therefore, the determination of the effective diffusion coefficient is expanded to eqn (5),

$$D_{ei} = \frac{\varepsilon}{\tau}\, D_{ki} \quad (5)$$

where D_ki is multiplied by a simplification factor for the tortuous, discontinuous and complex pore structure. The effective diffusion coefficient (D_ei) for the studied porous catalytic material is then based on Knudsen diffusion and given as

$$D_{ei} = \frac{\varepsilon}{\tau}\,\frac{2r}{3}\sqrt{\frac{8 k_B T_P}{\pi m_P}} \quad (6)$$

Values of the parameters in eqn (6), i.e., the mean pore radius (r), porosity (ε) and tortuosity (τ), are presented in Table 1. In addition to the catalyst structural parameters, Table 2 gives the trimethylaluminium (TMA) precursor parameters required for the model. Fig. 1 presents a schematic of the model domains, and eqns (7)-(10) give the initial and boundary conditions applied to solve the partial differential equations (eqns (1) and (2)). The table presents precursor parameters only for TMA, neglecting the second half-reaction compound (H2O); this is justified by the smaller mass and molecular dimensions of water, which enable easier and faster diffusion into the porous material. The zero-flux condition at the particle centre reads

$$\left.\frac{\partial n_P(t,z)}{\partial z}\right|_{z = 40\,\mu\mathrm{m}} = 0, \quad \forall\, t \ge 0 \quad (10)$$
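As a rough cross-check of eqn (6), the following Python sketch evaluates the effective diffusion coefficient, using the molar-mass form of the mean thermal speed (8RT/πM, equivalent to 8k_BT_P/πm_P). The tortuosity is the value quoted in the text; the porosity is one of the values swept later in Fig. 15 rather than the (unreproduced) Table 1 entry, so the result is only expected to land near the quoted D_ei ≈ 4.6 × 10⁻⁸ m² s⁻¹.

```python
import numpy as np

R     = 8.314      # gas constant (J mol^-1 K^-1)
T_P   = 423.15     # deposition temperature, 150 degC (K)
M_TMA = 0.07209    # TMA molar mass (kg mol^-1)
r     = 28e-9      # mean pore radius used by the model (m)
tau   = 32.1       # catalyst tortuosity (value quoted in the text)
eps   = 0.2        # catalyst porosity (one of the values swept in Fig. 15)

v_th = np.sqrt(8.0 * R * T_P / (np.pi * M_TMA))  # mean thermal speed (m/s)
D_ki = (2.0 / 3.0) * r * v_th                    # Knudsen coefficient
D_ei = (eps / tau) * D_ki                        # eqn (6)
print(f"v_th = {v_th:.0f} m/s, D_ki = {D_ki:.2e} m^2/s, D_ei = {D_ei:.2e} m^2/s")
```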
Solution of partial differential equation system
The partial differential equation system was solved with Matlab. The second derivative with respect to the spatial coordinate z (depth in the porous catalyst particle) was discretised using the finite difference method, applying a Matlab modification of the dss044 algorithm developed by Schiesser [32]. In this algorithm, the second derivative is discretised with five-point central difference formulas based on a fourth-order approximation.
The formed system of ordinary differential equations (ODEs) was solved using the Matlab algorithm ode15s, suitable for stiff ODE systems.
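For readers without Matlab, a minimal method-of-lines sketch of eqns (1) and (2) can be written in Python, with scipy's stiff BDF integrator standing in for ode15s and a simple three-point second difference standing in for the five-point dss044 stencil. The precursor density is normalised by its external value, and the parameter values flagged as assumptions below are illustrative placeholders, not the paper's Table 1/Table 2 values.

```python
import numpy as np
from scipy.integrate import solve_ivp

D_e   = 4.6e-8   # effective diffusion coefficient (m^2 s^-1)
v_th  = 352.0    # mean thermal speed of TMA at 150 degC (m s^-1)
beta0 = 1e-3     # assumed sticking probability
s     = 1.0e8    # A_O/V_P, roughly S_BET/V_pore = 38.3 m^2 g^-1 / 0.37 mL g^-1 (m^-1)
s0    = 2.5e-19  # assumed area per adsorption site, ~4 sites per nm^2 (m^2)
n0    = 3.2e23   # ideal-gas TMA density at 1.3 kPa and 293 K (m^-3)

L, N = 40e-6, 200                    # 40 um half-particle domain, grid points
z  = np.linspace(0.0, L, N)
dz = z[1] - z[0]

def rhs(t, u):
    n, Y = u[:N].copy(), u[N:]       # n is n_P/n0; Y is the unreacted-site fraction
    n[0] = 1.0                        # fixed precursor density at the outer surface
    d2n = np.zeros(N)
    d2n[1:-1] = (n[2:] - 2.0 * n[1:-1] + n[:-2]) / dz**2
    d2n[-1]   = 2.0 * (n[-2] - n[-1]) / dz**2     # zero flux at the centre, eqn (10)
    flux = 0.25 * v_th * beta0 * n * Y            # adsorption term shared by (1), (2)
    dn = D_e * d2n - s * flux                     # eqn (1), divided through by n0
    dY = -s0 * n0 * flux                          # eqn (2)
    dn[0] = 0.0                                   # keep the clamped surface node fixed
    return np.concatenate([dn, dY])

u0 = np.concatenate([np.zeros(N), np.ones(N)])
u0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 0.7), u0, method="BDF", rtol=1e-6, atol=1e-9)

Y_end = sol.y[N:, -1]
front_um = z[np.argmax(Y_end > 0.5)] * 1e6   # depth where half of the sites remain
print(f"adsorption front after a 0.7 s pulse: ~{front_um:.1f} um")
```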
Catalyst preparation
Stepwise co-impregnation was used to prepare the Co-Pt/TiO2 catalyst. In the first step, a TiO2 support (ACCU® SPHERE 0.4 mm ST32244, S_BET 52 m² g⁻¹, V_pore 0.37 mL g⁻¹ and d_pore 28.9 nm) was dried at 100 °C under vacuum for 1 h.
The support was then co-impregnated twice with an aqueous solution of cobalt nitrate (Co(NO3)2·6H2O) and platinum(IV) nitrate (Pt(NO3)4). Between impregnation steps, the catalyst was dried in a rotary vacuum evaporator and calcined at 300 °C for 4 h. The resulting catalyst contained 21.0 wt% cobalt and 0.2 wt% platinum. The ALD (trimethylaluminium (TMA)/water) process was carried out using Picosun R-200 ALD equipment. Nitrogen (purity 99.999%) was used as carrier gas, and TMA deposition cycles (10, 20 and 30) were performed at 150 °C (TMA purity 99.999%, Sigma-Aldrich) with stop-flow deposition and purge cycles of 10 s/80 s/10 s/80 s. The stop-flow cycle had a 0.7 s precursor pulse and a 9.3 s equilibration time. Deionised water was used as the second half-reaction reagent (with a stop-flow equilibration time). The TMA and water were evaporated at room temperature. After the ALD process, the resulting catalyst samples required a heat treatment to re-open the catalyst active sites beneath the alumina overcoating [10, 33]. The heat treatment was conducted in a tubular reactor prior to the FT reaction; for catalyst characterisation experiments, the heat treatment was performed with a Micromeritics 3Flex 3500 instrument. The heat treatment was done in an N2 flow (40 N mL min⁻¹ g_cat⁻¹).
Catalyst characterisation
2.4.1 Nitrogen adsorption and desorption. The N2 physisorption experiments were carried out with a Micromeritics 3Flex 3500 instrument. A catalyst sample (~300 mg) was placed in a VacPrep degassing station and kept at 150 °C for 12 h under vacuum (10⁻² mbar). After degassing, the tube was mounted on the measuring instrument. The catalyst surface area was estimated using the Brunauer-Emmett-Teller (BET) equation [34], and the Barrett-Joyner-Halenda (BJH) method [35] was used for total pore volume and average pore diameter determination. The average pore diameter was evaluated from the nitrogen desorption branch.
2.4.2 Transmission electron microscopy (TEM).
The transmission electron microscopy studies were performed with a Jeol JEM 2800 analytical HR-TEM at an accelerating voltage of 200 kV. The samples were prepared by crushing the catalyst particles with a mortar and pestle. The powder was redispersed in acetone and sonicated to achieve a suitable dispersion. Of this suspension, 10 μL was drop-cast on an EMR holey carbon 400 square mesh Cu grid and air dried. Prior to the TEM investigation, the grids were kept under rotary vacuum conditions.
2.4.3 Scanning electron microscopy-energy-dispersive X-ray spectrometry (SEM-EDS). Scanning electron microscopy (SEM) and energy-dispersive X-ray spectrometry (EDS) examinations were conducted with a Zeiss Crossbeam scanning electron microscope equipped with an EDAX energy-dispersive X-ray spectrometer. Sample casts were prepared by mixing epoxy resin (~23.3 g, EpoFix, Struers) with ~10 mg of carbon powder (VULCAN XC72R GP-3875, CABOT), stirred by hand to achieve a homogeneous mixture. Then, 2.8 g of hardener (EpoFix hardener, Struers) was added to the cast mixture. The mixture was poured into a mould after inserting the catalyst sample particles. The resin was cured for 24 hours, and the cured cast was ground to expose cross-sections of the catalyst particles. Before the SEM experiments, the cured cast was sputtered with Pt to induce a signal from the sample. The electron acceleration voltage was 15 kV.
Results and discussion
3.1 Overcoating penetration into the porous catalyst structure

Fig. 2 presents an SEM image of overcoated catalyst sample particles. The catalyst support manufacturer (Saint-Gobain Norpro, ACCU® SPHERE ST32244) reported a measured particle size distribution of 396 ± 23 μm. To address the ALD overcoating penetration experimentally, catalyst particles (similar to Fig. 2) were cast in resin, sanded and polished to a hemispherical shape. Fig. 3 gives the SEM-EDS line scan spectra measured from the outside of the catalyst particle to the middle of the particle (~200 μm). The alumina overcoating had a clear response at the surface of the catalyst particle, ranging from 0-18 μm. In addition to the ALD overcoating, a clear boundary was observed for Co at ~75 μm. At a similar depth, the Ti response increased as the relative atomic abundance shifted from Co to Ti.
A line scan (Fig. 3) presents information only from the selected cross-cut. Therefore, to address the uniformity of the ALD overcoating penetration, Fig. 4 presents SEM-EDS elemental mapping images for two catalyst particles. Measured from these elemental mapping images (with ImageJ version 1.53k), the overcoating depth averaged 9.5 and 9.6 μm for Fig. 4a and 4c, respectively, while the deepest penetration was between 15-18 μm. The cobalt penetration ranged between 80-140 μm. The cobalt missing from the middle of the particles was an indication of successful grinding close to the assumed hemispherical shape. Interestingly, both the cobalt elemental mapping and the line scan show varying cobalt loading at distinct locations of the catalyst particle. This unevenness might result from varying support densities, the support containing both anatase (80%) and rutile (20%) TiO2 crystallite phases, or from non-uniform tortuosity and porosity throughout the catalyst particle. With the precursor diffusion-reaction model, the expected overcoating penetration depth was 19 μm, corresponding well with the measured maximum penetration depth of 15.2 μm in Fig. 4.
Fig. 5 presents the diffusion-reaction model estimate for the degree of surface coverage (Y[t, z]) as a function of position inside the porous particle. With the catalyst structural parameters (Table 1) and the deposition process and precursor parameters (Table 2) used, the estimated degree of surface coverage remains constant until ~19.1 μm, after which the precursor coverage depletes rapidly (Y = 0 at 19.9 μm). In addition to the model prediction, Fig. 5 presents the measured SEM-EDS line scan data. When the line scan is measured from the deepest observed penetration depth, the penetration profile corresponds closely to the model prediction. The Gaussian-like distribution in the SEM-EDS measurement data resulted from X-ray scattering at the measured surface. Due to the scattering effect, the Al signal slowly increased until 4-9 μm and started to decrease after 6-10 μm. The deepest penetration depth was similar to the SEM-EDS elemental mapping, at 15-18 μm. The diffusion-reaction model therefore slightly overestimates the penetration depth. The overestimation might result from several factors (discussed in detail in the section "Diffusion-reaction model parameter sensitivity"). However, the most relevant misinterpretation might be related to the precursor density used outside the porous particle. The precursor density (m⁻³) was calculated from the TMA vapour pressure (1.3 kPa at 293 K), thus giving the ideal maximum precursor density outside the catalyst particle.
Overcoating thickness and conformality
The previous section discussed the overcoating penetration depth into the catalyst particle. In addition to the penetration profile estimation, the presented diffusion-reaction model was utilised to estimate the overcoating thickness on the catalyst surface, and the model predictions were compared to experimental measurements. Fig. 6 gives an HR-TEM image of a TiO2 particle with a 30-cycle amorphous overcoating. The overcoating in Fig. 6 presents a rather uniform thickness of 2.93 ± 0.21 nm (0.98 Å per cycle). However, when several microscopy images from separate TiO2 particles are considered, the average overcoating thickness was 2.15 ± 0.29 nm (0.72 Å per cycle). In the literature, a growth-per-cycle of 0.98 Å per cycle for the TMA/H2O process has been reported for lateral high-aspect-ratio (LHAR) test structures at a deposition temperature of 300 °C [20]. Similar values have been reported by Ott et al. [36] and Puurunen [16], with the GPC decreasing linearly with deposition temperature from 1.2 Å per cycle (180 °C) to 0.9 Å per cycle (300 °C). The deposition temperature in our experiment was 150 °C; thus, a lower GPC was achieved compared to the silica-based test structures.
To present the overcoating thickness variation, Fig. 7 gives the thickness measurement results as a histogram with a Gaussian distribution. Interestingly, a rather wide distribution of overcoating thicknesses was measured. The deviation of the 30c sample was clearly increased compared to the 20c sample, with the 30c overcoating distribution ranging from 1.25 to 3.75 nm. Table 3 summarises the measured thicknesses for the 20- and 30-cycle ALD overcoatings with the experimental error and sample size. The same table also presents the model-predicted thickness values for the corresponding overcoated catalysts. The GPC values in Table 3 are in contradiction with these results, as 20 deposition cycles resulted in a slightly lower GPC compared to 30 cycles. However, when the experimental error is considered, there is no significant difference between the GPCs; therefore, drawing conclusions on different growth regions was not possible with the given results. To determine the overcoating thickness with the diffusion-reaction model, an average of the measured GPCs was used (GPC = 0.681 Å per cycle). If the presented model were used to estimate overcoating thicknesses for a higher number of deposition cycles, the 30c sample GPC should be used in the calculation.
In addition to the diffusion-reaction modelling and the characterisation work, the overcoated samples were studied in the Fischer-Tropsch reaction. These results are reported in the supporting information, where the 20- and 30-cycle ALD overcoated samples present a decreased rate of deactivation. Although the deactivation rate could be decreased (see Fig. S1, ESI†), this benefit comes at the price of promoted methanation activity and a decreasing chain-propagation α-value (see Table S2 and Fig. S2, ESI†). The chain-propagation α-value decreased linearly with overcoating thickness, from 0.917 (non-overcoated catalyst) to 0.908 (30 deposition cycles).
Diffusion-reaction model parameter sensitivity
Parameter sensitivity analysis was used to determine the effect of each variable on the achieved penetration depth. The presented diffusion-reaction model contains two sets of parameters: catalyst structural parameters (Table 1) and ALD precursor parameters (Table 2). Of these, the most significant effects were related to the catalyst porosity (ε), catalyst tortuosity (τ), deposition time (t), precursor density outside the porous material (n_P), maximum density of particles adhering to the surface (s_P) and deposition temperature (T_P).
3.3.1 Sensitivity related to deposition process and precursor parameters. Fig. 8 presents the achieved penetration depth for varying exposure time (t, seconds). In Fig. 8a, the slope at the adsorption front stays constant, and the exposure time mainly affects the penetration depth. This is as expected when β₀, n_P and s_P remain constant. Although the deposition time is an easy parameter to modify, the duration of the full ALD process increases rapidly (see Fig. 8b) if no other parameters are changed to achieve deeper penetration into the catalyst particle.
In addition to the deposition cycle time (single half-reaction time), the precursor density outside the porous material (n_P) influences the penetration depth. Fig. 9 presents the effect of varying precursor density (precursor pressure) introduced into the ALD chamber. With a low precursor partial pressure, the precursor molecules react with a given sticking probability (β₀) on the available sites (s_P) and deplete faster than at higher partial pressures.
The deposition temperature affects both the penetration depth and the achieved growth-per-cycle (GPC) in the TMA/H2O ALD process. Deeper penetration can be expected at an increased deposition temperature due to enhanced diffusion (T_P in eqn (6)) and decreased surface OH group availability [28, 38]. With fewer OH groups available for TMA adsorption, the precursor can travel deeper into the catalyst structure (see Fig. 10). In addition to the penetration depth, the achieved layer thickness changes with the OH group availability. Fig. 11 presents thickness profiles given by eqn (11), in which the deposition temperature affects both the degree of surface coverage and the GPC. Increasing the deposition temperature decreases the achieved layer thickness through the decreased GPC.
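Eqn (11) itself is not reproduced above; a plausible reading consistent with the description, in which the local thickness is the product of cycle count, temperature-dependent GPC and local degree of surface coverage, would be the following hypothetical helper (name and functional form are assumptions, not the paper's notation):

```python
def layer_thickness_nm(n_cycles, gpc_nm, coverage):
    # hypothetical form of eqn (11): d(z) = cycles x GPC(T) x (1 - Y(t, z))
    return n_cycles * gpc_nm * coverage

# a fully covered surface with the TEM-derived GPC of 0.0681 nm per cycle:
print(layer_thickness_nm(30, 0.0681, 1.0))   # ~2.04 nm, cf. Table 3
```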
The maximum density of particles adhering to the surface (s_P) is dictated by the availability of reactive sites on the catalyst surface and by the precursor dimensions (see Fig. 12). Decreasing the precursor size allows increased surface saturation and less steric hindrance from neighbouring molecules; altering the precursor structure would affect s_P through a different ligand arrangement and bonding configuration [8]. Similar to our work, Keuter et al. [23] present a study with a tetrakis(ethylmethylamino)zirconium (TEMAZ) precursor on a ZrO2 support, where the small number of surface adsorption sites and the relatively large TEMAZ molecular dimensions restrict film growth through steric hindrance. In their study, the porous material surface has open reactive sites, but most of these sites are blocked by already-adsorbed precursor molecules. With the TMA precursor, the molecular dimensions are smaller, and fewer restrictions are present for adsorption reactions. The catalyst porosity (ε) and tortuosity (τ) enter eqn (6); dividing the porosity by the tortuosity gives a simplified average of the catalyst structure. Typical high-range tortuosity values are between 6-10 [39, 40]; the catalyst support used in our experiments therefore had a remarkably high tortuosity value (τ = 32.1). Fig. 13 presents the effect of varying the tortuosity on the achieved penetration depth. In addition to the catalyst tortuosity, the porosity influences the achieved penetration depth; Fig. 14 presents the effect of varying porosity values. Fig. 15 presents the effect of porosity and tortuosity on the effective diffusion coefficient (D_ei). At high tortuosity values, the effect of porosity on the resulting D_ei decreases. Although D_ei is not greatly affected at high tortuosity values, the slight change in D_ei (4.56 × 10⁻⁸ m² s⁻¹ and 9.93 × 10⁻⁸ m² s⁻¹ at ε = 0.2 and ε = 0.5, respectively, with fixed τ = 32.1) has a rather high impact on the achieved penetration depth in Fig. 14.
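The Fig. 15 trend follows directly from eqn (6): because D_ei scales as ε/τ, the absolute spread in D_ei between low and high porosity shrinks as the tortuosity grows. A short sketch (the D_ki value is taken from the estimate given earlier):

```python
D_ki = 6.6e-6                       # Knudsen coefficient estimate (m^2 s^-1)
for tau in (6.0, 10.0, 32.1):       # literature-typical values and the measured one
    lo, hi = 0.2 / tau * D_ki, 0.5 / tau * D_ki
    print(f"tau = {tau:5.1f}: D_ei spans {lo:.2e} to {hi:.2e} m^2/s "
          f"(spread {hi - lo:.2e})")
```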
Catalyst characterisation
The catalyst nitrogen sorption measurement results are presented in Table 4. The BET surface area of the non-overcoated Catalyst A is 38.3 m² g_cat⁻¹, whereas the ALD overcoated catalyst samples have surface areas between 41.3-41.6 m² g_cat⁻¹. The increased surface area is related to the overcoating temperature treatment, which results in a porous Al2O3 structure. Pore volumes remain nearly unchanged for the ALD overcoated samples, which is expected: with overcoating thicknesses between 0.68-2.04 nm, the resulting volume change cannot be resolved with the selected analysis method. However, the pore size measurement did provide comparison data between the catalyst samples. Compared to the non-overcoated Catalyst A, the pore size decreased by 2.6-3.1 nm for the overcoated samples. Assuming a cylindrical pore, the pore size decrease is due to overcoating formation on the inner surface of the cylindrical hole. Therefore, dividing the pore size decrease by two gives an estimate of the layer thickness for the overcoated samples, in a similar range as the TEM and modelling results presented in Table 3.
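Under the cylindrical-pore assumption the layer-thickness estimate is a one-liner: the coating grows on the pore wall, so the diameter shrinks by twice the layer thickness.

```python
for shrink_nm in (2.6, 3.1):   # measured pore-size decrease range (Table 4)
    print(f"pore-size decrease {shrink_nm} nm -> layer thickness ~ {shrink_nm / 2:.2f} nm")
```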
The diffusion-reaction model utilised the Knudsen diffusion coefficient (D_ki) to estimate precursor transport inside the catalyst particle. Pore size distribution analysis was relevant to address whether the Knudsen diffusion coefficient should be position dependent: if the layer-by-layer growth of the overcoating has a significant effect on the resulting pore size, the Knudsen diffusion coefficient should take this growth into account [23]. To determine which form of D_ki is suitable for our catalyst and target overcoating thicknesses, the pore size distribution from the BJH adsorption measurement provided useful information, with the average pore size ranging between 24.6-27.7 nm. From these results it was concluded that the targeted maximum deposition thickness (2-3 nm) had no significant effect on the modelled results through layer-by-layer overcoating formation. Although a fraction of small pores (<6 nm) was present, the filling of these pores and the coverage of cobalt particles <6 nm had no significant effect on the Fischer-Tropsch reaction; in the FT reaction, the Co particle size should exceed 10 nm to achieve a high turnover frequency [41, 42]. For these reasons, the Knudsen diffusion coefficient was calculated with eqn (6), without considering layer-by-layer growth.
Conclusions
The Co-Pt/TiO2 Fischer-Tropsch porous catalyst was overcoated with an ALD (TMA/H2O) process. A diffusion-reaction differential equation model was prepared to address the overcoating thickness and its penetration into the porous catalyst sample. The model results were compared against SEM and TEM microscopy measurements, indicating a wide penetration distribution of 0-18 μm, with an average penetration depth of 9.6 μm (corresponding to roughly 5-10% of full particle penetration). This variation differs significantly from ideal test surfaces and non-porous materials, where ALD overcoating can produce highly conformal and uniform inorganic layers. In addition to the penetration depth distribution, the overcoating thickness had an intrinsic variation within the catalyst particle, 1.29 ± 0.16 nm and 2.15 ± 0.29 nm for the 20- and 30-cycle ALD overcoated samples, respectively. The presented model was also used to estimate these thicknesses, and the results corresponded well, at 1.36 nm and 2.04 nm for the 20- and 30-cycle samples, respectively. The presented model can be applied with little effort to other catalyst support structures and different precursor compounds, and the diffusion-reaction model can be utilised to plan ALD processes on given catalyst structures.
Fig. 1
Fig. 1 Precursor depletion profile in the depth and time domains of the model (left) with the set of initial and boundary conditions. Surface coverage profile in the depth and time domains of the model (middle) with the initial condition. A schematic illustration of the used model domains (right).
Fig. 2
Fig. 2 Example SEM image of catalyst particles with measured diameters.
Fig. 3
Fig. 3 SEM-EDS line scan from a hemispherical catalyst particle with a 30-cycle ALD overcoating. Atomic abundances for Al, Ti and Co from the surface of the particle to the particle centre at 202 μm.
Fig. 4
Fig. 4 SEM-EDS elemental mapping conducted on hemispherical catalyst particles with a 30-cycle ALD overcoating. Elemental mapping channels for Al (images a and c) and for cobalt (images b and d). In (a), the Al response at the middle of the particle resulted from the grinding process, which left a small quantity of Al in this unexpected area.
Fig. 5
Fig. 5 Degree of surface coverage as a function of position inside the porous particle. Solid black line: model prediction, overlaid with two separate SEM-EDS line scan measurements. The first SEM-EDS line scan measurement is shown with round markers and a repeat measurement from a different location with square markers.
Fig. 8
Fig. 8 Parameter sensitivity analysis: (a) degree of surface coverage profile variation for three exposure time (t) values; (b) achieved penetration depth (deepest penetration depth at Y = 1) with varying deposition time (t). The model parameter used is marked with a surrounding circle.
Fig. 7
Fig. 7 Histogram of TEM image analysis for the overcoating thickness. Frequencies of individual measurements are counted in 0.5 nm intervals.
Fig. 9
Fig. 9 Parameter sensitivity analysis: (a) degree of surface coverage profile variation for three n_P values; (b) achieved penetration depth (deepest penetration depth at Y = 1) with varying n_P. The model parameter used is marked with a surrounding circle. The corresponding TMA partial pressures were 100 Pa (penetration depth 5 μm), 460, 800, 1100, 1300 and 1400 Pa.
Fig. 10
Fig. 10 Main figure: degree of surface coverage profile variation at deposition temperatures of 150, 200 and 250 °C. Inset: achieved penetration depth with varying deposition temperature.
Fig. 11
Fig. 11 Estimated layer thickness for 10, 20 and 30 deposition cycles. Layer thickness determined with eqn (11) using a growth-per-cycle (GPC) of 0.0681 nm per cycle from the TEM microscopy studies.
Fig. 12
Fig. 12 Parameter sensitivity analysis: (a) degree of surface coverage profile variation for three s_P values (4, 5 and 6 atoms per nm²); (b) achieved penetration depth (deepest penetration depth at Y = 1) with varying s_P. The model parameter used is marked with a surrounding circle.
Fig. 13
Fig. 13 Parameter sensitivity analysis: (a) degree of surface coverage profile variation for three catalyst tortuosity (τ) values; (b) achieved penetration depth (deepest penetration depth at Y = 1) with varying catalyst tortuosity (τ). The model parameter used is marked with a surrounding circle.
Fig. 14
Fig. 14 Parameter sensitivity analysis: (a) degree of surface coverage profile variation for three catalyst porosity (ε) values; (b) achieved penetration depth (deepest penetration depth at Y = 1) with varying catalyst porosity (ε). The model parameter used is marked with a surrounding circle.
Fig. 15
Fig. 15 The effect of porosity and tortuosity on the effective diffusion coefficient (D_ei). Knudsen diffusion determined with fixed values for the mean pore radius (r), temperature (T_P) and precursor mass (m_P).
Table 1
Catalyst structural parameters
Table 3
Measured (HR-TEM) and predicted overcoating thicknesses. The diffusion-reaction model uses an average GPC (0.685 Å per cycle) from the TEM measurements to give the overcoating thicknesses.
Table 4
Nitrogen adsorption/desorption measurement results for the BET surface area, pore volume and pore size. Overcoated samples were measured after the temperature treatment. Experimental error (±2σ) for the surface area was ±1 m² g⁻¹, for the pore volume ±0.01 mL g⁻¹ and for the pore size ±0.1 nm.
"Chemistry",
"Engineering",
"Materials Science"
] |
Study of Temporal Thermal Response of Microfiber Bragg Grating
A fiber Bragg grating has been successfully fabricated in a silica microfiber by femtosecond laser point-by-point inscription. The temporal thermal response of the fabricated silica microfiber Bragg grating has been measured by the CO2 laser thermal excitation method, and the result shows that the time constant of the microfiber Bragg grating is reduced by an order of magnitude compared with the traditional single-mode fiber Bragg grating; the measured time constant is ~21 ms.
Introduction
Temperature is one of the principal parameters in thermodynamics, and temperature measurement plays an important role in industry, medicine, scientific research, and people's daily life [1][2][3].
Over the past few decades, different kinds of thermometers, such as liquid-in-glass thermometers, thermocouples, and resistance thermometers, have been widely used. In extreme environments, such as corrosive, strongly electromagnetic, flammable, or explosive ones, most traditional thermometers cannot meet practical needs. Since silica fibers offer high-temperature stability, corrosion resistance, and immunity to electromagnetic interference, silica-fiber-based temperature sensors have attracted more and more attention recently [4][5][6][7][8]. The temporal thermal response of the sensing element is especially important for rapidly changing and dynamic systems such as internal combustion engines. Liao et al. [9] systematically studied the temperature response of femtosecond laser (FS) fabricated fiber Bragg gratings (FBGs) and found that the temporal thermal response depends on the cross-sectional dimension of the fiber: the response time has a linear relationship with the fiber diameter.
Experimental details
In this work, the silica microfiber is produced by the flame brushing method [10,11] in the fiber tapering system built in our lab. The single-mode fiber (SMF) is fixed between two co-axial translation stages with fiber holders. A hydrogen flame with a diameter of ~10 mm is mounted above the fiber, midway between the two translation stages. The temperature of the hydrogen flame is controlled by adjusting the flow rate of H2. The two translation stages, moving in the same direction but at different reciprocating speeds, are used to taper the fiber. The initial distance between the two stages is set to 15 mm, the speeds of the two translation stages are set to 4.5 mm/s and 5 mm/s, respectively, the duration of a single movement is set to 3 s, and the number of reciprocations is set to 25. Microfibers with a diameter of ~10 μm and a uniform length of ~40 mm have been prepared for FBG inscription [12]. Figure 1 shows the schematic diagram of the FBG fabricated in the silica microfiber by the FS point-by-point inscription method. The FS laser (Spectra-Physics, Solstice), producing 120 fs pulses at 800 nm with a repetition rate of 1 kHz, is focused into the microfiber by an oil-immersion objective with a numerical aperture (NA) of 1.25. Figure 2 shows the schematic diagram of the FS laser micromachining system. The microfiber is mounted on a 3-dimensional (3D) translation stage (NEWPORT, XMS50/XMS50/GTS30V) to control the translation speed along the X axis (the fiber axis) and to adjust the position of the microfiber along the Y and Z axes. The FS laser energy is attenuated by rotating a λ/2 plate followed by a polarizer [13,14]. A high-speed shutter is used to precisely control the laser fabrication time. A charge-coupled device (CCD) camera mounted on top is used to monitor the fabrication process.
Fig. 2 Schematic diagram of the FS laser micromachining system, where A is an aperture; W is a λ/2 plate; P is a polarizer; S is a shutter; M is a dichroic mirror; OSA is an optical spectrum analyzer.
To fabricate the μ-FBG, the microfiber is first placed at the focal point of the FS laser beam and then moved at a designated speed along the X axis, so that each laser pulse produces one grating segment in the microfiber. To avoid the cylindrical-lens effect of the fiber surface, paraffin oil, whose refractive index is similar to that of silica, is chosen as the immersion medium for the objective [15]. In the experiment, a μ-FBG with a grating period of 1.071 μm and a uniform length of ~4 mm has been fabricated in a microfiber with a diameter of 10 μm. The fabrication process takes ~4 s and causes no material damage to the microfiber. The 10 μm microfiber is a multimode fiber in the C-band. Figure 4 shows the reflection and transmission spectra of the 10 μm FBG, where more than one Bragg resonance can be observed; this can be explained by different-order reflection modes coupled from the core mode [16,17]. Not all the reflection peaks corresponding to each transmission dip can be observed in the reflection spectrum, because some high-order reflection modes are blocked by the adjacent SMF. Figure 5 shows the setup for measuring the temporal thermal response of the μ-FBG. A CO2 laser with a wavelength of 10.6 μm is used as the heat source, since the silica fiber exhibits high absorption at this wavelength. The CO2 laser beam is modulated by an optical shutter (THORLABS, SC10) operating as a chopper to warm or cool the μ-fiber. The laser beam passes through a convex lens to broaden the beam diameter so that the μ-FBG is irradiated uniformly. A tunable laser combined with a photodetector and an oscilloscope is used to record the dynamic change of the spectral signal. Figure 6(a) shows that when the μ-FBG is warmed by CO2 laser irradiation, the Bragg reflection peak shifts towards longer wavelength, and when the thermal excitation is stopped, the reflection peak quickly shifts back to its initial state. Recording this fast signal change with temperature requires a fast demodulation scheme. To this end, the wavelength of the tunable laser is set at the left edge of the full width at half maximum (FWHM) of the fundamental-mode reflection peak, which is measured to be ~0.95 nm. Whenever the μ-FBG spectrum is shifted to longer wavelength by CO2 laser warming, the tunable laser signal is filtered along the rising edge of the μ-FBG reflection peak, producing a change in the photodetector output power that is easily recorded on the oscilloscope, as shown in Fig. 6. The time constant is a feature of lumped system analysis (the lumped-capacity method) for thermal systems, which applies when objects cool or warm uniformly under convective cooling or warming. Theoretically, the time constant represents the time the response would take to decay to zero if it continued to decay at its initial rate; because the rate of decay progressively decreases, the response actually decays to 1/e ≈ 36.8% of its initial value within this time. In an increasing system, the time constant is the time for the system's step response to reach 1 − 1/e ≈ 63.2% of its final value. The heat transfer between the μ-FBG and the ambient at a given time is proportional to the temperature difference between them.
Hence, warming and cooling of the μ-FBG can be described by the lumped-system equation [18,19]:

$$\rho\, c_p\, \frac{\mathrm{d}T(t)}{\mathrm{d}t} = -\,h\,\frac{A_f}{V_f}\,\bigl[T(t)-T_\alpha\bigr] + q(t), \qquad q(t)\propto \Xi(t), \qquad \tau = \frac{\rho\, c_p\, V_f}{h\, A_f} \quad (1)$$

where T(t) is the μ-FBG temperature as a function of time; T_α is the ambient air temperature (295 K); A_f is the surface area of the fiber; V_f is the volume of the fiber; ρ is the density of the fiber (2200 kg m⁻³); c_p is the specific heat of the fiber (837 J kg⁻¹ K⁻¹); h is the convection coefficient; q(t) is the heat generation rate per unit volume; τ denotes the relaxation time; and Ξ(t) describes the periodic heating by the CO2 laser beam [20]. From (1), the time constants of the conventional FBG and the 10 μm FBG are calculated to be 250 ms and 20 ms, respectively, in good agreement with the experimental results.
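As a quick cross-check of the quoted time constants, the following Python sketch evaluates τ = ρc_pV_f/(hA_f) = ρc_p r/(2h) for a long cylinder. The convection coefficient h is not given in the text, so the value below is an assumption chosen so that a standard 125 μm fiber reproduces the reported 250 ms; with the same h, the 10 μm microfiber then comes out at 20 ms.

```python
rho, c_p = 2200.0, 837.0   # silica density (kg m^-3) and specific heat (J kg^-1 K^-1)
h = 230.0                   # assumed convection coefficient (W m^-2 K^-1)

def tau_cylinder(radius_m):
    # lumped-capacitance time constant of a long cylinder:
    # tau = rho * c_p * V_f / (h * A_f) = rho * c_p * r / (2 h)
    return rho * c_p * radius_m / (2.0 * h)

for label, r in (("standard SMF (125 um)", 62.5e-6), ("microfiber (10 um)", 5.0e-6)):
    print(f"{label}: tau = {tau_cylinder(r) * 1e3:.0f} ms")
```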
Conclusions
In conclusion, a μ-FBG has been fabricated by femtosecond laser point-by-point inscription. Due to the multimode operation of the μ-FBG, multiple Bragg resonances are observed in the transmission spectrum. A fast thermal excitation system was built to measure the thermal response, and the time constant of the μ-FBG with a diameter of 10 μm was measured to be 21 ms, an order of magnitude faster than that of conventional FBGs. Such a μ-FBG may find potential sensing applications in rapidly changing and dynamic systems.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"Physics"
] |
Endoplasmic Reticulum Sorting and Kinesin-1 Command the Targeting of Axonal GABAB Receptors
In neuronal cells the intracellular trafficking machinery controls the availability of neurotransmitter receptors at the plasma membrane, which is a critical determinant of synaptic strength. Metabotropic γ amino-butyric acid (GABA) type B receptors (GABABRs) are neurotransmitter receptors that modulate synaptic transmission by mediating the slow and prolonged responses to GABA. GABABRs are obligatory heteromers constituted by two subunits, GABABR1 and GABABR2. GABABR1a and GABABR1b are the most abundant subunit variants. GABABR1b is located in the somatodendritic domain whereas GABABR1a is additionally targeted to the axon. Sushi domains located at the N-terminus of GABABR1a constitute the only difference between both variants and are necessary and sufficient for axonal targeting. The precise targeting machinery and the organelles involved in sorting and transport have not been described. Here we demonstrate that GABABRs require the Golgi apparatus for plasma membrane delivery but that axonal sorting and targeting of GABABR1a operate in a pre-Golgi compartment. In the axon GABABR1a subunits are enriched in the endoplasmic reticulum (ER), and their dynamic behavior and colocalization with other secretory organelles like the ER-to-Golgi intermediate compartment (ERGIC) suggest that they employ a local secretory route. The transport of axonal GABABR1a is microtubule-dependent and kinesin-1, a molecular motor of the kinesin family, determines axonal localization. Considering that progression of GABABRs through the secretory pathway is regulated by an ER retention motif our data contribute to understand the role of the axonal ER in non-canonical sorting and targeting of neurotransmitter receptors.
Introduction
Polarized protein trafficking in the neuron is critical for synapse formation, synapse maintenance and the regulation of synaptic strength. In all eukaryotic cells the endomembrane trafficking system includes a forward biosynthetic route constituted by the endoplasmic reticulum (ER), the ER-Golgi intermediate compartment (ERGIC), the Golgi apparatus and post-Golgi vesicles, and a recycling-degradative route constituted by endosomes and lysosomes. The unique architecture and size of neurons does not necessarily imply that the structure/function relationship of these organelles and their contribution to the secretory process are different than in other cell types. However, their spatial arrangement and contribution to local processing may be specially adapted to the complexities of the neuronal morphology [1]. How the neuron orchestrates this highly compartmentalized trafficking is poorly understood. In particular, how the local distribution of secretory components in the neuron impinges on intracellular trafficking and availability of neurotransmitter receptors remains for the most part unexplored.
GABA is the main inhibitory neurotransmitter in the nervous system, and the metabotropic GABA B Rs are obligatory heteromers composed of two related subunits, GABA B R1 and GABA B R2 (for a comprehensive review of GABA B R structure, function, localization and pathological implications see [2]). Both belong to family C of G protein-coupled receptors and contain a large extracellular N-terminal domain, seven membrane-spanning domains and an intracellular C-terminal domain. GABA B Rs are expressed in neurons throughout the brain and spinal cord. They are mainly perisynaptic receptors located in gabaergic and glutamatergic presynaptic terminals and at postsynaptic sites. GABA B R1 binds agonists with high affinity, whereas GABA B R2 couples to Gαi, establishing a transactivation mechanism between the two subunits. At presynaptic terminals GABA B Rs inhibit voltage-gated Ca2+ channels, thereby inhibiting synaptic vesicle fusion and neurotransmitter release. At postsynaptic sites they activate inwardly rectifying K+ channels, hyperpolarizing the postsynaptic neuron. In addition, stimulation of GABA B Rs decreases the levels of cyclic AMP. GABA B Rs have been implicated in epilepsy, anxiety, stress, sleep disorders, nociception, depression, cognition and addiction mechanisms in drug abuse. The relevance of studying GABA B R availability is further supported by clinical observations that report the appearance of tolerance to GABA B R agonists, an inconvenient side effect of therapy.
GABA B R subunits are synthesized in the soma and glycosylated in the ER [3], [4]. The progression of GABA B Rs through the secretory pathway is regulated by an RXR-type sequence (RSRR) in the C-terminal domain of GABA B R1 that functions as an ER retention motif in the absence of GABA B R2 [5]. The ER retention motif is masked upon association to GABA B R2, and assembled GABA B Rs exit the ER as heteromers destined for the plasma membrane. Consistent with ER retention acting as a limiting step GABA B Rs are abundant within intracellular compartments, especially the ER [6].
GABA B R1a and GABA B R1b constitute the most abundant isoforms of GABA B R1. Heteromers containing GABA B R1a are axonal and somatodendritic, whereas those containing GABA B R1b are exclusively located in the somatodendritic domain [7]. GABA B R1a and GABA B R1b mediate their different functions only as a result of their specific axonal or somatodendritic localization [7]. The sushi domains located at the N-terminus of GABA B R1a are necessary and sufficient for axonal targeting, even in a GABA B R2 knock-out background [8].
However, the precise targeting machinery and the organelles involved in sorting and transport have not been described. Combining conventional optical microscopy and live-cell imaging using organelle reporters and trafficking blockers in cultured hippocampal neurons we describe a mechanism for GABA B R1a axonal localization based on pre-Golgi sorting and ER transport.
Animals
Adult pregnant female Sprague-Dawley rats were purchased from the Central Animal Facility at Universidad Católica de Chile and killed by asphyxia in a CO 2 chamber according to the Guide for Care and Use of Laboratory Animals (The National Academy of Sciences, 1996). GABA B R1-EGFP mice were kindly provided by Bernhard Bettler (University of Basel, Switzerland). They correspond to transgenic animals for the GABA B R1-EGFP BAC in a homozygous knockout background for GABA B R1 as described previously [9], [10].
Cell lines, neuronal cultures and transfection

COS-7 cells were maintained and transfected as described previously [3] using a Gene Pulser Xcell (BioRad). Primary hippocampal neurons were cultured from E18 rats or E18 GABA B R1-EGFP transgenic mice according to established procedures [11] and transfected by Ca2+ phosphate at 14-18 days in vitro (div) [12]. All transfected neurons, except those in Fig. S2, were analyzed 1 day post transfection (dpt). Transfected neurons for Fig. S2 were analyzed between 1-5 dpt.
Reagents and DNA plasmids
Nocodazole was purchased from Sigma (St. Louis, MO). MYC-GABA B R1, FLAG-GABA B R2, HA-GABA B R2, MYC-GABA B R1-AA-ASA and MYC-GABA B R1-ΔC in pRK5 have been described previously and contain epitope tags on the extracellular N-terminal domains [3], [13][14][15]. MYC-GABA B R1-AA-ASA contains point mutations that replace two leucine residues at a di-leucine motif and two arginine residues at the ER retention motif (RSRR) with alanines. MYC-GABA B R1-ΔC lacks the complete C-terminal domain. Both mutants escape the ER and traffic to the cell surface in the absence of GABA B R2. GABA B R1a-EGFP, GABA B R2-EGFP and GABA B R1a-monomeric red fluorescent protein have also been described previously and contain the fluorescent proteins attached to the intracellular C-terminal domain [4]. pDsRed-C1 (RFP), pEYFP-Golgi, pEYFP-ER and pDsRed2-ER (KDEL-RFP) were obtained from Clontech (Mountain View, CA). Kif5C-RFP-DN was kindly provided by S. Kindler and H.J. Kreienkamp (Institut für Humangenetik, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany) and corresponds to amino acids 678-955 of KIF5C (NM_001107730), also referred to as DN2 by Falley and collaborators [16]. ARF1-Q71I-HA was kindly provided by O. Jeyifous (University of Chicago, Chicago, IL), Rab11-GFP by F. Bronfman (Pontificia Universidad Católica de Chile), p58-YFP by J. Lippincott-Schwartz (National Institutes of Health, Bethesda, MD), and cytochrome b5-EGFP by C. Hetz (Universidad de Chile). All manipulations and the fidelity of the DNA constructs were verified by sequencing.
Antibodies

GABA B R1 antibodies (which recognize GABA B R1a and GABA B R1b) were raised against the intracellular C-terminal domain in rabbits and affinity purified. GABA B R1 antibodies specifically recognize MYC-GABA B R1a in transfected COS cells and detect the predicted doublet corresponding to GABA B R1a and GABA B R1b in crude brain membranes (Fig. S1). Microtubule-associated protein 2 (MAP2) antibodies were purchased from Chemicon (Temecula, CA). Piccolo antibodies were kindly provided by E.D. Gundelfinger and W.D. Altrock (Leibniz Institute for Neurobiology, Magdeburg, Germany). KDEL antibodies (directed against a 6-residue peptide (SEKDEL) of the rat Grp78 protein) and cis-Golgi matrix protein 130 (GM130) antibodies were purchased from StressGen (Ann Arbor, MI). MYC antibodies were purchased from Sigma (St. Louis, MO). Influenza A virus epitope (HA) antibodies were purchased from Roche (Indianapolis, IN). Anti-GFP antibodies (ab6556) were purchased from Abcam (Cambridge, UK). Secondary anti-mouse, anti-rabbit, anti-guinea pig or anti-chicken antibodies conjugated to Texas Red (TR), tetramethyl rhodamine isothiocyanate (TRITC), fluorescein isothiocyanate (FITC) or cyanine 3 (Cy3) were purchased from Jackson ImmunoResearch Laboratories (West Grove, PA).
Immunofluorescence, image capture, image processing and time-lapse microscopy

Immunofluorescence was performed as described previously under non-permeabilized or permeabilized conditions [4]. Depending on the type of experiment, axons were identified by the presence of a positive marker (Tau), the absence of a negative marker (MAP2), or by morphological criteria that included a longer projection, constant diameter and right-angle branching. Image-processing routines, including routines for segmentation, were developed in our laboratory based on Interactive Data Language (IDL) (ITT, Boulder, CO) [4]. Live-cell imaging was performed in a microscopy suite equilibrated at 23 °C. Images were obtained using an Olympus BX61WI upright microscope and an Olympus disk-scanning unit. Consecutive frames were acquired over a period of ~120 s. Kymographs were constructed from an axonal segment 20 μm from the soma using ImageJ, from three-pixel-wide axonal traces. The axon-to-dendrite (A:D) ratio of MYC-GABA B R1a was determined using ImageJ according to a procedure based on Biermann et al. (2010), with modifications [8]. Briefly, one-pixel-wide lines were traced along the initial 150 μm of axons and dendrites in images labeled with soluble RFP or Kif5C-RFP-DN. No measurements were carried out beyond 150 μm, to prevent artifacts due to axonal length differences between RFP- and Kif5C-RFP-DN-transfected neurons. Average pixel intensities of MYC-GABA B R1a were determined along the traced lines, the background was subtracted, and the data were used to determine the A:D ratio. Criteria for cell selection included an even distribution of RFP or Kif5C-RFP-DN in axons and dendrites, and cells expressing the constructs at very high levels were excluded from the analysis. 9-12 neurons from at least two independent culture preparations were analyzed for each condition. Images of axons from GABA B R1-EGFP mice were acquired using an Olympus FV-1000 confocal microscope. Anti-GFP antibodies were used to amplify the weak axonal signal of GABA B R1-EGFP expressed at physiological levels.
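A minimal Python sketch of this quantification step, assuming the traced axon and dendrites are supplied as lists of (row, column) pixel coordinates restricted to the initial 150 μm (the function names are ours, not those of the original IDL/ImageJ routines):

```python
import numpy as np

def trace_mean(image, trace_rc, background):
    # mean background-subtracted intensity along a one-pixel-wide trace
    vals = np.array([image[r, c] for r, c in trace_rc], dtype=float)
    return max(vals.mean() - background, 0.0)

def axon_dendrite_ratio(image, axon_trace, dendrite_traces, background):
    # A:D ratio: axonal mean intensity over the average dendritic mean intensity
    axon = trace_mean(image, axon_trace, background)
    dend = np.mean([trace_mean(image, t, background) for t in dendrite_traces])
    return axon / dend
```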
Total Internal Reflection Fluorescence (TIRF) microscopy
For TIRF microscopy, 14-18 div hippocampal neurons were transfected with GABA B R1a-EGFP and motility in axons was analyzed 1 dpt. TIRF was carried out on a custom-built TIRF microscope setup: two lasers (473 nm and 532 nm, both 30 mW, Viasho, USA) were used to excite the fluorophores (GFP and RFP). The lasers were expanded and coupled via a multi-beam splitter (z474/488/532/635rpc, Chroma, USA) off-axis into the oil-immersion objective (Nikon, SFluor 100x, NA 1.49) to obtain TIRF illumination. The emitted fluorescent light was split into the GFP and RFP signals using a dichroic mirror (525/50, Chroma, USA), then passed through bandpass filters (530/50 for GFP and 605/70 for TMR, both Chroma, USA) and finally directed via mirrors to separate areas on the chip of a frame-transfer CCD camera (Cascade:512B, Roper Scientific, USA). The CCD camera was controlled via WinSpec32 (Princeton Instruments, USA). The penetration depth of the TIRF field was 147 nm. Digital images were taken at a frame rate of 2 frames/s and were subsequently analyzed for velocity and direction using kymographs generated with a custom-written LabView (National Instruments, USA) routine. Kymographs were analyzed for velocity and direction by fitting lines to the segments of a trace judged by eye. Stalls were not taken into account.
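The kymograph construction and line-fit velocity estimate can be sketched as follows; this is a simplified Python stand-in for the custom LabView routine, and the 3-pixel-wide averaging assumes a near-horizontal axonal trace.

```python
import numpy as np

def kymograph(frames, trace_rc):
    # frames: (T, H, W) image stack; trace_rc: (N, 2) pixel coordinates along
    # the axon. Each output row is one time point; columns follow the trace.
    # Intensity is averaged over a 3-pixel-wide band across the axon.
    rows = []
    for frame in frames:
        rows.append([frame[max(r - 1, 0):r + 2, c].mean() for r, c in trace_rc])
    return np.asarray(rows)

def segment_velocity_nm_s(points, um_per_px, dt_s):
    # points: (frame_index, column_index) pairs selected along one kymograph
    # trace; a straight-line fit of position vs time gives the velocity, with
    # the slope sign encoding direction (e.g. positive = anterograde).
    t = np.array([p[0] for p in points], dtype=float) * dt_s
    x = np.array([p[1] for p in points], dtype=float) * um_per_px
    slope_um_s, _ = np.polyfit(t, x, 1)
    return slope_um_s * 1000.0

# e.g. segment_velocity_nm_s([(0, 10), (20, 30)], 0.1, 0.5) -> 200.0 nm/s
```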
Results
The delivery of GABA B Rs to the plasma membrane is Golgi-dependent but axonal targeting is not

First we carried out a control experiment to validate the use of overexpression of recombinant GABA B R subunits as a strategy to study receptor trafficking. Cultured hippocampal neurons were transfected with MYC-GABA B R1a and the distribution of the subunit at the plasma membrane or in intracellular compartments was evaluated by immunostaining 1-5 dpt. MYC-GABA B R1a was retained in intracellular compartments in the cell body and axons up to 5 dpt in the absence of recombinant GABA B R2 expression (Fig. S2). In contrast, GABA B R1a was readily detectable at the cell surface at 2 dpt upon co-transfection with GABA B R2 (Fig. S2, right column). These experiments indicate that the trafficking properties of the recombinant receptors mimic the situation of the native subunits, and that the trafficking of recombinant receptors is not affected by the endogenous subunits. More importantly, they demonstrate that our experiments using transfection of recombinant GABA B R1 subunits exclusively examine their intracellular population.
To determine whether GABA B Rs employ a Golgi-dependent intracellular trafficking route in neurons, primary cultures of hippocampal neurons were transfected with MYC-GABA B R1a and FLAG-GABA B R2 in the absence or presence of ARF1-Q71I-HA, a constitutively active ARF1 mutant that prevents export from the Golgi apparatus [17]. At 1 dpt the distribution of GABA B Rs at the plasma membrane and in intracellular compartments was evaluated by immunostaining under non-permeabilized or permeabilized conditions. We examined somatic or axonal domains as shown in the schematic neuron (Fig. 1A). As reported previously, co-transfection of MYC-GABA B R1a and FLAG-GABA B R2 resulted in the appearance of both subunits at the plasma membrane (Fig. 1B). In contrast, ARF1-Q71I-HA blocked the appearance of both subunits at the plasma membrane and produced accumulation in intracellular compartments (Fig. 1C). These results indicate that the Golgi apparatus is necessary for the delivery of GABA B Rs to the plasma membrane in hippocampal neurons.
GABA B R1a is targeted to the axon in hippocampal neurons [7], [8]. Thus, we determined whether axonal targeting was also Golgi dependent. First we used cultured hippocampal neurons from GABA B R1-EGFP transgenic mice, which express the subunit at physiological levels (Figs. 1D and 1G, right panels). According to previous reports, the predominant axonal variant of GABA B R1 corresponds to GABA B R1a [7], [8]. Therefore, these data are compatible with the idea that GABA B R1a is sorted and targeted to the axon at or prior to the Golgi apparatus. Next we used recombinant receptors to directly compare the axonal targeting of GABA B R1a and GABA B R1b, and their Golgi dependence. Neurons were transfected with MYC-GABA B R1a or MYC-GABA B R1b in the absence or presence of ARF1-Q71I-HA. Consistent with previous reports, GABA B R1a but not GABA B R1b was predominantly targeted to the axon in hippocampal neurons (Figs. 1E-1I, control panels). Importantly, recombinant GABA B R1a was still targeted to the axon in the presence of ARF1-Q71I-HA (Figs. 1E and 1H). In contrast, GABA B R1b was absent from the axon under all the conditions examined (Figs. 1F and 1I). These findings indicate that axonal targeting is specific to GABA B R1a. In addition, they demonstrate that the sorting and targeting of GABA B R1a to the axon occur at or prior to the Golgi stage, and therefore suggest a non-conventional modality. They also indicate that axonal targeting of GABA B R1a upon ARF1-Q71I-HA expression is not produced by an overload of the early secretory pathway, because the effect was not observed for GABA B R1b, a conclusion further supported by the results in lower-expressing transgenic neurons.
GABA B R1a is targeted and transported within the axonal ER
ER-resident proteins and components of the protein folding and export machineries localize to the axon [18][19][20]. Thus, to determine the intracellular localization of axonal GABA B R1a, neurons were immunostained with antibodies against the GABA B R1 subunit and an antibody against the SEKDEL sequence of the rat ER protein Grp78, which includes a conserved motif present in luminal ER-resident proteins responsible for retrieval from the Golgi apparatus [21]. Endogenous GABA B R1 colocalized with the ER in axons (Fig. 2A, arrows). Control labeling without primary antibodies confirmed the specificity of the signal (Figs. 2B and 2C). In transgenic mouse neurons GABA B R1-EGFP also colocalized with the ER (Fig. 2D, arrows). Importantly, colocalization was still observed upon overexpression of MYC-GABA B R1a and KDEL-RFP, a fluorescent probe widely used for ER visualization that contains the ER targeting sequence of calreticulin and the ER retrieval sequence KDEL (Fig. 2E, arrows). In addition, colocalization was observed with cytochrome b5-EGFP, another fluorescent ER probe (Fig. 2F). The colocalization between GABA B R1a and ER markers was specific, because Piccolo, a marker for dense core vesicles and synapses [22], showed a markedly different axonal localization (Fig. 2G). Staining with MAP2, an exclusively dendritic marker, confirmed that the co-distribution of GABA B R1a and the ER occurs in the axon (Fig. 2H). These results demonstrate that GABA B R1a is enriched in the axonal ER.
To establish whether the ER functions as a transport organelle for GABA B R1a we examined the dynamic behavior of fluorescent versions of GABA B R1a and the ER in axons of live hippocampal neurons. Discrete GABA B R1a-EGFP and KDEL-RFP puncta were distributed along the axons. While the majority of puncta remained static or showed very limited lateral displacement over the examined period, a subset displayed continuous, long-range mobility (imaged at 0.20-0.25 frames/s for a total of ~120 s, Figs. 3A and 3B). GABA B R1a-EGFP and KDEL-RFP moved bidirectionally, with a moderate retrograde bias, and with modal speeds of 100-200 nm/s (Figs. 3E and 3F; average velocities and directions were obtained from 41 moving puncta from at least three independent culture preparations). Significantly, some GABA B R1a-EGFP and KDEL-RFP puncta moved in synchrony, with similar speed and retrograde predominance (Figs. 3C and 3G). Puncta containing Rab11-GFP, a recycling endosome marker [23], also localized to axons but showed a different dynamic pattern characterized by rapid direction changes and lower overall displacement. We used TIRF microscopy and higher temporal resolution (2 frames/s) to determine the instant velocity of GABA B R1a-EGFP more accurately in hippocampal neurons. Mean anterograde and retrograde instant velocities were comparable (751.70 ± 33.20 and 877.73 ± 67.59 nm/s, respectively) (Figs. 3H and 3I). These values fit conventional kinesin velocities [24], and their slight increase over the values above most likely results from excluding stalls in the analysis of our higher-temporal-resolution imaging.
A proportion of GABA B R1a-EGFP and KDEL-RFP puncta moved independently from each other. This may indicate that a fraction of GABA B R1a-EGFP is transported in a different secretory organelle or that the axonal ER compartment is capable of dynamically segregating cargo. To discriminate between these possibilities we carried out a series of complementary experiments. First we analyzed time-lapse microscopy sequences individually. Interestingly, GABA B R1a-EGFP puncta that initially colocalized with the ER sometimes separated from the organelle, remained segregated for a few frames and fused again with a pre-existing ER compartment (Fig. 4A). It is well known that ER cargo recycles between the ER and the ERGIC using export/retrieval motifs such as the RXR-type sequence present in GABA B R1a [25]. To determine whether the segregated GABA B R1a puncta resided temporarily in the ERGIC, we first visualized the axonal distribution of MYC-GABA B R1a and p58-YFP, an established marker of the ERGIC [26], in fixed cells. Sparse ERGIC puncta were observed in axons and a subset of them colocalized with MYC-GABA B R1a (Fig. 4B). Additionally, live-cell imaging was carried out in neurons transfected with GABA B R1a-RFP and p58-YFP. A small fraction of GABA B R1a-RFP displayed synchronous motility with p58-YFP in axons (Fig. 4C). Taking into account that intra-ER mobility is microtubule dependent but short-range ER to ERGIC transport is not [27], we reasoned that the enrichment of GABA B R1a in the ER should increase upon destabilization of microtubules. Consistent with this prediction, nocodazole blocked the mobility of GABA B R1a-EGFP puncta and the subunits accumulated in a KDEL-RFP compartment (Fig. 4D, top right panels). As expected, a mutant GABA B R1a subunit that is not retained in the ER (GABA B R1a-ASA-EGFP) accumulated in a different compartment after nocodazole treatment (Fig. 4D, bottom right panels). These observations suggest that axonal GABA B R1a is targeted and transported to the axon within the ER, and possibly engages in a local export/retrieval mechanism between the ER and the ERGIC.
Kinesin-1 contributes to the axonal localization of GABA B R1a
Since kinesin-1, an axonally biased molecular motor [28], [29], colocalizes with GABA B R1 in neurons and associates with the subunit in fractionation and coimmunoprecipitation assays [30], we evaluated its contribution to axonal GABA B R1a targeting. To examine the axonal localization of MYC-GABA B R1a we determined an axon-to-dendrite ratio (A:D ratio) [8]. To study the role of kinesin-1 we used a dominant negative fused to RFP, comprising the cargo-binding domain of Kif5C, that interferes with endogenous kinesin-cargo interactions (Kif5C-RFP-DN) [16]. The axonal targeting of MYC-GABA B R1a was markedly reduced in the presence of Kif5C-RFP-DN (Fig. 5A and Fig. S3). Kinesin-1 may control the axonal localization of MYC-GABA B R1a via an adaptor mechanism through the cytosolic C-terminal domain of GABA B R1a. To test this directly we first determined whether axonal targeting required the cytosolic C-terminal domain of GABA B R1a using two C-terminal subunit mutants, MYC-GABA B R1a-AA-ASA and MYC-GABA B R1a-DC [15]. GABA B R1a, GABA B R1a-AA-ASA and GABA B R1a-DC were all abundant in the axon, indicating that targeting is independent of the C-terminal domain (Fig. 5C, arrows). Using a quantitative colocalization analysis based on Manders coefficients [31], all GABA B R1a constructs colocalized partially with the ER, and no statistically significant difference was observed between GABA B R1a and the two mutants (MYC-GABA B R1a, n = 9 neurons; MYC-GABA B R1a-AA-ASA, p = 0.11, n = 5 neurons; MYC-GABA B R1a-DC, p = 0.30, n = 5 neurons). These results suggest that the C-terminal domain is not a major determinant of the axonal ER distribution and targeting of GABA B R1a.
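As a rough illustration of Manders-based colocalization quantification [31], a minimal sketch follows; the fixed thresholds and the function signature are assumptions of this sketch (published analyses typically estimate thresholds per image, e.g. by the Costes method).

```python
import numpy as np

def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
    """Manders colocalization coefficients for two intensity channels.

    M1: fraction of channel-1 intensity in pixels where channel 2 is
        above its threshold; M2 is the converse.
    """
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()
    return m1, m2
```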
Finally, we analyzed the dependence on kinesin-1 of the axonal targeting of GABA B R1a lacking the C-terminal domain. Axonal localization of MYC-GABA B R1a-DC was still severely impaired by the dominant negative construct (Fig. 5D, arrows; A:D ratio MYC-GABA B R1a-DC 0.42 ± 0.06; MYC-GABA B R1a-DC plus Kif5C-RFP-DN 0.05 ± 0.01; p < 0.01). Combined, these experiments demonstrate that kinesin-1 is necessary for the ER axonal targeting and localization of GABA B R1a. In addition, they indicate that the C-terminal domain of GABA B R1a, which is exposed to the cytosol, is not involved in this transport mechanism, further supporting the role of luminal or ER membrane domains in axonal sorting and targeting.
Discussion
We have shown that GABA B Rs require the Golgi apparatus for plasma membrane delivery. Importantly, to our knowledge we have demonstrated for the first time that the sorting and targeting of an axonal neurotransmitter receptor, namely the GABA B R1a subunit, occur in a pre-Golgi compartment. Consistent with these observations, our evidence points to the fact that GABA B R1a traffics along the axonal ER and the ERGIC, and is transported by the molecular motor kinesin-1.
Pre-Golgi sorting and targeting of axonal GABA B R1a
Our study indicates that axonal sorting of GABA B R1 subunits operates in the ER. As reported elsewhere, axonal targeting of GABA B R1a is unaltered in GABA B R2 knock-out neurons [8]. Additionally, the sushi domains located at the N-terminus of GABA B R1a are sufficient for targeting even when placed in an unrelated CD8a protein context [8]. Combined with the results presented here, these observations imply that sorting and targeting signals exposed to the ER lumen mediate the axonal localization of GABA B R1a.
Historically, sorting of plasma membrane proteins to distinct membrane domains has been thought to occur exclusively at the Golgi or the trans-Golgi network, but accumulated evidence now favors the view that decisions are made at almost every step along the secretory pathway, including the ER [32]. For example, mutations in Sec24p, a component of the coat protein complex II (COPII), selectively disrupt recruitment of cargo for ER to Golgi transport [33]. Likewise, synthetic cell-penetrating peptides based on the cytosolic domain of the temperature-sensitive VSVG protein inhibit the transport of only a subset of cargo from the ER to the Golgi apparatus [34].
Two possible targeting mechanisms are conceivable for GABA B R1a in axons: (i) a luminal or membrane spanning ER protein enriched in the axonal ER subcompartment may bind the sushi domains and produce the accumulation of GABA B R1a but not GABA B R1b, in the axon (selective retention); or (ii) a protein or protein complex that spans the ER membrane may function as an adaptor between the GABA B R1a subunit and a molecular motor and selectively direct the transport of the subunits to axons (selective transport). Additional axonal scaffolding proteins may anchor the GABA B R1a subunit to strengthen axonal localization.
Although we cannot rule out the first alternative, our data are consistent with a selective transport mechanism. Axonal targeting of GABA B R1a is not altered by its cytosolic C-terminal domain. Thus, transport may be controlled by a specific GABA B R1a N-terminal adaptor or by a general ER adaptor complex. Identification of these molecules in future studies is needed to strengthen this hypothesis. A mechanism compatible with (i) has been described for the rotavirus VP7 glycoprotein, which is retained in the ER: VP7 is still transported to the axon after BFA treatment [35]. Similar to GABA B R1a, VP7 uses a Golgi-independent intracellular sorting mechanism to reach the axon. One may also envision the participation of chaperones in ER transport and targeting. For example, Hsc70 may provide a mechanism to release kinesin from cargo in specific subcellular domains, thereby promoting the delivery of axonally transported cargo [36].
Kinesin-1-dependent ER transport of GABA B R1a in axons
Since GABA B R1 is transported along the ER and has a limited residency period within the organelle under physiological conditions we refer to it as an ER-boarded protein. The precise transport mechanism of ER-boarded GABA B R1a in the axon is not clear. A kinesin-1-dependent transport of ER proteins has been described in dendrites [37]. Likewise, a microtubule-dependent transport of two ER resident proteins, GFP-SERCA and GFP-IP 3 R, has been observed in the axons of cultured chick dorsal root ganglion neurons [38]. Their bidirectionality and average velocities (∼0.1 µm/s) suggest that the transport is likely non-vesicular, and may represent lateral displacement within the continuous axonal ER membrane or mobility of the organelle itself. Directionality at steady state and the average velocities observed for GFP-SERCA and GFP-IP 3 R range between slow (0.001-0.03 µm/s) and fast axonal transport (∼1 µm/s) [38,39]. These intermediate velocities are conserved in our study, suggesting that similar mechanisms may operate for the transport of ER-boarded GABA B R1a in axons. A conserved transport mechanism is also supported by the microtubule dependence of axonal GFP-SERCA, GFP-IP 3 R and GABA B R1a mobility. Thus, as reported for ER resident proteins, at least one component of the mobility of GABA B R1a is not mediated by simple diffusion. However, precisely how the microtubule cytoskeleton mediates the movement of integral membrane proteins within the continuous ER network is still unclear. According to the mechanisms originally put forward by Tsukita and Ishikawa [40], Waterman-Storer and Salmon [41] and others [42], kinesin-1 may control the lateral displacement of the subunit along the ER membrane in a conveyor-like system or mediate the microtubule-dependent sliding of the organelle itself, either of which may contribute to GABA B R1a transport. Since components of both mechanisms are dependent on kinesin-1 and stable microtubules it is difficult to discriminate between them. Because in this study we wanted to eliminate the microtubules that serve as tracks for kinesin-1, the concentrations of nocodazole used were ∼15-fold higher than those that affect exclusively the population of dynamic microtubules (100 µM versus ∼6 µM). Thus, additionally, we cannot rule out the contribution of dynamic microtubules to the transport of GABA B R1a in the axon [41]. The existence of discrete mobile packets of axonal GABA B R1a also raises the possibility that these are not continuous with the ER network. Electron microscope studies in central and peripheral axons have revealed that the structure of the ER is predominantly contiguous but contains occasional free elements [40], [43]. Additionally, isolated and mobile compartments have been observed with fluorescent reporters [38]. Puncta-like behavior may also be explained by transitory tubule fission/fusion events [40] or lateral mobility of protein aggregates. The precise relationship between the mobile structures observed in this and other studies and the continuous ER network requires further study. Nonetheless, our results support a kinesin-1-mediated axonal transport of ER-boarded GABA B R1a that is in agreement with previous reports.
The majority of the axonal GABA B R1a subunit was static during the live imaging conditions and intervals examined in this study. Additionally, the mobile fraction was higher for a mutant subunit that escapes the ER (MYC-GABA B R1a-EGFP: 19% mobile puncta, n = 259; MYC-GABA B R1a-ASA-AA-EGFP 33% mobile puncta, n = 247). It will be interesting to understand the significance and properties of the static component of GABA B R1a in the ER, and whether ER immobility plays a role in processing or trafficking.
Previous studies have demonstrated that GABA B Rs are segregated intracellularly and that blockade of ER exit results in the accumulation of heteromers in the soma and dendrites of hippocampal neurons [4]. These results suggest that GABA B R subunits are transported into dendrites independently and not as assembled heteromers. They also suggest that newly synthesized GABA B Rs assemble in the ER and exit throughout the somatodendritic compartment prior to insertion at the plasma membrane. This idea is in agreement with transport within the ER and disagrees with a long-haul post-Golgi vesicular transport of GABA B Rs, suggesting a non-canonical trafficking modality for GABA B Rs in dendrites. Thus, long-range ER transport may underlie both dendritic and axonal targeting of GABA B Rs. Since the local complexity of the ER network influences intracellular trafficking it may be utilized by neurons to control the availability of dendritic and axonal membrane proteins that are spatially restricted [44], [45].
Axonal trafficking and ER/ERGIC recycling
In central and peripheral axons the ER is a continuous three-dimensional network of irregular tubules and cisternae [40], [43]. As mentioned above, several studies have demonstrated the localization of ER resident proteins and components of the protein folding and export machineries in the axon [19], [20]. More importantly, the discovery of local assembly of COPII components in the axon supports the existence of functional ER exit sites required to process newly synthesized proteins that contribute to axonal outgrowth during the early stages of development [46]. However, direct evidence for local protein trafficking within early biosynthetic organelles in the axon is still lacking. In Drosophila, polarized secretion of the EGFR ligand from photoreceptor neurons includes local processing and secretion in the axon, and both mechanisms are controlled by ER localization [47].
More functional evidence is needed to conclusively demonstrate the role of a local trafficking pathway for GABA B R1a in axons. However, a fraction of GABA B R1a colocalizes and moves in synchrony with the ERGIC. This indicates that a local route involving export and retrieval between early biosynthetic organelles may contribute to GABA B R1a trafficking in the axon. A coat protein I complex (COPI) dependent retrieval mechanism from the cis-Golgi to the ER has been postulated for GABA B R1a [48], [49]. Whether this occurs in the axon and whether the remaining secretory steps operate locally to deliver GABA B R1a to the axonal plasma membrane remains unclear. In any case, the presence of the same RXR-type ER retention motif in GABA B R1a and GABA B R1b suggests that, despite determining a rate-limiting step for ER exit, dwell time in the ER is not a major determinant of the spatial range and axonal localization of GABA B R1a [50].
Overall our study demonstrates that ER sorting and local transport are relevant for axonal GABA B R trafficking. It is of great interest to determine to what extent the axonal ER is involved in the trafficking of other neurotransmitter receptors and ion channels that travel long distances, especially in long peripheral nerves.
| 7,547.4 | 2012-08-27T00:00:00.000 | ["Biology"] |
Fresh semen characteristics in captive accipitrid and falconid birds of prey
Artificial insemination (AI) is the most frequently used assisted reproductive technique for captive propagation of rare avian species. As semen quality is critical for reproductive success, baseline data are needed for evaluating and selecting the best male bird donors. To this end, we used computer-assisted semen analysis to assess male eastern imperial eagles (n = 7), northern goshawks (n = 24) and peregrine falcons (n = 20). While imperial eagles and northern goshawks donate ejaculate voluntarily, peregrine falcons required cloacal massage. Eight peregrine falcon females were inseminated with semen from eight males, with fresh ejaculates (15 to 50 μl) applied to the pars uterina of the oviduct immediately after collection and examination. All females were inseminated within 2 h of laying an egg. A fertilization rate of 70% was achieved using this method. Minimum semen characteristics associated with egg fertilization included a semen concentration of 115.12 × 10^6/ml, 33.52% total motility, 1.92% spermatozoa with progressive motility and 0.17% with rapid motility. Comparative data on spermatozoa concentration and kinematics suggest that eastern imperial eagles concentrate on high-quality semen investment at the start of the breeding season, northern goshawks compensate for a decrease in motility-associated parameters with increased semen concentration, and peregrine falcons maintain semen production standards throughout the breeding season. Our data show that, in birds of prey, levels of egg fertilization following AI with fresh semen can be almost as successful as after natural mating.
Raptors, ejaculate quality, artificial insemination, fertilization rate, endangered avian species conservation
Despite some limitations, captive propagation is an essential ex situ strategy for breeding birds of prey, whether for endangered species conservation or falconry purposes (Snyder et al. 1996; Bailey and Lierz 2017). Indeed, such methods have proved important for the recovery of wild populations of critically endangered and charismatic species, such as the Californian condor (Meretsky et al. 2000), peregrine falcon (Fleming et al. 2011) and Mauritius kestrel (Jones et al. 1995). In addition to loss of habitat and resources and over-exploitation, declines in these species are frequently associated with environmental pollutants that induce deleterious effects on avian reproduction (Fry 1995; Pikula et al. 2013; Hruba et al. 2019). Furthermore, as the number of captive-bred birds of prey increases, the international demand on birds for the falconry trade has been saturated, with native species benefiting from a drop in illegal capture of wild birds.
In captivity, birds of prey are either allowed to mate naturally or are subjected to assisted reproduction (Bailey and Lierz 2017). To prevent failures in breeding projects, a good knowledge of reproductive physiology is necessary in order to ensure breeding pair quality and general state of health, species-specific differences, male and female synchronization, sperm characteristics, fertilization processes, egg formation, laying cycle and fertilized egg incubation. The most common assisted reproductive technique utilised in birds of prey is artificial insemination (AI) using fresh semen, or semen after no more than two days of storage in a refrigerator at 5 °C. Imprinted males are either trained to donate ejaculate or semen is collected by forced massage (Bailey and Lierz 2017).
While assisted reproductive techniques are combined with cryopreservation in mammals (Fickel et al. 2007), cryogenic storage of avian semen is still a problem due to physiological sensitivity and poor survival of sperm, with use of sperm undergoing a freeze/thaw cycle resulting in highly variable fertility rates (Samour 2004; Long 2006). Much research is still needed in order to obtain data on physiological semen quality parameters relevant for testing appropriate techniques on rare avian species and for developing reliable protocols applicable for cryopreservation.
The present study uses computer-assisted semen analysis (CASA) for assessing semen collected from the eastern imperial eagle (Aquila heliaca), northern goshawk (Accipiter gentilis) and peregrine falcon (Falco peregrinus). Quantitative CASA measurements were used to compare spermatological values in the three species and to test predictions of species-specific differences reflecting the mating systems of the species examined, breeding season length and male and female synchrony.
Male birds and their state of health
Samples for the present study were obtained from captive male eastern imperial eagles (n = 7), northern goshawks (n = 24) and peregrine falcons (n = 20) during medical and reproductive assistance services to private falconers. All birds were 5 to 6 years old.
Prior to the reproductive period, the birds were subjected to a thorough clinical examination, including blood chemistry and haematology. Blood (150 µl) was collected from the right jugular vein using a heparinised 1 ml 30 G ½-in Omnican syringe (B. Braun Injekt®, Germany). Standard blood indices were evaluated using Avian/Reptilian Profile and EC8+ diagnostic cartridges in VetScan VS2 and i-STAT analysers (Abaxis, Union City, CA, USA), respectively. A cloacal swab was collected for microbiological culture in order to exclude the presence of pathogenic microorganisms in the alimentary and/or urogenital system. Parasitological examination of faeces was performed using the flotation method. Only healthy, parasite-free males showing non-pathogenic microflora were used for subsequent semen collection.
Semen quality
In all cases, care was taken to avoid sperm contact with water and to prevent exposure to direct sunlight and/or temperatures outside the range of 19-21 °C. Immediately after collection, fresh semen characteristics were determined using a Sperm Class Analyser - Computer Assisted Sperm Analysis (SCA CASA) system (MICROPTIC S.L., Barcelona, Spain), with concentration and motility determination modules and a Nikon Eclipse E200 LED MV R camera equipped with a Nikon 10 × 0.25 Ph 1 BM WD 7.0 lens (NIKON CORPORATION, Tokyo, Japan) and an acA 1300-200uc Basler c-Mount camera (BASLER AG, Ahrensburg, Germany). Semen concentration (× 10^6/ml) and total motility parameters were measured in a Leja SC 20-01-08-B-CE counting chamber (CRYO TECH s.r.o., Liběchov, Czech Republic) of 8 × 2 µl volume. The fresh semen characteristics assessed included the percentage of spermatozoa showing progressive and non-progressive motility and immotility, the percentage of spermatozoa with rapid (> 100 µm/s), medium (50 µm/s) and slow (< 10 µm/s) velocity, and the percentage of rapid and medium progressive and non-progressive spermatozoa. Progressive motility refers to spermatozoa that are mostly swimming in a straight line or in very large circles, while non-progressive motility refers to spermatozoa that move but do not make forward progression in straight lines, or that swim in very tight circles.
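To make the velocity classification concrete, here is a minimal sketch using the cut-offs stated above (rapid > 100 µm/s, slow < 10 µm/s); treating everything in between as "medium" is an assumption of this sketch, as are the function names.

```python
def classify_velocity(v_um_s):
    """Bin one spermatozoon's velocity (µm/s) into the CASA classes
    used above. Rapid and slow cut-offs come from the text; the
    medium band in between is assumed."""
    if v_um_s > 100.0:
        return "rapid"
    if v_um_s < 10.0:
        return "slow"
    return "medium"

def motility_fractions(velocities):
    """Percentage of spermatozoa in each velocity class."""
    counts = {"rapid": 0, "medium": 0, "slow": 0}
    for v in velocities:
        counts[classify_velocity(v)] += 1
    n = len(velocities)
    return {k: 100.0 * c / n for k, c in counts.items()}
```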
Artificial insemination using fresh semen
Semen collected from eight male peregrine falcons and examined for semen characteristics was used to inseminate eight females exhibiting reproductive behaviour (e.g. vocalization and nest making in early February). The females were inseminated using a Discovery Comfort 1-250 µl pipette (BIOTECH innovative, Prague, Czech Republic) within 2 h of laying the previous egg. The insemination dose contained fresh semen and was used immediately after collection, the ejaculate volume applied to the oviduct pars uterina ranging from 15 to 50 µl.
The insemination procedure was carried out as quickly as possible in order to reduce stress to the female.
Statistical analysis
Four samples (two from northern goshawk, two from peregrine falcon) were excluded from the statistical analysis owing to extreme differences in fresh sperm motility parameters. Normality was then tested for each species' data subset using Kolmogorov-Smirnov and Shapiro-Wilk tests. Sperm concentration per ml displayed normal distribution and was not transformed. Motility parameters were arcsine transformed to represent the inverse sine of the square root of the proportion, the results of transformation being expressed in radians. The relative date of sampling was calculated with March as the first designated value (1), this being used as an independent variable for the Pearson correlation coefficient. Subsequently, the breeding season for semen sampling was divided into two sub-periods for each species (Table 1). Differences between species and semen sampling sub-periods were tested using factorial ANOVA with Bonferroni correction, with the probability value set at P = 0.005. Interaction between both parameters was included into the model. Semen samples that resulted in egg fertilization were compared with non-fertilizing samples using one-way analysis of variance (ANOVA) and Kruskal-Wallis test.
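A minimal sketch of the arcsine transformation described above, followed by a one-way ANOVA of the kind used for the group comparisons; the motility percentages below are invented purely for illustration.

```python
import numpy as np
from scipy import stats

def arcsine_transform(percentages):
    """Angular transform used above: the inverse sine of the square
    root of the proportion, returned in radians."""
    p = np.asarray(percentages, dtype=float) / 100.0
    return np.arcsin(np.sqrt(p))

# Example: compare total motility (%) between two illustrative groups
group_a = arcsine_transform([33.5, 41.2, 28.9, 55.0])
group_b = arcsine_transform([62.1, 58.3, 49.7, 70.4])
f_stat, p_value = stats.f_oneway(group_a, group_b)  # one-way ANOVA
print(f_stat, p_value)
```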
Results
While semen concentration showed a significant positive correlation with the date of sampling (r = 0.56; P = 0.005) in the northern goshawk (Fig. 1A), there was a non-significant decline during the breeding season in the other two species examined. Despite this, there was no significant difference between the three species, as peregrine falcon semen concentration was highly variable during the first sub-period (Fig. 1B). Similarly, there was no significant difference in the overall percentage of motile or immotile spermatozoa between the species or the sampling periods (Table 2). Significant differences were only observed for four parameters related to progressive motility, i.e. percentage of spermatozoa showing progressive motility, percentage of spermatozoa with rapid motility, and percentage of rapid and medium progressive spermatozoa. The data on semen quality (Table 3) indicated that the three species utilized different timing strategies for sperm production (Fig. 2). A total of 20 eggs were laid by the eight peregrine falcon females following AI with semen collected during this study, showing a 70% fertilization success rate. Levels of total motility and percentage of spermatozoa with progressive motility were higher in semen samples resulting in egg fertilization (n = 20, P < 0.01). Minimum semen characteristics associated with egg fertilization included a semen concentration of 115.12 × 10^6/ml, 33.52% total motility, 1.92% spermatozoa with progressive motility and 0.17% spermatozoa with rapid motility (Table 3).
Discussion
In this study, we compared semen quality in captive accipitrid and falconid birds of prey and, in so doing, provided baseline data on concentration and spermatozoa kinematics of fresh semen for future studies. Our data suggest that the eastern imperial eagle makes a high-quality semen investment by producing semen of high concentration with a high percentage of progressive spermatozoa at the start of the breeding season, with semen concentration and progressive movement-associated parameters decreasing in the second part of the breeding season. The northern goshawk compensates for a drop in motility-associated parameters with semen concentration increasing over time. In comparison, the peregrine falcon maintains semen production with little or no quantitative alteration throughout the course of the breeding season. When used for AI, fertilization success using semen collected from donor male birds was directly related to the total motility and percentage of spermatozoa with progressive motility.
Progressive motility of spermatozoa (alongside semen concentration) is a good indicator of male fertility and of both general state of health and health of the reproductive tract (Vyvial et al. 2019). Reduced sperm motility parameters are frequently associated with inflammation and severe infection (Björndahl 2010; Vitula et al. 2011). Likewise, many natural and anthropogenic toxins can, singly or in combination, interfere with normal spermatogenesis, resulting in testicular toxicity and impaired fertility (Damkova et al. 2011; Osickova et al. 2012; Pikula et al. 2013). In general, reproductive system disorders may result from nutritional deficiency, testicular neoplasia and systemic or reproductive tract infections including, for example, Salmonella spp., Escherichia coli, Mycobacterium spp., Chlamydia psittaci, Aspergillus spp. (Samour 2016).
Naturally reproducing, healthy avian populations generally have a fertility rate of 80% or higher (Damkova et al. 2009; Hruba et al. 2019). Interestingly, in the present study, 70% of eggs were fertilized in inseminated falcon females, despite the observed motility-associated parameters being lower than those recommended for use in AI (Bailey and Lierz 2017). Our data showed that, while semen parameters in birds of prey display a rather high level of variation, minimum quality values are required to fertilize the egg (Blanco et al. 2002).
Birds of prey lack a copulatory organ and semen is transferred via a so-called 'cloacal kiss'. During AI, however, the insemination dose is inserted into the pars uterina of the oviduct, i.e. closer to the site of fertilization in the upper oviduct (Blanco et al. 2002; Kharayat et al. 2016), thereby increasing the chance of successful fertilization. A further factor that may have contributed to the relatively high AI fertility rate observed in this study is the regularity of insemination (Kharayat et al. 2016). When an egg is laid, ovulation of the next egg in the clutch occurs within an hour (McWilliams et al. 2016); as such, AI should be performed no later than 2 h after each oviposition.
Female birds are known to lay many fertile eggs after a single mating as they are able to store viable sperm within oviduct storage tubules for days or even weeks (Blesbois and Brillard 2007). As human inseminators have no cues to predict and/or detect the brief fertile period when the first egg in the clutch is produced, it often goes unrecognized and is consequently wasted as infertile. To ensure fertilization of the first egg, naturally breeding birds of prey that lay a single egg or a small clutch may need to invest into higher quality semen at the start of the breeding season, as in the eastern imperial eagle, and/or copulate frequently for an extended pre-laying period, as in the northern goshawk (Møller 1987; del Hoyo et al. 1994; Margalida and Bertran 2010). A frequent copulation strategy also ensures paternity (Mougeot 2004). In some bird species, synchrony with the female's fertility and egg laying period is achieved through responding to a courting male (McWilliams et al. 2016).
Birds of prey are generally solitary territorial breeders with monogamy as the predominant mating system (del Hoyo et al. 1994); however, extra-pair copulations (Mougeot 2004) do occur in polygynous falconid birds. In polyandrous birds (Wedell et al. 2002), however, it is not the mating system that determines semen quality parameters but sperm competition. As such, AI using semen pooled from several male birds could result in unnatural sperm competition.
While egg development represents a major reproductive effort in a laying hen, semen production represents the reproductive cost to males (Wedell et al. 2002). When males copulate at high frequency in the breeding season, available sperm must be proportionately allocated for each mating event and/or sperm maturation has to be efficient (Wedell et al. 2002). Consequently, semen may be collected from male birds of prey on a daily basis. However, stressful handling of donor birds has to be limited.
To conclude, our study showed that sperm quality is critical for successful reproduction in birds of prey. With some limitations, AI using fresh semen (< 2 h after collection) can result in egg fertilization rates almost as successful as natural mating. As sperm donor selection is done by humans, with the specific aim of passing on attributes to progeny, we highly recommend that a sperm quality check be undertaken prior to AI.
| 3,721.2 | 2020-01-01T00:00:00.000 | ["Biology", "Environmental Science"] |
The Effect of Different Operating Parameters on the Corrosion Rate of Carbon Steel in Petroleum Fractions
The corrosion in petroleum pipelines was investigated by studying the corrosion of carbon steel in crude oil and refined petroleum products, including gasoline, kerosene, and gas oil. The weight loss method was used, in which test specimens of carbon steel with known weights were immersed in the test media for a total exposure time of 60 days. The weight loss was measured at intervals of 10 days and the corrosion rate was determined. The results showed that corrosion rates are highest for gasoline, followed by kerosene, gas oil and crude oil, in decreasing order. The observed pattern in the corrosion behavior is consistent with the density and weight percent of hydrogen in the hydrocarbon products: the corrosion rate increases with decreasing density and increasing weight percent of hydrogen. Experiments were carried out at different temperatures (30, 60, 90 and 120 °C) at a constant partial pressure of CO2 (50 psi) in 3.5% NaCl solution. The results indicated that as the partial pressure of CO2 and the temperature increase, the corrosion rate increases due to continuous dissolution of iron ions and formation of weak carbonic acid. The weak carbonic acid dissociates into carbonate and hydrogen ions, which increases the cathodic reaction on the metal surface. The presence of a small amount of H2S (0.4 ppm) increases the corrosion rate significantly.
INTRODUCTION
Carbon steel, the most widely used engineering material, accounts for approximately 85% of the annual steel production worldwide. Despite its relatively limited corrosion resistance, carbon steel is used in large tonnages in marine applications, nuclear power and fossil fuel power plants, transportation, chemical processing, petroleum production and refining, pipelines, mining, construction and metal-processing equipment.
The selection of pipe for a particular situation depends on what is going through the pipe and on the pressure and temperature of the contents. Pipes are fabricated from different material types to suit stringent needs and the services desired. The most commonly used material for petroleum pipelines is mild steel because of its strength, ductility and weldability, and because it is amenable to heat treatment for varying mechanical properties [Abdul Hameed (2005), Bolton (1994), Davies and Oelmann (1983), Smith and Hashemi (2006)]. However, mild steel corrodes easily: all common structural metals form surface oxide films when exposed to air, but the oxide formed on mild steel is readily broken down, and in the presence of moisture it is not repaired [Badmos and Ajimotokan (2009)].
Carbon steels are generally used in the petroleum industry for transportation of crude oils and gases from offshore to different refining platforms and from there to the different destinations of application. Carbon steel is susceptible to internal corrosion in CO2/H2S environments [van Hunnik, Pots, Hendriksen (1996), Dugstad (1998)]. The comparison of their corrosivity in severe corrosive environments needs further investigation under dynamic flow conditions. The severity of corrosion depends on various working parameters such as partial pressure of CO2, temperature, chloride concentration, H2S level, surface films and oxide contents [Videm, Dugstad (1996), Crolet, Thevenot, Nesic (1989)].
The presence of carbon dioxide (CO2), hydrogen sulphide (H2S) and free water can cause severe corrosion problems in oil and gas pipelines. Internal corrosion in wells and pipelines is influenced by temperature, CO2 and H2S content, water chemistry, flow velocity, oil or water wetting, and the composition and surface condition of the steel. A small change in one of these parameters can change the corrosion rate considerably, owing to changes in the properties of the thin layer of corrosion products that accumulates on the steel surface [Das, Khanna (2003)].
The surface films mainly consist of FeCO3, and their influence on corrosion rates has been observed in CO2 aqueous solutions [Dugstad (1998), Nyborg (1991)]. The formation of iron carbonate is temperature dependent, and at higher temperature it forms protective layers over the metal surface [De Waard, Lotz, Milliams (1991), Palacios, Shadley (1991)]. The effect of H2S on a CO2-saturated solution has been investigated by adding a small amount of H2S (0.4 ppm) to the same working environment. The H2S is easily stripped from the surface due to the flow of the liquid, and the corrosion rate increases severely [De Waard, Lotz, Milliams (1991)].
This work examines the corrosion of carbon steel in crude oil and its various refined products, which include gas oil, gasoline, and kerosene. The aim of this study is to assess the corrosiveness of the various hydrocarbons in order to improve material selection and the effective surface treatment of pipelines, so that passive layers of adequate quality prevent corrosion.
EXPERIMENTAL SETUP
A) Material and Instrument
The crude oil was obtained from the oil field, while the refined products of crude oil (gas oil, gasoline, and kerosene) were procured from a filling station, and the pH of each medium was measured. Commercial grade carbon steel was used for the tests; Table 1 shows the elemental composition of the carbon steel (State Company of Geological Survey and Mining). Specimens were machined from carbon steel alloy to dimensions of 3 × 2 × 0.2 cm with a central hole of 0.2 cm diameter. The set of instruments and devices used for testing is illustrated in Figure (1).
B) Procedure
Chemical corrosion tests (ASTM 4562)
1- For all the experiments, the specimens were machined, ground and polished sequentially with silicon carbide paper of grade 600 grit, then washed in distilled water with a brush, and then with acetone or alcohol. Drying of the specimens was carried out using drying paper and specimen driers. They were then placed in a desiccator (container) containing a silica gel bed.
2- After taking the initial weight and dimensions, the carbon steel specimens were immersed in the various test media in beakers. Each test coupon was exposed for a total period of 60 days, with six weight measurements taken at intervals of 10 days. The weighing was carried out after the specimens were cleaned with distilled water and a brush, and then with acetone or alcohol. Drying of the specimens was carried out using drying paper and specimen driers.
3- Experiments were carried out at four different temperatures (30, 60, 90 and 120 °C) at a constant partial pressure of CO2 (50 psi) in 3.5% NaCl solution. Each experiment was conducted using the same procedures for a total period of 24 hours with four fresh samples, and corrosion rates were measured in mils per year (mpy).
4- The work in step (3) was repeated at the different temperatures (30, 60, 90 and 120 °C) with the addition of a small amount of H2S (0.4 ppm). After completion of the immersion test, the specimens were taken out, prepared and cleaned of any localized attack. The corrosion rate is readily calculated from the weight loss of the metal specimen during the corrosion test by the equation given below [Fontana (1987), Avwiri (2004)].
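The referenced weight-loss equation is not legible in this copy; a standard form of Fontana's (1987) expression for a corrosion rate in mils per year (mpy), consistent with the units used in this study, is

$$\mathrm{CR\;(mpy)} = \frac{534\,W}{D\,A\,T},$$

where W is the weight loss (mg), D is the specimen density (g/cm^3), A is the exposed area (in^2) and T is the exposure time (h).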
RESULTS AND DISCUSSION
The chemical composition, density and pH values of the crude oil and its fractions are shown in Table (2) (Daura refinery). Generally, the density appears to increase with decreasing weight percent of hydrogen content of the media. The densities of crude oil and kerosene are 886 and 794 kg/m^3, respectively, while their weight percent hydrogen contents are 12.1 and 13.6, respectively. The corrosion rate is observed to decrease with decreasing weight percent of hydrogen content in the hydrocarbon media.
Uniform corrosion was observed in all the test coupons immersed in the media, and the results of the corrosion rate measurements are shown in Table 3. The corresponding plots of corrosion rate as a function of exposure time are shown in Figure 2. The corrosion rate has its maximum value in gasoline, and the other media follow in decreasing order: kerosene, gas oil and crude oil. The observed pattern in the corrosion behavior is consistent with the density and weight percent of hydrogen in the hydrocarbon products; the corrosion rate increases with decreasing density and increasing weight percent of hydrogen. The corresponding plots of corrosion rate as a function of temperature are shown in Figures (3 and 4). At low temperature, the corrosion rate of the samples slowly increases due to continuous dissolution of Fe2+ ions into the solution as a result of the formation of porous FeCO3, which is not protective in nature; however, as the temperature increases from 30 to 60 °C, the FeCO3 film becomes less porous, more adherent to the metal surface and protective in nature, and hence the corrosion rate decreases. Beyond 60 °C, the corrosion rate increases again, and it is highest at 90 °C due to accumulation of a more porous inner FeCO3 film on the metal surface, which initiates the formation of cracks and finally spallation of the FeCO3 film [Palacios, Shadley (1991)]. The corrosion rates in all the various media are highest at 90 °C. With further increase in temperature the corrosion rate decreases significantly due to the formation of a denser, adherent and homogeneous layer of iron carbonate, which protects the metal from further corrosion [De Waard, Lotz, Milliams (1991), De Waard, Lotz (1994)]. As the partial pressure of CO2 in the solution increases, the formation of weak carbonic acid (H2CO3) is favored, which increases the corrosion rate. At higher temperature, however, the bicarbonate ions (HCO3^-) result in the formation of more insoluble iron carbonate, which increases the solution pH, and the corrosion rate decreases significantly, as shown in Figures 3 and 4 at 120 °C. It has been reported in many studies that FeCO3 precipitation is temperature dependent and that its precipitation requires supersaturation with Fe2+ ions, 5-10 times higher than the thermodynamically calculated solubility values [De Waard, Milliams (1975), Das, Khanna (2004)].
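For background, the dissolution and precipitation steps invoked above can be summarized by the standard CO2 corrosion reaction scheme (given here as general chemistry, not as equations from this paper):

$$\mathrm{CO_2 + H_2O \rightarrow H_2CO_3 \rightarrow H^+ + HCO_3^-},$$
$$\mathrm{Fe \rightarrow Fe^{2+} + 2e^-}, \qquad \mathrm{Fe^{2+} + CO_3^{2-} \rightarrow FeCO_3\,(film)}.$$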
The surface morphology of carbon steel in kerosene and gas oil, as shown in Figure 5, indicates cracking and spallation of the FeCO3 film at 90 °C. However, at 120 °C, the FeCO3 film shows a protective nature and good adherence to the metal surface, as shown in Figure 6. Similarly, carbon steel in crude oil and gasoline at 90 °C shows crack formation and less adherence of the protective film to the base metal, and thus corrosion rates are higher; but at higher temperature the oxide layer is more protective in nature and adheres to the metal surface, which restricts the access of H2O to the surface. Since water reduction is the only cathodic reaction in the deaerated neutral solution, the corrosion rate depends on how rapidly H2O diffuses to the metal surface through the barrier of the oxide film, and on the diffusion of Fe2+ out through the film. Thus any change in temperature, bulk concentration or disturbance of the oxide film will affect the reaction rate [Ismaeel Andijani and Turgoose (1999)]. The corrosion rate can be reduced substantially under conditions where iron carbonate (FeCO3) can precipitate on the steel surface and form a dense and protective corrosion product film. This occurs more easily at high temperature or high pH in the water phase. When H2S is present in addition to CO2, iron sulphide (FeS) films are formed rather than FeCO3, and protective films can be formed at lower temperature, since FeS precipitates much more easily than FeCO3 [Das, Khanna (2004)].
CONCLUSIONS
1. The corrosion rate is highest in gasoline, and it follows in decreasing order for the other media: kerosene, gas oil, and crude oil.
2. The corrosion rate is observed to decrease with decreasing weight percent of hydrogen content in the hydrocarbon media.
3. An increase in temperature decreases the solubility of iron carbonate in the solution, resulting in the precipitation of FeCO3 on the surface, which forms a protective layer.
Figure (1) The set of devices used for the corrosion resistance test.
Figure (2) Variation of corrosion rate vs. time for the different media.
Figure (4) Corrosion rate of carbon steel in the test media exposed for 24 h in 3.5% NaCl at constant partial pressure of CO2/H2S (50 psi).
Figure (5) ESEM micrographs showing the surface morphology of carbon steel in (a) kerosene and (b) gas oil exposed at 90 °C, and (c) kerosene and (d) gas oil exposed at 120 °C.
Figure (6) ESEM micrographs showing the surface morphology of carbon steel in (a) crude oil and (b) gasoline exposed at 90 °C, and (c) crude oil and (d) gasoline exposed at 120 °C.
| 2,837.8 | 2013-06-01T00:00:00.000 | ["Materials Science"] |
A Scaling Transition Method from SGDM to SGD with 2ExpLR Strategy
In deep learning, the vanilla stochastic gradient descent (SGD) and SGD with heavy-ball momentum (SGDM) methods have a wide range of applications due to their simplicity and good generalization. This paper uses an exponential scaling method to realize a smooth and stable transition from SGDM to SGD, combining the advantages of the fast training speed of SGDM and the accurate convergence of SGD (named TSGD). We also provide some theoretical results on the convergence of this algorithm. At the same time, we take advantage of the stability of the learning rate warmup strategy and the high accuracy of the learning rate decay strategy, and propose a warmup-decay learning rate strategy with double exponential functions (named 2ExpLR). The experimental results on different datasets for the proposed algorithms indicate that the accuracy is improved significantly and that the training is faster and more stable.
Introduction
In recent years, with the development of deep-learning technologies, many neural network models have been proposed, such as FCNN [1], LeNet [2], LSTM [3], ResNet [4], DenseNet [5], and so on. They also have an extensive range of applications [6][7][8]. Most networks, such as CNNs and RNNs, are supervised types of deep learning that require a training set to teach the model to yield the desired output. Gradient descent (GD) is the most common optimization algorithm in machine learning and deep learning, and it plays a significant role in training.
In practical applications, the SGD (a type of GD algorithm) is one of the most dominant algorithms because of its simplicity and low computational complexity. However, one disadvantage of SGD is that it updates parameters only with the current gradient, which leads to slow speeds and unstable training. Researchers have proposed new variant algorithms to address these deficiencies.
Polyak et al. [9] proposed a heavy-ball momentum method and used the exponential moving average (EMA) [10] to accumulate the gradient to speed up the training. Nesterov accelerated gradient (NAG) [11] is a modification of the momentum-based update, which uses a look-ahead step to improve the momentum term [12]. Subsequently, researchers improved the SGDM and proposed synthesized Nesterov variants [13], PID control [14], AccSGD [15], SGDP [16], and so on.
A series of adaptive gradient descent methods and their variants have also been proposed. These methods scale the gradient by some form of past squared gradients, making the learning rate automatically adapt to changes in the gradient, which can achieve a rapid training speed with an element-wise scaling term on learning rates [17].
Many algorithms produce excellent results, such as AdaGrad [18], Adam [19], and AMSGrad [20]. In neural-network training, these methods are more stable, faster, and perform well on noisy and sparse gradients. Although they have made significant progress, the generalization of adaptive gradient descent is often not as good as SGD [21]. There are some additional issues [22,23].
First, the solution of adaptive gradient methods with second moments may fall into a local minimum of poor generalization and even diverge in certain cases. Then, the practical learning rates of weight vectors tend to decrease during training, which leads to sharp local minima that do not generalize well. Therefore, some trade-off methods of transforming Adam to SGD are proposed to obtain the advantages of both, such as Adam-Clip (p, q) [24], AdaBound [25], AdaDB [26], GWDC [27], linear scaling AdaBound [28], and DSTAdam [29]. Can we transfer this idea to SGDM and SGD?
This study is strongly motivated by recent research on the combination of SGD and SGDM in the QHM algorithm [30]. The authors gave a straightforward alteration of momentum SGD, averaging a vanilla SGD step and a momentum step, and obtained good results in experiments. It can be interpreted as a ν-weighted average of the momentum update step and the vanilla SGD update step, and the authors provided a recommended rule of thumb ν = 0.7. However, in many scenarios, it is not easy to find the optimal value of ν. On this basis, we conducted a more in-depth study on the combination of SGDM and SGD.
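For reference, the QHM update of [30] can be sketched in our notation (a paraphrase, with ν the weighting factor) as

$$\theta_{t+1} = \theta_t - \eta\left[(1-\nu)\, g_t + \nu\, m_t\right],$$

where $g_t$ is the current stochastic gradient and $m_t$ the momentum buffer, so that ν = 1 recovers a momentum step and ν = 0 recovers vanilla SGD.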
In this paper, first, we analyze the disadvantages of SGDM and SGD and propose a scaling method to achieve a smooth and stable transition from SGDM to SGD, combining the fast training speed of the SGDM and the high accuracy of the SGD. At the same time, we combine the advantages of the warmup and decay strategy and propose a warmup-decay learning rate strategy with double exponential functions. This makes the training algorithm more stable in the early stage and more accurate in the later stage. The experimental results show that our proposed algorithms had a faster training speed and better accuracy in the neural network model.
Preliminaries
Optimization problem. $f:\mathbb{R}^n \to \mathbb{R}$ is the objective function, and $f(\theta, \zeta) \in C^1$ is continuously differentiable, where $\theta$ is the optimized parameter. Consider the following convex optimization problem [31]:

$$\min_{\theta}\; \mathbb{E}_{\zeta}\!\left[ f(\theta, \zeta) \right], \qquad (1)$$

where $\zeta$ is a stochastic variable. Applying the gradient descent method to solve the above minimization problem:

$$\theta_{t+1} = \theta_t - \eta\, g_t,$$

where $\eta$ is the constant step size, $g_t$ is the descent direction, and $\theta$ is the parameter that needs to be updated.

SGD. SGD uses the current gradient of the loss function to update the parameters [32]:

$$\theta_{t+1} = \theta_t - \eta\, \nabla f(\theta_t, \zeta_t),$$

where $f$ is the loss function and $\nabla f(\theta, \zeta)$ is the gradient of the loss function.

SGDM. The momentum method considers the past gradient information and uses it to correct the direction, which speeds up the training. The update rule of the heavy-ball momentum method [33] is

$$m_t = \beta m_{t-1} + g_t, \qquad \theta_{t+1} = \theta_t - \eta\, m_t,$$

where $m_t$ is the momentum and $\beta$ is the momentum factor.

Efficiency of SGD and SGDM. In convex optimization, the gradient descent method can search for the global optimal solution of the objective function by steepest descent. However, it produces a series of zigzag paths in the iteration process, which seriously affects the training speed. We can trace the root of this defect through theoretical analysis. Using exact line search to solve the convex optimization problem (1): in order to find the minimum point along the direction $d^{(t)}$ from $x^{(t)}$, let

$$\varphi(\eta) = f\!\left(x^{(t)} + \eta\, d^{(t)}\right), \qquad \varphi'(\eta) = \nabla f\!\left(x^{(t)} + \eta\, d^{(t)}\right)^{\top} d^{(t)} = 0.$$

Thus,

$$\nabla f\!\left(x^{(t+1)}\right)^{\top} d^{(t)} = 0,$$

so consecutive search directions are orthogonal, $d^{(t+1)} \perp d^{(t)}$, which shows that the path of the iterative sequence $\{f(\theta_t, \zeta)\}$ is a zigzag. When it is close to the minimum point, the step size of each movement will be tiny, which seriously affects the speed of convergence, as shown in Figure 1a. Polyak et al. [9] proposed SGD with a heavy-ball momentum algorithm to mitigate the zigzag oscillation and accelerate training. The update direction of SGDM is the EMA [10] of the current gradient and the past gradients, which speeds up training and reduces the oscillation of SGD. The momentum of SGDM is

$$m_t = \beta m_{t-1} + g_t = \sum_{i=1}^{t} \beta^{\,t-i}\, g_i.$$

However, in the later stage of training, there is a defect in using the EMA of the gradient. The accumulation of gradients may give the momentum too much speed, so that it does not stop in time when it is close to the optimum. This may cause oscillation in a region around the optimal solution $\theta^{*}$ without stable convergence, as shown in Figure 1b. At the same time, gradients from too far in the past may carry little useful information, and the momentum can instead be calculated using the gradients of the last $n$ iterations, as in AdaShift [34]. The negative gradient direction of the loss function is the optimal direction for the current parameter update; when momentum is used, the update direction is no longer the steepest descent direction. When the iterate is close to the optimal solution, the stochastic gradient descent method is more accurate and is more likely to find the optimal point.
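The two update rules above can be summarized in a few lines of NumPy; this is a minimal sketch for intuition, not the training loop used in the experiments below, and the function names are our own.

```python
import numpy as np

def sgd_step(theta, grad, lr):
    """Vanilla SGD: step along the current stochastic gradient only."""
    return theta - lr * grad

def sgdm_step(theta, m, grad, lr, beta=0.9):
    """Heavy-ball SGDM: accumulate gradients into the momentum buffer
    m_t = beta * m_{t-1} + g_t, then step along the momentum."""
    m = beta * m + grad
    return theta - lr * m, m

# One toy step on f(theta) = 0.5 * theta**2, where grad = theta
theta, m = np.array([1.0]), np.zeros(1)
theta, m = sgdm_step(theta, m, grad=theta, lr=0.1)
```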
For a closer look, we used ResNet18 to train on the CIFAR10 dataset; the first 75 epochs use the SGDM algorithm (hyperparameter setting: weight_decay = 5 × 10^{-3}, lr = 0.1, β = 0.9). After the 75-th epoch, we only change the update direction from momentum to gradient (descent direction: m_t if epoch < 76, else g_t; no other setting is changed). Figure 2a shows the accuracy curve of the test set during training. It can be seen that the accuracy increases rapidly at the start and then enters a plateau after about 25 epochs, where it no longer increases.
After the 75-th epoch, the current gradient is used to update the parameters, and the accuracy improves significantly. Figure 2b records the training loss; it can be seen that the loss under SGD decreases faster and to a smaller value. If we combine the advantages of SGDM and SGD in this way, the training algorithm can have both the fast training speed of SGDM and the high accuracy of SGD.
TSGD Algorithm
Based on the above analysis and on the QHM algorithm, we provide a middle ground that combines preferable features from both: the fast training speed of SGDM and the high accuracy of SGD. A scaling transition method from SGDM to SGD is proposed, named TSGD. A scaling function ρ_t is introduced to gradually scale the momentum direction towards the gradient direction as the iterations proceed, which achieves a smooth and stable transition from SGDM to SGD. The specific algorithm is presented in Algorithm 1.
Algorithm 1: A Scaling Transition method from SGDM to SGD (TSGD)
In Algorithm 1, g_t is the gradient of the loss function f in the t-th iteration, and m_t is the momentum; m̃_t is the scaled momentum, θ_t is the optimized parameter, and ρ_t is the scaling function, with ρ_t ∈ [0, 1]. We provide recommended rules for the hyperparameters ρ and β, namely ρ^T = 0.01 and β = 0.9, where T is the number of iterations, T = ceil(sample_size/batch_size) × epochs.
Gitman et al. [35] made a detailed convergence analysis on the general form of the QHM algorithm. The TSGD algorithm proposed in this section conforms to the general form of the QHM algorithm. Therefore, the TSGD algorithm has the same convergence conclusion as the general form of the QHM algorithm. Theorem 1 gives the convergence theory of the TSGD algorithm based on the literature [35].
Theorem 1. Let f satisfy the conditions in the literature [35]. Additionally, assume that 0 ≤ ρ_t ≤ 1 and that the sequences {η_t} and {β_t} satisfy the conditions stated there. Then the sequence {θ_t} generated by the TSGD Algorithm 1 converges in the sense established in [35] (the displayed conditions and conclusions of the original theorem are not reproduced here).

Theorem 1 implies that TSGD can converge, similarly to SGD-type optimizers. Through the exponential decay in step 4 of the TSGD algorithm, the update direction transforms smoothly and stably from the momentum direction of SGDM to the gradient direction of SGD over the iterations. In the early stage of training, the number of iterations is small and ρ_t is close to 1, so the update direction is m̃_t ≈ m_t, which speeds up training. In the later stage, the number of iterations is large and ρ_t is close to 0, so the update direction gradually transforms to m̃_t ≈ g_t. When the iterative sequence is close to the optimal solution, the gradient direction is used to update the parameters, which is more accurate. Thus, TSGD has the advantages of both the faster speed of SGDM and the high accuracy of SGD. TSGD does not need to calculate the second moment of the gradient, which saves computational resources compared with the adaptive gradient descent methods.
The Warmup Decay Learning Rate Strategy with Double Exponential Functions
The learning rate (LR) is a crucial hyperparameter to tune for practical training of deep neural networks and controls the rate or speed at which the model learns each iteration [36]. If the learning rate is too large, this may cause oscillation and divergence. On the contrary, training may progress slowly and even stop if the rate is too small. Thus, many learning rate strategies have been proposed and appear to work well, such as warmup [4], decay [37], restart techniques [38], and cyclic learning rates [39]. In this section, we mainly focus on the warmup and decay strategies.
On the one hand, the idea of warmup was proposed in ResNet [4]. They used 0.01 to warm up the training until the training error was below 80% (about 400 iterations) and then went back to 0.1 and continued training. In the early stage of training, due to many random parameters in the model, if a large learning rate is used, the model may be unstable and fall into a local optimum, which is challenging to fix. Therefore, we use a small learning rate to make the model learn certain prior knowledge and then use a large learning rate for training when the model is more stable. It can use less aggressive learning rates at the start of training [40]. Implementations of the warmup strategy include constant warmup [4], linear warmup [41], and gradual warmup [40].
On the other hand, decay is also a popular strategy in neural-network training. Decay can speed up training by using a larger learning rate at the beginning and can converge stably by using a smaller learning rate afterwards. Not only does this show good results [24,25,34,42] in practical applications, but the learning rate is also required to decay in theoretical convergence analyses, e.g. η_t = 1/t [43] or η_t = 1/√t [42]. Specific implementations of learning rate decay include step decay [26], linear attenuation [44], exponential decay [45], etc.
To ensure better performance of the training algorithm, we combined the warmup and decay strategies. The warmup strategy was used for a small number of iterations in the early stage of training, and then the decay strategy was used to decrease the learning rate gradually. In this way, the model can learn stably early on, train fast in the middle, and converge accurately later.
First, an exponential function is used in the warmup stage to gradually increase the learning rate from zero to the upper learning rate $lr_u$, as shown in Figure 3a. The formula is $lr_{\mathrm{warmup}} = lr_u\,(1 - \rho_1^{\,t})$.
Second, an exponential function is also used in the decay stage to gradually decrease the learning rate from the upper learning rate $lr_u$ to the lower learning rate $lr_l$, as shown in Figure 3b. The main idea of the warmup-decay learning rate strategy is that the learning rate increases in the warmup stage and decreases in the decay stage. We therefore combine the two processes, yielding the warmup-decay learning rate strategy with double exponential functions (2ExpLR), shown in Figure 3c. Similarly, we provide recommended rules for the hyperparameters ρ_1, ρ_2, as with ρ: $\rho_1^T = \rho_2^T = lr_l \times 10^{-1}$, so that ρ_1, ρ_2 can be easily calculated as $\rho_1 = \rho_2 = 10^{(\lg lr_l - 1)/T}$.
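A minimal sketch of the resulting schedule follows; only the warmup formula and the ρ rule are given verbatim in the text, so the decay branch and the way the two branches are combined are our assumptions, chosen to reproduce the rise-then-fall shape of Figure 3c.

```python
import math

def two_exp_lr(t, T, lr_u=0.5, lr_l=0.005):
    """Hedged sketch of the 2ExpLR schedule.

    The warmup branch lr_u * (1 - rho**t) is the formula given in the
    text; the decay branch and the min() combination are assumptions.
    rho1 = rho2 follow the recommended rule rho**T = lr_l * 1e-1.
    """
    rho = 10.0 ** ((math.log10(lr_l) - 1.0) / T)
    warmup = lr_u * (1.0 - rho ** t)          # given: exponential warmup
    decay = lr_l + (lr_u - lr_l) * rho ** t   # assumed: exponential decay to lr_l
    return min(warmup, decay)                 # assumed combination (smooth handover)
```

With this form, no manual transition point is needed: early in training the warmup branch is smaller and governs the rate; once the two curves cross, the decay branch takes over automatically.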
As Figure 3 shows, 2ExpLR does not require manually setting the transition point from warmup to decay, unlike other strategies [24-26]. At the same time, 2ExpLR achieves a smooth and stable transition from warmup to decay, giving the training algorithm a faster convergence speed and higher accuracy. The TSGD algorithm with the 2ExpLR strategy is described in Algorithm 2.
Extension
The scaling transition method on the gradient can be extended to other applications: the scaling transition process can be abstracted into a general framework for transitioning a parameter from one state to another.
For example, neural networks are widely used in many fields to solve problems. During training, the type and amount of input data directly influence the performance of the ANN model [46]. If the data are dirty or contain large amounts of noise, the dataset is too small, or training runs too long, the model can overfit [47]. Various regularization techniques have been proposed and developed for neural networks to address these problems; regularization is used to avoid overfitting, particularly for networks that have more parameters than input data or are trained on noisy inputs [48], such as the L1 and L2 regularization methods.
The model has not learned any knowledge in the early stage of training, and applying regularization at this time may harm the model. Therefore, we can train the model normally at the beginning and then apply the regularization strategy in the later stage. Thus, the scaling transition framework can be implemented by gating the regularization term, where L(·) is the loss function, N is the number of samples, ŷ_i is the predicted value, y_i is the actual value, λ is a non-negative regularization parameter, and R(·) is a regularization function, such as $L_2 = \|\theta\|_2^2$ or $L_1 = \|\theta\|_1$.
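As a sketch of this framework with an L2 regularizer, assuming a (1 − ρ^t) gating factor as the concrete realization of the transition (the exact gate is not fixed in the text above):

```python
import torch

def loss_with_delayed_reg(model, criterion, outputs, targets, lam, rho, t):
    """Scaling-transition regularization (hedged sketch).

    The L2 penalty is applied with a weight that grows from ~0 early in
    training (rho**t close to 1) to lam late in training (rho**t close
    to 0), so the model trains normally at first and is regularized
    later. The (1 - rho**t) gating factor is our assumed realization.
    """
    data_loss = criterion(outputs, targets)
    reg = sum(p.pow(2).sum() for p in model.parameters())  # R(theta) = ||theta||_2^2
    return data_loss + (1.0 - rho ** t) * lam * reg
```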
Experiments
In this section, to verify the performance of the proposed TSGD and TSGD+2ExpLR, we compare them with other algorithms, including SGDM and Adam, on the IRIS, CASIA-FaceV5, and CIFAR classification tasks. We set the same random seed through PyTorch to ensure fairness. Unless stated otherwise, the architecture used for the image experiments was ResNet-18. The computational setup is shown in Table 1. Our implementation is available at https://github.com/kunzeng/TSGD (accessed on 20 November 2022).
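For reference, a standard recipe for fixing the random seed in PyTorch, as used here for fairness; the seed value itself is a placeholder, as the paper does not state it in this section.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    """Fix all relevant random number generators for comparable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # trade speed for reproducibility
    torch.backends.cudnn.benchmark = False
```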
IRIS and BP neural network.
The IRIS dataset is a classic dataset for classification, machine learning, and data visualization. It includes three iris species with 50 samples each and several properties of each flower. An excellent way to determine suitable learning rates is to experiment on a small but representative sample of the training set, and the IRIS dataset is a good fit. Because of the small number of samples, we used all 150 samples as the training set. The performance of each algorithm is represented by the accuracy and loss value on the training set. We set the relevant parameters using empirical values and a grid search: samplesize = 150, epoch = 200, batchsize = 1, lr (SGD: 0.003, SGDM: 0.05, TSGD: 0.03), up_lr = 0.5, low_lr = 0.005, rho = 0.977237, and rho1 = rho2 = 0.962708. The experimental results show that the method achieves its expected effect. In Figures 4 and 5, the training process of SGDM shows more oscillation, and the convergence speed of SGD is slower; TSGD is more stable and faster than both. Notably, the TSGD+2ExpLR algorithm is stable during training and increases accuracy quickly.

CASIA-FaceV5 and ResNet18.

CASIA-FaceV5 contains 2500 color facial images of 500 subjects. All face images are 16-bit color BMP files with a resolution of 640×480. Typical intra-class variations include illumination, pose, expression, eyeglasses, and imaging distance, which provides a strong ability to distinguish between different algorithms. We preprocessed the images for better training and reshaped them to 100 × 100. The parameters were set as: samplesize = 2500, epoch = 100, batchsize = 10, lr = 0.1, lr_u = 0.5, lr_l = 0.005; then T = ceil(samplesize/batchsize) × epoch = 25,000, and thus $\rho = 10^{\lg(\rho^T)/T} \approx 0.999815$; the same rule gives $\rho_1 = \rho_2 \approx 0.999696$. The experimental results are in Figures 6 and 7. Although the accuracy of the TSGD algorithm does not exceed that of Adam, it exceeds the SGD algorithm, and the training process is more stable. The main reason may be that the CASIA-FaceV5 dataset has few samples and many categories, with only five samples per category, a scenario to which the Adam algorithm is better suited. The accuracy of the TSGD+2ExpLR algorithm is much higher than that of the other algorithms, its loss value is much lower, and its training process is more stable.

Cifar10 and ResNet18.

The CIFAR10 dataset was collected by Hinton et al. [49]. It contains 60,000 color images in 10 categories, with a training set of 50,000 images and a test set of 10,000 images. The parameters are adjusted as: epoch = 200, batchsize = 128, ρ = 0.9999411, ρ_1 = ρ_2 = 0.9999028, lr = 0.1, lr_u = 0.5, and lr_l = 0.005. The other parameters were the same as above or the defaults recommended by the model. The architecture was ResNet18, and the learning rate was divided by 10 at epoch 150 for SGDM, Adam, and TSGD. Figure 8 shows the test accuracy curves of each optimization method. SGDM had the lowest accuracy and more oscillation before 150 epochs, and Adam was fast but less accurate after 150 epochs. The TSGD and TSGD+2ExpLR methods had higher accuracy, with smoother and more stable accuracy curves; the speed of TSGD even exceeded that of Adam. Figure 9 shows the loss of the different algorithms on the training set.
At the beginning of training, the loss of TSGD decreases slowly, possibly because of the small learning rate of the warmup stage, which stabilizes the model. However, the loss of the TSGD algorithm then decreases faster and is the smallest in the later stage.

Cifar100 and ResNet18.

We also present our results on CIFAR100. The task is similar to Cifar10; however, Cifar100 contains 100 classes grouped into 20 superclasses. The parameters are the same as for CIFAR10. We show the performance curves in Figures 10 and 11. The performance of each algorithm on CIFAR100 is similar to that on CIFAR10. The two algorithms proposed in this paper hold the top two spots throughout, and their curves are smoother and more stable. In particular, with TSGD+2ExpLR, the accuracy reaches its peak at about 100 epochs, which reflects the effect of the warmup-decay learning rate strategy with double exponential functions. The losses of the TSGD and TSGD+2ExpLR algorithms decrease faster and end up very small, almost zero. TSGD and TSGD+2ExpLR are considerable improvements over SGDM and Adam, with faster speed and higher accuracy.
Conclusions
In this paper, we combined the advantages of SGDM, with its fast training speed, and SGD, with its high accuracy, and proposed a scaling transition method from SGDM to SGD. At the same time, we used two exponential functions to combine the warmup and decay strategies, proposing the 2ExpLR strategy. This method allows the model to train more stably and obtain higher accuracy. The experimental results showed that the TSGD and 2ExpLR algorithms perform well in terms of training speed and generalization ability.
"Computer Science",
"Mathematics"
] |
Aligned Weight Regularizers for Pruning Pretrained Neural Networks
Pruning aims to reduce the number of parameters while maintaining performance close to the original network. This work proposes a novel self-distillation based pruning strategy, whereby the representational similarity between the pruned and unpruned versions of the same network is maximized. Unlike previous approaches that treat distillation and pruning separately, we use distillation to inform the pruning criteria, without requiring a separate student network as in knowledge distillation. We show that the proposed cross-correlation objective for self-distilled pruning implicitly encourages sparse solutions, naturally complementing magnitude-based pruning criteria. Experiments on the GLUE and XGLUE benchmarks show that self-distilled pruning increases mono- and cross-lingual language model performance. Self-distilled pruned models also outperform smaller Transformers with an equal number of parameters and are competitive against (6 times) larger distilled networks. We also observe that self-distillation (1) maximizes class separability, (2) increases the signal-to-noise ratio, and (3) converges faster after pruning steps, providing further insights into why self-distilled pruning improves generalization.
Introduction
Deep neural networks (DNNs) have grown increasingly large in recent years. As a result, models require greater storage, more resources for training and inference (e.g., GPUs and TPUs), longer compute times, and larger carbon footprints. This is largely due to the rise of masked self-supervised learning (SSL), which trains DNNs (e.g., Transformers in NLP) on a large collection of samples that do not have task labels but instead use a subset of the inputs as labels. Given these challenges, it has become more difficult for machine learning practitioners to use SSL-pretrained models for fine-tuning on downstream tasks. While training tricks such as effective batch sizes, gradient accumulation, and dynamic learning rate schedules (Howard and Ruder, 2018) have improved the efficiency of fine-tuning DNNs under resource constraints, they can still come at a cost; e.g., gradient accumulation leads to fewer updates.
Pruning (LeCun et al., 1990; Reed, 1993) is a type of model compression method (Buciluǎ et al., 2006) that aims to address these shortcomings by zeroing out a subset of weights in the DNN while maintaining performance close to the original model. Retraining is often carried out directly after each pruning step to recover from pruning-induced performance drops; this process is referred to as iterative pruning. Although iterative pruning has been extensively studied in the SSL setting (Hassibi and Stork, 1993; Han et al., 2016; Ding et al., 2018) and the transfer learning setting (Molchanov et al., 2016; Gordon et al., 2020; Sanh et al., 2020), little is known about pruning DNNs in the zero-shot setting, where a model is required to make predictions on samples from classes that are unobserved during training. One salient example is pretrained cross-lingual language models (XLMs) (Conneau and Lample, 2019; Conneau et al., 2020a), which are trained with a masked/translation language model (MLM/TLM) objective to predict tokens for a large set of different languages, forcing the XLM to learn similar representations across languages. After cross-lingual pretraining, the model is further fine-tuned on a downstream task in one language (e.g., English) and then evaluated on different languages in the zero-shot setting (e.g., Spanish, French, Chinese, etc.). In this context, applying current pruning methods can damage the cross-lingual alignment that has been learned during pretraining. Ideally, we would prune XLMs in a way that avoids this alignment distortion, which affects the zero-shot performance of pruned XLMs. Additionally, overfitting to the language used for fine-tuning becomes more of an issue as parameters are progressively removed throughout iterative pruning, since the remaining weights grow relatively large, moving away from an "aligned" XLM state. This is an important problem to address, as applying large pretrained models in the zero-shot setting for natural language and other modalities (e.g., images and audio) is of practical importance: e.g., using XLMs in production for multiple languages while requiring annotations in only a single language for fine-tuning, making predictions on unseen classes at test time from pretrained visual representations (Bucher et al., 2017) using only semantic descriptions (i.e., label similarity to known classes), or zero-shot predictions in pretrained multimodal models such as CLIP (Radford et al., 2021).
Hence, this work addresses the alignment distortion pruning problem by introducing AlignReg, a class of weight regularizers for magnitude-based pruning that force pruned models to have parameters that point in a similar direction or have a similar distribution to the parameters of the original pretrained network. To our knowledge, this is the first study on how iteratively pruned models perform in the zero-shot setting and how the solution differs from solutions found in the non-zero shot setting. We believe our findings have a strong practical implication as well-established pruning criteria may not be suitable given the observed discrepancy between zero-shot performance and the typically reported non-zero shot performance. Moreover, our proposed weight regularizer improves overall pruning generalization in zero-shot cross-lingual transfer. Below, we summarize our contributions.
• The first analysis of pruning cross-lingual models, how pruning affects zero-shot cross-lingual transfer, and how performance differs from pruning in the SSL setup.
• A weight regularizer that mitigates alignment distortion by minimizing the layer-wise Frobenius norm or unit similarity between the pruned model and unpruned model, avoiding overfitting to single language task fine-tuning.
• A post-analysis of weight distributions after pruning and how they differ across module types in Transformers.
Related Work
Below we describe regularization-based pruning, other non-magnitude based pruning and how masked language modeling (MLM) implicitly learns to align cross-lingual representations.
Regularization-based pruning. Pruning can be achieved by using a weight regularizer that encourages network sparsity. Three well-established regularizers are L_0 (Louizos et al., 2018), L_1 regularization (Liu et al., 2017; Ye et al., 2018), and the commonly used L_2 regularization for weight sparsity (Han et al., 2015, 2016). Wang et al. proposed an L_2 regularizer whose influence increases throughout retraining and showed that the increasing regularization improves pruning performance. For structured pruning, where whole blocks of weights are removed, Group-wise Brain Damage (Lebedev and Lempitsky, 2016) and SSL (Wen et al., 2016) propose to use Group LASSO (Yuan and Lin, 2006) to learn structured solutions.
Importance-based pruning. Magnitude-based pruning (MBP) relies on the assumption that weight or gradient magnitudes correlate with a weight's importance to the overall output of the network. Mozer and Smolensky instead use a learnable gating mechanism that approximates layer importance, finding that weight magnitudes reflect importance statistics. To measure weight importance as the difference in loss between the pruned and unpruned network, LeCun et al. approximate this difference with a Taylor series up to second order: the first term involves the product of the gradient and the weight magnitude, and the second term involves an approximation of the Hessian and the square of the weight magnitude. However, computing the Hessian and even its approximations (LeCun et al., 1990; Hassibi and Stork, 1993; Dong et al., 2017; Wang et al., 2019; Singh and Alistarh, 2020) can significantly slow down retraining. In our work, we avoid the requirement of computing the Hessian or approximations thereof, as it is not scalable for models such as XLM-R (Conneau et al., 2020a). Park et al. extended MBP to block approximations to avoid pruning low-magnitude weights that may be connected to high-magnitude weights in adjacent layers. Lee et al. provided a method to automatically choose the sparsity of layers by using a rescaled version of weight magnitude that incorporates the model-level distortion incurred by pruning.
Implicit Alignment in Pretrained MLMs
In the context of multi-task learning, Chen et al. (2020) minimize the mean squared error between pretrained weights and the weights being learned for a set of different source tasks to avoid catastrophic forgetting in the continual learning setting. Conneau et al. (2020b) found that multilingual MLM (i.e., training with an MLM objective on concatenated text from multiple languages) naturally leads to models with strong cross-lingual transfer capabilities. Additionally, they find that this also holds for monolingual models that do not share vocabulary across monolingual corpora; the only requirement is that weight sharing is used in the top layers of the multilingual encoder. In the context of our work, we want to bias our fine-tuned and iteratively pruned model to have similar geometric properties and symmetries to these pretrained MLMs in order to preserve zero-shot cross-lingual transfer.
Methodology
In this section, we describe how our proposed AlignReg weight regularizers can improve pruning performance in both the supervised learning and zero-shot pruning settings. We focus on two regularizers, namely, a neuron correlation-based regularizer (cosine-MBP) and a Frobenius layer-norm regularizer (frobenius-MBP). Let the training set consist of D samples, where each X_i is a sequence of vectors X_i := (x_1, . . . , x_n) with x_i ∈ R^d (e.g., d = 512). For structured prediction (e.g., NER and POS), y_i ∈ R^{n×c}, and for single and pairwise sentence classification, y_i ∈ R^c, where c is the number of classes. Let θ = (θ_1, . . . , θ_L) be the parameters of a pretrained network f with L layers, where θ_l refers to the parameters, including the weight matrix W_l and bias b_l, at layer l. Let f_θ̃ be a network with parameters θ̃ consisting of weights W̃_l ∈ R^{N_{l−1}×N_l} and biases b̃_l ∈ R^{N_l}, where N_l is the number of units in the l-th layer. Here, W̃_l := W_l ⊙ M_l, where M_l is the pruning mask. For MBP (Karnin, 1990), we remove the weights of W_l, ∀l ∈ L, with the smallest absolute magnitude until a specified percentage p of weights is removed. Note that this is a layer-wise process and requires the pruned weights to be masked with M_l, which has 0 entries for weights to be pruned and 1 entries for unpruned weights. Global MBP can also be used, whereby the weights {W_l}_{l=1}^L are first vectorized and concatenated prior to choosing the p lowest weight magnitudes. Unlike layer-wise MBP, the percentage of weights removed in each layer can vary for global MBP. Typically, weight regularization is used with MBP to encourage weight sparsity; the objective for iterative pruning is then the task loss plus a magnitude regularization term, where λ controls the influence of the weight-magnitude regularization. We now describe our proposed AlignReg.
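The two mask constructions just described can be sketched as follows; the helper names and the strict-threshold convention are ours.

```python
import torch

def layerwise_mbp_mask(weight: torch.Tensor, p: float) -> torch.Tensor:
    """Layer-wise MBP mask (sketch): zero out the p fraction of this
    layer's entries with the smallest absolute magnitude."""
    k = int(p * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

def global_mbp_masks(weights, p: float):
    """Global MBP (sketch): one threshold over all layers, so per-layer
    sparsity can vary, as described above."""
    flat = torch.cat([w.abs().flatten() for w in weights])
    k = max(1, int(p * flat.numel()))
    threshold = flat.kthvalue(k).values
    return [(w.abs() > threshold).float() for w in weights]
```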
AlignReg -Pruning-Aware Regularization
AlignReg can be used to align weights unit-wise or layer-wise between unpruned and pruned networks. We initially discuss the cosine-MBP regularizer.
cosine-MBP aims to preserve the inherent cross-lingual alignment during iterative pruning by minimizing the angle between parameter vectors of the same unit in the pruned and unpruned networks. The intuition is that cross-lingual alignment relies more on parameter vector direction than on vector magnitude; moreover, as the network is pruned, the remaining weights change magnitude to account for the information loss. To apply AlignReg to linear layers within Transformers, we compute the pairwise cosine similarity between pruned weights W̃_l ⊂ f̃ and unpruned weights W_l ⊂ f for all layers l. For W_l ∈ R^{N_{l−1}×N_l} of the l-th layer, the average weight correlation is

$$\rho(W_l, \widetilde{W}_l) = \frac{1}{N_l} \sum_{i=1}^{N_l} \frac{W_{li}^{\top} \widetilde{W}_{li}}{\lVert W_{li} \rVert_2 \, \lVert \widetilde{W}_{li} \rVert_2}, \qquad (2)$$

where W_{li} is the i-th column of the matrix, corresponding to the i-th unit of the l-th layer. Intuitively, ρ(W_l, W̃_l) is the average cosine similarity between weight vectors of the same unit at the l-th layer of the pruned and unpruned networks. Adding AlignReg to the objective results in Equation (3), where λ ∈ [0, ∞) controls the importance of AlignReg relative to the main cross-entropy loss ℓ_ce(·, ·). The gradient of the loss with respect to θ is then expressed as Equation (4), where W_{l,(:,j)} and W̃_{l,(:,j)} are the j-th columns of W_l and W̃_l, respectively. Thus, this regularization favors solutions with high cosine similarity between units of the pruned and unpruned networks.

Algorithm 1: AlignReg Pruning
1: Input: weight tensors W_1, . . . , W_L of a fine-tuned network; percentage p of weights to remove per layer
2: Output: pruned weight tensors W̃_1, . . . , W̃_L
3: for l = 1, . . . , L do
4:   Compute ρ(W_l, W̃_l) with Eq. (2)
5:   Set w_{s_l} as the s_l-th smallest element of |W̃_l|
6:   Set M_l from the threshold w_{s_l}
7:   Set W̃_l ← M_l ⊙ W_l
8: end for
9: Compute L(θ) according to Eq. (3)

We also consider a layer-wise ρ(W, W̃) that relaxes the unit-level alignment to whole layers. This is partially motivated by the fact that neural networks can exhibit similar output activation behavior even when neuron weights have been permuted within a layer (Brea et al., 2019). To perform this, we simply apply Equation (2) with vectorized weights, ρ(vec(W_l), vec(W̃_l)), and the subsequent partial derivatives in Equations (4) and (5) are applied for updating W̃_l. In our experiments we did not see a significant difference using vectorized weights and thus use unit-wise cosine similarity.
Algorithm 1 shows how AlignReg is applied for a single mini-batch update during an iterative pruning epoch.
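A minimal sketch of the ρ(W_l, W̃_l) computation for one linear layer, assuming columns correspond to units as defined above; the sign convention for how the term enters the loss is our assumption, chosen so that high similarity is rewarded.

```python
import torch
import torch.nn.functional as F

def alignreg_cosine(pruned_weight: torch.Tensor, ref_weight: torch.Tensor) -> torch.Tensor:
    """rho(W_l, W~_l) from Eq. (2) (sketch): average cosine similarity
    between the column (unit) vectors of the pruned layer and the
    corresponding columns of the unpruned reference layer."""
    sims = F.cosine_similarity(pruned_weight, ref_weight, dim=0)  # one value per unit
    return sims.mean()

# In the training loss, high similarity is rewarded, so the term enters
# with a negative sign (assumed convention):
#   loss = ce_loss - lam * sum(alignreg_cosine(Wt, W) for Wt, W in layer_pairs)
```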
Relaxing Unit-Wise AlignReg to a Layer-Wise Frobenius Distortion Formulation

Thus far, we have described the application of cosine similarity as a measure of similarity between unpruned and pruned weights of the same units. However, this may be too strict a constraint, particularly at high compression rates, where the remaining weights of a unit are forced to have higher norms to compensate for the zeroed weights. Hence, an alternative measure is the layer-wise Frobenius norm (frobenius-MBP) regularizer based on the difference between weights, $\lVert W - \widetilde{W} \rVert_F$. MBP itself can be viewed in terms of minimizing the Frobenius distortion (Han et al., 2016; Dong et al., 2017) as $\min_{M : \lVert M \rVert_0 = p} \lVert W - M \odot W \rVert_F$, where ⊙ is the Hadamard product, ∥·∥_0 denotes the entrywise 0-norm, and p is a constraint on the number of weights to remove as a percentage of the total number of weights for that layer. In the zero-shot setting, we need to account for out-of-distribution Frobenius distortions, such as alignment distortion in XLMs due to pruning and overfitting to a single language. Taking the view of Frobenius distortion minimization when using our weight regularizer, we reformulate it to include frobenius-MBP, where W_T are the weights of the pretrained model prior to fine-tuning, which are cross-lingually aligned from the masked language modeling (MLM) pretraining objective. In our experiments, λ = 5 × 10^{-4}.
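A corresponding sketch of the Frobenius penalty; whether the squared or plain norm is used in the paper's reformulation is an assumption here.

```python
import torch

def alignreg_frobenius(pruned_weight: torch.Tensor,
                       pretrained_weight: torch.Tensor) -> torch.Tensor:
    """frobenius-MBP penalty (sketch): squared Frobenius distance between
    the current (pruned) weights and the cross-lingually aligned
    pretrained weights W_T, weighted by lambda = 5e-4 in the experiments.
    The squared form (rather than the plain norm) is an assumption."""
    return (pruned_weight - pretrained_weight).pow(2).sum()
```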
frobenius-MBP Implicitly Aligns Eigenvectors
To show explicitly that Frobenius distortion minimization aligns the fine-pruned and pretrained parameter vectors, we expect their singular vectors to be close as well. We can use the Eckart-Young-Mirsky theorem (Golub et al., 1987) to express Frobenius distortion minimization as Equation (7), where, by the unitary invariance of the Frobenius norm, the factors U, V vanish and the singular value matrix is left to approximate W_T, hence the inclusion of Σ. We write $X = U_k \Sigma_k^{1/2}$, $Y = \Sigma_k^{1/2} V_k^{\top}$, and $XY = A_k$. Hence, we can further describe the minimization as $\lVert \Sigma - U^{\top} W_{T,k} V \rVert_F^2$ and, since X, Y are unitary, as $\lVert \Sigma - \Sigma_k \rVert_F^2$.
Connections to Knowledge Distillation
Knowledge distillation (KD) works by using the outputs of the last layer (Hinton et al., 2015) or intermediate layers (Romero et al., 2015), whereas AlignReg operates directly on minimizing a divergence or distance between weight tensors as opposed to their corresponding output activations. Hence, AlignReg does not necessarily need training data, as it operates directly on aligning weight tensors. Since the networks used for alignment are architecturally identical, we can show that maximizing weight similarity is equivalent to minimizing the distance between their corresponding output activations (Romero et al., 2015) when the norm of the input Z is smaller than the output range of σ.
For our experiments, we use XLM-RoBERTa Base, which contains Gaussian Error Linear Unit (GELU) activation functions, formulated as $\sigma(Z_{li}) := \frac{Z_{li}}{2}\left(1 + \operatorname{erf}\left(Z_{li}/\sqrt{2}\right)\right)$, where erf is the error function, σ(·) is a monotonic activation function, and Z_{li} is the input vector. The GELU activation has the properties that for large Z_{li} > 0 it is equivalent to the ReLU activation, while for Z_{li} ≤ 0 it tends to 0. For Z_{li} > 0, ∥Z_{li}∥_2 ≤ 1, and a monotonic piecewise linear function σ(·), the inequality in Equation (8) holds.
Layer normalization leads to features with zero mean and unit variance, and hence ∥Z_{li}∥_2 ≤ 1. Hence, minimizing the Frobenius distortion between pruned and unpruned weights is equivalent to minimizing the mean squared error (MSE) between output activations, as in the knowledge distillation method used for FitNets (Romero et al., 2015). In contrast, KD using FitNets encourages the student network to produce activation outputs matching the teacher's, with permutation invariance over a unit's incoming weights, without restricting the weights themselves to be similar. Unlike KD, this minimization can be performed without any data.
Iterative Pruning Details. Texts are tokenized using the SentencePiece BPE tokenizer (Sennrich et al., 2016) with a vocabulary of 250K tokens. For structured prediction tasks (POS and NER), a single-layer feed-forward (SLFF) token-level classifier is used on top of XLM-R Base, and for sentence-level tasks an SLFF sentence-level classifier is used. The batch size is 32, the learning rate is 5 × 10^{-6}, and the maximum sequence length is 256 for all tasks, except for POS, for which we use a learning rate of 2 × 10^{-5} with the Adam optimizer (Kingma and Ba, 2015) with weight decay (AdamW) and a maximum sequence length of 128. We carry out a pruning step after every 15 training epochs, uniformly pruning 10% of the parameters at each pruning step. We omit pruning of the embedding layers, layer normalization parameters, and the classification layer, as they account for a relatively small fraction of the total parameter count (< 1%) and play an important role in XLM generalization. Although prior work has suggested non-uniform pruning schedules (e.g., the cubic schedule (Zhu and Gupta, 2017)), we did not see major differences from uniform pruning in preliminary experiments. Each task is trained on English data only and evaluated on all available languages for that task. Hence, we expect the achievable compression to be lower, as maintaining performance in the zero-shot cross-lingual setting is more difficult than in the monolingual setting (e.g., GLUE tasks).
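A hedged sketch of one such pruning step using PyTorch's built-in magnitude pruning. Note that `torch.nn.utils.prune` interprets `amount` relative to the currently unpruned weights, whereas the schedule above removes 10% of the total parameters per step; the name-based skip rule is a placeholder, since real module names depend on the XLM-R implementation.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def pruning_step(model: nn.Module, amount: float = 0.10):
    """One layer-wise MBP step (sketch): prune `amount` of the weights of
    every linear layer, skipping embeddings, LayerNorm, and the
    classifier, as described above."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and "classifier" not in name:
            prune.l1_unstructured(module, name="weight", amount=amount)

# Schedule (sketch): fine-tune for 15 epochs, prune, and repeat.
# for step in range(num_pruning_steps):
#     train(model, epochs=15)
#     pruning_step(model, amount=0.10)
```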
Empirical Results
We now discuss results on the XGLUE tasks.
News Classification (NC) Figure 1 shows the results on news classification, where the category of a news article is predicted and evaluated in 5 languages after training and iterative pruning on English text. Firstly, we observe that the trend in iterative pruning performance degradation is somewhat volatile. From preliminary experiments, we found news classification to require only 3 epochs to converge for standard fine-tuning on XLM-RoBERTa Base. We find that this task is relatively "similar" to the pretraining task and therefore recovers more easily from pruning steps. Overall, both Cosine-MBP and Frobenius-MBP consistently lead to the best zero-shot test performance across both pruning steps and languages.
Question Answer Matching (QAM) Figure 3 shows the test accuracy on English and the zero-shot test accuracy on French and German for Question-Answer Matching (QAM), which involves predicting whether a question is answered correctly given a question-answer pair. We find that Frobenius-MBP and Cosine-MBP maintain higher accuracy across multiple pruning steps, outperforming the baselines. More generally, there is close to a 2% drop in average test accuracy on French and German compared with testing on samples from the language used in training.
Named Entity Recognition (NER) The NER cross-lingual dataset is made up of CoNLL-2002 and CoNLL-2003 (Sang and De Meulder, 2003), covering English, Dutch, German, and Spanish with 4 named entity types. From Figure 2, we find that cross-lingual transfer of pruned models is most difficult for German and Dutch, which come from the same language family and share commonalities such as word order and similar vocabularies. The primary reason for the difficulty in maintaining performance at high compression rates on this NER dataset is that it has only 15k training samples, significantly fewer than the remaining XGLUE tasks (the majority contain 100k training samples). Thus, not only is there less training data with which to recover directly after each pruning step, but the pruning step interval itself is shorter. In contrast, English test performance remains close to the original up to 25% of remaining weights, unlike the remaining languages. We find that gradient-MBP eventually overtakes the MBP approaches past 20% remaining weights; however, accuracy has already degraded substantially at this compression level. We find that the Cosine-MBP and Frobenius-MBP weight regularizers achieve the best pruned-model performance above 20% remaining weights, with Lookahead pruning and L_0-regularized MBP being competitive in zero-shot performance.
Part of Speech Tagging (POS) The Part of Speech (PoS) tagging dataset consists of a subset of the Universal Dependencies treebank (Nivre et al., 2020) and covers 18 languages. In Figure 4, we see that both Cosine-MBP and Frobenius-MBP tend to outperform the baselines, although L_0-based pruning (Louizos et al., 2018) has zero-shot accuracy similar to Cosine-MBP. There is also a clear discrepancy between SSL accuracy (English) and zero-shot accuracy (average), the latter following closer to linear decay after 40-50% of weights remaining. Generally, both Cosine-MBP and Frobenius-MBP outperform the baselines, with the exception of Thai and Urdu at higher compression rates (< 40% remaining weights), two of the most under-resourced of the 18 languages.
Web Page Ranking (WPR) aims to predict whether a web page is relevant (1-5 ratings, "bad" to "perfect") to an input query and is evaluated for 7 languages using the Normalized Discounted Cumulative Gain (nDCG). From Figure 5, we see that in the 15%-45% region, the average zero-shot performance degrades faster than performance on English, the language used for training. Languages semantically and syntactically distant from English, such as Chinese, already suffer from loss of alignment due to pruning, as the performance gap between the proposed methods (and baselines) and random pruning narrows.

Cross-Lingual Natural Language Inference (XNLI) Figure 6 shows the zero-shot cross-lingual transfer for various unstructured pruning methods. We find that both the accuracy on the English test set (i.e., SSL generalization) and the average zero-shot test accuracy are consistently improved using Cosine-MBP and Frobenius-MBP, outperforming L_0 pruning, Lookahead pruning, and LAMP. We find that morphologically rich languages such as Arabic, Swahili, and Turkish degrade in performance linearly once performance begins to drop, at around 60% of remaining weights. This trend is roughly followed by all MBP-based pruning methods. Additionally, test accuracy on English can be maintained within a 10% drop of the original test accuracy down to 20% of remaining weights for MBP, while Swahili stays within a 10% accuracy drop only down to 55% of remaining weights. Hence, iterative pruning in the zero-shot setting leads to faster performance degradation for languages that are typologically or etymologically further from the language used for fine-tuning. When comparing English and the average zero-shot test accuracy, we see that the slope is steeper after the inflection point for all pruning methods, in addition to the greater than 10% accuracy drop across pruning steps.

XGLUE Average Result Finally, in Table 1 we show the overall and average task understanding scores on the XGLUE benchmark for our proposed AlignReg weight regularizers and the pruning baselines. We find that the use of AlignReg Cosine-MBP and Frobenius-MBP better preserves cross-lingual alignment during model pruning, thereby outperforming the other MBP baselines, including LAMP and Lookahead pruning, in zero-shot cross-lingual performance.

Discussion From our experiments, we found that layer-wise pruning tends to outperform global pruning. This can be explained by the clear discrepancy between the weight norms of different layer types within each self-attention block: global pruning chooses the majority of weights to prune from the layer type with the smallest norm, leading to an information bottleneck, or layer collapse (Lee et al., 2018); this is due to layer normalization being applied after the query, key, and value (QKV) parameters, rescaling features such that weight magnitudes remain low. Hence, this motivates our focus on applying AlignReg to layer-wise MBP. This is reflected in Figure 7, which shows the weight norm by layer type for each layer for MBP. We see that QKV weight norms are distinctly higher than those of the remaining fully connected layers (the attention output layer, the intermediate position-wise feed-forward layer, and the block's output layer), with the exception that the attention output layer norm becomes higher in layers 3-8. For the majority of tasks, the drop in zero-shot test performance accelerates close to 30% of remaining weights. This is consistent for all pruning methods, and therefore our analysis has focused on this operating region.
We also note that the effect of MBP (including our AlignReg regularization-based MBP) on zero-shot performance for different languages depends heavily on the semantic distance of the evaluated language from the single language used for training. For example, in Figure 6, Arabic, Bulgarian, Swahili, and Hindi show the largest drops in test accuracy around 20-60% remaining weights. Similarly, Arabic, Thai, and Hindi suffer most around 20-60% for PoS tagging in Figure 4. However, we acknowledge that this partly depends on the proportion of training data per language used during pretraining of the underlying language model, in our case XLM-R Base.
Lastly, to show the representational degradation of pruned networks, in Figure 8 we visualize the class separability via a t-SNE plot of two principal components of the last hidden representation corresponding to the [CLS] token of an iteratively pruned XLM-R Base for PAWSX. Even from only two principal components of a single token input, we clearly see a change in class separability from 31% to 28% remaining weights, reflecting the lack of linear separation.
Conclusion
In this paper, we analysed iterative pruning in the zero-shot setting, where a pretrained masked language model uses self-supervised learning on text from various languages but only a single language is available for downstream task fine-tuning. We find that, for some tasks (NER and XNLI), some languages degrade faster than others under iterative pruning, and we propose a weight regularizer that biases the iteratively pruned model towards weight distributions close to the cross-lingually aligned pretrained state. This improves over well-established weight regularization methods for magnitude-based pruning in both the standard supervised learning setting and the zero-shot setting.
"Computer Science"
] |
Temporal and Spatial Characteristics of Hard X-Ray Sources in a Flare Model with a Vertical Current Sheet
We analyzed changes in the height of the coronal hard X-ray (HXR) source for flares SOL2013-05-13T01:50 and SOL2013-05-13T15:51. Analysis of the Reuven Ramaty High Energy Solar Spectroscopic Imager data revealed the downward motion of the HXR source and the separation of the sources by energy and height. In the early stages of the flares, a negative correlation was found between the HXR source area in the corona and HXR flux. For the SOL2013-05-13T15:51 event, an increasing trend in the time delay spectra at the footpoints was obtained. For both events, the spectra of the time delays in the coronal HXR source showed a decreasing trend with energy in certain flare phases. To interpret the observed phenomena, we considered a flare model of collapsing traps and calculated the distribution functions of accelerated electrons along the magnetic loop using a nonstationary relativistic kinetic equation. This approach considers betatron and Fermi first-order acceleration mechanisms. The increasing trend of the time delay spectra at the footpoints was explained by the high mirror ratio in the magnetic loop and betatron acceleration mechanism. The observed features in the spatial and temporal behavior of the HXR sources, such as the negative correlation between the HXR source area and HXR flux, can be interpreted by the collapsing trap model.
INTRODUCTION
According to observations in the extreme ultraviolet (EUV) range and model calculations, magnetic field configurations with a vertical current sheet in the corona are possible in active regions of the solar atmosphere (Warren et al. 2018; Gary et al. 2018; Jiang et al. 2018). In the standard flare model CSHKP (Carmichael-Sturrock-Hirayama-Kopp-Pneuman), the primary acceleration of charged particles occurs in the cusp and current sheet regions (Lin & Forbes 2000; Zharkova et al. 2011; Chen et al. 2020) above the soft X-ray loops. The ejection of collisionless plasma from the magnetic reconnection region is accompanied by the transformation of magnetic loops, initially extended in the direction of the current sheet, to a configuration close to a dipole (magnetic relaxation). The magnetic loops that relax in this manner undergo longitudinal and transverse contraction, which leads to further acceleration of electrons through betatron acceleration and Fermi first-order acceleration (Somov & Kosugi 1997). Magnetic field dynamics result in spatial and temporal features of hard X-ray (HXR) and radio sources in flares. For example, the relaxing magnetic field in the cusp explains the position of the HXR source above the loop (Masuda et al. 1994; Masuda et al. 1995) and its brightness (Fletcher & Martens 1998; Shabalin et al. 2022).
Analysis of the relative motion of X-ray sources at the footpoints of the flaring loop and at the looptop allows us to determine the regions of magnetic field reconnection and charged-particle acceleration (Krucker et al. 2005; Somov et al. 2005). In the works of Sui & Holman (2003), Sui et al. (2004), Liu et al. (2004, 2008), Veronig et al. (2006), and Joshi et al. (2007), an observed decrease in the height of the X-ray source was noted in some solar flares at an early stage of the HXR flux rise. Sui et al. (2004) noted a decrease in the height of the coronal HXR sources at the beginning of three M-class flares that occurred on April 14-16, 2002. To explain the downward motion of the HXR sources, the authors proposed two mechanisms: one is associated with the relaxation of magnetic fields in the cusp region formed during magnetic reconnection, and the other is related to a change in the type of magnetic field reconnection from the slow to the Petschek regime. The structure of the magnetic field in the cusp region and its evolution in the collapsing trap model (hereinafter, the CollTr model) allow us to expect an altitude change of the HXR source, which can be caused by loop shrinkage (Somov & Kosugi 1997). In models, loop contraction is associated with the "unshearing" of magnetic loops during energy dissipation. This process results in the converging motion of the footpoints and downward movement of the X-ray looptop source (Ji et al. 2007; Zhou et al. 2013). In Li & Gan (2005) and Zaitsev & Stepanov (2020), the shrinking of the loop at 34 GHz is explained by a current decrease due to an increase in resistance caused by the Rayleigh-Taylor plasma instability within the chromospheric footpoints of the magnetic loop.
In addition to changes in the height of the HXR sources, the relaxation of magnetic fields in the CollTr model can affect the hardness of the X-ray and radio spectra, as well as the time delay spectra. Changes in the hardness of the energy spectrum in the X-ray range during solar flares have been extensively studied in many papers. The most commonly reported pattern is soft-hard-soft (SHS) (Parks & Winckler 1969; Kane & Anderson 1970; Benz 1977; Brown & Loran 1985; Lin & Schwartz 1987; Fletcher & Hudson 2002). In addition to the SHS pattern, soft-hard-harder (SHH) (Frost & Dennis 1971; Cliver et al. 1986; Kiplinger 1995; Lysenko et al. 2019) and hard-soft-hard (HSH) (Shao & Huang 2009) patterns have also been observed. According to the authors, possible reasons for the formation of these patterns include a change in the efficiency of the accelerator, the presence of several acceleration stages, particle transfer effects in the loop (owing to the return current), and the influence of turbulent modes in the acceleration and radiation regions. Grigis & Benz (2004) showed that elementary flare bursts also exhibit SHS patterns. According to Battaglia & Benz (2006), an analysis of spatially separated sources at the looptop and footpoints showed that the SHS pattern was observed separately in the spectra of each source; the authors concluded that this SHS pattern was generated in the accelerator. The CollTr model assumes a change in the magnetic field configuration associated with the relaxation (collapse) of the magnetic loops. This process can affect the formation of the accelerated-electron spectrum and, hence, the dynamics of the X-ray spectrum.
The relaxation process of magnetic fields can manifest itself in the influence on the dynamic spectrum of the time delays of HXR radiation. Charikov et al. (2015) demonstrated that the time delay spectra (TD spectra) depend on the energy and pitch-angle distributions of the accelerated electrons and the parameters of the magnetic field and plasma. Typically, the analysis of TD spectra in the HXR range is performed using full-Sun observational data (Bai & Ramaty 1979;Bespalov et al. 1987;Tsap et al. 2015). In the full-Sun dataset, researchers encounter superimposed HXR flux originating from the chromospheric and coronal regions of the flare arcade. However, the analysis of HXR fluxes from the local regions of magnetic loops on the millisecond timescale, which is preferable for calculating time delays, is complicated because of poor counting statistics.
This study aims to validate the CollTr model based on HXR radiation data and the features of its evolution in the region below the vertical current sheet. Such manifestations may include a decrease or absence of an increase in the height of the HXR sources, a negative correlation between the area of the coronal source and the HXR flux, and features in the energy spectra and TD spectra, particularly at the initial stage of flare development. The observed features of HXR radiation were substantiated by numerically solving the relativistic nonstationary kinetic equation for accelerated electrons propagating in flaring magnetic loops. In particular, we emphasize that in the simulations, in addition to the transport processes, Coulomb losses and diffusion, return current, and magnetic mirroring, the equation also considers the influence of the magnetic field dynamics (collapse) in the initial phase of the flare loop relaxation, which leads to betatron and Fermi first-order acceleration. Sections 2-4 present the results of the spatial and temporal analyses of the HXR fluxes for the solar events SOL2013-05-13T01:50 and SOL2013-05-13T15:51. The results of modeling the propagation of accelerated electrons in a magnetic loop and the radiation generated by them in the HXR range are presented in sections 5 and 6. The results of the observations and simulations are discussed in Section 7.
SPATIAL CHARACTERISTICS OF FLARE HARD X-RAYS
We present observations by the solar observatory, the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), of two X-class flares from the same active region, AR11748. The X1.7 GOES-class flare SOL2013-05-13T01:50 is partially occulted by the solar limb. During this flare, a single HXR source was observed in the corona prior to the main peak (Figure 1a). Six minutes before flare onset, a precursor was observed in the extreme ultraviolet range and soft X-rays, which was studied in detail by Shen et al. (2017). A second flare, SOL2013-05-13T15:51, of GOES class X2.8, occurred 14 hours later. Three HXR sources were observed during this flare: two in the chromosphere and one in the corona (Figure 1b). In both events, flare arcades were visible in ultraviolet data (Figures 1c-1f) obtained with the Atmospheric Imaging Assembly on the Solar Dynamics Observatory, SDO/AIA (Lemen et al. 2012). In panels 1d and 1e, the cusp and a thin structure above it can be observed in the EUV range; the long thin feature is considered a possible current sheet position. The RHESSI contours in the 6-12 and 12-25 keV energy bands coincide with the EUV cusp area in both flares. For flare SOL2013-05-13T15:51, the 50-100 keV contours coincide with the footpoints of the EUV loops (panels 1b, 1d, and 1f). The images in panels 1c and 1d were obtained during the initial phase of the flux growth, whereas those in panels 1e and 1f were obtained during the decay phase. RHESSI CLEAN images with an eight-second integration time were used to analyze the height and area of the HXR sources; the 4F-8F detectors were utilized for image generation. The HXR source area was estimated as the 50% intensity contour for SOL2013-05-13T01:50 and the 70% intensity contour for SOL2013-05-13T15:51. The position of the HXR source was determined as the distance from the center of the solar disk to the HXR centroid in the plane of the sky. The time evolutions of the flux, height, and area of the coronal HXR source are shown in Figure 2.
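For illustration, source area and centroid estimates of the kind described above can be computed from a CLEAN map as follows. This is only a sketch: the real analysis uses the RHESSI software and its coordinate information, and `pixel_arcsec` is an assumed plate scale.

```python
import numpy as np

def source_centroid_and_area(image, level=0.5, pixel_arcsec=1.0):
    """Estimate HXR source properties from a 2-D CLEAN map (sketch).

    Area: pixels above `level` * max intensity (the 50% or 70% contour,
    as in the text), converted to arcsec^2. Position: the
    intensity-weighted centroid of the map, in pixel coordinates.
    """
    mask = image >= level * image.max()
    area = mask.sum() * pixel_arcsec ** 2
    ys, xs = np.indices(image.shape)
    w = image.clip(min=0)
    cx = (xs * w).sum() / w.sum()
    cy = (ys * w).sum() / w.sum()
    return (cx, cy), area
```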
During the initial phase of the SOL2013-05-13T01:50 flare, the altitude of the HXR source decreased by 1 arcsec over a period of 4 minutes. During the subsequent interval, the height of the HXR source increased by 2 arcsec over a period of 3 minutes, and the HXR flux increased significantly. The average HXR source velocities for the different energy bands are listed in Table 1; the velocities of the HXR sources in both the downward and upward directions increased with energy. Negative velocity values correspond to a decrease in the altitude of the coronal HXR source, whereas positive values correspond to a height increase. It is worth noting that the centroids of the higher-energy HXR sources are located higher than those of the lower-energy sources (Figure 2a).
The obtained HXR area values were subjected to correlation analyses with the HXR flux in the energy ranges 6-10, 10-16, 16-20, and 20-25 keV. Cross-correlation analysis of the coronal HXR source area and the HXR flux revealed a negative correlation. This was observed in interval 1 with decreasing height and in the subsequent interval 2 with increasing height of the HXR sources. The correlation coefficients are listed in Table 1.
In the SOL2013-05-13T15:51 flare, the change in the height of the HXR source exhibited step-like behavior (Figure 2b, middle panel). At the onset of the flare, there was a slight increase in the HXR flux, which coincided with a decrease in the height of the HXR source; in addition, the area of the HXR source was inversely correlated with the HXR flux increment. After the flare onset, the increase in the HXR flux ceased and was replaced by relatively stable radiation for approximately 4 minutes. During the initial phases, intervals 1 and 2 (Figure 2, middle panel on the right), the area of the HXR source fluctuated substantially around a mean value of approximately 170 arcsec². The area of the HXR source during the second interval was positively correlated with the HXR flux, whereas the height of the HXR source above the solar limb did not exhibit a clear tendency to increase or decrease. This was followed by a rapid increase in the HXR flux (interval 3). At the beginning of this phase, within the interval 15:59-16:00 UT, the attenuator state switched. After the switch, a sharp change in the source height was observed; possibly a brighter HXR source emerged approximately 8 arcsec lower. The emergence of this HXR source likely occurred during the transition of the flare to its explosive phase, and its location differed within the considered magnetic configuration. The subsequent increase in the HXR flux was accompanied by an increase in the height of the HXR source above the solar limb. In this phase, the area of the HXR source was inversely correlated with the increase in the HXR flux. Throughout the event, the HXR source centroids were separated in energy: sources observed at higher energies corresponded to greater heights above the solar limb, as shown in the middle panels of Figure 2b. The velocities of the downward and upward motions differed between energy channels and were higher at higher energies, as in the SOL2013-05-13T01:50 flare.

Thus, a negative correlation between the area of the coronal HXR source and the increase in the HXR flux was observed during the time intervals with rising HXR flux in both flares, whereas a positive correlation between area and flux corresponded to the interval in which neither changed significantly (Table 1). Figure 2 and Table 1 show the downward motion of the HXR sources during the early phases of both flares. Hereafter, the energy and time delay spectral analyses focus on this initial phase.

The upper panel of Figure 3 shows the normalized HXR flux in the 46-55 keV energy range and the power-law spectral index for the initial stage of the SOL2013-05-13T01:50 event, obtained from the 4 s resolution RHESSI data. The spectral indices were derived by approximating the emission spectrum with a two-power-law function in OSPEX (Schwartz et al. 2002) within the 25-100 keV energy range, where the effect of the thermal plasma is insignificant. During the time interval presented in the top panel of Figure 3, the break energy E_break varies within 38-55 keV; hence, the spectral index corresponds to the 25-45 keV energy range.
The spectral index beyond the break energy is not provided because it was deemed unreliable owing to insufficient photons in the coronal source at energies above 45 keV. At the early stage of HXR flux growth, the count rates at attenuator positions A0 and A1 are subject to the pileup effect (Smith et al. 2002). For the HXR flux shown in Figure 3, the pileup effect was corrected for the SOL2013-05-13T01:50 flare using the RHESSI software. Alternatively, the pileup effect can be incorporated into the set of fitting functions in the OSPEX software; no qualitative changes were observed for this particular event when the pileup effect was treated in OSPEX. The colored rectangles in Figure 3 indicate the detected SHS patterns on time scales ranging from 16 s to approximately 120 s.

The lower panel of Figure 3 displays the HXR fluxes across six energy channels, ranging from 25 keV to 52 keV, for the SOL2013-05-13T15:51 event. Analyzing spectra from the RHESSI data for local sources with a time resolution of 4 s poses a significant challenge, especially when examining the spectrum of a coronal source in the presence of chromospheric HXR sources. Given our primary interest in the early flare stage, the analysis becomes even more complex because of the relatively small HXR flux. Therefore, the bottom panel of Figure 3 presents the spectral index derived from the full-Sun RHESSI data. Single-temperature thermal and power-law functions were employed to approximate the HXR spectra. The time interval near 15:56 was excluded from the analysis because of the extremely low HXR flux, and the interval near 15:59:30 (light rectangle) was excluded because the data were invalidated by the switching of the RHESSI attenuators. The colored rectangles indicate the SHS-type dependencies between the HXR flux and the spectral index (black dashed curve). The spectral index curve also exhibits an overall downward trend: the average power-law index decreases, corresponding to progressive hardening of the HXR spectrum, i.e., the soft-hard-harder (SHH) pattern.

TIME DELAY SPECTRA OF HARD X-RAY EMISSION

Figure 4 shows the TD spectra for the SOL2013-05-13T01:50 event. During this event, the coronal source was predominantly observed, whereas the footpoints of the arcade remained essentially invisible (see Figure 1a); therefore, full-Sun observational data can be used to analyze the coronal source for this event. The TD spectra were derived through cross-correlation analysis of high-resolution HXR light curves in various energy channels for the coronal source. We utilized 4 s full-Sun RHESSI data (channels 26.6-34.6, 34.6-45.2, 45.5-58.9, 58.9-76.7 keV) and cspec data (detector no. 5) from the Gamma-ray Burst Monitor on the Fermi Gamma-ray Space Telescope (GBM/Fermi) (Meegan et al. 2009). The time resolution varied from 4.096 s at the onset of the flare to 1.024 s later. The GBM/Fermi data were rebinned from 29 to 6 channels in the 25-90 keV energy range. In the top panel of Figure 4, the energy values correspond to the midpoints of the analyzed energy ranges. To estimate the error in measuring the time delays, we simulated light curves (50 iterations) within the measurement uncertainties of the RHESSI data during the calculation. For the GBM/Fermi data, the standard deviation within the investigated time window was taken as the measurement error of the initial data.
The resulting TD spectrum error for each energy channel was determined as the standard deviation over the ensemble of 50 delay values obtained for the channel under investigation, and the TD spectral curve was computed as the mean of the resulting spectral ensemble. The cross-correlation function of the light curves in various energy ranges was approximated by a quadratic function at three points close to its peak.
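A minimal sketch of this delay estimate, cross-correlating two light curves and refining the peak with the three-point quadratic fit described above; the sign convention follows numpy's `correlate`.

```python
import numpy as np

def time_delay(flux_a, flux_b, dt):
    """Delay between two light curves (sketch) via the cross-correlation
    peak, refined with a parabola through the peak and its two
    neighbours, as described in the text."""
    a = flux_a - flux_a.mean()
    b = flux_b - flux_b.mean()
    cc = np.correlate(a, b, mode="full")
    k = int(np.argmax(cc))
    if 0 < k < len(cc) - 1:                    # three-point quadratic refinement
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag = k - (len(b) - 1)                     # zero-lag index of the 'full' output
    return lag * dt
```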
As an example, the bottom panel of Figure 4 displays the light curves in two energy channels: 26.6-34.6 keV and 34.6-45.2 keV retrieved from RHESSI, and 31.9-38.7 keV and 38.7-47.9 keV obtained from GBM/Fermi. The colored rectangles designate the time windows used to calculate the TD spectra illustrated in the top panel of Figure 4. The solid lines in the upper panel represent the TD spectra derived from the GBM/Fermi data, and the dashed lines correspond to those obtained from the RHESSI data; the colors of the curves in the top panel correspond to those of the rectangles in the bottom panel. In this study, we focused on analyzing the HXR emission from the onset of the flare to the emission peak at 02:10:22 UT. When nonthermal electrons propagate within flare loops, a portion of them may be captured in the magnetic trap for durations ranging from seconds to tens of seconds owing to magnetic mirroring and Coulomb or turbulence scattering. The time profile of the radiation may then comprise individual pulses superimposed on one another, representing separate instances of electron acceleration. To minimize the impact of these effects on the TD spectra, we selected the shortest segments of the time profile for analysis and, where possible, analyzed the rise and decay phases of the radiation separately. The bottom panel of Figure 4 displays the intervals for which we obtained TD spectra with acceptable errors: two phases of a local increase in the radiation flux and one decay phase (windows 1, 2, and 3 in Figure 4, bottom). A qualitative agreement can be observed between the GBM/Fermi and RHESSI data (solid versus dashed curves), notwithstanding the differences in temporal resolution and channel count. During the rising phases of the HXR flux (windows 1 and 2, represented by the black and green curves in Figure 4, respectively), the TD spectra decrease in the approximate range 26-70 keV; above the inflection point in the 50-70 keV region, the TD spectra become flat. During the decay phase (window 3, red curves), the TD spectra exhibit a pronounced V-shaped profile, decreasing in the 26-44 keV range and increasing beyond ∼44 keV. Figure 5 shows the TD spectra obtained for the HXR emission of the SOL2013-05-13T15:51 event. We calculated the TD spectra for regions of interest (ROIs) in the HXR images, obtained via the Clean imaging algorithm applied to the RHESSI data in the 12.0-18.3, 18.3-28.0, 28.0-42.8, 42.8-65.4, and 65.4-100 keV channels. The HXR sources from the ROIs at the looptop and the two footpoints of the flare arcade were analyzed. The colored rectangles in Figure 5 mark the rise and decay phases of the flare HXR for which the TD spectra were calculated. The time interval shaded white (15:59:04-15:59:44) was excluded from the analysis because of potentially inaccurate flux values. Windows 1 and 3 correspond to the rising phases of the HXR flux; during these intervals, the TD spectra at the loop footpoints exhibit an increasing trend. At the looptop, the TD spectrum has an increasing shape in window 1 (black curve) and a complex shape in window 3 (black curve). During the decay phases (windows 2 and 4), the TD spectral profiles change to a decreasing shape at both the footpoints and the looptop. A theoretical justification of the TD spectral data is discussed in Section 7.
We assume that the initial acceleration within the current sheet defines the parameters of the injected electrons. Of the multitude of relaxing magnetic loops, we modeled a single loop whose parameters represent an ensemble average over similar loops. Consequently, we modeled magnetic loop relaxation accompanied by the injection of primary accelerated electrons in the looptop region.
To model the properties of the HXR emission mentioned above, we numerically solved the time-dependent relativistic kinetic equation for nonthermal electrons (equation (1); Hamilton et al. 1990; Zharkova et al. 2010; Filatov et al. 2013; Minoshima et al. 2010), where f(E, µ, s, t) is the nonthermal electron distribution function, s is the distance along the magnetic field line (s = 0 corresponds to the looptop), t is the time, and µ = cos(α) is the cosine of the pitch angle. Here λ_0(s) = 10^24/(n(s) ln Λ), with n(s) the plasma density and Λ = (3 k_B T_e / 2e^2)(k_B T_e / 8π e^2 n)^{0.5} (Ginzburg 1981), where k_B is the Boltzmann constant, T_e the electron temperature, and e the electron charge; β = v/c, where v is the nonthermal electron speed and c the speed of light; m_e is the electron mass; and γ = E + 1 is the Lorentz factor of the electron, with E the kinetic energy of an electron expressed in units of the electron rest-mass energy. Additional coefficients (Leach & Petrosian 1981; Emslie 1978) take into account the contribution of partially ionized plasma to the energy loss and angular scattering of fast electrons, where α_F is the fine-structure constant and x is the fraction of ionized hydrogen atoms. The initial plasma density at the looptop is n_LT(t=0) = 7 × 10^8 cm^−3; in the chromosphere, the density corresponds to hydrostatic equilibrium at 10^4 K (Mariska et al. 1989), and it varies exponentially between the looptop and the chromosphere. For comparison, we will also use a model with a stationary magnetic field, a time-independent plasma distribution, and a looptop density of n_LT = 10^10 cm^−3. For a more detailed description of the terms in equation (1) and the initial conditions, see Shabalin et al. (2022).
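For reference, the Coulomb-logarithm and λ_0 expressions given above can be evaluated directly. The following is a minimal sketch, assuming Λ has the reconstructed form quoted here; the example temperature and density are illustrative only.

```python
import numpy as np

# CGS constants
k_B = 1.380649e-16    # erg/K
e   = 4.80320425e-10  # statcoulomb

def coulomb_log(T_e, n):
    """ln(Lambda) with Lambda = (3 k_B T_e / 2 e^2) * (k_B T_e / (8 pi e^2 n))^0.5
    (the form reconstructed above, after Ginzburg 1981)."""
    Lam = (3.0 * k_B * T_e / (2.0 * e**2)) * np.sqrt(k_B * T_e / (8.0 * np.pi * e**2 * n))
    return np.log(Lam)

def lambda0(T_e, n):
    """Collisional parameter lambda_0 = 1e24 / (n ln Lambda), in cm."""
    return 1e24 / (n * coulomb_log(T_e, n))

# Illustrative coronal values, e.g. T_e = 1e7 K at the initial looptop density
print(coulomb_log(1e7, 7e8), lambda0(1e7, 7e8))
```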
If the beam current j_b is compensated by the return current j_rc in the flaring plasma, that is, j_rc = −j_b, the induced electric field must satisfy E(s, t) = j_rc(s, t)/σ(s) = −j_b(s, t)/σ(s), where σ(s) is the classical Spitzer conductivity of the flare plasma determined by pair collisions of charged particles. In the case of partial plasma ionization, the conductivity σ(s) was defined in Shabalin et al. (2022).
The source of injected electrons was modeled as the product of four independent factors representing the distributions over energy, pitch angle, spatial location, and time, that is, S(E, µ, s, t) = K S_1(E) S_2(µ) S_3(s) S_4(t), where K is a normalization constant scaled to match the chosen energy flux. The peak electron energy flux was ∼10^10 erg cm^−2 s^−1, typical for M-X-class solar flares. We consider the cases of a stationary isotropic pitch-angle distribution, S_2(α) = 1, and a strongly anisotropic one, S_2(α) = cos^12(α).
The time profile of the electron injection function S_4(t) represents a single pulse following a Gaussian, S_4(t) = exp(−(t − t_1)^2/t_0^2), centered at t_1 = 2.6 s with a width of t_0 = 1.4 s. The electron injection occurred spatially at the looptop and was modeled as a Gaussian, S_3(s) = exp(−(s − s_1)^2/s_0^2), centered at s_1 = 0 cm. The energy dependence S_1(E) is represented as a broken power law with E_br = 200 keV, with δ_1 = 3 or 7 below E_br, depending on the model, corresponding to typical hard and soft flare spectra, and δ_2 = 10 above E_br.
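The factorized injection function defined above is straightforward to assemble in code. In the sketch below, the spatial width s_0 is not specified in the text and is set to an arbitrary illustrative value; the normalization K is left to the caller.

```python
import numpy as np

# Parameters from the text; s0 is an assumed spatial width (not given above).
t1, t0 = 2.6, 1.4          # s
s1, s0 = 0.0, 2e8          # cm
E_br   = 200.0             # keV
d1, d2 = 3.0, 10.0         # indices below/above the break ("hard" model: d1 = 3)

def S1(E_keV):
    """Broken power law in energy."""
    return np.where(E_keV < E_br,
                    (E_keV / E_br) ** (-d1),
                    (E_keV / E_br) ** (-d2))

def S2(alpha, anisotropic=False):
    """Pitch-angle factor: isotropic, or cos^12(alpha) for the beamed case."""
    return np.cos(alpha) ** 12 if anisotropic else np.ones_like(alpha)

def S3(s):
    return np.exp(-((s - s1) ** 2) / s0 ** 2)

def S4(t):
    return np.exp(-((t - t1) ** 2) / t0 ** 2)

def source(E_keV, alpha, s, t, K=1.0, anisotropic=False):
    return K * S1(E_keV) * S2(alpha, anisotropic) * S3(s) * S4(t)
```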
The betatron and Fermi acceleration of electrons in the CollTr model were calculated using equation (3) (Filatov et al. 2013; Minoshima et al. 2010), where B_t is the time derivative of the magnetic field. From equation (3), it follows that electrons with pitch angles close to π/2 are accelerated mainly by the betatron mechanism, which boosts the transverse velocity component, whereas the longitudinal component of the electron velocity increases through Fermi acceleration when the magnetic mirrors converge (dl/dt < 0). The corresponding betatron and Fermi pitch-angle terms were set accordingly. The magnetic field in the CollTr model with a vertical current sheet cannot be strictly determined; we therefore consider a model of magnetic field evolution with an initially rapid change in the field strength and in the length of the field lines, mainly in the cusp, in which the magnetic field at the footpoints is fixed at B_max = m B_0, the period of magnetic field oscillations is T_cusp = 16 s, the initial magnetic field at the top of the loop is B_cusp = 20 G, b_1 = 0 cm defines the location of the magnetic field minimum, the damping factor is D_b = 0.14 s^−1, and B_0 is the average magnetic field at the looptop of the relaxed magnetic loop. During relaxation, the length of the collapsing loops decreases from 1.5 l_0, with the looptop in the cusp, to l_0 for low closed loops, where l(t) is the semidistance between the trap ends, l_0 = 3 × 10^9 cm, and l_1 = l_0/2. The X-ray flux was calculated according to the formulas of relativistic bremsstrahlung (Bai & Ramaty 1978; Gluckstern & Hull 1953; Koch & Motz 1959) for the accelerated electrons (see equation (9) in Shabalin et al. (2022)).
RESULTS OF MODELING THE KINETICS OF ACCELERATED ELECTRONS
6.1. Energy spectra
Magnetic field reconnection and particle acceleration can occur sporadically in time, with varying intensities, and at different locations within the flare region. To identify the main trends in the evolution of the HXR energy spectrum, we considered a simplified model employing the successive collapse of several magnetic loops. The total injection function of the accelerated electrons in this model appears as two Gaussian-shaped elementary bursts with different peak fluxes, separated by approximately 10 s (gray dashed curve in Figure 6, top panel). The interval between the peaks and the full width at half maximum (FWHM) of the electron source were selected to minimize the influence of one collapsing loop on the subsequent one (the collapsing-trap model referred to above as the CollTr model). Additionally, the top panel displays the HXR flux in the 29-58 keV range, integrated over the collapsing loops, from the looptop (solid black curve) and footpoint (dashed black curve). The red curves (solid and dashed lines) represent variations of the power-law spectral index within the 29-58 keV energy range; the spectral index values are shown on the right vertical axis. The labels "soft" and "hard" in the top panel indicate the formation of a soft-hard-soft (SHS) pattern. During the initial increase in HXR flux, the HXR spectrum was soft; at the HXR peak and the onset of decay, it became harder. Furthermore, the hardness at the subsequent peaks of the spectral-index curve increased, so that on a time scale corresponding to the collapse of several magnetic loops the dynamics of the spectral indices form a soft-hard-harder (SHH) pattern. Hence, consecutive collapses of two magnetic loops form the SHS pattern, whereas a combination of three or more collapsing loops gives rise to the SHH pattern.
Time Delay Spectra
The TD spectra for the CollTr model are shown on the left side of the middle panel in Figure 6. The model parameters are S(α) = 1, δ = 3, and b_1 = 0 cm. Following the same approach as in the analysis of the flares in Section 4, we calculated the model TD spectra for the phases of increasing HXR flux up to and including the peak, as well as for the decay phases, which include the peak. The right side of the middle panel displays examples of calculated HXR fluxes at two energies, 15.2 keV and 74.1 keV, for the looptop and footpoint. It is important to note the increasing TD spectra in the CollTr model at both the footpoint and the looptop. The delays ranged from 18 to 1110 ms, depending on the analyzed spatial region of the flare loop. Alternative models were also calculated, including one with a softer electron source, S(α) = 1 and δ = 7, and another with an anisotropic source, S(α) = cos^12(α), with δ values of 3 and 7. Graphs for these models are not presented here; however, the same increasing trend in the TD spectra was observed in all calculated cases.
To reveal the distinctive features of the CollTr model, we also calculated the TD spectra for other models in which the distributions of plasma density and magnetic field along the loop were time-independent (hereafter, the DMTI models). In the bottom left panel of Figure 6, the TD spectra are presented for the DMTI model with the parameters S(α) = 1, δ = 3, and b_1 = 0 cm. Note the decreasing TD spectra at the footpoints of the magnetic loop (black and light-gray curves); conversely, at the looptop, the TD spectra exhibit an increasing trend. We also examined another DMTI model with the parameter b_1 = −2 × 10^9 cm. In this model, the distribution of the magnetic field along the loop was asymmetric, with a B_max/B_min ratio of 8 at one footpoint and 1.28 at the other. Consequently, the geometric apex of the loop, where the majority of accelerated electrons are injected, is no longer a magnetic trap for charged nonthermal particles. The TD spectra in this model, with the parameters S(α) = 1, δ = 3, and b_1 = −2 × 10^9 cm, are displayed in the bottom right panel of Figure 6. In this case, the looptop TD spectra showed a decreasing pattern. The delays span from 4 to 100 ms.
Hard X-ray source motion
We found that the height of the coronal HXR source decreased during the early phase of the flux rise in the considered flares. According to Table 1, in the flares SOL2013-05-13T01:50 and SOL2013-05-13T15:51, the absolute values of the downward and upward velocities of the HXR sources varied widely, spanning 0.6-23.5 and 0.6-22.1 km s^−1, respectively. Note that for both flares, the HXR sources at higher energies had higher downward and upward velocities (Table 1; SOL2013-05-13T01:50, intervals 1, 2; SOL2013-05-13T15:51, intervals 1, 3). A decreasing tendency in the height of the X-ray source at the initial stage of the HXR flux rise has been observed by other authors. Sui et al. (2004) recorded a decrease in source height for three flares, with downward velocities of 8-20 km s^−1; after the height-decrease phase, the HXR sources moved upward at velocities of 5-40 km s^−1. A similar trend was observed in the events studied by Veronig et al. (2006), where the average source velocities within the 10-30 keV energy range increased from 14 to 45 km s^−1 with increasing energy. One possible explanation for the downward motion of the HXR source is a projection effect: the heights of the arcade magnetic loops decrease along the line of sight while the loops ignite sequentially. However, this interpretation contradicts stereoscopic observations of flares in the EUV range by the Extreme Ultraviolet Imager (Wuelser et al. 2004) onboard the Solar TErrestrial RElations Observatory (STEREO) spacecraft (Howard et al. 2008), which indicate that heating of the flare loops occurs unevenly and inconsistently throughout the arcade. Additionally, this interpretation leaves unclear why the velocity of the height decrease depends on the energy of the HXR source, and why the downward motion of the source is observed only during the initial phase of the flare.
[Figure 6, bottom panels: left, TD spectra for the DMTI model with parameters S(α) = 1, δ = 3, b_1 = 0 cm; right, TD spectra for the DMTI model with parameters S(α) = 1, δ = 3, b_1 = −2 × 10^9 cm, representing an asymmetric magnetic field in the flare loop. The plasma density at the looptop in the DMTI models was 10^10 cm^−3.]
From another perspective, the downward movement of the HXR source at the initial flare stage has been interpreted as a result of the sequential relaxation of the magnetic field (Sui et al. 2004; Veronig et al. 2006). It should be noted that during the relaxation of the magnetic field, a decrease in the height of the coronal HXR source may not necessarily be observed. Certain conditions are required for a bright coronal HXR source: a high plasma density, n_LT > 10^10 cm^−3, and the ability of the trap to accumulate high-energy electrons, which implies an isotropic or quasi-transverse pitch-angle distribution of electrons after their primary acceleration in the current sheet (Charikov & Shabalin 2021; Shabalin et al. 2022). Moreover, the spatial distribution of the magnetic field in such traps should be nearly symmetric, with the same magnetic field values at the footpoints. If we assume a symmetric magnetic field, a sufficient density of plasma and accelerated particles in the region of the magnetic field minimum can build up only shortly before the vertical motion of the relaxing magnetic loop stops. Under these conditions, only a minimal decrease in the height of the HXR source may be observed. Therefore, the absence of a pronounced decrease in the height of the coronal HXR source at an early stage does not contradict the CollTr model.
Spatial separation of X-ray source centroids at different energy ranges
In our opinion, one of the most important observational results indicating the collapse of magnetic loops in a flare is the spatial separation of X-ray sources in different energy ranges: the height of the HXR source increases from lower to higher energies. The sequential involvement of newly formed magnetic structures in the relaxation process results in a configuration of magnetic loops of different heights radiating at different energies. The nonthermal electrons in the earlier-formed, lower loops lose energy within seconds to tens of seconds. The upper bound on the lifetime of trapped accelerated electrons with energies up to ∼100 keV can be estimated from kinetic equation (1) as τ_coll = E/Ė_coll ∝ n^−1 E^{3/2}, where Ė_coll = dE/dt determines the rate of energy loss by an electron during scattering on plasma ions. The Coulomb loss time thus depends on energy as E^{3/2}: for example, τ_coll for a 30 keV electron in a homogeneous plasma with a density of 10^10 cm^−3 is 3.21 s, and for a 98 keV electron it is 17.4 s. In a flare loop plasma, as follows from equation (1), accelerated electrons moving along the magnetic field pass through the loss cone into the chromosphere, scatter, reflect from magnetic mirrors, and are additionally accelerated by the Fermi and betatron mechanisms. In addition, in the CollTr model, at the beginning of the collapse (in a higher loop), the efficiency of capture of accelerated electrons is high owing to the large magnetic ratio, m ≫ 8, and the Coulomb losses are small because the density is low, n < 10^10 cm^−3. The scenario is different for relaxed low loops: the magnetic ratio in these loops is not as high, m ∼ 5-8, and the Coulomb losses are significantly greater because n > 10^10 cm^−3. Therefore, the lower loops radiate in a softer energy range, from hot plasma only, whereas in magnetic loops at higher altitudes the nonthermal electrons have yet to thermalize; these electrons accumulate at the looptop owing to trapping. Therefore, electron propagation in magnetic traps, combined with the formation of traps at different times, led to the height separation of the HXR source centroids in various energy ranges observed in the events SOL2013-05-13T01:50 and SOL2013-05-13T15:51. Source-centroid separation in various energy channels has also been observed by Veronig et al. (2006) and simulated by Kong et al. (2019). The values of the magnetic field minima differ for relaxing magnetic loops of different heights: for higher loops, these values are lower than for the lower loops. Therefore, the magnetic loop velocities depend on height owing to the conservation of the magnetic flux, B · v = const (Veronig et al. 2006). This explains the increase of velocity with source energy observed for the considered flares.
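As a quick check of the Coulomb loss-time scaling quoted above, one can scale directly from the anchor value given in the text (3.21 s for a 30 keV electron at n = 10^10 cm^−3); the residual difference at 98 keV reflects corrections beyond the pure E^{3/2} scaling.

```python
def tau_coll(E_keV, n):
    """Coulomb lifetime scaled from the anchor point quoted above:
    tau = 3.21 s for E = 30 keV at n = 1e10 cm^-3, with tau ∝ E^1.5 / n."""
    return 3.21 * (E_keV / 30.0) ** 1.5 * (1e10 / n)

print(tau_coll(30, 1e10))   # 3.21 s (anchor value)
print(tau_coll(98, 1e10))   # ~19 s; the text quotes 17.4 s, the difference
                            # reflecting weak (e.g. relativistic/ln-Lambda) corrections
```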
Hard X-ray flux-area correlation
In addition to the height separation of HXR sources at different energies, we examined the temporal variation in the coronal HXR source size. Owing to the conservation of magnetic flux, an increase in the magnetic field during collapse is expected to result in a reduction of the cross-sectional area of the magnetic loop. Although this area cannot be measured directly, the size of the HXR source associated with the loop cross section can be estimated. During the collapse, the magnetic field in the coronal part of the loop increases several times, which should decrease the loop cross section by the same factor. We do not expect to observe such a significant change in the HXR source area; nonetheless, a negative correlation may be anticipated between the cross-sectional area of the HXR sources and the HXR flux during collapse. We indeed found a negative correlation between the area of the coronal source and the HXR flux during phases of increasing flux, for both downward and upward motion of the sources, in both considered flares (Table 1). For SOL2013-05-13T15:51, aside from the negative correlation, a positive correlation between area and flux was observed during the phase without an intensive increase in flux, within the time interval 15:55-15:59 UT.
Hard X-ray energy spectra variations
In the considered flares, we also analyzed the temporal variations in the energy spectrum. To date, numerous hypotheses have been proposed to explain the evolution of the HXR energy spectrum during solar flares. These hypotheses can be divided into four categories. First, changes in the HXR spectrum may be associated with accelerator properties or various acceleration mechanisms during several-step acceleration. Second, electron distribution functions are transformed during transport along magnetic loops via Coulomb collisions and magnetic mirroring. This transformation influences the HXR spectra. Third, the propagation of an accelerated electron beam results in the emergence of physical processes that influence the electrons responsible for initiating these processes. For example, the return current or the generation of various turbulent modes significantly modifies the distribution function of radiating electrons. Fourth, the electron propagation environment can contain MHD waves, shock fronts, and magnetic field fluctuations generated during energy release or by an external trigger.
The types of energy-spectrum evolution that correlate with the HXR flux are of particular interest. We identified sub-minute and minute-scale SHS patterns in the flares under discussion. SHS patterns are challenging to explain based solely on kinetic effects during electron beam propagation: according to our numerical simulations, magnetic mirroring of electrons, Coulomb losses, and the return current change the power-law spectral index in the 20-100 keV range by fractions of unity when the integral electron flux (erg cm^−2 s^−1) varies severalfold. In the observations, however, the spectral index changes by values comparable to unity or more (see, for example, Figure 3). In this study, we interpreted SHS patterns as a superposition of several magnetic loops within the flare region. In the CollTr model, it can be assumed that each HXR pulse, lasting from seconds to tens of seconds, corresponds to a different relaxing magnetic structure, in each of which accelerated electrons undergo propagation, accumulation, and emission.
Numerical simulations of electron propagation along the loop show that at the looptop, the HXR spectrum hardens over time because low-energy electrons lose energy in Coulomb collisions more efficiently. The rate of spectrum hardening depends on the plasma density in the coronal part of the loop. The plasma density at the looptop can increase in each loop because of magnetic loop relaxation and chromospheric evaporation. In addition, the response in the HXR range exhibited a delay of several seconds relative to the electron injection time in the CollTr model.
Considering the aspects mentioned above, we modeled electron propagation in the CollTr model, assuming that the collapse of different loops occurs sequentially. In a flare environment, the formation of a separate relaxing magnetic structure strongly depends on the local parameters of the plasma and magnetic field. Therefore, the duration of the injection into a separate collapsing structure and its physical parameters can vary. A simplified model of identical successive collapsing traps was suitable to demonstrate the fundamental possibility of SHS pattern formation in the HXR spectrum. The results of this simulation are shown in the top panel of Figure 6. As discussed in Section 6.1, the successive relaxation of several magnetic loops results in the formation of an SHS pattern in the spectrum index dynamics. Furthermore, a global (over several HXR peaks) SHH pattern in HXR radiation is formed owing to the gradual hardening of the distribution function of the accelerated electrons, which is integrated over several magnetic loops. Hardening is caused by Coulomb losses, which transform the distribution function of the accelerated electrons in the low-energy region and due to the effective trapping of electrons in the high-energy region.
Hard X-ray time delay spectra
The HXR TD spectrum is an additional tool for diagnosing accelerated electrons during flares. The dynamics of the acceleration and propagation of high-energy electrons in the flare-loop plasma determine the HXR TD spectra and their evolution. In the simplest models of electron propagation, the TD spectra follow a power-law dependence of approximately ∼E^−1/2 in the free-streaming model and ∼E^{3/2} in the trap-plus-precipitation model. Owing to the speed difference between electrons, high-energy electrons reach the chromosphere before lower-energy ones, so in the absence of particle scattering and trapping the TD spectra exhibit a descending profile. Electron scattering and mirroring in the magnetic field lead to trapping: these electrons spend some time in a magnetic trap before falling into the loss cone and radiating. If the plasma density at the looptop increases and/or turbulence is generated that scatters electrons with field-aligned pitch angles, the trapped particles radiate more intensely in the coronal region of the loop. This results in a complex emission pattern and the formation of different types of TD spectra. Numerical calculations of the propagation of accelerated electrons and of the bremsstrahlung HXR emission enable a thorough analysis and identification of the factors responsible for the formation of a particular time-delay spectrum.
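The two limiting scalings can be written out explicitly. The sketch below evaluates illustrative model TD spectra for the free-streaming and trap-plus-precipitation regimes; the normalization (100 ms at 50 keV) is an arbitrary assumption for plotting purposes.

```python
import numpy as np

E = np.linspace(20.0, 150.0, 14)    # keV
E_ref, t_ref = 50.0, 0.1            # assumed normalization: 100 ms at 50 keV

# Free-streaming: delay ~ L/v(E) ∝ E^-1/2 (nonrelativistic),
# i.e. a decreasing TD spectrum.
td_free = t_ref * (E / E_ref) ** -0.5

# Trap-plus-precipitation: delay follows the Coulomb trapping
# time ∝ E^{3/2}, i.e. an increasing TD spectrum.
td_trap = t_ref * (E / E_ref) ** 1.5
```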
In Section 4, for flare SOL2013-05-13T01:50, decreasing TD spectra were observed in the coronal HXR source during two bursts (windows 1 and 2), according to data from both the GBM/Fermi and RHESSI instruments (Figure 4). During the decay phase (window 3), the TD spectrum also decreased in the ∼26-45 keV range. The observation of decreasing TD spectra at the looptop is unusual in light of simulations of accelerated-electron propagation and emission.
For the second flare, SOL2013-05-13T15:51 (Figure 5), a decrease in the TD spectra at the looptop was observed in time windows 2 and 4, corresponding to the local decay phases of the HXR flux. In this event, increasing TD spectra, which are unusual for this location, were also observed at the footpoints during the rising phases of the HXR flux (windows 1 and 3, Figure 5). Based on the DMTI model of accelerated-electron propagation, decreasing spectra are expected at the footpoints of the flare loop, as illustrated in Figure 6 (bottom left panel). Nonetheless, in the CollTr model, the TD spectra at the footpoints exhibited growth. This is attributed to a combination of factors: a high initial B_max/B_min ratio at the onset of magnetic loop relaxation, additional electron acceleration, and an increase in plasma density at the looptop. The modeling revealed that betatron acceleration has the dominant influence on the formation of growing TD spectra at the footpoints. Because betatron acceleration is significant only during the growth phase of the HXR flux, the transition from increasing TD spectra at the footpoints to decreasing patterns in windows 2 and 4 in Figure 5 can also be explained: during the increase of the HXR flux, growing TD spectra form at the footpoints owing to the betatron acceleration of the electrons; subsequently, during the decay of the HXR flux, when betatron acceleration is no longer effective and the B_max/B_min ratio is not substantial (approximately 8), the TD spectra transform to a decreasing pattern, as anticipated by the DMTI model of electron propagation with a stationary magnetic field.
Justifying the decreasing TD spectra at the looptop is more challenging. As demonstrated on the right-hand side of the bottom panel of Figure 6, for the model S(α) = 1, δ = 3, b_1 = −2 × 10^9 cm, with an asymmetric magnetic field, the TD spectra decrease at the looptop. This occurs because, in an asymmetric magnetic loop, the geometric top of the loop does not serve as a trap for accelerated electrons. The transition from increasing TD spectra at the looptop to decreasing ones in the SOL2013-05-13T15:51 event may be due to a reduction of the B_max/B_min ratio over time during magnetic field relaxation. Assuming an initially asymmetric distribution of the magnetic field along the flare loops before flare onset, a decrease in the B_max/B_min ratio from large values of approximately 80 at the beginning of the relaxation (collapse) process to B_max/B_min ≃ 8 would lead to a more pronounced manifestation of the magnetic field asymmetry during the decay phases of the HXR flux. The results of magnetic field extrapolations using the nonlinear force-free field (NLFFF) method for solar disk flares suggest that asymmetry in the magnetic field distribution along magnetic loops in active regions is a relatively common occurrence. Hence, the hypothesis of an asymmetric magnetic field in the examined limb flares appears entirely plausible. This is further supported by the different brightness of the footpoints in the SOL2013-05-13T15:51 event, indirectly indicating different loss-cone values.
Conclusions
In summary, we outline the key conclusions. Numerical simulations of accelerated-electron kinetics and the associated HXR emission in the CollTr model are consistent with the observed HXR data. First, the separation of the HXR sources by height and energy occurs naturally during magnetic field relaxation in a model with a vertical current sheet in the corona. Second, there was a negative correlation between the area of the HXR sources in the corona and the HXR flux for both flares examined in this study. Third, the TD spectra of the investigated flares showed particular characteristics, specifically the decreasing TD spectra in the coronal source and the increasing TD spectra at the footpoints in the SOL2013-05-13T15:51 event; we explain these characteristics using numerical simulations conducted within the CollTr model framework. Fourth, our modeling of the HXR spectral-index evolution suggests that the SHS and SHH patterns may arise not from specific processes in the accelerator or the region of suprathermal electron propagation but rather as a consequence of the superposition of HXR fluxes from various magnetic structures that are at different stages of accelerated-electron accumulation and radiation. A potential extension of this research is to calculate the gyrosynchrotron radiation in similar events using the CollTr model and compare the results with observational data obtained from radio telescopes. Various turbulence modes (such as Langmuir, ion-acoustic, and whistler modes) and magnetic fluctuations may arise in the regions of secondary acceleration and electron radiation; incorporating these processes into the CollTr model is another avenue of future research.
| 11,734.8 | 2023-08-10T00:00:00.000 | ["Physics"] |
Significance of molecular classification of ependymomas: C11orf95-RELA fusion-negative supratentorial ependymomas are a heterogeneous group of tumors
Extensive molecular analyses of ependymal tumors have revealed that supratentorial and posterior fossa ependymomas have distinct molecular profiles and are likely to be different diseases. The presence of C11orf95-RELA fusion genes in a subset of supratentorial ependymomas (ST-EPN) indicated the existence of molecular subgroups. However, the pathogenesis of RELA fusion-negative ependymomas remains elusive. To investigate the molecular pathogenesis of these tumors and validate the molecular classification of ependymal tumors, we conducted thorough molecular analyses of 113 locally diagnosed ependymal tumors from 107 patients in the Japan Pediatric Molecular Neuro-Oncology Group. All tumors were histopathologically reviewed and 12 tumors were re-classified as non-ependymomas. A combination of RT-PCR, FISH, and RNA sequencing identified RELA fusion in 19 of 29 histologically verified ST-EPN cases, whereas another case was diagnosed as ependymoma RELA fusion-positive via the methylation classifier (68.9%). Among the 9 RELA fusion-negative ST-EPN cases, either the YAP1 fusion, BCOR tandem duplication, EP300-BCORL1 fusion, or FOXO1-STK24 fusion was detected in single cases. Methylation classification did not identify a consistent molecular class within this group. Genome-wide methylation profiling successfully sub-classified posterior fossa ependymoma (PF-EPN) into PF-EPN-A (PFA) and PF-EPN-B (PFB). A multivariate analysis using Cox regression confirmed that PFA was the sole molecular marker which was independently associated with patient survival. A clinically applicable pyrosequencing assay was developed to determine the PFB subgroup with 100% specificity using the methylation status of 3 genes, CRIP1, DRD4 and LBX2. Our results emphasized the significance of molecular classification in the diagnosis of ependymomas. RELA fusion-negative ST-EPN appear to be a heterogeneous group of tumors that do not fall into any of the existing molecular subgroups and are unlikely to form a single category. Electronic supplementary material The online version of this article (10.1186/s40478-018-0630-1) contains supplementary material, which is available to authorized users.
Introduction
Ependymal tumors are classified into five histopathological subtypes, namely subependymoma (grade I), myxopapillary ependymoma (grade I), ependymoma (grade II), ependymoma, RELA fusion-positive (grade II or III), and anaplastic ependymoma (grade III), according to the World Health Organization (WHO) Classification of Tumours of the Central Nervous System [8]. Each of the latter two, which are clinically malignant, is defined as "A circumscribed glioma composed of uniform small cells with round nuclei in a fibrillary matrix and characterized by perivascular anucleate zones (pseudorosettes) with ependymal rosettes also found in about one quarter of cases (ependymoma), a high nuclear-to-cytoplasmic ratio, and a high mitotic count (anaplastic ependymoma)" [8]. Ependymal tumors, which may arise from any part of the neuraxis, are identified as supratentorial (ST), posterior fossa (PF) or spinal (SP) ependymomas (EPNs). Malignancy grading for EPN and anaplastic EPN is often inconsistent, and the clinical significance of EPN pathological grading is controversial [20,25,28,32]. Regardless of location, standard treatment for these tumors involves maximal safe surgical resection followed by local radiation therapy [29]. Incomplete resection or recurrence predicts a dismal prognosis. At present, there are no reports of chemotherapeutic agents with proven efficacy against these tumors [11,35]. Therefore, further clarification of the molecular mechanisms underlying the genesis of EPN, as well as the development of new treatments for these tumors, is essential.
A series of extensive molecular analyses has demonstrated that supratentorial and posterior fossa EPNs have distinct molecular profiles and are most likely separate diseases [19,25,27,29,37]. Recently, a consensus scheme for the molecular classification of EPNs based on these studies has been proposed [25]. ST-EPNs were segregated into two molecular subgroups, denoted ST-EPN-RELA and ST-EPN-YAP1. ST-EPN-RELA tumors, which are characterized by the presence of various types of C11orf95-RELA fusion genes, account for approximately 70% of ST-EPNs [27]. Some C11orf95-RELA fusion genes have been experimentally demonstrated to be oncogenic. The ST-EPN-YAP1 subgroup is characterized by YAP1 fusion genes, which are much less common than the RELA fusions, and the oncogenic potential of most YAP1 fusions remains to be determined. Fusion genes are often intimately implicated in tumorigenesis; therefore, fusion genes that are highly specific to ST-EPN are likely to be promising therapeutic targets for these particular tumors. However, the diagnosis and clinical significance of the C11orf95-RELA- or YAP1-fusion-negative ST-EPNs remain controversial. Whether these tumors harbor an unidentified driver event or belong to an entity entirely different from EPN needs to be determined. Thus, a detailed investigation of the molecular profiles of these tumors was deemed imperative.
According to the methylation pattern, PF-EPNs are segregated into two main molecular subgroups, termed PF-EPN-A (PFA) and PF-EPN-B (PFB) [19,25]. PFA tumors are characterized by increased DNA methylation of CpG islands, a pattern different from that seen in PFB ependymomas. PFA patients are mostly infants or young children and have a poor prognosis, whereas PFB patients are older and have better prognoses [19,25]. No recurrent driver mutation identifiable as a therapeutic target or diagnostic marker has been found in either subgroup. Because the presence of biologically distinct subgroups within PF-EPN has significant clinical implications, validation of these subgroups as well as the development of robust diagnostic tools for them are deemed essential.
For the purpose of the molecular and clinical characterization of ependymal tumors and the identification of therapeutic targets, we performed molecular analyses on a considerable series of ependymal tumors collected via the Japan Pediatric Molecular Neuro-oncology Group (JPMNG). These cases were examined using detailed clinical information and centrally reviewed histopathology. We confirmed that RELA fusion is a highly specific diagnostic marker for ST-EPN, and that methylation-based classification of PF-EPN is robust and may serve as an independent prognostic marker. We found that a considerable proportion of histopathologically diagnosed ST-EPNs do not contain the RELA fusion, and that the molecular pathogenesis of these tumors may be complex.
Tumor material
A total of 113 locally diagnosed ependymal tumors collected from 107 patients through the JPMNG were examined in this study (Table 1). Among these, 38 were supratentorial, 63 were posterior fossa and 12 were spinal tumors. In all cases, a consensus diagnosis was made by three neuropathologists (A.S., T.H., and J.H.) following an extensive microscopic review of hematoxylin-eosin-stained slides and immunohistochemical preparations. In addition, 69 PF-EPN samples from the Hospital for Sick Children, Toronto, Canada, were included for validation of a pyrosequencing assay developed for the molecular classification of PF-EPN (see below). This study was approved by the ethics committees of the National Cancer Center as well as the respective local institutional review boards.
DNA/RNA extraction, cDNA synthesis, and bisulfite modification of DNA
DNA and RNA were extracted from the tumor samples using a DNeasy Blood & Tissue Kit (Qiagen, Tokyo, Japan) and a Qiagen miRNeasy Mini Kit, respectively. First-strand cDNA was synthesized from the extracted RNA.
Genome-wide DNA methylation analysis
Genome-wide DNA methylation analysis was performed using an Infinium HumanMethylation450 BeadChip array (Illumina, San Diego, CA, USA; hereafter, the 450K array), which covers 485,512 CpG sites, as described previously [11,38]. For the methylation array, 500 ng of DNA extracted from fresh-frozen specimens or 100 ng of DNA from formalin-fixed paraffin-embedded (FFPE) specimens, repaired using an Infinium HD FFPE Restore Kit (Illumina, San Diego, CA, USA), was used. We removed 11,551 probes mapped to the sex chromosomes and employed the remaining 473,961 probes for analysis. The methylation level of each CpG site was expressed as a beta-value, ranging from 0 (unmethylated) to 1 (fully methylated). A total of 3086 CpG-island probes showing a high standard deviation (SD > 0.25) were selected for PF-EPN classification. Unsupervised hierarchical clustering was performed using R software (version 3.0.1), as described previously [15]. Methylation data of 48 EPNs (GSE42752) and 6 normal cerebellums (GSE44684) [17,19,33] were obtained from the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/) for comparison with our data. The methylation profiling classifier developed by the German Cancer Research Center (DKFZ)/University Hospital Heidelberg/German Consortium for Translational Cancer Research (DKTK) (the DKFZ classifier, molecularneuropathology.org) was used via their website to assign subtype scores for each tumor [4].
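The probe-selection and clustering step can be summarized as follows. The authors used R; the sketch below is an illustrative Python equivalent in which the distance metric and linkage method are assumptions, since they are not specified above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def select_and_cluster(beta, island_mask, sd_cut=0.25, k=2):
    """beta: (n_probes, n_samples) matrix of beta-values;
    island_mask: boolean array marking CpG-island probes.
    Keep island probes with SD > 0.25 across samples (3086 probes in
    the study), then hierarchically cluster the samples into k groups."""
    keep = island_mask & (beta.std(axis=1) > sd_cut)
    sub = beta[keep]
    d = pdist(sub.T, metric="euclidean")   # distance/linkage are assumptions
    z = linkage(d, method="ward")
    return fcluster(z, t=k, criterion="maxclust")   # e.g. PFA vs PFB labels
```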
Copy number analysis
Copy number alterations were evaluated using signal data from the methylation array. Following an evaluation of the methylated and unmethylated signals in the six normal cerebellum samples, probes showing high variability were excluded from the analysis [17]: probes outside the 0.05 and 0.95 quantiles of the median summed signal values, as well as probes above the 0.8 quantile of the median absolute deviation, were excluded. Log2-ratios of each sample to the median of the control samples at each probe were calculated and normalized against the median log2-ratio. Copy number data were obtained using the DKFZ classifier.
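The probe filtering and log2-ratio computation described above translate into a short sketch; the exact normalization details of the published pipeline may differ.

```python
import numpy as np

def probe_filter(ctrl_tot):
    """Drop unstable probes, per the criteria above: outside the 0.05-0.95
    quantiles of the median summed intensity, or above the 0.8 quantile
    of the median absolute deviation (MAD). ctrl_tot: (n_probes, n_controls)."""
    med = np.median(ctrl_tot, axis=1)
    mad = np.median(np.abs(ctrl_tot - med[:, None]), axis=1)
    lo, hi = np.quantile(med, [0.05, 0.95])
    return (med >= lo) & (med <= hi) & (mad <= np.quantile(mad, 0.8))

def cn_log2_ratio(meth, unmeth, ctrl_meth, ctrl_unmeth):
    """Per-probe log2 ratio of total intensity (methylated + unmethylated)
    against the median of control samples, then centered so that each
    sample has a median log2-ratio of zero."""
    tot = meth + unmeth                                   # (n_probes, n_samples)
    ctrl = np.median(ctrl_meth + ctrl_unmeth, axis=1)     # (n_probes,)
    lr = np.log2(tot / ctrl[:, None])
    return lr - np.median(lr, axis=0)
```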
Fluorescence in situ hybridization (FISH)
Break-apart FISH was used to detect RELA or YAP1 fusion. FISH was performed on 4-5 μm sections of each FFPE specimen. FISH probes were derived from the following BAC clones: for RELA fusion detection, RP11-472D15 and RP11-58D3 probes were labeled with spectrum orange, whereas RP11-692F22 and CTD-2121 M3 probes were labeled with spectrum green; for YAP1 fusion detection, RP11-732A21 and RP11-640G3 were labeled with spectrum orange and RP11-1082I3 and RP11-315O6 with spectrum green. Briefly, BAC DNA from an overnight culture of the corresponding BAC clones was purified using a GenElute Plasmid Miniprep Kit (Sigma-Aldrich, Tokyo, Japan), amplified using a Templiphi Amplification Kit (GE Healthcare, Tokyo, Japan) and labeled using a Nick Translation Kit (Abbott Molecular, Abbott Park, IL) with appropriate dye-coupled dUTP, as per manufacturer's instructions. Fluorescence in situ hybridization was performed as previously described [22]. Scoring of FISH results was performed using a BZ-9000 fluorescence microscope (Keyence, Osaka, Japan) with appropriate filters at 1000× magnification. A tissue microarray containing a tumor with a known YAP1 fusion, kindly provided by Dr. David Ellison from St. Jude Children's Research Hospital, was used as a positive control.
Expression analysis
mRNA expression levels were evaluated via real-time quantitative PCR (qPCR) using the LightCycler 480 SYBR Green I Master and the SYBR Green I (483-533 nm) detection format on a CFX96 (Bio-Rad Laboratories, Inc., Hercules, CA, USA), according to the manufacturer's instructions. The primer pairs used for qPCR were as follows: TERT, forward primer (P570) located in exon 6 and reverse primer (P571) located in exon 7; EZH2, forward primer (P563) located in exon 2 and reverse primer (P564) located in exon 3. The expression level of H6PD, determined via the primer pair (P114) and (P115), was used as an internal reference for normalization. PCR conditions were as follows: 95°C for 5 min, then 45 cycles of 95°C for 10 s, 55°C for 10 s and 72°C for 10 s. A standard curve was generated using serially diluted cloned PCR products of both the internal reference and the target genes. Expression was measured relative to human total brain RNA (Clontech Laboratories, Mountain View, CA, USA). Primer sequences are described in Additional file 1, Table S1.
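The standard-curve quantification reduces to a simple ratio calculation: the target copy number is normalized to the internal reference (H6PD) and then expressed relative to the calibrator (human total brain RNA). A minimal sketch:

```python
def relative_expression(target_sample, ref_sample, target_calib, ref_calib):
    """Target expression normalized to the internal reference (H6PD) and
    expressed relative to the calibrator (human total brain RNA).
    Inputs are copy numbers read off the standard curves."""
    return (target_sample / ref_sample) / (target_calib / ref_calib)

# e.g. TERT: relative_expression(tert_tumor, h6pd_tumor, tert_brain, h6pd_brain)
```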
PF-EPN subgroup prediction by pyrosequencing
A set of pyrosequencing assays was developed to sub-classify PF-EPN into PFA or PFB efficiently. First, 10 PFA and 10 PFB cases were investigated as a discovery set (Results; Additional file 2, Table S2 and Additional file 3, Table S3). Probes highly methylated in PFA cases (mean beta-value ≥0.5) and hypomethylated in PFB cases (mean beta-value ≤0.2) were selected from our discovery set and an archival data set (n = 48) profiled with the Infinium HumanMethylation450 BeadChip array [19]. Out of the 414,634 probes used, 13 probes covering 3 autosomal genes, CRIP1, LBX2, and DRD4, were identified (Additional file 2, Table S2). Next, a set of pyrosequencing assays was designed to examine the methylation status of the CpG sites targeted by the probes cg04411625 (CRIP1), cg03270710 (LBX2), and cg20931042 and cg06825142 (DRD4), as well as their flanking CpG sites. Primers and dispensation orders are listed in Additional file 1, Table S1. Methylation levels were measured using the CpG assay of PyroMark Q96 (see above). The mean methylation level of all CpG sites included in each assay was used to represent the methylation status of each gene. Methylation levels at these CpG sites measured via pyrosequencing showed good concordance with those of the 450K array (data not shown). A dataset containing 54 cases from the JPMNG cohort and 69 cases from the SickKids cohort was used in the training and validation processes for the prediction-rule design (Additional file 4, Figure S5a). The combined JPMNG and SickKids dataset was preprocessed via random sampling and divided on a 1:2 basis into a training dataset of 41 cases (PFA: 30, PFB: 11) and a validation dataset of 82 cases (PFA: 60, PFB: 22), while maintaining the PFA:PFB ratio. Statistical analysis and determination of the cutoffs are described in Results.
Whole transcriptome sequencing
The TruSeq RNA Sample Prep Kit (Illumina, CA, USA) was used to prepare RNA sequencing libraries from total RNA. Samples with an RNA integrity number of 6 or less were prepared using the TruSeq Stranded Total RNA with Ribo-Zero Gold LT Sample Prep Kit (Illumina, CA, USA). The resulting libraries were subjected to paired-end sequencing of 75-bp reads on a HiSeq 2000 (Illumina, CA, USA). Fusion transcripts were detected using the TopHat-Fusion algorithm [14] (Additional file 5, Table S4).
For expression analysis, we established "virtual" Agilent 8x60K array data from RNA sequencing data by counting the number of inserts expected to hybridize on probe positions of the array (VA). The data was normalized by the median number of inserts.
Immunohistochemical analysis of H3K27me3
Of the 60 PF-EPNs that were subjected to consensus diagnosis and molecular subclassification into PFA or PFB, 44 cases were available for evaluation using immunohistochemistry. Four-micrometer-thick sections cut from blocks representing each tumor were deparaffinized. The preparations were autoclaved in citrate buffer (pH 6.0), and endogenous peroxidase activity was blocked with 3% hydrogen peroxide. The primary antibody used was the anti-H3K27me3 rabbit monoclonal antibody (C36B11, dilution 1:200; Cell Signaling Technology, Danvers, MA, USA). Slides were incubated for 1 h at room temperature with the primary antibody, and subsequently labeled by using the EnVision system (Dako, Glostrup, Denmark). Diaminobenzidine was used as the chromogen, and hematoxylin as the counterstain. The entire area of the stained slides was visually inspected, and the percentage of cells that lacked staining was assessed semi-quantitatively. Cases were categorized as showing intact expression when over 80% of tumor cells were intact for H3K27me3, or as showing reduced expression when 0-80% of tumor cells were labeled as intact [26]. Staining was deemed evaluable only if endothelial cells in the tumor tissue showed intact reactivity. Staining intensity was not used as a parameter for evaluation.
Statistical analysis
Comparison between subgroups was performed using the Student's t-test, Pearson's chi-square test, Fisher's exact test and the Wilcoxon rank-sum test. Overall survival (OS) was defined as the probability of survival, with death as the only event. Progression-free survival (PFS) was defined as the probability of being alive without a risk of progression or relapse. Survival curves were plotted using the Kaplan-Meier method. The log-rank test and Cox proportional hazards model were used to detect differences in survival between different groups of patients. Two-sided tests were used for all analyses, and the significance level was set at P < 0.05. JMP 10 (SAS Institute Inc., Cary, NC, USA) was used for all analyses.
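As an illustration of the survival analyses described above, the following sketch uses the Python lifelines package rather than JMP, which the authors used; column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# df: one row per patient, with columns 'time' (months), 'event' (1 = event
# observed) and binary covariates such as 'PFA', '1q_gain', 'grade_III',
# 'incomplete_resection' (all hypothetical names).
def survival_analysis(df: pd.DataFrame):
    # Kaplan-Meier curves per PFA status
    kmf = KaplanMeierFitter()
    for name, grp in df.groupby("PFA"):
        kmf.fit(grp["time"], grp["event"], label=f"PFA={name}")
        kmf.plot_survival_function()

    # Log-rank test between PFA and non-PFA patients
    a, b = df[df["PFA"] == 1], df[df["PFA"] == 0]
    lr = logrank_test(a["time"], b["time"], a["event"], b["event"])

    # Multivariate Cox proportional hazards model
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "PFA", "1q_gain", "grade_III",
                "incomplete_resection"]],
            duration_col="time", event_col="event")
    return lr.p_value, cph.summary
```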
Central pathology review
A total of 113 locally diagnosed ependymomas (38 supratentorial, 63 posterior fossa and 12 spinal) analyzed in this study were subjected to a central histopathology review (Table 1). Following this review, 1 myxopapillary EPN (grade I), 33 EPNs (grade II) and 67 anaplastic EPNs (grade III) were identified. Nine supratentorial and 3 posterior fossa tumors were re-classified as non-ependymal tumors. As a result, 29 supratentorial tumors, 60 posterior fossa tumors (excluding the 3 re-classified posterior fossa tumors) and 12 spinal tumors were subjected to molecular analysis. Detailed results of the histopathology-related analyses will be published elsewhere (Sasaki, submitted).
C11orf95-RELA fusion negative ST-EPNs are highly heterogeneous
It has been proposed that ST-EPNs may be divided into three molecular subgroups: ST-EPN-RELA (RELA fusion-positive), ST-EPN-YAP1 (YAP1 fusion-positive) and ST-SE (subependymoma) [25,27]. To validate this molecular classification, we sought to identify fusions using a combination of RT-PCR, FISH, and/or RNA-sequencing analysis in the ST tumors (n = 38), including the 9 tumors re-classified as non-ependymoma following central review.
C11orf95-RELA fusions were detected in 19 out of 29 ST-EPNs using RT-PCR and/or FISH (Fig. 1). All 19 RELA fusion-positive ST-EPNs were diagnosed as grade III after the central review. The RT-PCR used in this study detected 4 out of the 7 C11orf95-RELA fusion transcripts reported so far [27]. A novel C11orf95-RELA fusion transcript, in which exon 2 of C11orf95 was fused in-frame to exon 9 of RELA, was detected via RNA sequencing in an ST-EPN with a RELA fusion identified by FISH but not by RT-PCR (EP15, Additional file 6, Figure S1a). C11orf95-RELA fusion was not detected in any of the 9 tumors re-classified following central review. One case (EP33), in which a RELA fusion was not detected by either RT-PCR or FISH, was classified as RELA fusion-positive based on the DKFZ classifier results (see below). A copy number analysis using the 450K array showed a copy number loss upstream of exon 2 of RELA, the most common break point of the fusion gene, and immunohistochemical staining of the L1 cell adhesion molecule (L1CAM) showed strong positivity; these findings corroborated the result of the classifier (Additional file 7, Figure S6). EP33 was not subjected to RNA sequencing owing to an insufficient amount of RNA and is likely to harbor a RELA fusion other than those examined by RT-PCR. As a result, a total of 20 out of the 29 histologically verified ST-EPNs (68.9%) were identified as RELA fusion-positive ependymomas.
A YAP1 fusion was detected via FISH in only one grade III tumor (EP117; 1/29, 3.4%); in this case, the fusion could not be studied further owing to insufficient specimens. Thus, 8 histologically verified ST-EPNs had neither RELA nor YAP1 fusions (Fig. 1). Chromothripsis, frequently found on chromosome 11 in ST-EPN, is highly indicative of the C11orf95-RELA fusion and is therefore considered one of its causal mechanisms [27]. Copy number analysis using 450K array data indicated that 5 of the 20 RELA fusion-positive ST-EPNs (25%; data not shown) exhibited a highly unstable chromosome 11, highly suggestive of chromothripsis. No evidence of chromothripsis was observed in the RELA fusion-negative ST-EPNs or the non-ependymal tumors. There was no preferred age of onset, intracerebral location or tumor form (cystic or solid) in C11orf95-RELA-positive EPNs as compared to RELA-negative tumors (Additional file 3, Table S3a). The 8 ST-EPNs negative for both the C11orf95-RELA and YAP1-related fusions (4 grade II and 4 grade III) were further investigated. Among these 8 cases, 3 demonstrated histological features of classic grade II ependymoma (EP97) or grade III anaplastic ependymoma (EP3, EP32), whereas the remaining 5 exhibited evidence of ependymal differentiation as well as a variety of unusual features, including astrocytic cells, tanycytic cells, vacuolated cells or microcysts (EP116, EP57, EP50, EP92, EP37; Additional file 8, Figure S7). RNA sequencing was performed in the 5 tumors with sufficient amounts of RNA. We identified 2 novel in-frame fusion genes: EP300-BCORL1 (exon 31-exon 4) in one grade III tumor (EP3) and FOXO1-STK24 (exon 1-exon 3) in one grade II tumor (EP57) (Fig. 1; Additional file 6, Figure S1; Additional file 9, Figure S4). EP57, with the FOXO1-STK24 fusion, also exhibited copy number oscillations equivalent to chromothripsis on both chromosomes 6 and 13, on which these genes are located (Additional file 6, Figure S1b). EP3 showed significantly higher expression of BCORL1 than the RELA-fusion cases and the other non-RELA-fusion cases (Additional file 10, Figure S8). EP57 showed increased expression of both FOXO1 and STK24 compared with the other RELA-fusion or non-RELA-fusion ST-EPNs. The histology of EP3 and EP57 is presented in Additional file 8, Figure S7. In addition, a BCOR tandem duplication, recently reported in central nervous system primitive neuroectodermal tumors (CNS-PNETs), was found in one grade III ST-EPN (EP116) [30] (data not shown). We further examined the fusion-negative ST-EPNs by pyrosequencing for hot-spot mutations in IDH1, IDH2, TERT, BRAF V600E, H3F3A, HIST1H3B and FGFR1 [1,2] (Additional file 1, Table S1). TERT promoter mutations (C228T) were observed in one grade II EPN and one grade III EPN; the patients in these 2 cases were 45 and 56 years old. None of the examined alterations were detected in the remaining cases.
To further validate our molecular classification, the DKFZ classifier was applied via the DKFZ molecular neuropathology website (see Materials and Methods) to all cases except EP111 (RELA fusion) and EP117 (YAP1 fusion), for which there was insufficient material. All RELA-positive ST-EPNs matched the methylation class "ependymoma, RELA fusion" (score ≥ 0.90) (Fig. 1 and Additional file 3, Table S3). The RELA-negative ST-EPNs displayed variable methylation classes: 3 (EP50, EP92, EP37) had no matching methylation class (no class reaching the calibrated score threshold of 0.3; Fig. 1), 2 (EP116, EP3) matched CNS high-grade neuroepithelial tumor with BCOR alteration, 1 (EP97) matched ependymoma PFA, 1 (EP57) matched ependymoma PFB, and 1 (EP32) matched glioblastoma IDH wild-type, subclass RTK II. Among the 3 cases with no matching methylation class, 1 carried a TERT promoter mutation, and the other 2 exhibited no alterations on pyrosequencing of selected genes or RNA sequencing. Of the 2 tumors carrying BCOR/BCORL1 alterations, EP116, with a verified BCOR tandem duplication, was classified as "CNS high grade neuroepithelial tumor with BCOR alteration" with a high score (0.99), whereas EP3, with the EP300-BCORL1 fusion and no match, was assigned the same class only with a low score (0.44). EP57, with the FOXO1-STK24 fusion, was classified as ependymoma PFB with a low score (0.44). EP57 was a left occipital lobe tumor extending to the lateral ventricular wall, which was completely removed by surgery; EP97 was located in the right lateral ventricle and was partially removed. Notably, among the ST tumors re-classified as non-ependymal (Additional file 3, Table S3), a BCOR tandem duplication was found in a high-grade malignant tumor, not otherwise specified; these genotypes were matched with the DKFZ methylation classes with high scores.
PF-EPNs are subclassified into PFA and PFB by methylation profile
Although no recurrent genetic alterations have been identified in PF-EPNs, it has been proposed that PF-EPNs be segregated into two subgroups, PFA and PFB [19,25]. To validate the methylation-based classification, we investigated the genome-wide methylation status of 60 PF-EPNs together with their clinical information. Our 450K array analysis segregated these PF-EPNs into two subgroups with distinct methylation profiles (Fig. 2). When our PF-EPNs were combined and analyzed with a published PF-EPN dataset, the Toronto cohort (Materials & Methods), each of the two subgroups clustered with the published PFA or PFB, indicating that the 450K array analysis was robust and accurately identified these two subgroups (data not shown). Our PFAs generally matched the DKFZ classifier results, although with lower scores in some cases, except for 2 PFAs for which no match could be found; these 2 could not even be assigned as normal tissue. PFB tumors were also mostly correctly diagnosed by the classifier, except for 3 tumors that were classified as a pituitary adenoma (EP96), a myxopapillary ependymoma (EP86), and no matching class (EP40). When PF-EPNs and SP-EPNs were analyzed collectively, all but one of the spinal tumors segregated with PFB (Additional file 11, Figure S2). Nine SP-EPNs were classified by the DKFZ classifier as spinal ependymomas, one as an adult plexus tumor, one as a pituitary adenoma, and one had no matching class.
[Fig. 2 caption: Classification of posterior fossa ependymomas (PF-EPNs) using genome-wide methylation profiling. A heatmap of the 3086 CpG-island probes with high standard deviations (SD > 0.25), used for unsupervised hierarchical clustering of 60 centrally diagnosed posterior fossa ependymomas, shows that the tumors divide into two clusters, PFA and PFB. The following information is indicated below the heatmap: tumor location, pattern of PF tumor extension, pathological grade, presence of 1q gain, age at onset, and the DKFZ classifier results.]
When the molecular classification results were compared with the clinical characteristics of the intracranial PF-EPNs (excluding spinal EPNs), PFA tumors (n = 45) occurred predominantly in younger patients (p < 0.001) and were laterally rather than medially located (p = 0.028) compared with PFB tumors (Additional file 3, Figure S3a, b). The great majority of PFAs were grade III, while most PFBs were grade II (p < 0.001; Additional file 3, Figure S3c). There was no difference in the resection rate between PFA and PFB (Additional file 3, Figure S3d). All 1q gains but one occurred in PFA (Fig. 2). PFA with 1q gain was observed in older patients (p < 0.001) and tended to develop spinal dissemination at onset (p = 0.09) compared with PFA without 1q gain (Additional file 3, Figure S3e, g).
PFA is the most significant prognostic factor in all EPNs
We evaluated the prognostic value of molecular markers as well as clinical and pathological factors potentially associated with the survival of EPN patients. Only primary tumors were included in the survival analysis. High levels of EZH2 protein, high TERT mRNA expression and hypermethylation of the TERT upstream transcription start sites (UTSSs) have previously been reported to correlate with poor prognoses in EPN patients [5,13,16,18,21]. We investigated the status of these genes in our EPN cohort. Among the 4 molecular groups, ST-EPNs showed the highest EZH2 and TERT mRNA expression (Additional file 5, Figure S4a and S4b). Notably, TERT mRNA expression was 10 to 100 times higher in the C11orf95-RELA fusion-positive EPNs than in all other EPN groups, and even than in adult GBMs with the TERT promoter mutation (Additional file 5, Figure S4b). The TERT UTSSs were highly methylated in the C11orf95-RELA fusion-positive ST-EPNs with high TERT mRNA expression (Additional file 5, Figure S4c).
A univariate analysis was performed to examine the prognostic value of incomplete resection, WHO grade III, C11orf95-RELA fusion, PFA, 1q gain, high EZH2 expression, high TERT expression, and high TERT UTSS methylation in all EPN patients for whom clinical and molecular information was available (30 ST, 67 PF + SP). High EZH2/TERT expression or methylation was defined as a value above the respective median. The results showed that WHO grade III (p = 0.006), PFA (p = 0.0004) and 1q gain (p = 0.0003) were significantly associated with progression-free survival (PFS) (Table 2). WHO grade III (p = 0.001) and PFA (p = 0.0004) were significantly associated with shorter overall survival (OS). C11orf95-RELA fusion, high EZH2 expression, high TERT expression, and high TERT UTSS methylation were not associated with survival. Incomplete resection was not significantly associated with survival, although there was a tendency towards shorter survival among all ependymomas and within PFA or PFB (Additional file 15, Figure S9; Discussion). A multivariate analysis using Cox regression of the same set of clinical factors and molecular markers showed that PFA was the only factor independently associated with PFS and OS among all EPNs (p = 0.002 for PFS; p = 0.01 for OS; Table 2). Univariate and multivariate analyses of the tumors at each location are described in Additional file 12, Table S6.
Finally, the efficacy of molecular classification in predicting prognosis was investigated separately for ST- and PF-EPN patients. No significant difference in survival was observed between the C11orf95-RELA fusion-positive and -negative ST-EPNs (Fig. 3a, b); survival data were not available for EP116. Patients with PFA tumors showed a tendency towards shorter progression-free survival (p = 0.06, Fig. 3c) and significantly shorter overall survival (p = 0.009, Fig. 3d) than those with PFB tumors. Among patients with PFA, those with 1q gain showed significantly shorter PFS than those without 1q gain (p = 0.02; Fig. 3e). There was no significant difference in overall survival between PFA patients with and without 1q gain (p = 0.44, Fig. 3f).
PF-EPN subgroup prediction by methylation thresholds of the three genes
Next, we developed a PF-EPN subgroup prediction assay using DNA methylation percentage thresholds for CRIP1, DRD4, and LBX2. R version 3 (R Foundation for Statistical Computing, Vienna, Austria) was used for all analyses. In the training process, likelihoods for each gene and subgroup were calculated from the training dataset assuming a beta distribution, and thresholds for the three genes were determined to be 25, 11, and 23%, respectively, based on the likelihood ratio (Fig. 4a, b). In the validation process, prediction results for each gene were obtained from both training and validation datasets, and three candidate rules were validated against these results (Additional file 3 Table S3, Additional file 13 Table S5; Additional file 4 Figure S5b).
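The threshold-derivation idea can be sketched as follows: fit a beta distribution to each subgroup's methylation fractions for a given gene, and take the methylation value where the PFA/PFB likelihood ratio crosses 1. The training arrays are synthetic stand-ins and the grid resolution is an assumption; this is not the authors' R code.

```python
# Sketch of likelihood-ratio thresholding under fitted beta distributions.
import numpy as np
from scipy.stats import beta as beta_dist

def lr_threshold(meth_pfa, meth_pfb):
    a1, b1, *_ = beta_dist.fit(meth_pfa, floc=0, fscale=1)   # PFA fit
    a2, b2, *_ = beta_dist.fit(meth_pfb, floc=0, fscale=1)   # PFB fit
    grid = np.linspace(0.01, 0.99, 981)
    lr = beta_dist.pdf(grid, a1, b1) / beta_dist.pdf(grid, a2, b2)
    return grid[np.argmin(np.abs(lr - 1.0))]                 # ratio crossing point

rng = np.random.default_rng(1)
thr = lr_threshold(rng.beta(6, 12, 40), rng.beta(2, 30, 20))  # synthetic training data
print(f"threshold ~ {100 * thr:.0f}% methylation")
```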
Our results thus indicated that the methylation status of these three genes may predict the molecular subgroup of PF-EPNs with 100% specificity for PFB.
Discussion
Molecular classification is essential for the integrated diagnosis of central nervous system tumors in modern diagnostic pathology. However, applying such classification to ependymomas is challenging due to the very limited number of available markers. In this study, the proposed molecular classification of supratentorial and posterior fossa ependymomas was extensively investigated in an independent set of 113 locally diagnosed ependymal tumors in Japan. Our study confirmed that C11orf95-RELA fusion is a unique genetic feature of ST-EPN and that its presence is consistent with histopathological diagnosis. On the other hand, the pathogenesis of ST-EPN in the absence of C11orf95-RELA fusion remains unresolved. There were 9 C11orf95-RELA fusion-negative ST-EPNs, including 4 grade II ependymomas and 5 grade III anaplastic ependymomas; thus, these tumors were histologically confirmed as ependymoma, by definition. None of them was diagnosed as subependymoma following central review, and only a single case with YAP1 fusion was identified by FISH (EP117). Instead, 2 novel fusion genes, EP300-BCORL1 and FOXO1-STK24, were detected in single cases (EP3 and EP57, respectively).
EP300 (E1A binding protein p300, located at 22q13) is a transcriptional coactivator that binds a variety of transcription factors and bridges them to the basal transcription machinery, and it additionally functions as a histone acetyltransferase that relaxes chromatin structure [5]. BCORL1 (BCL6 corepressor-like 1, located at Xq26.1) is a transcriptional corepressor that interacts with histone deacetylases to repress transcription of genes such as E-cadherin [23].
The EP300-BCORL1 fusion found in EP3 retained nearly all functional domains of both genes, but BCORL1 expression was significantly increased in EP3 (Additional file 10 Figure S8). Interestingly, 2 ossifying fibromyxoid tumors with a CREBBP-BCORL1 fusion have been reported [12]. CREBBP (CBP) is a paralog of EP300, and as their functions mostly overlap, they are often collectively described as CBP/EP300 [5]. BCORL1 was overexpressed in CREBBP-BCORL1 fusion-positive tumors, suggesting that activation of BCORL1 may be a consequence of CREBBP-BCORL1. Thus, it is likely that BCORL1 activation, as a consequence of EP300-BCORL1, may lead to deregulation of chromatin remodeling through recruitment of histone deacetylase. In addition, BCOR (BCL6 corepressor) internal tandem duplication (ITD), which acts as an activating oncogene [34], was also found in a single ST-EPN in our cohort (EP116). The DKFZ classifier matched the BCOR ITD tumor to "CNS high grade neuroepithelial tumor with BCOR alteration (CNS HGNET-BCOR)" with a high score (0.99). The BCORL1-fusion tumor was interpreted by the DKFZ classifier as a 'no match,' although it was also classified as CNS HGNET-BCOR with a low score (0.44) (Fig. 1, Additional file 3 Table S3). Thus it is likely that these tumors may belong to a new entity within the unclassified heterogeneous high grade neuroectodermal/glial tumors of children [30]. An HDAC inhibitor may potentially be effective for tumors with activated BCOR/BCORL1.

Figure caption (H3K27me3 immunohistochemistry): Approximately 38% of PFA tumors showed reactivity in less than 5% of tumor cells (a, e), and the remaining cases showed labeling in 5-50% of tumor cells (b, e). In contrast, most PFB tumors retained intact H3K27me3 expression (> 80% labeled nuclei) (c, e). A few PFB tumors, however, showed labeling in 10-60% of cells and were categorized as reduced H3K27me3 expression (d, e). e, histogram of the percentage of labeled nuclei in PFA and PFB tumors. f, confusion matrix for actual and predicted subgroup by H3K27me3 immunohistochemistry when PFB was defined as intact H3K27me3 expression and PFA as reduced expression.
Much less is known about the FOXO1-STK24 fusion found in another ST-EPN (EP57). FOXO1 is a transcription factor involved in the maintenance of cellular homeostasis [36]. The PAX3-FOXO1 fusion, which acts as a highly activated transcription factor, is found in 60% of alveolar rhabdomyosarcomas [36]. STK24 (also known as MST3) is a serine-threonine kinase that functions upstream of the mitogen-activated protein kinase (MAPK) signaling pathway. STK24/MST3 is overexpressed in breast cancers and promotes proliferation and tumorigenicity [30]. Recurrent mutations or fusions of STK24 have not been reported. The DKFZ classifier found no match for this ST-EPN tumor (classified as PFB, score = 0.44). Interestingly, this tumor showed copy number oscillation compatible with chromothripsis on chromosome 13, on which both FOXO1 and STK24 are located, strongly suggesting that this may be the mechanism underlying the gene fusion. Both FOXO1 and STK24 were overexpressed in EP57 (Additional file 10 Figure S8), suggesting that either of them may carry an oncogenic property. Although a detailed study of individual cases is beyond the scope of this paper, this tumor may warrant further investigation.
None of the other RELA fusion-negative ST-EPNs were classifiable even with the DKFZ classifier. In summary, our findings suggest that RELA/YAP1 fusion-negative ST-EPNs may be a heterogeneous group of tumors harboring a variety of mutations or rare fusion genes, and are unlikely to belong to a single category. Further studies using larger numbers of tumors may help clarify whether tumors with similar genetic changes and/or DNA methylation profiles truly define a new tumor entity. Considering the high homogeneity of RELA fusion-positive ST-EPNs, it is doubtful whether these fusion-negative tumors are biologically equivalent to ependymoma. According to the latest WHO Classification [8], ependymomas are primarily diagnosed via histology; as such, they may be diagnosed as ependymomas, at least for the time being. Nonetheless, it is important to be aware that histologically diagnosed RELA fusion-negative ependymomas may have a biology different from that of quintessential RELA fusion-positive ependymomas. Further molecular classification and incorporation into future WHO Classification criteria are warranted.
In contrast to a previous large series [25], no significant association between the presence of C11orf95-RELA fusion and patient survival was noticed in our series. Similarly, RELA fusion status was reportedly not related to a significant difference in the survival of ST-EPN patients [9]. In addition, the rate of GTR in RELA fusion-positive ST-EPN was not statistically different from that in RELA fusion-negative ST-EPN (p = 0.55) in our cohort. The impact of C11orf95-RELA fusion on patient survival needs to be further investigated; these findings may reflect the fact that RELA fusion-negative ST-EPNs are a biologically heterogeneous group of tumors. Interestingly, median progression-free and overall survival were not reached for C11orf95-RELA fusion-positive ST-EPNs. Other proposed prognostic molecular markers of ependymomas include TERT and EZH2 expression [18,21,31]. Although we confirmed elevated EZH2 and TERT expression in RELA fusion-positive ST-EPNs, neither was associated with patient survival. Nonetheless, it may be of interest that TERT mRNA expression was elevated in RELA fusion-positive ST-EPNs to an extent far exceeding that in glioblastomas with TERT promoter mutations (Additional file 9 Figure S4b). None of the ST- or PF-EPNs in this cohort carried the TERT promoter mutation (data not shown); this phenomenon has also been described elsewhere [10]. Castelo-Branco et al. found that the methylation status of some CpG sites upstream of the TERT transcription start site was positively correlated with TERT mRNA expression in childhood malignant brain tumors and was also associated with the prognosis of patients with PF ependymoma [5]. Although neither TERT mRNA expression nor TERT UTSS methylation was associated with patient prognosis in this series, TERT UTSSs were highly methylated in the RELA fusion-positive ST-EPNs with elevated TERT mRNA expression. The mechanism of TERT upregulation appears to be complex and warrants further investigation.
We validated the proposed molecular classification of PF-EPN for efficacy in predicting clinical characteristics including that of patient survival. The 450 K analysis accurately classified the published reference PF-EPN dataset, confirming the robustness of the analysis. PFA showed a minor but significant increase in methylation levels and distinct methylation profiles when compared to PFB (Fig. 2). With a few exceptions, PFA patients were mostly infants and the ages of the PFB patients were significantly higher than those of PFA (Additional file 14 Figure S3a). PFA tumors showed significantly more lateral extension compared to PFB, most of which were medially located (Additional file 14 Figure S3b). DKFZ classifier results were mostly consistent with our analysis with a few exceptions. Two PFAs showed no match. One PFB (EP96) was classified as pituitary adenoma and another PFB (EP86) as myxopapillary ependymoma. These classifications were not compatible with their histology or location.
Our multivariate Cox regression analysis showed that the PFA subgroup was the only molecular marker independently associated with patient PFS and OS among all ependymomas. Among PF-EPN, PFA patients showed significantly shorter PFS and OS compared to PFB patients. These findings corroborate previous reports [19,29] and consolidate the significance of the proposed molecular classification, indicating that PFA and PFB may be biologically distinct subgroups of PF-EPN. An important clinical implication of the PFA/PFB classification is its potential to aid therapeutic decision making. Based on the results of a study conducted on a large series of PF-EPN, Ramaswamy et al. suggested that a substantial proportion of totally resected PFB patients may be treated with surgery alone, without radiotherapy [29]. Although this suggestion needs to be tested in a randomized clinical trial, it is evident that molecular classification may play an important role in the clinical management of ependymomas. Although resection rate was not significantly associated with survival in our analysis, there was a tendency for gross total resection (GTR) to predict longer survival (Additional file 15 Figure S9). This may be due to the relatively small number of cases screened in the study. In addition, the extent of resection, although determined locally, was not centrally reviewed, which may be a limitation of the multi-institutional nature of the study. Data from the Collaborative Ependymoma Research Network (CERN), also a multi-institutional study, did not indicate statistically significant differences in PFA survival according to resection rate [29]. A prospective clinical trial for ST- and PF-EPN with molecular classification and a standardized central review of the extent of resection is ongoing in the Japan Children's Cancer Group (JCCG).
In spite of its usefulness, a practical problem associated with methylation classification is its cost, as well as its limited availability as a routine diagnostic test. To overcome this issue, we developed a simplified methylation test to determine PFA/PFB status for PF-EPNs by examining the 3 regions most highly methylated in PFA via pyrosequencing, and rigorously validated it using an extended cohort of 123 PF-EPNs combining our cases and an independent set of samples from Toronto (Results and Fig. 5). With this novel assay, we were able to diagnose PFB with 100% specificity. We also confirmed the efficacy of anti-H3K27me3 immunohistochemistry, recently reported by Panwalkar et al. [26], by predicting PFB with 100% specificity in a selected Japanese cohort of 44 PF-EPNs (Fig. 5). It may be noteworthy that the proposed cutoff of 80% for H3K27me3 immunohistochemistry, though appropriate, may prove somewhat counterintuitive for judging reduced expression in PFA. Among the 4 PFBs examined via both methods, a single case was misclassified as PFA by each method. Although methylation assessment at individual CpG sites has its own limitations, such as potential heterogeneity of methylation across CpGs and masking by co-existing non-neoplastic cells, these assays may serve as clinically applicable techniques for rapid molecular classification of PF-EPN, which are also suitable for risk-grouping in clinical trials. Hopefully, these may lead to better treatment decision making for ependymoma patients in the future.
It has been suggested that the presence of 1q gain is associated with poor prognosis in ependymomas [13,16,24,25]. A large cohort study indicated significant differences in PFS between patients with and without 1q gain in both PFA and PFB, whereas significantly shorter overall survival in patients with 1q gain was seen only in PFA patients, not in PFB or ST-EPN-RELA patients [6,24,25]. In our cohort, 1q gain was highly enriched in a subset of PFA, while only one PFB had 1q gain (Fig. 2). PFA patients with 1q gain were older at onset (p < 0.001; Fig. 3a) and exhibited significantly shorter PFS than those without 1q gain (p = 0.016, Fig. 4e). However, there was no significant difference in OS between these patients (p = 0.51, Fig. 4f). The reason for the discrepancies between our study and others regarding the impact of 1q gain on OS of PF-EPN is currently unknown. It has recently been proposed that PFA and PFB may be further divided into 9 or 5 subgroups [6,24]. Although our cohort was too small to validate such subgrouping, it is likely that PF-EPN are a heterogeneous group of tumors. The significance of molecular markers/subgroups for patient prognosis needs to be examined in a prospective study.
Our study demonstrated that histopathological diagnosis of ependymomas such as ST-EPN is often challenging. Eight locally diagnosed ST-EPNs were re-classified as non-EPN tumors following a pathology review. None of them carried RELA-fusion. The significance of WHO grading of ependymomas is highly controversial [7,25]. On the other hand, our histopathological review classified most PFA as WHO grade III, and PFB as WHO grade II. Current WHO Classification bases the diagnosis of ependymomas solely on the histopathology of tumors. Thus the role of histopathology needs to be revisited.
Conclusion
Our results showed that C11orf95-RELA fusion is a unique and highly specific diagnostic marker for ST-EPN. However, histologically verified RELA fusion- or YAP1 fusion-negative ST-EPN also exists. These cases were neither histologically nor molecularly subependymomas, and thus did not fall into any of the 3 proposed molecular groups of ST-EPN [25]. They appear to be a heterogeneous group of tumors distinct from the RELA fusion-positive ST-EPN and are unlikely to fall into a single category; considering the DKFZ classifier results, most if not all of them may rather belong to different, possibly new, entities. A more thorough implementation of molecular diagnosis may hopefully resolve unanswered questions.

| 9,802.8 | 2018-12-01T00:00:00.000 | ["Medicine", "Biology"] |
Hierarchical edge-aware network for defocus blur detection
Defocus blur detection (DBD) aims to separate blurred and unblurred regions in a given image. Due to its potential practical applications, this task has attracted much attention. Most existing DBD models achieve competitive performance by aggregating multi-level features extracted from fully convolutional networks. However, they still face several challenges, such as coarse object boundaries of the defocus blur regions, background clutter, and the detection of low contrast focal regions. In this paper, we develop a hierarchical edge-aware network to solve these problems; to the best of our knowledge, it is the first attempt to develop an end-to-end network with edge awareness for DBD. We design an edge feature extraction network to capture boundary information and a hierarchical interior perception network to generate local and global context information, which helps detect low contrast focal regions. Moreover, a hierarchical edge-aware fusion network is proposed to hierarchically fuse edge information and semantic features; benefiting from the rich edge information, the fused features can generate more accurate boundaries. Finally, we propose a progressive feature refinement network to refine the output features. Experimental results on two widely used DBD datasets demonstrate that the proposed model outperforms state-of-the-art approaches.
Introduction
Defocus blur is a very common phenomenon in digital photos, arising when a scene point is not at the camera's focal distance. Defocus blur detection (DBD) aims to distinguish blurred and unblurred regions in a given image. DBD attracts much attention due to its practical applications, such as salient object detection [1], defocus estimation [2], image restoration [3], blur region segmentation [4], and so on.
In the past decade, many defocus blur detection methods have been proposed. These methods can be broadly divided into two categories: traditional methods and deep learning based methods. The former are based on hand-crafted features and utilize low-level cues, such as frequency [5][6][7][8][9] and gradient [10][11][12][13][14][15], to predict DBD maps. However, these traditional methods cannot capture the global information of high-level semantic features well; thus they cannot accurately detect low contrast focal regions (see the green box region of Fig. 1a) or suppress background clutter (see the red box region of Fig. 1b). Moreover, as shown in the blue box region of Fig. 1a, the boundaries of in-focus objects are not well detected.
Recently, convolutional neural networks (CNNs) have been widely used in various computer vision tasks because of their powerful feature extraction capabilities, such as image denoising [19], image classification [20], super-resolution [21], salient object detection [22], and object tracking [23]. CNNs have also been successfully applied to DBD [16,17,[24][25][26][27][28][29][30][31][32][33][34][35]. Although deep learning based approaches achieve higher performance and significant improvements compared with traditional methods, several problems remain to be addressed: (1) the complementarity of local and global information generated by different layers is not well utilized, which causes ambiguous detection of low-contrast regions and background clutter in the final DBD map; (2) the boundaries of in-focus objects cannot be fully distinguished.

Fig. 1 Qualitative comparison of three models on Shi's dataset [8] and the DUT dataset [16]. The first and second columns show input images and their ground-truth images, respectively; the third to last columns show our DBD maps, DeFusionNet [17], and LBP [18]. Green boxes mark low contrast focal region patches, red boxes mark background clutter patches, and blue boxes mark the boundaries of in-focus object patches.
In this paper, we propose a hierarchical edge-aware network (HEANet) to address the above problems. It consists of four sub-networks: a hierarchical interior perception network (HIPNet), an edge feature extraction network (EFENet), a hierarchical edge-aware fusion network (HEFNet), and a progressive feature refinement network (PFRNet). Specifically, considering that contextual information benefits the detection of low contrast focal regions, we design a receptive field context module (RFCM) to capture multi-receptive-field features, and we cascade three RFCMs in a top-down manner to form the HIPNet. We then develop the EFENet to obtain the edge information of in-focus objects from feature maps. Subsequently, the multi-scale contextual features and the edge information are transmitted to the HEFNet, which consists of several progressive edge guidance aggregation modules (EGAMs); with these modules, the edge cues and multi-scale semantic features can be hierarchically fused, improving localization. Finally, we design the PFRNet to refine the feature maps and generate a DBD map with clear region boundaries, and we supervise the predicted DBD map with the ground truth.
Our major contributions can be summarized as follows:
1. We propose a hierarchical edge-aware network (HEANet) for DBD; to the best of our knowledge, it is the first attempt to develop an end-to-end network with edge awareness for DBD.
2. We design a receptive field context module (RFCM) to capture local and global context information, aiming to distinguish low contrast focal regions and suppress background clutter. In addition, we cascade three RFCMs as the HIPNet to hierarchically extract multi-scale contextual features.
3. We develop an edge guidance aggregation module (EGAM), which incorporates edge information into the hierarchical feature maps to guide the DBD maps toward clear region boundaries.
4. Compared with 10 state-of-the-art approaches on two widely used datasets, our method outperforms them under five evaluation metrics.
Related work
In the past years, many DBD methods have been proposed. Traditional methods are based on hand-crafted features such as frequency [5][6][7][8][9] and gradient [10][11][12][13][14][15], among others [18,36,37]. Shi et al. [8] propose several local blur features, such as image gradient, Fourier domain, and data-driven local filters, to enhance defocus blur detection. Pang et al. [14] develop a kernel-specific feature vector for DBD, which incorporates the product of the variance of the filtered kernel and the variance of filtered patch gradients. Yi et al. [18] present a sharpness metric based on local binary patterns to distinguish defocus regions. Tang et al. [36] design a blur metric based on the log averaged spectrum residual to obtain a coarse blur map, then use an iterative updating mechanism to refine it. Golestaneh et al. [37] propose a method based on high-frequency multi-scale fusion and sort transform of gradient magnitudes to compute blur detection maps. These traditional methods can be effective in some cases; however, they have limited capacity to obtain high-level semantic information in complex scenarios. Owing to their powerful multi-level feature extraction capabilities, most deep learning based models achieve better performance than traditional hand-crafted methods, and many recent approaches adopt CNNs for DBD. Park et al. [25] propose a deep learning model to extract high-level features, then integrate hand-crafted and high-level features to obtain a DBD map. Karaali et al. [24] develop an edge-based defocus blur estimation method in which two CNNs compute an edge map and estimate the unknown defocus blur amount, and a fast image-guided filter propagates the sparse blur estimates to the whole image. However, neither of these two methods is a complete end-to-end network, and the edges of the in-focus objects they generate are mostly blurry. Zhao et al. [16] adopt a multi-stream bottom-top-bottom fully convolutional network (BTBNet) to aggregate multi-scale low-level and high-level features to predict the DBD map. Zhao et al. [26] propose a cross ensemble network to enhance the diversity of features for DBD. Ma et al. [27] present an end-to-end local blur mapping algorithm for better detecting defocus blur regions. Lee et al. [28] develop a defocus map estimation network for spatially varying defocus map estimation and produce a novel depth-of-field dataset for network training. Lately, Tang et al. [17] design a cross-layer structure to integrate low-level and high-level features step by step. Tang et al. [29] build a cross-layer framework and utilize an attention mechanism to integrate multi-level features. Tang et al. [30] propose a bidirectional residual feature refining method and introduce channel-wise attention to extract valuable features. Tang et al. [31] present a residual learning strategy to learn residual maps, then use a recurrent method to combine low-level and high-level features. Li et al. [32] design a complementary attention network by exploiting the complementary information of defocus feature maps.

Fig. 2 The architecture of our HEANet. EFENet is the edge feature extraction network; HIPNet is the hierarchical interior perception network; HEFNet is the hierarchical edge-aware fusion network; PFRNet is the progressive feature refinement network.
Zhao et al. [33] propose a cascaded DBD map residual learning architecture to recurrently refine the DBD maps. Zhao et al. [34] present two deep ensemble networks that boost diversity while costing less computation for DBD. Zhao et al. [35] adopt a method that trains the model without any pixel-level annotation by introducing dual adversarial discriminators, forcing the generator to produce an accurate DBD mask. Inspired by, but different from, these approaches, in this paper we concentrate on fusing edge cues and semantic information hierarchically with a complementary mechanism. Experimental results show that our method achieves promising results.
Proposed HEANet
The framework of our method is illustrated in Fig. 2. Our approach includes four sub-networks: the hierarchical interior perception network (HIPNet), which captures multi-scale contextual information; the edge feature extraction network (EFENet), which extracts edge information; the hierarchical edge-aware fusion network (HEFNet), which hierarchically fuses the extracted features by taking advantage of the edge information in low-level features; and finally the progressive feature refinement network (PFRNet), which fuses and refines features progressively to generate the defocus blur map. These sub-networks consist of different modules, whose details are introduced below.
Hierarchical interior perception network
The HIPNet consists of three channel attention (CA) modules [38] and three receptive field context modules (RFCMs). First, we use the CAs to reduce redundant information; then we cascade the three RFCMs to hierarchically extract multi-scale contextual features from multi-level feature maps.
In the HIPNet, the key requirement is to capture multi-scale contextual features. To this end, we design the receptive field context module (RFCM) to extract multi-scale contextual information for detecting low contrast focal regions.
The proposed RFCM consists of 5 parallel branches; its structure is shown in Fig. 3. First, we use a 1 × 1 convolution to compress the channels of the feature map. In each of the four branches from left to right, we employ a convolutional layer followed by a dilated convolutional layer. The global convolutional network (GCN) [39] is utilized in the convolutional layer; we use GCNs with k = 1, 3, 5, 7 to obtain multi-scale features in the four branches. As shown in Fig. 4, the k × k convolution is replaced by the combination of k × 1 + 1 × k and 1 × k + k × 1 convolutions to reduce parameters. In the dilated convolutional layers, we use 3 × 3 kernels with different dilation rates in the four branches to expand the receptive fields and obtain local information; the dilation rates are set to {1, 3, 5, 7}, respectively. To obtain successive dilation rates, we add three inter-branch short connections from the first branch to the fourth branch, so that the feature maps generated by each branch are encoded into those of subsequent branches. After that, the feature maps of the four branches are up-sampled and concatenated, merging into a convolution array. Furthermore, an average pooling branch is adopted to obtain global information from the feature maps. Finally, the convolution array of the four branches and the output features of the pooling branch are integrated by an add operation, and a ReLU layer ensures nonlinearity.
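A PyTorch sketch of the RFCM as described above is given below. Where the text is ambiguous, the choices are assumptions: the compressed channel width, the exact placement of the inter-branch short connections relative to the dilated convolutions, and the convolution that merges the concatenated branches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """Global convolutional layer: k x k factored as (k x 1 + 1 x k) + (1 x k + k x 1)."""
    def __init__(self, c_in, c_out, k):
        super().__init__()
        p = k // 2
        self.l1 = nn.Conv2d(c_in, c_out, (k, 1), padding=(p, 0))
        self.l2 = nn.Conv2d(c_out, c_out, (1, k), padding=(0, p))
        self.r1 = nn.Conv2d(c_in, c_out, (1, k), padding=(0, p))
        self.r2 = nn.Conv2d(c_out, c_out, (k, 1), padding=(p, 0))
    def forward(self, x):
        return self.l2(self.l1(x)) + self.r2(self.r1(x))

class RFCM(nn.Module):
    """Receptive field context module sketch: channel compression, four
    GCN + dilated-conv branches (k and rate = 1, 3, 5, 7) chained by short
    connections, plus a global average-pooling branch."""
    def __init__(self, c_in, c=64):              # c = 64 is an assumption
        super().__init__()
        self.compress = nn.Conv2d(c_in, c, 1)
        self.gcns = nn.ModuleList([GCN(c, c, k) for k in (1, 3, 5, 7)])
        self.dils = nn.ModuleList([nn.Conv2d(c, c, 3, padding=r, dilation=r)
                                   for r in (1, 3, 5, 7)])
        self.fuse = nn.Conv2d(4 * c, c, 3, padding=1)
        self.pool_conv = nn.Conv2d(c, c, 1)
    def forward(self, x):
        x = self.compress(x)
        outs, prev = [], None
        for gcn, dil in zip(self.gcns, self.dils):
            y = gcn(x)
            if prev is not None:                  # inter-branch short connection
                y = y + prev
            y = dil(y)
            outs.append(y)
            prev = y
        merged = self.fuse(torch.cat(outs, dim=1))
        g = self.pool_conv(F.adaptive_avg_pool2d(x, 1))   # global context branch
        return F.relu(merged + g)                 # broadcasted add + ReLU

x = torch.randn(2, 256, 40, 40)
print(RFCM(256)(x).shape)                         # -> torch.Size([2, 64, 40, 40])
```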
Edge feature extraction network
In this network, we intend to effectively extract the edge features of in-focus objects. Inspired by the work of [40], we embed a channel attention (CA) module [38] to reduce redundant information; the structure of the CA module is shown in Fig. 5. To enhance edge features, we embed a self refinement (SR) module [41] on the side path to refine the final edge features; the structure of the SR module is shown in Fig. 6. The prediction of the edge map is supervised by the defocus blur edge ground truth.
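Since the CA [38] and SR [41] modules are cited rather than defined here, the sketch below shows one common formulation of each: a squeeze-and-excitation style channel attention, and a self refinement module in which a 3 × 3 convolution predicts a per-pixel weight and bias. Both should be read as assumptions about the cited designs, not as their exact definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form of CA [38]).
    Assumes the channel count c is divisible by the reduction ratio r."""
    def __init__(self, c, r=16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(inplace=True),
                                nn.Linear(c // r, c), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(F.adaptive_avg_pool2d(x, 1).flatten(1))   # per-channel weights
        return x * w.view(x.size(0), -1, 1, 1)

class SelfRefinement(nn.Module):
    """Assumed form of SR [41]: a conv predicts per-pixel weight and bias maps
    that modulate the input features."""
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, 2 * c, 3, padding=1)
    def forward(self, x):
        w, b = self.conv(x).chunk(2, dim=1)
        return F.relu(w * x + b)

x = torch.randn(2, 64, 80, 80)
print(ChannelAttention(64)(x).shape, SelfRefinement(64)(x).shape)
```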
Hierarchical edge-aware fusion network
We utilize the EFENet to obtain low-level edge cues, and leverage the three RFCMs in the HIPNet to hierarchically extract multi-scale contextual features at three different levels of the backbone. These levels carry different discriminative information: high-level features contain semantic and global information, which helps recognize the position of defocus blur regions, while low-level features retain spatial and local information, which helps divide blurred and clear regions.

Fig. 3 The structure of the receptive field context module (RFCM). "3 × 3, r = 3" denotes a 3 × 3 convolution with dilation rate 3; "Avg_pooling" denotes an average pooling operation; the symbol "c" denotes concatenation.

Fig. 4 The structure of the global convolutional network (GCN). The k × k convolution is replaced by the combination of k × 1 + 1 × k and 1 × k + k × 1 convolutions to reduce parameters.

Fig. 5 The structure of the channel attention (CA) module.

After obtaining the low-level edge cues and high-level semantic features, we aim to leverage the edge information to guide the semantic features toward better localization. Therefore, as shown in Fig. 2, we develop the HEFNet, which uses multiple edge guidance aggregation modules (EGAMs) to embed the edge information into the hierarchical feature maps and guide them to possess clear region boundaries.
In order to integrate low-level edge cues and high-level semantic features effectively, we propose the edge guidance aggregation module (EGAM). As shown in Fig. 7, the EGAM receives two inputs: the high-level features from the output of the HIPNet and the low-level edge cues from the EFENet. Its inner structure can be divided into two stages: a fusion stage and a feature refinement stage.

Fig. 6 The structure of the self refinement (SR) module.

Fig. 7 The structure of the edge guidance aggregation module (EGAM). f_e is the input of edge features, f_h is the input of high-level semantic features, and f_out is the output of the EGAM.
The fusion stage consists of two branches. The first branch enhances the edge information of the feature maps: we adopt a multiplication operation to strengthen the boundaries of defocus blur regions while suppressing background noise, using the edge cues f_e to guide the semantic features f_h. First, the channels of the low-level edge features f_e are compressed to the same number as the high-level features f_h through a 1 × 1 convolutional layer Conv1. The edge cues f_1 and the semantic features f_h are then multiplied element-wise and fed into a 3 × 3 convolutional layer Conv2, and the fused features f_2 are added to the edge features f_1 to refine the representation. The above process can be formulated as

f_1 = Conv1(f_e), f_2 = Conv2(f_1 ⊗ f_h), f_12 = f_1 + f_2, (1)

where ⊗ denotes element-wise multiplication. The second branch captures consistent semantics of the high-level features. We first combine the edge features f_1 and the high-level semantic features f_h by concatenation, apply a 3 × 3 convolutional layer Conv3 to obtain more local information, and add the fused feature f_3 to the high-level semantic features f_h. The aggregated features f_12 of the first branch and the features f_3h of the second branch are then added, and the result is passed to the feature refinement stage:

f_3 = Conv3([f_1, f_h]), f_3h = f_3 + f_h, f_4 = f_12 + f_3h. (2)

As shown in Fig. 7, the feature refinement stage also consists of two branches: one connects the input and output directly, and the other consists of two 3 × 3 convolutional layers. The two branches are fused by an add operation, which is beneficial for learning both the edge information and the semantic information, so the features f_4 from the first stage can be refined:

f_out = f_4 + Conv5(Conv4(f_4)). (3)

With this design, the output of the fusion stage obtains clear boundaries and consistent semantics. Each of the above 3 × 3 convolutional layers consists of a convolution with 3 × 3 kernel size, a batch normalization layer, and a ReLU layer. The output of the HEFNet is then fed to the PFRNet.
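The EGAM equations above translate almost directly into a PyTorch module. The sketch below assumes the edge and semantic maps share spatial resolution (in the full network they would be resized to match) and that all 3 × 3 layers use the conv-BN-ReLU form stated in the text; layer widths are assumptions.

```python
import torch
import torch.nn as nn

class EGAM(nn.Module):
    """Edge guidance aggregation module, transcribed from Eqs. (1)-(3)."""
    def __init__(self, c_e, c_h):
        super().__init__()
        self.conv1 = nn.Conv2d(c_e, c_h, 1)                    # align edge channels
        self.conv2 = self._cbr(c_h, c_h)
        self.conv3 = self._cbr(2 * c_h, c_h)
        self.refine = nn.Sequential(self._cbr(c_h, c_h), self._cbr(c_h, c_h))
    @staticmethod
    def _cbr(ci, co):
        return nn.Sequential(nn.Conv2d(ci, co, 3, padding=1),
                             nn.BatchNorm2d(co), nn.ReLU(inplace=True))
    def forward(self, f_e, f_h):
        f1 = self.conv1(f_e)                                   # compressed edge cues
        f12 = f1 + self.conv2(f1 * f_h)                        # Eq. (1)
        f3h = f_h + self.conv3(torch.cat([f1, f_h], dim=1))    # Eq. (2), branch 2
        f4 = f12 + f3h                                         # fusion-stage output
        return f4 + self.refine(f4)                            # Eq. (3)

f_e = torch.randn(2, 64, 80, 80)    # edge features (assumed resized to match f_h)
f_h = torch.randn(2, 128, 80, 80)
print(EGAM(64, 128)(f_e, f_h).shape)   # -> torch.Size([2, 128, 80, 80])
```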
Progressive feature refinement network
In order to aggregate the multi-scale features from the HEFNet effectively, we develop the PFRNet, which is inspired by, but different from, the coarse-to-fine residual learning in [33]: the method in [33] only applies residual learning to reconstruct the output to the original resolution step by step, from small scale to large scale. In our PFRNet, we combine coarse-to-fine residual learning with cross-level feature fusion to enhance residual learning. First, the multi-scale output features of the EGAMs are fused in a cascaded, cross-level manner to form the input features of the PFRNet, which guide each step to learn residual features. Then, the coarse-to-fine residual learning strategy reconstructs the output to the original resolution through multiple SR modules, which refine and enhance the feature maps. After multiple add operations and SR modules in the PFRNet, we apply a convolutional layer with 1 × 1 kernel size to obtain the final DBD map.
Loss function
In defocus blur detection, binary cross-entropy (BCE) is widely used as a loss function, computing the loss between the final DBD map and the ground truth. However, the BCE loss does not consider the structural information of the defocus blur region, which may reduce the performance of the model. Inspired by the work of [42], we use a pixel position-aware (PPA) loss, which combines an edge-aware weighted BCE term and a weighted IoU term:

L_ppa(p_ij, g_ij) = L_bce(p_ij, g_ij) + L_wiou(p_ij, g_ij),

where p_ij and g_ij represent the DBD prediction and the ground truth at pixel (i, j), respectively, L_bce is the edge-aware weighted binary cross-entropy loss, and L_wiou is the weighted IoU loss. α_ij is the edge-aware weight; following [42], it is defined as

α_ij = 1 + γ |AvgPool(g)_ij − g_ij|,

where γ is a hyper-parameter set to 5 in this work. L_wiou is formed as

L_wiou = 1 − (Σ_ij α_ij · inter_ij + 1) / (Σ_ij α_ij · (union_ij − inter_ij) + 1),

where inter_ij = p_ij × g_ij and union_ij = p_ij + g_ij.
The dominant loss of the output corresponds to L_ppa(p_ij, g_ij), and we use the binary cross-entropy (BCE) loss as the edge loss function; the total loss is defined as

L = L_ppa(p_ij, g_ij) + λ L_bce(pe_ij, ge_ij),

where λ, the weight balancing the two losses, is set to 0.3, and L_ppa(p_ij, g_ij) and L_bce(pe_ij, ge_ij) denote the output loss and the edge loss, respectively. pe_ij and ge_ij are the edge prediction and the edge ground truth at pixel (i, j).
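A sketch of this loss in PyTorch, modeled on the formulation in [42], is shown below. The 31 × 31 pooling window used to compute the edge-aware weight is an assumption, and `pred` and `edge_pred` are taken to be logit maps of shape (B, 1, H, W).

```python
import torch
import torch.nn.functional as F

def ppa_loss(pred, gt, gamma=5, kernel=31):
    """Pixel position-aware loss in the spirit of [42]: edge-aware weighted BCE
    plus weighted IoU. The pooling window size is an assumption."""
    alpha = 1 + gamma * torch.abs(
        F.avg_pool2d(gt, kernel, stride=1, padding=kernel // 2) - gt)
    bce = F.binary_cross_entropy_with_logits(pred, gt, reduction="none")
    wbce = (alpha * bce).sum(dim=(2, 3)) / alpha.sum(dim=(2, 3))

    p = torch.sigmoid(pred)
    inter = (alpha * p * gt).sum(dim=(2, 3))
    union = (alpha * (p + gt)).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)
    return (wbce + wiou).mean()

def total_loss(pred, gt, edge_pred, edge_gt, lam=0.3):
    """Output loss plus lambda-weighted BCE edge loss, as in the text."""
    return ppa_loss(pred, gt) + lam * F.binary_cross_entropy_with_logits(edge_pred, edge_gt)

pred = torch.randn(2, 1, 64, 64)                       # logits
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(ppa_loss(pred, gt).item())
```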
Datasets and evaluation metrics
Datasets. The proposed approach is evaluated on two public blurred-image datasets: Shi's dataset [8] and DUT [16]. Shi's dataset [8] is the earliest public blurred-image dataset, with 604 defocus blurred images for training and 100 defocus blurred images for testing. DUT [16] consists of 500 challenging defocus blurred images, many of which contain complex backgrounds and low contrast focal regions.

Evaluation metrics. Five standard metrics are used to evaluate the model: E-measure [43], S-measure [44], mean absolute error (MAE), the precision and recall (PR) curve [8,24,37], and F-measure. The E-measure evaluates the similarity between the prediction and the ground truth. The S-measure evaluates region-aware and object-aware structural similarity between the defocus map and the ground truth; more details about the E-measure and S-measure can be found in [43,44]. The F-measure is an overall performance measurement, formed as

F_β = (1 + β²) · Precision · Recall / (β² · Precision + Recall),

where β² is set to 0.3. MAE evaluates the average difference between the prediction map P and the ground truth G:

MAE = (1 / (W × H)) Σ_x Σ_y |P(x, y) − G(x, y)|,

where W and H represent the width and height of the images, respectively.
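For reference, MAE and the fixed-threshold F-measure can be computed as in the short sketch below; the binarization threshold of 0.5 is an illustrative choice, since adaptive and curve-based variants sweep the threshold instead.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted map and ground truth, both in [0, 1]."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure at one fixed binarization threshold."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt >= 0.5).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt >= 0.5).sum(), 1)
    return (1 + beta2) * precision * recall / max(beta2 * precision + recall, 1e-8)

rng = np.random.default_rng(0)   # random maps standing in for real predictions
p, g = rng.random((320, 320)), (rng.random((320, 320)) > 0.5).astype(float)
print(mae(p, g), f_measure(p, g))
```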
Implementation details
We utilize PyTorch to implement our model. ResNet-50 [45], pre-trained on ImageNet, is used as the backbone network. The 604 defocus blurred training images of Shi's dataset are used to train HEANet, and the above-mentioned test sets are used for evaluation. Our method requires region and edge ground truth for training, while the above datasets only provide region ground truth; as shown in Fig. 8, the edge ground truth is generated from the gradients of the region ground truth. For data augmentation, we apply multi-scale input, random cropping, and horizontal flipping. The initial learning rate is set to 0.05, and we use stochastic gradient descent (SGD) to optimize the network, with warm-up and linear decay strategies to adjust the learning rate. Momentum and weight decay are set to 0.9 and 0.0005, respectively. The batch size is set to 10, and the whole training process completes in 6K iterations with a maximum epoch of 101, taking about 1.5 h; two RTX 3090 GPUs are used for acceleration. During testing, we resize each image to 320 × 320 and feed it to HEANet to predict defocus blur maps without any post-processing.

Table 1 Quantitative comparison including F-measure (F_β, larger is better), MAE (smaller is better), S-measure (larger is better), and E-measure (larger is better) over the two widely used datasets. The best two results are marked in red and blue.
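The learning-rate schedule can be expressed as a single multiplier function. The sketch below pairs it with the stated SGD settings; the warm-up length (here one tenth of training) is an assumption, since the text does not specify it.

```python
# Minimal sketch of the optimizer and schedule described above (PyTorch).
import torch

def lr_lambda(step, warmup=600, total=6000):
    if step < warmup:
        return (step + 1) / warmup                          # linear warm-up
    return max(0.0, (total - step) / (total - warmup))      # linear decay

model = torch.nn.Conv2d(3, 1, 3, padding=1)                 # stands in for HEANet
opt = torch.optim.SGD(model.parameters(), lr=0.05,
                      momentum=0.9, weight_decay=0.0005)
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)

for step in range(6000):
    # ... forward / backward on a batch of 10 would go here ...
    opt.step()
    sched.step()                                            # one step per iteration
```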
Comparison with state-of-the-art methods
To evaluate the proposed HEANet, we compare it against 12 state-of-the-art algorithms: defocus blur detection via recurrently fusing and refining multi-scale deep features (DeFusionNet) [17], defocus map estimation using domain adaptation (DMENet) [28], high-frequency multi-scale fusion and sort transform of gradient magnitudes (HiFST) [37], multi-scale deep and hand-crafted features for defocus estimation (DHDE) [25], local binary patterns (LBP) [18], discriminative blur detection features (DBDF) [8], the spectral and spatial approach (SS) [36], the multi-stream bottom-top-bottom fully convolutional network (BTBNet) [16], deep blur mapping via exploiting high-level semantics (DBM) [27], classifying discriminative features (KSFV) [14], defocus blur detection via boosting diversity of deep ensemble networks (DENets) [34], and self-generated defocus blur detection via dual adversarial discriminators (SG) [35]. For all methods except DENets and SG, we download the results from Tang's homepage [17]; for DENets and SG, we use the authors' original implementations with their recommended parameters. Table 1 shows that our method outperforms the other approaches under four evaluation metrics: F-measure, MAE, S-measure, and E-measure. Our model achieves the top two results on Shi's dataset and the DUT dataset for all four metrics, demonstrating the superior performance of the proposed HEANet. Fig. 9 shows the precision-recall curves of the above-mentioned approaches on the two datasets; these curves indicate that HEANet performs better than the other models and has a good capability to detect defocus blur regions and generate accurate defocus blur maps.

Qualitative comparison. In Fig. 10, we visualize defocus blur maps produced by our model and other methods. It can be seen that HEANet clearly detects defocus blur regions and suppresses background clutter, and it is superior in handling a variety of challenging scenes, including low contrast focal regions (rows 1 and 6) and cluttered backgrounds (rows 3 and 4). Compared with the other counterparts, HEANet can not only distinguish the blurred and clear regions but also retain their sharp boundaries; the edges of in-focus objects predicted by HEANet are clearer, and the DBD maps are more accurate.
Ablation studies
The proposed HEANet contains four sub-networks: the HIPNet, the EFENet, the HEFNet, and the PFRNet. Among them, the EFENet and the HEFNet are combined to extract and fuse edge information. In this section, we carry out a series of experiments to investigate the effectiveness of each component. The quantitative results of the ablation studies are summarized in Table 2, and the qualitative results are shown in Figs. 11 and 12.

Fig. 9 PR and F-measure curves of 12 state-of-the-art methods over the two datasets. The first row shows the comparison of PR and F-measure curves on Shi's dataset [8]; the second row shows the comparison on the DUT dataset [16].

Effectiveness of HIPNet. We utilize the HIPNet to capture multi-scale global contextual features, which is the key sub-network for detecting low contrast focal regions. As seen in the 1st and 2nd rows of Table 2, adding the HIPNet to the backbone (HIPNet + ResNet-50) comprehensively surpasses the performance of ResNet-50 alone. To further verify the effectiveness of the HIPNet, we show a visual comparison in Fig. 11: the proposed HIPNet better handles complex scenes and can detect low contrast focal regions. Both results illustrate the effect of the HIPNet in our model.
Effectiveness of EFENet and HEFNet. The EFENet and HEFNet are the key sub-networks through which our model introduces and incorporates edge information. To investigate their effect, we run two experiments across the two datasets: one without EFENet and HEFNet, and one with them embedded. Comparing the 3rd and 5th rows of Table 2, the model with EFENet and HEFNet performs much better than the one without edge information. Several visual examples are shown in Fig. 12; with the help of EFENet and HEFNet, our method retains both accurate semantic information and edge information.
Effectiveness of PFRNet. As shown in the 4th and 5th rows of Table 2, the model with PFRNet performs better than the one without it.

Fig. 10 Qualitative comparisons of the state-of-the-art methods and our approach. The first and second columns show input images and their ground-truth images, respectively; the third column shows the output of our approach. The fourth to last columns show the state-of-the-art methods: DENets [34], SG [35], DeFusionNet [17], BTBNet [16], DHDE [25], DMENet [28], HiFST [37], LBP [18], SS [36], and DBDF [8].
Conclusion
In this paper, we propose a DBD approach named HEANet. To our knowledge, it is the first attempt to develop an end-to-end network with edge awareness for defocus blur detection. First, we adopt the HIPNet to efficiently extract and aggregate multi-scale contextual information. The EFENet is then used to capture the edge features of in-focus objects, and the HEFNet hierarchically fuses edge cues and semantic features for better localization. Finally, the PFRNet refines the feature maps to generate a DBD map with clear edges. Experimental results demonstrate that our network outperforms state-of-the-art methods on two widely used datasets without any pre-processing or post-processing.
Conflict of interest
Corresponding authors declare on behalf of all authors that there is no conflict of interest. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
| 6,652.8 | 2022-03-29T00:00:00.000 | ["Computer Science"] |
Turning On uracil-DNA glycosylase using a pyrene nucleotide switch.
Base flipping is a highly conserved process by which enzymes swivel an entire nucleotide from the DNA base stack into their active site pockets. Uracil DNA glycosylase (UDG) is a paradigm enzyme that uses a base flipping mechanism to catalyze the hydrolysis of the N-glycosidic bond of 2'-deoxyuridine (2'-dUrd) in DNA as the first step in uracil base excision repair. Flipping of 2'-dUrd by UDG has been proposed to follow a "pushing" mechanism in which a completely conserved leucine side chain (Leu-191) is inserted into the DNA minor groove to expel the uracil. Here we report a novel implementation of the "chemical rescue" approach to show that the weak binding affinity and low catalytic activity of L191A or L191G can be completely or partially restored by substitution of a pyrene (Y) nucleotide wedge on the DNA strand opposite to the uracil base (U/A to U/Y). These results indicate that pyrene acts both as a wedge to push the uracil from the base stack in the free DNA and as a "plug" to hinder its reinsertion after base flipping. Pyrene rescue should serve as a useful and novel tool to diagnose the functional roles of other amino acid side chains involved in base flipping.
A remarkable and evolutionarily conserved aspect of enzymatic recognition of damaged bases in DNA is the process of base flipping (1). This enzyme-induced conformational change in the DNA is a prerequisite for many enzymes to catalyze various chemical transformations on the base that require access to its functional groups. DNA glycosylases, which catalyze the first step in DNA base excision repair, are one general enzyme class that must act through a base flipping mechanism (2).
The most prevalent type of spontaneous DNA damage is that brought about by cytosine deamination, or by the misincorporation of dUTP into DNA during replication, resulting in the presence of uracil in DNA. The cytosine deamination route leads to G·U mismatches that can ultimately lead to G·C to A·T transition mutations after two rounds of DNA replication. Thus, a uracil DNA glycosylase activity has evolved to combat this unrelenting source of genomic instability. Flipping of 2′-dUrd by UDG has been proposed to follow a "pinch-push-pull" mechanism in which a trio of serine residues pinches the DNA backbone, producing a localized stress in the DNA (3)(4)(5), a completely conserved leucine residue (Leu-191) pushes through the minor groove to expel the uracil from the major groove (see Fig. 1A), and several enzyme hydrogen bond donors and acceptors pull and stabilize the extrahelical uracil in the active site. The pinch-push-pull mechanism appears to represent a highly conserved mechanism to promote base flipping, because corresponding interactions have been found in the structures of all DNA glycosylase-DNA complexes (6,7).
The kinetic mechanism of base flipping has been studied in significant detail for Escherichia coli UDG (8). In stopped-flow fluorescence studies, the overall base flipping process has been shown to be extremely rapid (k_flip ≈ 1200 s⁻¹), assisted by the enzyme, and about 10-fold faster than the chemical step of glycosidic bond cleavage (k_cl ≈ 150 s⁻¹). The specificity of the enzyme for flipping and cleavage of uracil, as opposed to all other naturally occurring DNA bases, was shown to derive from steric exclusion of other bases and from hydrogen bond complementarity of the active site with uracil (9). Consistent with these observations, UDG does not appear to be highly processive (10,11), which argues against a scanning mechanism involving transient flipping of normal DNA bases (12).
Although the structural biology of DNA glycosylase-mediated base flipping has seen tremendous advances in the last few years (2), our understanding of the forces that give rise to extrahelical bases and of the reaction pathway for base flipping is lacking. One approach to test our understanding of this process is to generate substrate analogs or enzyme mutants that lack functional groups hypothesized to be essential for the process, and then probe the damaging effect of the perturbation using biophysical methods. Conversely, the generation of substrate analogs or small molecules that "rescue" the damaging effects of enzyme mutations can provide valuable insights into the role of a functional group in the mechanism. In this report we use both approaches to test the proposed pushing role of Leu-191 of UDG in uracil flipping (see Fig. 1A). Deletion of this side chain results in a 10- to 625-fold decrease in k_cat/K_m using duplex DNA substrates containing a single U/P or U/A base pair (where P is the fluorescent adenine analog 2-aminopurine). We then tested the proposed role of this residue by inserting a pyrene nucleotide analog opposite to the uracil (Fig. 1A). We surmised that the bulky pyrene "base," which fills the entire space normally occupied by a normal U/A or U/P base pair (see Fig. 1B), might serve as a mechanical wedge to either force the uracil from the DNA base stack in the free DNA (pushing), or hinder its reinsertion once it is expelled in the UDG complex (plugging). We show here that pyrene completely rescues the damaging effects of the L191A and L191G mutations on site-specific DNA binding. In addition, pyrene totally rescues the damaging kinetic effects of the L191A mutation and partially rescues the kinetic parameters of L191G. The data support a mechanism in which pyrene wedges the uracil from the base stack in the free DNA, and then plugs the hole after the base departs (see Fig. 1C). One major role of Leu-191 appears to involve plugging the cavity that is left behind after the pinching forces have expelled the uracil, thereby increasing the lifetime of the extrahelical base. Because similar bulky amino acid side chains are involved in other DNA glycosylase base flips, a revised "pinch-push-plug-pull" mechanism for base flipping is suggested. These results further demonstrate the remarkable utility of nonpolar nucleoside analogs in the study of nucleic acid-protein recognition (13).
EXPERIMENTAL PROCEDURES
Oligonucleotide Synthesis-The substrates and substrate mimics were synthesized using standard phosphoramidite chemistry with an Applied Biosystems 390 synthesizer. The nucleoside phosphoramidites were purchased from Applied Biosystems or Glen Research (Sterling, VA), except for the β anomer of the pyrene nucleoside phosphoramidite and the 2′-β-fluoro-2′-deoxyuridine phosphoramidite, which were synthesized as described previously (8,14,15). The identity of the β anomer of the pyrene phosphoramidite was established by proton NMR and electrospray ionization mass spectrometry. After synthesis and deprotection, the oligonucleotides were purified by anion exchange HPLC and desalted by C-18 reversed phase HPLC (Phenomenex Aqua column). The size, purity, and nucleotide composition of the DNA were assessed by analytical reversed phase HPLC, matrix-assisted laser desorption mass spectrometry, and denaturing polyacrylamide gel electrophoresis. The DNA strands were hybridized as previously described (8) to form the duplexes used in the kinetic and binding studies, as shown in Table I. In these sequences, P = 2-aminopurine deoxynucleotide, Uβ = 2′-β-fluoro-2′-deoxyuridine nucleotide, and Y = pyrene deoxyribonucleotide. The concentrations of the oligonucleotides were determined by UV absorption measurements at 260 nm, using the pair-wise extinction coefficients of the constituent nucleotides (16) and the measured extinction coefficient of 9.6 mM⁻¹ cm⁻¹ (260 nm) for the pyrene nucleoside in 40% methanol.
Purification of UDG-As previously described, UDG from E. coli strain B was purified to >99% homogeneity using a T7 polymerase-based overexpression system (9,17). The concentration of the enzyme was determined using an extinction coefficient of 38.511 mM⁻¹ cm⁻¹ (18). The L191A and H187G mutants were generated using the QuikChange double-stranded mutagenesis kit from Stratagene (La Jolla, CA), and the mutations were confirmed by sequencing both strands of the DNA. The 6xHis-tagged mutant proteins were purified using nickel chelate chromatography as previously described (5). The His tag was removed by cleavage with biotinylated thrombin, followed by purification using streptavidin beads and nickel chelate chromatography.
Thermal Denaturation Studies-Solutions of 3 μM of each DNA duplex were melted in Teflon-stoppered 1-cm path length quartz cells on a Varian Cary UV-visible spectrophotometer equipped with a thermoprogrammer. Absorbance was monitored at 260 nm, and the temperature was ramped from 5 to 80 °C at a rate of 0.5 °C/min. In all cases, the duplexes displayed a sharp and apparently two-state transition. Melting temperatures were determined from first-derivative fits of absorbance with respect to 1/T. The DNA was then precipitated by the addition of 3 volumes of ice-cold 100% ethanol. Modification-specific strand cleavage was performed by adding 100 μl of 1 M piperidine to the DNA pellet and heating at 90 °C for 20 min. The DNA samples were then ethanol-precipitated, washed twice with 80% ethanol, combined with 6 μl of denaturing load buffer (80% formamide, 1× TBE, 0.02% bromphenol blue, 0.02% xylene cyanol), and resolved on a 20% sequencing gel (65 watts of constant power for 1 h).
Steady-state Kinetic Measurements-The steady-state kinetics of uracil glycosidic bond cleavage were determined at 25 °C in TMN buffer using the 2-aminopurine continuous fluorescence assay (19) or an analogous assay in which the increase in pyrene fluorescence due to glycosidic bond cleavage is followed. The steady-state kinetic parameters k_cat and k_cat/K_m were obtained from plots of the observed rate constants (k_obsd) against substrate concentration ([S]_tot) using a standard hyperbolic kinetic expression and the program Grafit 5, according to Eqs. 1 and 2,

k_obsd = (1/Δa)(ΔF/Δt)(1/[UDG]_tot) (Eq. 1)

k_obsd = k_cat [S]_tot / (K_m + [S]_tot) (Eq. 2)

In Eq. 1, ΔF/Δt is the initial rate in fluorescence units s⁻¹, Δa = ΔF_tot/[S]_tot, where ΔF_tot is the total fluorescence increase for 100% conversion of a given substrate concentration ([S]_tot) to product, and [UDG]_tot is the total UDG concentration. The values of Δa were determined either by letting the reaction go to completion or by adding 10-20 nM wild-type UDG to bring the reaction rapidly to its endpoint after completing the initial rate measurements. For the time-based scans with 2-AP, an excitation wavelength of 310 or 320 nm was used, and the emission was observed at 370 nm. For pyrene-based measurements, an excitation wavelength of 350 nm was used, and the emission was observed at 380 nm. The reliability of the new pyrene fluorescence assay was validated by direct measurements of the time dependence of uracil release using an HPLC-based method (20).
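The two equations amount to converting fluorescence slopes into observed rate constants and fitting a rectangular hyperbola. A short sketch with synthetic data, not the Grafit analysis itself, is given below.

```python
# Sketch of the steady-state fit (Eq. 2); substrate concentrations are made up.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(s, kcat, km):
    return kcat * s / (km + s)          # Eq. 2

s = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])   # [S]_tot (illustrative units)
rng = np.random.default_rng(2)
k_obsd = hyperbola(s, 23.0, 0.8) * (1 + 0.03 * rng.standard_normal(len(s)))

(kcat, km), _ = curve_fit(hyperbola, s, k_obsd, p0=(20.0, 1.0))
print(f"k_cat = {kcat:.1f} s^-1, K_m = {km:.2f}")
```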
Binding and Inhibition Studies-The dissociation constants (KD) for binding of UDG to the DNA molecules indicated in Table I were determined using two methods. First, for the 2-aminopurine (2-AP) and pyrene-labeled molecules, direct fluorescence binding measurements were made by titrating fixed concentrations of the DNA with increasing amounts of UDG. To minimize background fluorescence from tryptophan residues of the enzyme, an excitation wavelength of 320 nm was used when changes in 2-AP fluorescence were followed, and 2-AP emission spectra in the range 340-450 nm were collected. The binding data were fitted to eq. 3 after the background fluorescence of UDG was subtracted from each spectrum (Fo and Ff are the initial and final fluorescence intensities, respectively). When pyrene fluorescence was followed, excitation was at 350 nm and emission spectra from 370 to 450 nm were collected. The fluorescence intensity (F) at 380 nm was plotted against [UDG]tot to obtain the KD from eqs. 3 and 4. Second, for the inhibition studies, a sensitive HPLC kinetic assay for monitoring the formation of the abasic product was employed using the previously characterized trinucleotide substrate ApUpAp (kcat = 23 s⁻¹, Km = 20 µM) (20).

Molecular Modeling-A model for the pyrene nucleotide in the context of a duplex DNA bound to L191A UDG was generated from the crystal coordinates of the ternary complex of human UDG bound to the products uracil and abasic DNA (Protein Data Bank entry 1SSP). A truncated model was used that included the flipped-out abasic nucleotide, the two flanking DNA base pairs, uracil, and Leu-191 (Fig. 1A). The corresponding leucine side chain was mutated computationally to alanine, and the orphan adenine base was replaced with the pyrene moiety. This starting structure was then energy-minimized using the molecular mechanics module in PC Spartan Pro 1.03 (Wavefunction Inc., Irvine, CA). In this minimization, the DNA backbone, Ala-191, and uracil were frozen, whereas the DNA bases were allowed to move to accommodate the pyrene within the base stack. This procedure resulted in only minor alterations to the initial structure and was performed, not to establish a unique molecular model for the L191A-pyrene DNA complex, but to qualitatively illustrate that pyrene can be easily accommodated without structural conflicts.
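The fitted forms of eqs. 3 and 4 are not reproduced in this extraction. As a stand-in for the direct fluorescence titration analysis described above, the following sketch uses a common quadratic (ligand-depletion) binding form; this functional form, and all concentrations except the 35 nM KD quoted in the Fig. 4 legend, are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def quad_bind(E, Kd, D, Fo, Ff):
    # Fraction bound from the quadratic (tight-binding) solution for a fixed
    # DNA concentration D titrated with enzyme E; F interpolates Fo -> Ff.
    b = E + D + Kd
    frac = (b - np.sqrt(b**2 - 4 * E * D)) / (2 * D)
    return Fo + (Ff - Fo) * frac

D_fixed = 0.05                                  # uM DNA, invented
E = np.linspace(0.0, 1.0, 12)                   # uM UDG titration, invented
F = quad_bind(E, 0.035, D_fixed, 1.0, 2.4)      # simulate with Kd = 35 nM (Fig. 4B)

popt, _ = curve_fit(lambda E, Kd, Fo, Ff: quad_bind(E, Kd, D_fixed, Fo, Ff),
                    E, F, p0=(0.1, 1.0, 2.0))
print(f"fitted Kd = {popt[0] * 1000:.0f} nM")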
Design and Characterization of Normal and Pyrene Wedge Substrates-Nonpolar nucleoside analogs that lack the hydrogen bond donor-acceptor groups of the natural DNA bases, yet mimic the shape of natural bases or even base pairs, have attracted much interest as probes of protein-nucleic acid recognition (21). Most notably, the pioneering work of Kool and colleagues (22) has shown that DNA polymerase will specifically incorporate a pyrene (Y) nucleoside triphosphate (dYTP) opposite DNA sites that lack bases, confirming that steric complementarity is an important component of high-fidelity DNA replication. Although no high-resolution structures of pyrene-containing DNA are yet available, thermal melting experiments and circular dichroism measurements indicate that pairing of pyrene against an abasic site, or even a natural base, is only modestly destabilizing and retains the B-form of the duplex (23).
On the basis that pyrene occupies the volume normally filled by an entire DNA base pair, we envisioned that placing Y opposite U might substitute for the role of Leu-191 in the process of uracil flipping by uracil DNA glycosylase (UDG). Our approach was to design duplex substrates for UDG in which Y was substituted for A or P opposite U. We then tested whether the U/Y substrate selectively rescued the impaired binding and base-flipping activity of the L191A and L191G enzymes as compared with substrates with U/A or U/P base pairs. In addition, the pyrene rescue hypothesis predicts that other mutations of UDG that do not affect the base-flipping step would not be rescued by pyrene substitution. Two such mutations that have been previously investigated by our group are D64N, which removes the water-activating group, and H187G, which removes the catalytic electrophile (17, 24, 25). Thus, these mutant enzymes were also tested with the pyrene substrates.
The DNA sequences used in this study are shown in Table I. In addition to the insertion of the Y nucleotide, these sequences were designed to facilitate the kinetic and binding measurements. The incorporation of the fluorescent base 2-aminopurine (P) both opposite and adjacent to the excised uracil (AUA/TPT, PUA/TAT) allows continuous rate measurements of glycosidic bond cleavage (19). Both substrates were investigated to assess whether kinetic differences exist between substrates with U/A versus U/P base pairs. In addition, the incorporation of the nonreactive deoxyuridine analog 2′-β-fluoro-2′-deoxyuridine (Uβ) allows binding measurements in the absence of bond cleavage (PUβA/TAT, AUβA/TYT) (8). All of the DNA sequences in Table I were shown to be entirely in the duplex form as judged by electrophoresis on a 19% native polyacrylamide gel with visualization by UV shadowing (not shown). The Tm values are similar, falling in the range of 42.8-48.8 °C, which is about 18-24 °C higher than the temperature used for the kinetic and binding measurements (25 °C). These results confirm the previous observation that pyrene incorporation opposite normal bases does not significantly disrupt overall duplex stability (23).
KMnO4 Sensitivity of U or T Paired with Pyrene-The sensitivity of the 5,6-double bond of pyrimidines to oxidation, leading to cleavage of the glycosidic bond in the presence of piperidine, has long been used in DNA sequencing strategies (26) and as a probe of the solvent accessibility of T in single-stranded and duplex DNA (27). Here, we have used KMnO4 sensitivity to address the question of whether Y pushes the opposite U or T from the DNA base stack, rendering it more accessible to oxidation. Indeed, the results shown in Fig. 2 clearly show that nucleotides U6 and T6 paired with A are relatively insensitive to KMnO4 oxidation, but that U6 and T6 paired with Y show an enhanced sensitivity to oxidation. Careful quantification of the normalized intensities of the cleavage bands in Fig. 2 and two other experiments shows that U and T are 65 to 70% as sensitive to oxidation as the same bases in a single-stranded context (see legend to Fig. 2 for details). Note that the cleavage intensities of U6 cannot be directly compared with T6 in these experiments, because free uridine is about 5-fold less reactive than free thymidine to oxidation by KMnO4 (28). Therefore, the sensitivities of U6 or T6 in the duplex context must be compared with the corresponding bases in the single-stranded form, as we have done in Fig. 2B. It is also important to note that the enhanced sensitivity to oxidation cannot be explained by pyrene-induced duplex denaturation, because the T9 base that is paired with A shows the same sensitivity regardless of whether there is a U6(T6)/A or U6(T6)/Y pair at position six (Fig. 2B). We conclude that the placement of pyrene opposite to U or T significantly increases the solvent accessibility of these bases and may preorganize them into an extrahelical conformation.
Catalytic Activity and DNA Binding of wtUDG with U/A, U/P, and U/Y Duplexes-The steady-state and single-turnover kinetic activity of wtUDG has been extensively studied in this laboratory using a variety of duplex and single-stranded substrate DNA molecules (5, 17). This work has led to the conclusion that the overall rate for UDG under kcat conditions is severely limited by the product release step. This complicates the rigorous interpretation of the damaging effects of mutations derived from kcat measurements, because the mutated enzymes are invariably rate-limited by the chemical step, rather than product release, leading to a large underestimate of the effects of the mutations. Fortunately, kcat/Km does not suffer the same shortcoming, because the product release step is not involved in this kinetic parameter (29), and comparisons between kcat/Km values for wild-type and mutant UDG enzymes generally provide quite good estimates of the true damaging effect of the mutations (20). Important for interpreting the studies here, the same complications hold true when comparing the kcat values for substrates that have different product release rates. Thus, the most meaningful kinetic parameters for comparison under these conditions are kcat/Km and Km.
Representative steady-state kinetic data for wtUDG with substrates containing U/A and U/Y pairs are shown in Fig. 3, and the complete set of kinetic parameters for all substrates is reported in Table II. Inspection of Table II reveals that the kcat/Km values for the single-stranded substrate and the U/P and U/A duplexes are nearly identical (≈35 µM⁻¹ s⁻¹). Surprisingly, the U/Y pair is not disruptive to catalysis and actually increases the kcat/Km by 3.8-fold as compared with U/P and U/A. In addition, the Km values for the U/Y, U/P, and U/A duplexes are the same within a factor of two. Thus, pyrene substitution has a small salutary effect on the kinetic parameters of wtUDG. Comparison of the kcat values for the duplex substrates containing U/A, U/P, and U/Y pairs shows that pyrene enhances kcat by 3.5- to 8-fold, which is similar to the enhancement seen with ssU single-stranded DNA (2.4- to 5.6-fold). We have previously shown in single-turnover experiments that uracil is excised from single-stranded and duplex DNA forms with equal facility by UDG (≈120 s⁻¹) and that the enhanced reactivity of ssDNA under kcat conditions is due to its faster dissociation from the product complex (8, 17). The same phenomenon likely explains the increased kcat of the substrate with the U/Y pair observed here. This conclusion is fully supported by the kinetic findings that pyrene has no effect on the kcat values of the D64N and H187G mutants, for which the chemical step is fully rate-limiting (see below). In accord with the kinetic measurements, pyrene substitution is found to have a small beneficial effect on the binding affinity of wtUDG to DNA containing the nonreactive 2′-β-fluoro-2′-deoxyuridine substrate mimic (Fig. 4), with the binding constant for AUβA/TAT being 2.6-fold weaker than for AUβA/TYT (Table II). The simplest interpretation of these kinetic and thermodynamic results for wtUDG is that the placement of pyrene opposite to U increases the proclivity of uracil to be in a catalytically productive extrahelical state. As will be shown below, the pyrene rescue effect is significantly greater for the L191A and L191G mutants that are deficient in stabilizing the extrahelical state.
Detrimental Effects of Removing Leu-191-A leucine residue corresponding to Leu-191 of E. coli UDG is completely conserved in UDG sequences from bacteria to humans, suggesting a strategic role for this side chain in the cellular function of UDG. Despite its conservation and obvious key structural role in base flipping (Fig. 1A), the steady-state kinetic studies using L191A and L191G indicate that uracil can still be excised from both duplex and single-stranded DNA substrates without this functional group, although much less efficiently (Table II). The detrimental effect of removing the two methyl groups of Leu-191 on the specificity constant kcat/Km is found to be both sequence- and DNA structure-dependent. The kcat/Km values of L191A with the AUA/TPT and PUA/TAT duplex substrates are 8.3- and 56-fold less than wtUDG, whereas the activity with the single-stranded substrate ssPUA is reduced by 14.7-fold. The data also reveal that the L191G enzyme is even more catalytically damaged than L191A (Table II). The kcat/Km for L191G is reduced by 125- to 625-fold depending on the substrate, resulting from both kcat and Km effects. The binding affinities of L191A and L191G for the substrate mimic AUβA/TAT are decreased by 14- and 43-fold, confirming the observed mutational effects on Km. The greater catalytic activity of L191A as compared with L191G likely reflects the residual beneficial effect of the additional methyl group, as might be expected from the proposed steric role of the full leucine side chain. The detrimental effects on kcat/Km observed here fall in the same range recently reported for the L191A and L191G mutations using different substrates (8- to 80-fold). However, that previous study did not investigate binding or DNA sequence effects (30).

FIG. 4. Representative binding of wild-type UDG and L191A to substrate mimic DNA. A, binding of AUβA/TYT to wtUDG as determined using the competitive inhibition assay. The line is the best fit of the data to eq. 5. B, binding of AUβA/TYT to L191A as determined using the direct pyrene fluorescence binding assay. The line is the best fit of the data to eqs. 3 and 4 (KD = 35 ± 5 nM).
We conclude that Leu-191 is important but not essential for base flipping, and that removal of this residue has a similar detrimental effect on the excision of uracil from single-stranded and duplex DNA. This observation requires that the effect of Leu-191 be realized at a step that is shared by both single-stranded and duplex DNA. This step likely follows the partial or complete expulsion of the uracil from the duplex, because significant differences would be expected in the action of L191A or L191G on single-stranded and duplex DNA substrates if this residue were involved in actively pushing the uracil from its nestled position in the DNA base stack. This conclusion is further supported by the pyrene rescue results.
Specific Rescue Using the Pyrene Wedge-In substantial contrast with wtUDG, the specificity constants of L191A and L191G for the substrate containing the U/Y pair are increased by 16- to 195-fold as compared with the substrates containing the U/P and U/A pairs, respectively (Table III and Fig. 5). In fact, pyrene completely reverses the damaging effect of the L191A mutation on each kinetic parameter (Tables II and III) and produces a 28- to 170-fold increase in the binding affinity of L191A and L191G for the substrate mimic DNA. It is interesting that both L191A and L191G show 5- to 6-fold larger pyrene rescue effects for kcat/Km when the reference substrate has a U/A base pair as opposed to a U/P base pair (Table III). A likely explanation for this difference is that the ≈0.5 kcal/mol lower stability of the U/P base pair (as estimated from stability measurements of T/P pairs (31)) renders the uracil easier to flip as compared with the U/A pair. In control experiments, the substrate AUA/TAT gave kinetic results indistinguishable from PUA/TAT, eliminating the possibility that the reactivity difference between AUA/TPT and PUA/TAT arises from the substitution of P for A at the 5′ position (data not shown). The identical or very similar rescue effects for the substrates AUβA/TAT and PUβA/TAT further reinforce this kinetic control (Table III).
The rescue effect is specific for the Leu-191 deletion mutations, because the kinetic and binding parameters for D64N are essentially the same for the U/A and U/Y pairs (Table III). The D64N mutation, which removes the carboxylate group that activates the water nucleophile and increases the kinetic barrier of the chemical step by 3000-fold (17), has been previously shown to have only a modest effect on DNA or uracil binding. Thus, the null effect of pyrene substitution for this mutant provides a dramatic demonstration of the specificity of the pyrene rescue for mutations that affect the base-flipping step. A similar null pyrene effect was obtained with the H187G mutant (not shown); His-187 forms a strong interaction with uracil O2 in the transition state, yet its removal has little effect on DNA binding or base flipping (17). We anticipate that pyrene rescue will be a useful tool to further elucidate the functional roles of other participants in base flipping, such as the serine side chains involved in phosphodiester compression (pinching).
A Conceptual Framework for Uracil Flipping-In a previous rapid kinetic study of base flipping by UDG, we proposed that the enzyme paid the energetic cost for flipping the uracil base by deforming the duplex before the base-flipping step (8). This proposal was supported by the surprising observation that duplexes containing Uβ/A and Uβ/G pairs, as well as single-stranded Uβ DNA, had similar internal equilibrium constants for base flipping (Kflip) and essentially identical rate constants for docking the uracil into the active site pocket. This led to the conclusion that, during the initial encounter complex with the DNA, UDG used binding energy to destabilize the duplex such that extrusion of the uracil was facilitated. In the discussion of the current results, it is useful to divide the overall free energy change for base flipping (ΔGobsd) into two discrete free energy components, a destabilization term (ΔGD) that represents all unfavorable changes that are necessary to set up for the base flip, and a favorable term (ΔGS) that represents all the stabilizing interactions that are gained as the base is nestled into its final resting place in the active site (Fig. 6). Thus, with sufficient structural and functional information, it may be possible to classify mutational (or substrate) effects as contributing to destabilization (ΔGD) or stabilization (ΔGS). Interpretations using this simple framework are not without peril, because the only measurable parameter is the net effect ΔGobsd = ΔGD + ΔGS, and some mutations could act to both stabilize and destabilize. Nevertheless, this framework is useful in the current case and provides a qualitatively consistent view of the role of Leu-191 and pyrene in the process of stabilizing the extrahelical uracil.

b Control experiments demonstrated that the substrate AUA/TAT was indistinguishable kinetically from PUA/TAT, eliminating the possibility that the reactivity difference between AUA/TPT and PUA/TAT arises from the substitution of P for A at the 5′ position (data not shown). Thus, the observed differences are due to the placement of A or P opposite to the uracil.

FIG. 5. Relative effects of pyrene on the kinetic and binding parameters for wtUDG and mutants. The pyrene rescues for the indicated kinetic and binding parameters are shown as -fold changes compared with substrates with a U/A or U/P pair as indicated (see Table III).
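To put the fold effects quoted in this study on the same free-energy scale as the framework above, the standard thermodynamic conversion (not stated explicitly in the text) is

ΔGobsd = ΔGD + ΔGS
ΔΔG = RT · ln(fold change)

At 25 °C, RT ≈ 0.593 kcal/mol, so the 3.8-fold kinetic enhancement for wtUDG corresponds to RT ln 3.8 ≈ 0.79 kcal/mol, and the 2.6-fold change in KD corresponds to RT ln 2.6 ≈ 0.57 kcal/mol, on the same order as the ≈0.7 kcal/mol value quoted below.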
Consideration of these results suggests distinct energetic roles for pyrene and Leu-191 in promoting productive binding of the extrahelical base. Because the removal of Leu-191 has a similar detrimental effect on the excision of uracil from single-stranded and duplex DNA, and the rates of base flipping for wtUDG are indistinguishable for duplex and single-stranded DNA (8), Leu-191 is likely to act after the initial duplex destabilization step. This conclusion seems inescapable because, if the mechanism only involved active expulsion of the uracil by Leu-191, then the detrimental effect of its removal would certainly be greater for duplex DNA than for single-stranded DNA. This suggests an additional role for Leu-191 as a block, or plug, to hinder reinsertion of the uracil into the duplex or single-strand stack.2 In support of a similar role for Leu-191 in enhancing the binding of single- and double-stranded DNA, the recent crystal structure of E. coli UDG bound to UAAp shows that Leu-191 resides only 3.5 Å from the modeled position of the 3′-flanking deoxyribose in this complex, suggesting it could easily serve as a plug to impede exit of the single-stranded DNA from the active site (5). A similar position for Leu-191 is seen in the structures of duplex DNA bound to human UDG (3, 4).
Pyrene is likely to serve a plugging role analogous to that of Leu-191 but may also diminish the penalty for duplex destabilization by preorganizing the U in an extrahelical position (upper pathway in Fig. 6).2 Such extrahelical preorganization by pyrene would be expected to enhance the binding of wtUDG to the U/Y duplex, which is indeed observed (Table II), although the magnitude of this effect is fairly small (≈0.7 kcal/mol). The much larger effect of pyrene on the binding affinities of the L191A and L191G mutants, which are defective in positioning or holding the uracil in its productive extrahelical position, suggests that the major role of pyrene, like that of Leu-191, is to increase the lifetime of the extrahelical uracil by a plugging mechanism.
A model depicting the roles of Leu-191 and pyrene in promoting productive base flipping is shown in Fig. 6. We speculate that serine pinching is involved in destabilization of the duplex before the extrusion step (ΔGD), which leads to the stabilizing interactions of Leu-191 and the other active site groups that interact with the extrahelical uracil (ΔGS). Further support for these ideas will follow from rapid kinetic investigations of the various base-flipping mutants of UDG.3

Implications-The crystal structures of three other glycosylase-DNA complexes have revealed that a bulky amino acid side chain consistently resides in the position occupied by Leu-191 of UDG, suggesting that plugging may be a common component of the mechanism for stabilizing extrahelical bases (6, 7). Can UDG mutants that are defective in flipping be targeted to specific uracils, or even other bases in DNA, by pyrene antisense rescue? Although we have shown at most a 200-fold pyrene rescue effect on the specificity constant of L191A, it may be possible to increase selectivity for U/Y sites by making further mutations directed at the flipping step or perhaps by altering the solution conditions. Because active site mutants of UDG have already been shown to possess catalytic promiscuity by removing T or C bases (32), it may also be possible to target specific C/Y or T/Y sites in DNA using this pyrene rescue strategy. Given the enormous catalytic power of UDG, a large part of which resides in fairly nonspecific interactions with the DNA backbone (5, 20), UDG offers an exceptional scaffold for engineering new glycosylase activities.

2 If the major role of Leu-191 is to increase the lifetime of the extrahelical base by a plugging mechanism, then it would be expected that the L191A mutation would increase the dissociation rate of the productively bound DNA, with a lesser effect on the association rate. Detailed stopped-flow fluorescence studies on the binding of Uβ/A and Uβ/Y DNA to wtUDG, L191A, and other base-flipping mutants will be reported elsewhere (Y. L. Jiang and J. T. Stivers, manuscript in preparation). However, the present data using L191A clearly demonstrate that this mutation increases the off-rate of Uβ/A DNA by over 7-fold, with only a 3-fold effect on the on-rate. In addition, pyrene enhances the DNA association rate for the L191A mutant by 5-fold and slows the dissociation rate by 3.3-fold as compared with Uβ/A DNA. These results support the conclusion that a major role of Leu-191 is to impede the exit of the extrahelical base and confirm that pyrene rescue is composed of two effects: preorganization of the extrahelical uracil in the free DNA and plugging the hole to hinder reinsertion of uracil after its flipping into the active site.

FIG. 6. The overall free energy change for binding a flipped-out base (ΔGobsd) may be viewed as the sum of a destabilization term (ΔGD) that represents the unfavorable energetics for reorganizing the DNA duplex and protein before expelling the uracil, and a stabilization term (ΔGS) that is derived from all the favorable interactions that are gained when the uracil is productively docked in the active site pocket. The present results, as well as previous kinetic and structural studies of uracil flipping (3, 4, 8), suggest that phosphodiester backbone compression (serine pinching) is in part responsible for destabilizing the duplex. A significant plugging role for Leu-191 is suggested, which leads to a more positive ΔGS value when Leu-191 is removed, and to the rescuing effect of pyrene (X = Y). The upper pathway depicts an additional role for pyrene (represented as hashed marks in the DNA duplex) in preorganizing the uracil in a reactive extrahelical position, thereby partially bypassing the unfavorable duplex destabilization that must be paid for by UDG binding energy. This upper pathway is supported by the observation that uracil in a U/Y pair shows increased sensitivity to oxidation and that wtUDG binds more tightly to the U/Y pair. In the final structure on the right, the asterisk designates the presence of pyrene or Leu-191 to stabilize the extrahelical state.
"Biology",
"Chemistry"
] |
Integrating Human Capital into National Development Planning in Singapore
Introduction
When Singapore gained independence in 1959, its literacy and morbidity rates - two important measures of human capital - were similar to those of other lower-middle-income countries. In 1960, the port city-state's per capita gross domestic product was USD428, less than the world average of USD453 and less than one-sixth that of the United States.1 There was little reason to expect that this small country (716 square kilometers in area) would become a world leader in the health and education of its people.

The government of the nascent state inherited an educational system divided along ethnic lines. English, Chinese, Malay, and Tamil schools each had their own curriculum and language of instruction. The country faced a host of epidemic diseases common to tropical countries, such as malaria, dengue, cholera, and viral hepatitis. Poor education and health outcomes were major constraints on economic development.

Modernizing the economy and achieving prosperity required building and harnessing Singapore's human capital. The government charted a new path for Singapore, adopting a fast-paced industrialization strategy to create employment for an unskilled workforce and generate export earnings. Implementing that plan required overcoming fundamental challenges such as ethnic tension, high unemployment, regional instability, and turmoil caused by Britain's withdrawal from the former colony. From these difficult and uncertain beginnings, Singapore would attain the highest score of any country in the world in the World Bank's 2018 Human Capital Index, which took into account several health and education indicators. This delivery note examines some of the strategic decisions and policies that contributed to Singapore's success (Yusuf 2020).

Singapore's experience illustrates the "whole of government" approach that, according to the Human Capital Project, can overcome challenges countries face in developing their human capital. The three elements of this approach are continuity (sustaining effort across political cycles), coordination (ensuring that sectoral programs and agencies work together), and evidence (expanding and using the evidence base to improve and update human capital strategies). This delivery note focuses on the continuity and coordination that defined Singapore's success (Human Capital Project 2019a).
Development Challenge
In the 1960s, Singapore had limited human capital. Its poor educational system limited its economic development, and its people lacked access to quality, affordable healthcare.
Delivery Challenges
Singapore had to grapple with several delivery challenges as it struggled to improve its health and education systems and develop its human capital. A whole of government approach was essential to the country's success in overcoming these potential obstacles.
Change in priorities or lack of commitment
To improve human capital, Singapore had to set clear policies to guide ministries and agencies and ensure that those policies were consistently implemented over time. Sustained commitment was important because it could take many years for investment in education and health systems to yield clear results. Political dynamics, resource shortages, and other factors could stand in the way of such continuity.
Inter- and intragovernmental relations
Developing human capital quickly and effectively required strong coordination across the government and among Singapore's development partners, as well as investments in sectors beyond education and health that can help citizens thrive (Human Capital Project 2019b). This can be difficult, however, as policies could be siloed within a certain ministry or agency that might resist sharing authority with other institutions.
Stakeholder engagement
Singapore wanted to develop its education and health systems to meet the needs of its citizens and to advance its industrialization strategy, which required an educated and healthy workforce. The challenge was how to communicate effectively with beneficiaries and stakeholders about their needs and tailor policies accordingly.
Financing
A fundamental obstacle to developing Singapore's health and education systems was allocating sufficient money to those systems. Providing high-quality services while controlling costs could be a daunting problem. Singapore did not have reserves of natural resources it could use to fund investment in health and education. The government needed innovative financing arrangements to ensure that its people's needs were met.
Addressing Delivery Challenges
To develop its human capital, Singapore built its education and health policies on a clear vision of its economic future, focused on coordination within government and with industry, and refined goals and adapted policies over time.
Building commitment across government
Focused development planning encouraged coordination and commitment among government institutions. Singapore's leaders built their education and health policies on attracting multinational companies that would bring in jobs and investment. In 1961, Singapore adopted its first four-year State Development Plan. Created in consultation with the World Bank and the United Kingdom, which helped finance the plan, this document spelled out an export-led industrialization strategy to prepare Singapore to manufacture high-value goods (Parliament of Singapore 1960). The plan articulated a clear vision of how the government wanted to develop its human capital. The ideal Singaporean worker would have basic education and technical training to suit the needs of multinational companies and be healthy enough to work in a factory. To create this workforce, Singapore would focus on schooling, inculcating technical skills, and eradicating endemic diseases. Forty percent of the roughly USD285 million committed to the development plan was earmarked for the education and health sectors. Singapore expected the plan to be mostly self-funding, with USD193 million coming from domestic sources of income. These included projected revenue surpluses, reserve funds in the government and statutory boards, and floating loans in the Singapore market. The remaining USD92 million would be financed by external assistance from the United Kingdom and the World Bank (Lim 2017). Singapore allocated approximately 26 percent of its USD358 million budget for 1965 to education and 14 percent to health services and public health (Lee 2018).2 Within the context of the development plan, there was institutionalized coordination within the government. Each month, the permanent secretaries of each ministry met to focus on matters that required the cooperation of more than one ministry. The government established a culture of teamwork, expecting that ministers would work together on matters that required interagency cooperation (Haseltine 2013).
Over the years, Singapore refined its development plans, reassessing needs and setting new goals to guide government institutions. By the early 1970s, Singapore's per capita GDP had surpassed the world average and was rapidly increasing. This growth would eliminate the need for external donor assistance to support its development planning.
As the economy grew, the government adopted more ambitious goals for its workforce and adapted economic and health policies accordingly. Initially, Singapore's focus was on the quantity of workers with the necessary skills, ensuring that there were enough healthy people with a basic education to prepare them for work in light manufacturing. As industrialization advanced, the quality of education became more critical, and the health needs of citizens changed. Through it all, a common vision for the country's future guided policymaking and encouraged coordination within government.
Coordinating within government and with industry to build the education sector
In the early 1960s, Singapore began to lay the groundwork for an education and training system to teach the skills needed to underpin a modern industrial economy, with a focus on serving the needs of multinational companies. The primary objective during the 1960s and 1970s was free primary education for all. The government began unifying the country's school system by introducing a common syllabus, with an emphasis on mathematics, science, and technical subjects, and mandated six years of primary schooling for all children (Yusuf and Nabeshima 2012). After six years, students took an exam to determine whether they would continue to secondary school or enroll in a vocational training program. The government began a school construction program to provide the infrastructure needed to educate a growing number of students. Singapore also recruited and trained teachers to meet increasing demand for skilled educators and improved vocational training (Human Capital Project 2019a).
The education ministry, schools, polytechnic institutions, the economic development board, and other official institutions coordinated in developing the education sector. They tracked trends in labor demand and consulted with businesses to match the skills that the education system taught with the market's needs. The government cooperated with foreign investors on the training of technicians for their factories, persuading them to train twice the number of technicians needed; the investors chose from among the pool of graduates, and the remaining qualified technicians helped attract new investors. As the number of training centers grew, the government consolidated them under what became the Institute of Technical Education (under the authority of the education ministry), which determined the number of people to be trained based on projections of labor demand and students' preferences (Yusuf and Nabeshima 2012). In 1980, the education ministry established the Curriculum Development Institute of Singapore to bring what was taught in schools in line with Singapore's long-term industrial ambitions. The ministry also installed an information-gathering mechanism that helped school administrators assess the strengths and weaknesses of their institutions and track student performance.
As Singapore's development plan succeeded in attracting investment and employing people in factories, the government changed its focus to attracting higher-value manufacturing. To achieve this goal, the country needed a workforce with better literacy, numeracy, and language skills. By the late 1970s, it was clear that learning outcomes were falling short of expectations. A majority of students were unable to master English as a second language, literacy scores were low, the dropout rate was high, fewer students went to secondary school than the government intended, and teacher morale was low. A 1979 report from Deputy Prime Minister Goh Keng Swee initiated a set of education reforms implemented throughout the 1980s. The objectives of these reforms were to improve the quality of education, enhance language aptitude, and foster science and engineering skills. The reforms addressed management of schools, pedagogical practices, curriculum content, teacher training, performance assessment, and government oversight.
Teacher recruitment evolved, with the government making the pay scales of teachers comparable with those of engineers and lawyers. The government also developed a certification program for teachers and provided continuous training (Yusuf 2020).
Integrating health into urban planning and advancing the health system
The 1961 development plan and subsequent government policies set clear goals for improving health outcomes. Singapore recognized that healthcare was an integral part of overall development planning, but this meant more than simply building hospitals and clinics. Because Singapore was a densely populated city-state, many aspects of urban life affected people's health, such as housing, water supply, food supply, air quality, and waste disposal (Yusuf 2020).
Singapore considered health conditions in every aspect of urban planning, taking a comprehensive approach that required the cooperation of numerous ministries (Haseltine 2013). A vital institution in this effort was the Housing and Development Board (HDB), set up in 1960, which invested in clean, affordable housing to replace the unhealthy slums and squatter settlements many Singaporeans lived in. By 1965, it had built nearly 55,000 apartments, and the housing problem was solved by 1970. By 2013, almost 85 percent of Singaporeans lived in flats created by the HDB (Haseltine 2013).
The government placed a high priority on bringing diseases under control and expanding access to clinics and hospitals. By 1966, Singapore had instituted vector control, inoculation, and other measures such as reducing pools of brackish water where mosquitos bred. The government's focus was such that it permitted inspectors to enter homes to search for and remove items that could collect water and attract dengue-causing mosquitos. Households where mosquito larvae were found in containers could be fined (Yusuf 2020).
Another early move was to develop a network of outpatient dispensaries and maternal and child health clinics to make primary healthcare services more accessible. These clinics gradually evolved into well-equipped modern medical centers, a foundation of the health system (Haseltine 2013). The government initiated a nationwide program to make primary health services widely available for a nominal fee, which helped keep overall health costs down through screening and prevention. Evolving public health efforts, environmental management, vaccines, and treatment helped control infectious diseases (Yusuf 2020). As Singapore's economic development progressed, it had more resources to invest in its health system. In the 1980s, the government initiated the Healthcare Manpower Development Program to train doctors locally and abroad to serve in health facilities. Singapore partnered with foreign health organizations and hospitals to address the lack of specialists. This program largely erased medical personnel shortages and allowed the government to create specialist healthcare centers (Yusuf 2020).
A milestone in Singapore's health system development was the 1983 National Health Plan, which provided all citizens with affordable healthcare. The government helped dampen rising costs by giving public hospitals some independence in managing their own operations. The health ministry set up a holding company in 1985 that took ownership of public hospitals, giving the ministry oversight and guidance while granting hospitals significant autonomy (Haseltine 2013).
Promoting citizen savings as a pillar of health and education financing
One tactic that helped Singapore overcome financing challenges was the government's focus on mandatory savings accounts and means-tested subsidies, which helped keep the government's health and education costs down. In the 1970s, the government spent more than 40 percent of its budget on education as it built infrastructure and recruited and trained a teaching workforce. That level of expenditure gradually dropped as the government shifted more of the spending burden to citizens, whose incomes were increasing as the country developed. From 2000 to 2013 (the most recent data point), Singapore's public expenditure on education ranged from 17 percent to 22 percent of the national budget.
The government created a system in which households that could afford to pay out of pocket for education did so, with government help. Starting in 1993, every child in Singapore received an education savings account into which the government paid USD200 annually during primary school education and USD240 during secondary education, as long as the child was enrolled in a full-time school, vocational program, or special education program. The government also matched any payments made into a post-secondary education account, up to a certain limit (Yusuf 2020).
For healthcare, Singapore avoided entitlement programs and evolved a mixed healthcare financing system that featured extensive means-tested government subsidies to ensure equitable access to basic healthcare. The government tailored subsidies to patient age and ability to pay, and users incurred high co-payments from mandatory health savings accounts. Beginning in 1990, a basic health insurance program protected citizens against large hospital bills and certain costly outpatient treatments. Employees and employers contributed to mandatory savings accounts for medical expenses, and the government established an endowment fund to help those who had difficulty paying medical bills. As of 2017, Singapore's current health expenditures were 4.4 percent of gross domestic product, lower than the East Asia and Pacific average of 6.7 percent and the world average of 9.9 percent.3 Public expenditure on healthcare constituted only one-third of overall expenditure on health, compared with more than 81 percent in Japan and the United Kingdom. The health system evolved to the point that the country became the premier medical destination in Southeast Asia (Yusuf 2020).
Outcomes
Over decades, Singapore developed from a country with a low-skilled workforce and a high incidence of tropical diseases to the best-performing state in the World Bank's 2018 Human Capital Index, which derived a score from a set of health and education indicators. The index reported that the probability of survival to age five was 99.7 percent, Singaporean students could expect to have attended 13.9 years of school by the age of 18, and student test scores were the highest in the world. Ninety-five percent of people who attained the age of 15 were expected to survive to age 60. These metrics were all far above the East Asia and Pacific regional average and the world average.4 This focus on developing human capital allowed Singapore to leapfrog from production of low-value items in the early 1960s to assembly of high-value items such as hard disk drives and precision machine tools in less than two decades.

3 Data from the World Bank: https://data.worldbank.org/indicator/SH.XPD.CHEX.GD.ZS
Lessons Learned
Singapore's strategy for building its education and health sectors, specifically its emphasis on continuity and coordination, demonstrated how a whole of government approach contributes to strong human capital development.
An overarching national development plan encouraged continuity and coordination in government policies
Singapore overcame potential coordination and commitment challenges in part by setting, early in its existence as an independent country, clear national goals to guide the work of all government institutions. Regular meetings encouraged coordination by bringing together institutions whose cooperation was necessary for implementing health and education policies.
The government sustained commitment to its human capital development strategy over time, adapting its goals to meet evolving needs as it sought to attract higher-value manufacturing.
Coordinating with industry helped optimize education and training
Singapore configured its education system around the needs of employers. The government coordinated with companies to understand the skills and backgrounds they needed in potential employees and collaborated on training programs that filled company needs while producing excess capacity to help attract new companies. This policy was effective in reducing unemployment and drawing foreign capital into Singapore that could fund its broader development goals.
Improving health outcomes requires the contribution of many institutions
In the early years of independence, before Singapore had the resources to invest heavily in its formal health system, it took measures to improve health outcomes that cut across different sectors, such as housing, food supply, and waste disposal. Realizing these goals required coordination among institutions outside the formal health sector, for example the Housing and Development Board. Such efforts helped reduce the incidence of tropical diseases and other maladies that had affected the population at the time of independence. These early gains paved the way for building a modern health system in subsequent decades.
"Economics",
"Education",
"Political Science"
] |
Toddler Nutritional Status Classification Using C4.5 and Particle Swarm Optimization
Purpose: This research was conducted to create a classification model in the form of an optimal decision tree. Optimal here means the combination of parameters that produces the highest accuracy compared with other parameter combinations. The best model is then used to predict the nutritional status class of new data. Methods/Study design/approach: The dataset used is from Nutritional Status Monitoring in 2017 in Riau Province, Indonesia. The Knowledge Discovery in Databases (KDD) stages were carried out on the dataset to build several classification models in the form of decision trees. The decision tree with the highest accuracy is then selected to predict the class of new data. Predictions for new (unclassified) data are made in a web-based system. Result/Findings: Particle Swarm Optimization (PSO) is used to find the optimal parameters. Before PSO is applied, there are 213 parameters in the dataset that could be used for classification; using that many parameters is time-consuming. After PSO is applied, the optimal parameter set found is a combination of four parameters, which produces the most accurate decision tree. The four chosen parameters are gender, age (in months), height, and the way the height was measured (standing or lying down). The best decision tree has an accuracy of 94.49%. From this decision tree, a web-based system was built to predict the class of new (unclassified) data. Novelty/Originality/Value: Particle Swarm Optimization (PSO) is a method that can help select the most optimal parameters, in other words those producing the highest classification accuracy. The selected combination of parameters has also been confirmed by a nutritionist. The prediction system has been declared feasible for use by nutritionists through a User Acceptance Test (UAT).
INTRODUCTION
Nutritional status is a result of daily nutritional intake that describes an individual [1]. Monitoring nutritional status helps to optimize an individual's growth and development [2]. The toddler age (also known as the golden age) is an important period for optimizing toddlers' growth and development [3]. Toddler nutritional status is evaluated based on three indicators, namely weight-for-age (W/A), weight-for-height (W/H), and height-for-age (H/A) [4]. Based on H/A, the nutritional status of toddlers can be categorized as very short, short, normal, or tall [5]. The combination of the very short and short categories is known as stunting. The results of Nutritional Status Monitoring in 2017 show that the H/A nutritional status problem (stunting) is the most prevalent nutritional problem, compared with W/A (underweight) and W/H (wasting), as shown in Figure 1 and Figure 2 [4].
Stunting is a condition of toddlers who are too short for their age due to inadequate nutrition and repeated bouts of infection during the first 1000 days of their life [6]. There are several causes of this condition, including poor parenting and insufficient consumption of nutritious food due to the mother's lack of knowledge. Stunting affects intelligence and physical development, increases susceptibility to disease, and risks decreasing productivity in the future. More broadly, stunting inhibits economic growth and increases poverty [7]. One of the health development goals for reducing stunting (also called the Stunting Intervention Program) is providing public nutrition education through the Public Nutrition Improvement Program. However, this program has been carried out so rarely that the Stunting Intervention Program has not been effective [7]. One of the actions recommended by [8] to deal with stunting is to carry out preventive activities, which include improving the identification, measurement, and understanding of stunting. This shows the importance of delivering information about stunting to the public, so that they can ensure their toddlers are not classified as stunted.
Based on these problems, the author built an application that helps people determine the H/A nutritional status of their toddlers and obtain information about stunting. With this application, the public does not need to wait for the Public Nutrition Improvement Program to be held. In building the application, the author applied the C4.5 algorithm [9] and Particle Swarm Optimization [10] to data on toddlers in Riau Province in 2017 to classify H/A nutritional status. The results of the algorithm are applied in the web-based application, so that people can find out the H/A nutritional status of their toddlers and get information about stunting.
The C4.5 algorithm is known to exceed the Learning Vector Quantization (LVQ) algorithm in average accuracy and to have a faster processing time than the K-Nearest Neighbors (K-NN) method in classifying student abilities [11]. Other studies using the C4.5 algorithm have also shown good performance in classifying the eligibility of new prospective debtors [12] and in exceeding Naïve Bayes in predicting the future behavior of customers [13]. The C4.5 algorithm can also be optimized with PSO for parameter/feature selection [14]-[18]. Feature selection is done to avoid unrelated features that can reduce the performance of the classifier model. C4.5 optimized with PSO can exceed the accuracy of Support Vector Machine (SVM), Self-Organizing Map, Back Propagation Neural Network (BPNN), and plain C4.5 in classifying cancer [19]. Other studies also show that the average accuracy of C4.5 combined with PSO on 5 medical data sets can outperform the average accuracy of Logistic Regression, SVM, BPNN, and C4.5 [20]. Previous research related to nutritional status classification includes classifying toddlers' W/A nutritional status using LVQ, with 80% accuracy (Euclidean distance) and 20% accuracy (Manhattan distance) [21], and classifying nutritional status using Naïve Bayes, reaching 93.2% accuracy [22].
METHODS
The research method of this study is shown in Figure 3.
Data Collection
At this stage, data collection is conducted. The data collected are toddler data from the results of Nutritional Status Monitoring in 2017 in Riau Province, Indonesia. The data obtained consist of 3,961 records and 213 parameters.
Analysis
At this stage, the author analyzes the toddler data through data selection, pre-processing, transformation, and data mining [23]. The best accuracy (from the optimal parameters, or decision tree) is found in the data mining step. 1) Data selection. This is the step of selecting the data and parameters to be used. In this study, 3,961 records and 19 parameters were selected. The selected parameters are expected to be related to the H/A nutritional status of toddlers; this step was done in discussion with a nutritionist. 2) Pre-processing. The data should only be filled with numbers. However, some fields contain the value "#NULL!"; the action taken at this step is to change the "#NULL!" value to "99". d) Outlier. Records containing outliers are deleted, leaving 3,629 records; 15 records contained outliers. At the end of this stage, a total of 332 records had been deleted, so the percentage of data loss is 8.3%.
3) Transformation. This is the step of changing the data format into the format needed for data mining. For example, the data mining step in this research forms a decision tree based on the parameters, so the data should contain only the necessary parameters (those related to the target class). The ID parameter is not needed in the classification process (the data mining step); it is only needed in the pre-processing step (removing duplicates), so it can be deleted. The total number of parameters after deletion is 18.
4) Data mining. This is the step of applying the C4.5 algorithm and PSO to the data using Matlab. The following flowchart describes the flow of the C4.5 algorithm and PSO in Matlab [5], [9], [10]. 1) The maximum iteration, w, c1, c2, vmin, and vmax parameters are set based on previous research that used C4.5+PSO on a similar domain, namely medical problems [20]. The number of particles is set to the number of parameters (excluding the target/class parameter). The values r1 and r2 are obtained from Matlab's built-in random function. 2) Initialize particle positions and velocities randomly. A particle position is a 17-bit binary number representing each parameter in the data (the 18th parameter is the target class, leaving 17 candidate parameters). If a bit is 1, the corresponding parameter is used in the classification process (the calculation of the particle's fitness value).
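The initialization described in steps 1 and 2 can be sketched as follows. This is an illustration, not the authors' Matlab code; the values of w, c1, c2, vmin, and vmax below are placeholders, since the text takes them from [20] without listing them.

import numpy as np

rng = np.random.default_rng(0)

n_params = 17                      # the 18th column of the data is the class
n_particles = n_params             # "number of particles = number of parameters"
max_iter = 100
w, c1, c2 = 0.7, 2.0, 2.0          # assumed values (the paper takes them from [20])
vmin, vmax = -4.0, 4.0             # assumed velocity clamp

# Binary positions: bit d = 1 means parameter d is used to build the tree.
X = rng.integers(0, 2, size=(n_particles, n_params)).astype(float)
V = rng.uniform(vmin, vmax, size=(n_particles, n_params))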
3) Calculate the fitness value of each particle using the C4.5 algorithm. The parameters used in this process (to form the decision tree) are chosen according to the particle's position. The C4.5 algorithm uses the information gain and gain ratio to form decision trees [11]; the standard formulas are reproduced below. The particle's fitness value is the accuracy of the decision tree produced by the C4.5 algorithm, calculated from a confusion matrix [11].
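The gain and gain-ratio formulas referenced above were lost in extraction; in their standard form (see, e.g., the C4.5 literature cited as [11]) they are, for a data set S and a candidate attribute A with values v:

Entropy(S) = - Σi pi log2 pi
Gain(S, A) = Entropy(S) - Σv (|Sv| / |S|) · Entropy(Sv)
SplitInfo(S, A) = - Σv (|Sv| / |S|) log2(|Sv| / |S|)
GainRatio(S, A) = Gain(S, A) / SplitInfo(S, A)

and the accuracy computed from the confusion matrix is, in the two-class case,

Accuracy = (TP + TN) / (TP + TN + FP + FN)

(for more than two classes, the sum of the diagonal entries divided by the total number of records).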
Select the particle best (pb) and global best (gb) from the particles' fitness values: pb is the best fitness reached by each particle, while gb is the best fitness among its neighbors. 4) Update the velocity (v) and position (x) of each particle based on pb and gb, using the standard binary-PSO formulas [19] (see the sketch after this list): v_id(new) = w v_id + c1 r1 (pb_id - x_id) + c2 r2 (gb_id - x_id), s(v_id) = 1 / (1 + e^(-v_id)), and x_id(new) = 1 if ran[0,1] < s(v_id), else 0. Here i indexes the particle and d the dimension; w is the inertia weight; c1 and c2 are the learning rates of the particle and between particles; r1, r2, and ran[0,1] are random numbers between 0 and 1; pb_id and gb_id are the positions with the best fitness reached by each particle and among its neighbors; and v_id, x_id are the particle's current velocity and position. 5) Repeat steps 3-5 until the maximum iteration is reached or 100% accuracy (gb is 100) is obtained, whichever comes first.
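As an illustrative sketch only (the paper's implementation is in Matlab, and the velocity bounds below are placeholders rather than the values taken from [20]), steps 2) and 4), i.e. the binary particle initialization and the velocity/position update, could look like this in Python:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

N_PARTICLES = 17   # one particle per parameter, as set in step 1)
N_DIMS = 17        # 17-bit binary position: one bit per candidate parameter
VMIN, VMAX = -4.0, 4.0   # assumed velocity bounds (placeholders)

# Step 2): random binary positions (bit = 1 means "use this parameter in C4.5")
positions = rng.integers(0, 2, size=(N_PARTICLES, N_DIMS))
velocities = rng.uniform(VMIN, VMAX, size=(N_PARTICLES, N_DIMS))

def update_particle(v, x, pb, gb, w, c1, c2):
    """Step 4): one binary-PSO update for a single particle."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
    v_new = np.clip(v_new, VMIN, VMAX)              # keep velocity in bounds
    s = 1.0 / (1.0 + np.exp(-v_new))                # sigmoid transfer function s(v)
    x_new = (rng.random(x.shape) < s).astype(int)   # re-sample the binary bits
    return v_new, x_new
```

The fitness evaluation in step 3) rests on C4.5's gain ratio; continuing the imports above, a minimal sketch of that computation for one categorical attribute, using the standard definitions quoted earlier:

```python
def entropy(labels):
    """Shannon entropy of a 1-D label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels):
    """Information gain of a categorical feature divided by its split info."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    conditional = sum(w * entropy(labels[feature == v])
                      for v, w in zip(values, weights))
    split_info = -np.sum(weights * np.log2(weights))
    gain = entropy(labels) - conditional
    return gain / split_info if split_info > 0 else 0.0
```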
The result of the data mining step is the fastest particle to achieve the best accuracy. The decision tree of that particle (the best decision tree) will be applied in the Application of Toddler Nutritional Status. The application flow, analyzed with the flowchart in Figure 5, is as follows. 1) The user selects the "check nutritional status" menu on the main page.
2) The application displays a form to check the H/A nutritional status of toddlers.
3) The user inputs the toddler's data into the form. 4) The application processes toddler data using a decision tree. 5) The application displays the results (H/A nutritional status of the toddler). 6) The user selects the "stunting info" menu. 7) The application displays a page about stunting info, such as the definition of stunting, causes, effects, and recommendations (one of them is a balanced food menu for toddlers). 8) The user selects the "balanced food menu" menu. 9) The application displays a page about a balanced food menu which is recommended by a nutritionist.
Design
At this stage, the author designs the database, menu structure, and interface of the application. The database design is used as a reference when creating the application's data storage. The menu structure and interface are designed so that the application has a display that meets the user-friendliness and usefulness aspects.
Implementation
At this stage, the application is developed based on the preceding analysis and design. The best decision tree is interpreted by translating the decision tree produced in Matlab into if-else rules in the code of the web-based application, as sketched below. These if-else rules are used to predict the H/A nutritional status of toddlers.
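For illustration, such exported rules take roughly the following shape; the thresholds and offsets below are invented placeholders, not the actual rules produced by C4.5+PSO, and the 0.7 cm standing/lying correction is an assumption based on the nutritionist's note:

```python
def predict_ha_status(gender, age_months, height_cm, measured_standing):
    """Toy if-else version of a decision tree (hypothetical thresholds)."""
    # Assumed correction: standing measurements read 0.7 cm shorter than lying down
    height = height_cm + 0.7 if measured_standing else height_cm
    cutoff = 80 if age_months < 24 else 95   # invented age-based cutoffs
    if gender == "male":
        cutoff += 1                          # invented gender offset
    return "normal" if height >= cutoff else "stunted"
```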
Testing
The proportion of testing data used is 10%, or 363 records. A confusion matrix is used for the accuracy testing of the C4.5 algorithm and PSO (a generic sketch is given below). Testing of the web-based application is done by two methods, namely black-box testing and a User Acceptance Test (UAT).
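A generic sketch of reading accuracy off a confusion matrix; the counts below are hypothetical, chosen only so that they reproduce the reported 94.49% on 363 test records:

```python
import numpy as np

def accuracy(confusion_matrix):
    """Correctly classified records (the diagonal) over all tested records."""
    cm = np.asarray(confusion_matrix)
    return np.trace(cm) / cm.sum()

cm = [[200, 10],    # hypothetical counts: 343 of 363 records correct
      [10, 143]]
print(accuracy(cm))  # -> 0.9449...
```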
RESULT AND DISCUSSION
The result of the data mining step (C4.5 and PSO) after the 100th iteration is as follows. The gb (largest pb, i.e., the best accuracy) is 94.49%. The fastest particle to achieve the best accuracy is the 7th particle, which reached this accuracy at the 15th iteration, while other particles (the 6th, the 13th, etc.) only reached the same accuracy after the 15th iteration. The 7th particle obtains 94.49% accuracy with the particle position 00001100000101100 (5 parameters used). These parameters are: 1) B4R4, the gender. 2) UMUR_BULAN, the age in months. 3) B27, the history of being treated for malnutrition (yes or no). 4) F02B, the height. 5) F02C, the way the height was measured (standing or lying down).
In forming the decision tree (using the C4.5 algorithm), the B27 parameter was never chosen as a tree branch. Only 4 parameters are used, namely B4R4, UMUR_BULAN, F02B, and F02C. These parameters are the most important and influential in determining the H/A nutritional status of toddlers, as PSO works as a feature selector that discards unrelated parameters which can reduce classifier performance.
Based on the interview with the nutritionist, the influence of the gender parameter is that boys tend to be taller than girls. The influence of the age parameter is that height increases with age. The influence of the height parameter is that the closer the height is to the optimal value, the more likely the H/A nutritional status falls in the normal category. The influence of the way of measuring height is that height measured while standing is 0.7 cm shorter than when measured lying down.
The detailed calculation of the best accuracy uses the confusion matrix. From the results of UAT testing by five nutritionists (Figure 6), it can be concluded that the application features are in accordance with the science of nutrition. These features are Nutritional Status Checking (QN2), Stunting Info (QN4), and Balanced Food Menu Info (QN5). The application is also considered easier for predicting toddler nutritional status (stunting) than the manual method of counting the Z-score by hand (QN3). The application also has an interface that is easy to understand (QN1), the features provided are in accordance with the needs (QN6), and overall the application is rated as good by the nutritionists (QN7).
From the results of UAT testing by five members of the general public (Figure 7), it can be concluded that the public can obtain information about stunting (QP4) and about a balanced nutritious food menu (QP5) from the application. The general public can also check a toddler's nutritional status (stunting) (QP2) more easily than with the manual method of visiting a health center (QP3). The application also has an interface that is easy to understand (QP1), the features provided are in accordance with the needs (QP6), and overall the application is rated as good by the general public (QP7).
CONCLUSION
Based on the research methodology described above, several conclusions can be drawn. The accuracy of the C4.5 algorithm and PSO in classifying the H/A nutritional status of toddlers is 94.49% with the 90:10 ratio and 4 parameters. Of the 19 parameters selected, 4 parameters suffice to reach 94.49% accuracy, namely the gender, the age (in months), the height, and the way the height was measured. These parameters are the most important and influential in determining the H/A nutritional status of toddlers, as PSO works as a feature selector that discards unrelated parameters which can reduce classifier performance. The influence of the gender parameter is that boys tend to be taller than girls; the influence of the age parameter is that height increases with age; the influence of the height parameter is that the closer the height is to the optimal value, the more likely the H/A nutritional status falls in the normal category; and the influence of the way of measuring height is that height measured standing is 0.7 cm shorter than lying down. The Application of Toddler Nutritional Status can predict the H/A nutritional status of toddlers according to the decision tree produced by the C4.5 algorithm and PSO, and it can also provide information about stunting and recommendations for a balanced nutritious food menu for toddlers. Based on black-box testing, the application runs as expected and in accordance with the analysis and design. From the User Acceptance Test (UAT) with the nutritionists, the application is in accordance with the science of nutrition and is considered easier for predicting the H/A nutritional status of toddlers than the manual method (Z-score). From the UAT with the general public, the application can predict the H/A nutritional status of toddlers more easily than the manual method (visiting a health center), and it can also provide information about stunting. In both tests the application has an interface that is easy to understand and features that are in accordance with the needs, and it is rated as good overall.
"Medicine",
"Computer Science"
] |
A general framework to diagonalize vector–scalar and axial-vector–pseudoscalar transitions in the effective meson Lagrangian
A new mathematical framework for the diagonalization of the nondiagonal vector–scalar and axial-vector–pseudoscalar mixing in the effective meson Lagrangian is described. This procedure has unexpected connections with the Hadamard product of n × n matrices describing the couplings, masses, and fields involved. The approach is argued to be much more efficient than the standard methods employed in the literature. The difference is especially noticeable if the chiral and flavor symmetry is broken explicitly. The paper ends with an illustrative application to the chiral model with broken U(3)_L × U(3)_R symmetry.
I. INTRODUCTION
The QCD Lagrangian with n massless flavors is known to possess a large global symmetry, namely the symmetry under U(n)_V × U(n)_A chiral transformations of quark fields. It has been shown by Coleman and Witten [1] that, in the limit of a large number of colors N_c, under reasonable assumptions, this symmetry group must spontaneously break down to the diagonal U(n)_V. Consequently, the massless quarks get their constituent masses M_0, and massless Goldstone bosons appear in the spectrum [2][3][4]. These non-perturbative features of the QCD vacuum can be modeled in analogy with the phenomenon of superconductivity [5,6]. For that, one should regard the constituent quarks as quasiparticle excitations and the mesons as bound states of quark-antiquark pairs. The dynamics of such bound states is described by the chiral effective Lagrangian [7][8][9][10][11].
Were the axial U(n)_A symmetry exact, one would observe parity degeneracy of all states with otherwise the same quantum numbers. Due to the mechanism of spontaneous symmetry breaking, described by Nambu and Jona-Lasinio (NJL), a mass splitting occurs between chiral partners, e.g. m_{a_1}/m_ρ = sqrt(Z), where Z ≃ 2 in accordance with a celebrated Weinberg result [12][13][14]. In fact, the splitting between J^P = 1^- and 1^+ states is a result of the partial Higgs mechanism: the A_μ ∂^μ φ mixing term which appears in the free meson Lagrangian [15] after spontaneous symmetry breaking must be canceled by an appropriate redefinition of the longitudinal component of the massive axial-vector field, A_μ = A'_μ + κ ∂_μ φ. The result is that the axial field A_μ "eats a piece" of the Goldstone boson φ and "gets fat": m²_{a_1} − m²_ρ = 6M²_0. Here the quark mass may be expressed in terms of observable values: f_π (the pion decay constant) and g_ρ (the ρ → ππ decay constant), namely 6M²_0 = Z g²_ρ f²_π.
It is commonly believed that the axial-vector fields A_μ, defined in the symmetric vacuum, and A'_μ, defined in the non-symmetric vacuum, should have the same chiral transformations [16]. As a consequence of that rather natural idea, one must use the covariant derivative ∇_μ φ in the replacement above and write A_μ = A'_μ + κ ∇_μ φ. Such a derivative contains non-linear field combinations. Thus, upon substitution in the quadratic form to be diagonalized, new meson interaction terms emerge which are not present in the original Lagrangian. Although, in principle, there is no reason to object to new interactions as long as they fulfill the symmetry requirements, they are subleading in the large-N_c counting as compared to the mixing which occurs at leading order. If one restricts the analysis to leading order in N_c, i.e. to the level of the free meson Lagrangian, chiral symmetry will not be supported.
Recently [17,18], it has been shown that the linear replacement A_μ = A'_μ + κ ∂_μ φ changes the chiral transformation laws of the axial-vector field A'_μ as compared to the A_μ ones, though in a way that leaves the group properties intact. The special thing about these new transformations is their dependence on the classical parameter M', which is a diagonal n × n matrix in the n-flavor space; in the non-symmetric ground state its eigenvalues are all equal and nonzero, M' = diag(M_0, M_0, ..., M_0); in the symmetric vacuum, M_0 = 0 and the transformations coincide with the standard ones.
In nature, chiral symmetry is also broken explicitly by the current quark masses m = diag(m_u, m_d, m_s) (it has been shown in [19] for QCD with two flavors that isospin is not spontaneously broken; in the following, we will consider mainly the case n = 3, although our discussion is valid for an arbitrary number of flavors n). Due to the current quark masses, the U(n)_V symmetry breaks down to U(1)^n_V. Then, it follows from the gap equation that the constituent quark mass matrix M is diagonal but its eigenvalues are all unequal and nonzero, M = diag(M_u, M_d, M_s). This leads to a new mixing between the vector, V_μ, and the scalar, σ, fields and, as a result, to a redefinition of the longitudinal component of the vector field: V_μ = V'_μ + κ' ∂_μ σ. This makes the vector field V'_μ heavier. It is the purpose of this letter to point out that the result of [17,18] extends to this case of explicitly broken flavor symmetry. We also demonstrate that the linear replacement of variables that we found has unexpected connections with the Hadamard product of n × n matrices describing the couplings, masses, and fields. That allows us to reformulate the diagonalization procedure entirely in terms of the Hadamard product, in contrast to the conventional methods used in the literature, which we refer to as standard.
To find the chiral transformations of the meson fields in the non-symmetric vacuum we use the NJL Lagrangian which includes both spin-0 and spin-1 U(3)×U(3) symmetric four-quark interactions. It is known that this Lagrangian undergoes dynamical symmetry breaking [26].
It also reproduces the qualitative features of the large-N_c limit. The model describes both phases and gives a solid framework for the study of the transformation laws of the quark-antiquark bound states. The source of interest regarding the chiral transformations of spin-1 fields resides in the use of these fields in the effective Lagrangians describing the strong interactions of hadrons at energies of ∼ 1 GeV [23,24,27]. Presently these theories are actively used in studies of τ-lepton decay modes [28], e+e- hadron production [29], and the QCD phase diagram [30,31]. The chiral transformations we suggest allow for greater freedom in the construction of such Lagrangians, by noting that one may use different representations for these transformations as long as they obey the same group structure. In this context it is important that the transformations include the flavor symmetry breaking effects while not spoiling the symmetry breaking pattern of the theory. The latter is very important for controllable calculations.
In Section 2 we study the NJL model of quarks with U(3) × U(3) symmetric interactions. Such models are also studied in hot/dense/magnetized matter [32,33], or applied in the study of hybrid stars [34], where eight-quark interactions seem to have an important impact.
II. CHIRAL TRANSFORMATIONS OF MESON FIELDS
We start from the quark version of the original NJL model [5,6] with nonlinear four-quark spin-0 and spin-1 interactions which are invariant under a chiral group and where there is considerable freedom in the choice of auxiliary fields in the vector and axial-vector channels. In the Lagrangian density of the model, G_S and G_V are universal four-quark coupling constants with dimension (length)², and m is the current quark mass matrix. The symmetry group G acts on the quark fields q (in this notation the color and flavor indices are suppressed) through the parameters α = α_a λ_a/2 and β = β_a λ_a/2, a = 0, 1, ..., 8, with α_a, β_a ∈ R; λ_0 = sqrt(2/3) 1, with 1 being the unit 3 × 3 matrix, and λ_a (a > 0) are the usual SU(3) Gell-Mann matrices with the basic trace property tr(λ_a λ_b) = 2δ_ab; these define the infinitesimal transformations of the fields. This symmetry is explicitly broken by the non-zero values of the current quark masses. Following [7], one may wish to introduce auxiliary fields σ_a = G_S(q̄ λ_a q), φ_a = G_S(q̄ iγ_5 λ_a q), and their spin-1 counterparts, and resort equivalently (in the functional integral sense) to the theory with a Lagrangian density built from the Dirac operator D in the presence of the mesonic fields, where m_a is defined by m_a = (1/2) tr(m λ_a), with m = m_a λ_a = diag(m_u, m_d, m_s). These transformation laws are valid in the symmetric Wigner-Weyl realization of chiral symmetry, where the vacuum expectation value of σ vanishes. The generators possess a Lie algebra structure, δ_[1,2] φ = i[α_[1,2], φ] − {β_[1,2], σ} = [δ_1, δ_2]φ and δ_[1,2] V_μ = i[α_[1,2], V_μ] + i[β_[1,2], A_μ], where α_[1,2] and β_[1,2] denote the composed group parameters.

III. V−σ AND A−φ MIXINGS AND GENERAL LINEAR SHIFT

For further progress we have to bosonize the theory to profit from the 1/N_c expansion.
In the large-N_c limit the meson functional integral of the theory (5) is evaluated at its stationary point, which yields the gap equations; here i = u, d, s and Λ is an intrinsic cutoff of the NJL model. The current quark masses m_i affect (through the gap equation) the constituent quark masses M_i, which accumulate and enhance the explicit and flavor symmetry breaking effects. Since a dynamically broken symmetry is not spoiled in the Lagrangian, we can expand around a non-zero vacuum expectation value of σ without breaking the symmetry. For that we perform the shift σ → σ + M. The equation (8) must still hold, now in a modified form, because otherwise the symmetry breaking pattern (7) would not be preserved. This gives us the modified transformation laws of spin-0 fields in the non-symmetric phase. We can still associate the infinitesimal transformations (19,20) with the chiral group G, because M does not ruin the symmetry algebra of G given by (14,15). This can be easily checked. On the other hand, the existence of such a symmetry shows that the part of the effective meson Lagrangian following from q̄ D_M q (after integrating over the quark degrees of freedom in the generating functional of the theory) includes vertices with explicit and flavor symmetry breaking effects which are still altogether G-invariant, as expressed in (18).
The results above have no consequences for the transformations of spin-1 fields in the non-symmetric phase. The equations (12) and (13) agree with the requirement (18).
Let us now turn to the vacuum-to-vacuum transition amplitude of the model beyond the mean-field approximation. To obtain the effective meson Lagrangian one should consider the long-wavelength expansion of the quark determinant, det D_M, the formal expression for the path integral over quarks.
The appropriate tool here is the Schwinger-DeWitt method [35]. This yields the local low-energy effective meson Lagrangian. Renormalizing the meson fields by bringing their kinetic terms to the standard form (e.g., φ_R = gφ, where g ∼ 1/sqrt(N_c)), one arrives at the picture corresponding to the large-N_c limit: the free parts of the meson Lagrangian count as g²N_c ∼ N_c⁰, the three-meson interactions as g³N_c ∼ 1/sqrt(N_c), and the four-meson amplitudes as g⁴N_c ∼ 1/N_c [8,9].
Obviously, the V−σ and A−φ mixing arises at order N_c⁰: the mixing is described by vertices proportional to tr(A_μ {M, ∂^μ φ}) and tr(V_μ [M, ∂^μ σ]). The first one is a result of spontaneous symmetry breaking, while the second one is a direct consequence of the flavor symmetry breaking enforced in the broken phase. Both lead to additional contributions to the kinetic terms of pseudoscalar (through the transitions ∂φ → A → ∂φ) and scalar (through the transitions ∂σ → V → ∂σ) states. Consequently, these fields must be renormalized again to the standard form. In this case one gets correct expressions for the masses of spin-0 states.
Alternatively, one may wish to eliminate the mixing and diagonalize the free Lagrangian by the replacements V_μ = V'_μ + X_μ and A_μ = A'_μ + Y_μ (22), where the entries of X_μ and Y_μ are appropriate combinations of spin-0 fields. They should also depend on M, as required by the mixing terms. These replacements introduce further mixing terms through the V_μ and A_μ mass terms, which must add up to zero in the end, fixing the coefficients of the combinations used in (22). In principle, there is an infinity of possible physically equivalent replacements in the form of (possibly infinite) sums of field products. According to Chisholm's theorem [36,37], all such redefinitions yield the same result when computing observables as long as they preserve the form of the free part of the Lagrangian. This reasoning ensures that we may always restrict ourselves to the minimal terms necessary for our purposes, i.e. to linear field combinations.
From the point of view of the 1/N_c expansion, the minimal replacement in (22) is unique. All others include additional nonlinear combinations of fields, but must coincide with (22) in their linear part, i.e. at order N_c⁰ (see also the discussion around Eq. (26)). This is a direct consequence of the fact that the mixing terms originate at the level of the free Lagrangian.
We may now require that the replacement (22) does not violate the symmetry condition (18). This is ensured if the spin-1 states transform according to (23). Gathering separately the factors multiplying γ_μ and γ_μ γ_5, we conclude that (23) is equivalent to the transformation laws (24)-(25). These transformations must preserve the algebraic structure of the chiral group G, i.e. the composition properties of the group which are specified in (15).
It seems natural to require that V'_μ and A'_μ have the same transformation properties as V_μ and A_μ. From (24)-(25), it follows then that X_μ and Y_μ need to be chiral partners and should also transform like V_μ and A_μ. This would disable the linear solution X_μ ∝ ∂_μ σ and Y_μ ∝ ∂_μ φ, due to the transformation properties (19) and (20) of the spin-0 fields. Instead, one would look for nonlinear combinations of fields for X_μ and Y_μ which respect the symmetry transformations (12) and (13) and contain the linear terms necessary for the diagonalization.
For instance, if we assume that flavor symmetry is unbroken, we may use the solution of [25], where the constant κ is fixed by a diagonalization condition. The nonlinear terms essentially modify the interaction part of the effective meson Lagrangian without physical consequences [17,36]. However, if the flavor symmetry is broken, the replacement (26) does not solve the problem. This is why such natural replacements are not efficient in direct calculations.
Now let us consider the minimal replacement in (22). In this case we are faced with the opposite situation: it makes calculations as simple as possible, but one should take care in justifying such a replacement. Indeed, in this case, as follows from (24)-(25), the fields V'_μ and A'_μ cannot transform like V_μ and A_μ. Can this introduce some spurious symmetry breaking and change the physics of spin-1 states? To answer the question we compute the commutators δ_[1,2] V'_μ and δ_[1,2] A'_μ. The symmetry will be respected if the commutators depend only on the parameters of the infinitesimal chiral transformations α_[1,2] and β_[1,2], i.e., leave the group composition properties (15) unchanged. One can see that the chiral group structure will be preserved overall as long as X_μ and Y_μ can be chosen in such a way that their transformation laws obey the algebraic structure of G as well. This clearly defines the freedom one may have in the choices of X_μ and Y_μ. In particular, it is not forbidden for them to transform like spin-0 chiral partners, X_μ ∼ ∂_μ σ and Y_μ ∼ ∂_μ φ, as follows from the diagonalization procedure of the considered NJL model. Another interesting case has been considered in [17,18], where X_μ = 0 but Y_μ ≠ 0 (the case without flavor symmetry breaking).
Any point made so far on the chiral transformation properties of fields in matrix form may be carried over to a formulation based on individual matrix entries. If the Lagrangian contains mixing terms in the form tr(V_μ [M, ∂^μ σ]) = tr(V_μ (i∆_M ∘ ∂^μ σ)) and its A−φ analogue built from the anticommutator {M, ∂^μ φ}, it suffices to define X_μ and Y_μ as in (30). Here, k and k' are symmetric coefficient matrices in flavor space whose entries should be fixed from the Lagrangian diagonalization requirements; ∆_M and Σ_M are mass-dependent matrices built from the differences and sums of the constituent quark masses; and the symbol ∘ stands for the Hadamard (or Schur) product (see e.g. [38]), defined entry-wise as (A ∘ B)_ij = A_ij B_ij, without summation over repeated indices. This product is commutative, unlike regular matrix multiplication, but the associative property is retained, as well as the distributive property over matrix addition.
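A quick numerical check of the key fact behind this entry-wise reformulation: for a diagonal mass matrix M, commutators and anticommutators reduce to Hadamard products with entry-wise difference and sum matrices. The definitions of ∆_M and Σ_M below are assumptions consistent with the text (up to possible factors of i), and the mass values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
masses = np.array([0.36, 0.36, 0.54])      # illustrative constituent masses (GeV)
M = np.diag(masses)

Delta = masses[:, None] - masses[None, :]  # (Delta_M)_ij = M_i - M_j, antisymmetric
Sigma = masses[:, None] + masses[None, :]  # (Sigma_M)_ij = M_i + M_j, symmetric

S = rng.standard_normal((n, n))            # stand-in for d_mu sigma or d_mu phi

# With diagonal M, [M, S] and {M, S} are entry-wise (Hadamard) products:
assert np.allclose(M @ S - S @ M, Delta * S)
assert np.allclose(M @ S + S @ M, Sigma * S)
```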
IV. APPLICATION TO AN SU(3) CHIRAL MODEL
Now let us consider a physical application of the method presented above. For that we choose a recently proposed effective U(3) × U(3) chiral model [39]. It extends the model [9] presented in the text by including the U(1)_A-breaking 't Hooft interaction [40] and eight-quark interactions, and by systematically taking into account the explicit and flavor symmetry breaking effects. Our choice is motivated by the growing interest in eight-quark interactions in hadronic matter, including the physics of stars, and by the importance of the axial anomaly and the flavor symmetry breaking effects in the study of the QCD phase diagram.
In both models [9,39] we arrive, after bosonization, at the same mixing terms. Here, the trace is taken in flavor space; ϱ² is a constant cutoff-dependent factor related to the evaluation of the quark determinant, and M is the constituent quark mass matrix. Rewriting the commutator and anticommutator through Hadamard products expresses the mixing terms through ∆_M and Σ_M as defined in (31). For simplifying this expression we have used the fact that, for any U(3) symmetric matrix S (e.g. Σ_M) or antisymmetric matrix Ω (e.g. ∆_M) and any other B, C ∈ U(3), it is always true that tr(B (S ∘ C)) = tr((S ∘ B) C) and tr(B (Ω ∘ C)) = −tr((Ω ∘ B) C). This form of the mixing terms provides a direct hint to the adequacy of the forms (30) for diagonalizing the Lagrangian. Carrying out such replacements will induce from V_μ and A_μ new similarly shaped mixing terms with k, k' appearing as adjustable coefficients.
The model's mass terms in the unshifted Lagrangian may be expressed through symmetric U(3) matrices: H^(1) for V_μ and H^(2) for A_μ. In the model [9], H^(1) = H^(2) ∝ 1.
The model [39] leads to a more general form of H^(1) and H^(2), which makes the standard diagonalization procedure algebraically heavier.
After replacing the fields according to (30), we get additional mixing terms involving the Hadamard power, which stands for the entry-wise power, (A^{∘2})_ij = (A_ij)². The cancellation of the V−σ mixing requires the sum of the original and induced mixing terms to vanish. Since this sum must vanish, equating to zero the coefficients of the independent combinations ∂^μ σ_ij V_μ,ji yields a condition for any i, j. Due to the antisymmetry of ∆_M, this condition is always satisfied for i = j, independently of the k values. For i ≠ j, we obtain the expression defining the entries of k which diagonalize the Lagrangian; it shows that k is a symmetric matrix coinciding (after some renormalizations of fields) with the known result [9] for H^(1) ∝ 1.
A convenient way to write these results is in matrix form, using the Hadamard inverse, whose definition may be given as (A^{∘−1})_ij = (A_ij)^{−1}. It may be checked that these results are in complete agreement with the previously obtained values of the k, k' coefficients in [39].
We remark that the standard treatment of the problem in [39] requires the analytic manipulation of expressions involving some 10 or more flavor indices which are contracted among themselves in non-trivial ways. This can easily become a cumbersome and error-prone calculation. Furthermore, the previous form of X_μ and Y_μ obscures the fact that each matrix entry of the spin-1 fields is effectively shifted by a single entry of the spin-0 matrix field; this is made explicit within the formalism presented here. The present formulation yields all the results in an efficient and closed-form way.
V. CONCLUSION
Resorting to arguments pertaining to the Lie algebra associated with chiral transformations and to Chisholm's theorem, we have shown that one may always use the most general linear shifts (22) of the V_μ and A_μ fields for dealing with the V−σ and A−φ mixing in chiral models without compromising the chiral symmetry properties of the Lagrangian, as long as one admits the corresponding new transformation laws (24,25) for the shifted fields. This result is independent of the number of flavors and works even when the U(n) × U(n) chiral symmetry is explicitly broken to U(1)^n.
Although our arguments have been presented using an NJL-type quark model as a starting point, we expect that our proposed strategy for dealing with the mixing terms is fully applicable to other kinds of chiral models, such as the linear sigma model [41], massive Yang-Mills models, and so on [23]. This is justified if one understands that the form of these mixing terms is fundamentally constrained by the symmetry requirements, which should, in principle, be the same in any effective chiral model for mesons. A particular shifting scheme
"Physics"
] |
Relationship of isotopic variations with spring density in the structurally controlled springs and related geosystem services in Alaknanda Valley, Garhwal Himalaya, India
As a traditional water source, springs are vital for Himalayan communities, and it is essential to consciously focus on spring conservation. We report oxygen isotopes (δ18O) of spring water before, within, and after the tectonically active zones of the Alaknanda Valley, Uttarakhand. A higher variation of δ18O in the spring waters is found in the highly tectonically disturbed zone, i.e., Zone-2, with a δ18O range of −4.9‰ to −9.0‰, compared to the tectonically less disturbed zones, Zone-1 and Zone-3, with δ18O ranges of −7.9‰ to −9.9‰ and −7.4‰ to −10.2‰, respectively. We hypothesize that the highly active thrust zones (Zone-2), with increased permeability compared to the other zones, manifested as greater spring density, result in higher water recharge in Zone-2. Very high to high spring density stretches are dominant in Zone-2 compared to Zone-1 and Zone-3. Stretches in Zone-2 with high spring density, formed due to its highly tectonically active nature, lead to the higher isotopic variation in Zone-2. The study also identifies the geosystem services provided by thrust zones as water resources for the local people and the need for conservation modalities to manage the spring water resources in the thrust zones.
Many accounts of earthquake-induced permeability enhancements and the subsequent release of new water sources have been documented worldwide [1-3]. Further, coseismic hydrological changes suggest groundwater mixing among different aquifers through new cracks [4], which is a manifestation of the enhanced permeability due to earthquakes. Stable isotopes of water have been used as a proxy [5] to show that an earthquake enhanced permeability and released water from mountains as a subsurface hydrogeological response to the 2016 Mw 7.0 Kumamoto earthquake. Stable isotopic variations have been used as a direct fingerprinting tool to examine the changes between before and after earthquakes [2,3,6]. On the other hand, the variation in the isotopic ratios (18O/16O) of oxygen has generally been attributed to evaporation of meteoric water [7], together with the 18O shift by water-rock interaction [8] and changes of the oxygen isotopes before earthquakes [9-11]. Also, the isotopic variation with altitude has been worked out in [12,13]. This progressive depletion of the heavier isotope in rain with altitude, as the cloud ascends, is often referred to as the "altitude effect". Thus, most studies focus either on earthquake- or altitude-regulated isotopic variations (the altitude effect) or on other fractionation processes. Other factors, like a very high abundance of spring sources or very high spring density in some stretches and the presence of active thrust zones, may also bring changes in the stable water isotopes, giving an indication of water recharge in the zone. As the stable isotopes of oxygen indicate the recharge sources of the water [14-16], the present work has been carried out to understand the impact of tectonics on the isotopic composition of spring water.
In the present work, we present a first account of the thrust-controlled stable oxygen isotope variation in the spring waters of the Alaknanda Valley, located in the Garhwal Himalaya, Uttarakhand, India.
Study area
The study area is located in the Alaknanda Valley of the Garhwal Himalayas, in a sub-tropical, humid, and temperate climate zone, within 30°0'N-31°0'N latitude and 78°45'E-80°15'E longitude, as shown in Fig. 1. The river basin covers an area of 11,085 km². The study area is characterized by elevated, rugged mountain landscapes, V-shaped valleys, river terraces, flood plains, etc.
Geological and structural setting. The Alaknanda river basin lies between the Tethyan Himalayan sediments and the rocks of the Lesser Himalayan sequence. The southern portion of the basin is underlain by limestone, phyllite, schists, and quartzites of the Lesser Himalayan series [24-27].
With the help of a morpho-tectonic activity map, the area has been divided into three zones such that Zone-1 and Zone-2 lie in tectonically highly and very highly active zones, respectively, while Zone-3 lies in a moderately active zone [17]. Also, the earthquake density is very high in Zone-2, moderate in Zone-1, and low in Zone-3 [17]. Spring water samples (n = 96) were collected in pre-cleaned high-density polyethylene (HDPE) bottles (Supplementary Information, Table 1). The bottles were filled without air bubbles and tightly capped to avoid any evaporation and exchange with atmospheric moisture. The elevation, time, and location were recorded for each sample. The isotopic analyses of water samples were carried out at the Isotope Hydrology Laboratory, National Institute of Hydrology, Roorkee, India. The collected samples were analysed for the stable isotopic ratio of oxygen (δ18O) using a Dual Inlet Isotope Ratio Mass Spectrometer following CO2 equilibration methods, with systematically described steps and standards in the measurement [36,37]. Figure 1, showing the geology of the area, was derived by georeferencing and modifying [17] and superimposing the spring data, using ArcGIS Desktop 10.6 and QGIS 3.10.6 (https://qgis.org/downloads/QGIS-OSGeo4W-3.10.6-1-Setup-x86_64.exe), respectively. The spring density map shown in Fig. 3 was made by georeferencing Fig. 1 of Shukla et al. [17] using ArcGIS Desktop 10.6. According to the criteria of [17], the area with spring data is divided into three zones, i.e., Zone-1, Zone-2, and Zone-3, as shown in Figs. 3 and 4, where Zone-2 is the "very highly tectonically active" zone, Zone-1 is the "highly tectonically active" zone, and Zone-3 is the "moderately tectonically active" zone. Data analysis to evaluate the δ18O variation in boxplots was done with RStudio 1.4.1717 (https://educe-ubc.github.io/r_and_rstudio.html). Zones 1, 2, and 3 are the Lambagad-Joshimath-Helang zone, the Chamoli-Karanprayag-Rudraprayag zone, and the Srinagar zone, respectively. In order to establish a balanced comparison of the data sets in the three zones and to support the isotopic variation due to tectonics, the spring density was calculated using the point density tool in ArcGIS, as shown in Fig. 3; a simplified sketch of such a computation is given below. The spring density was calculated by counting the number of springs per 1 km² within a buffer of 1 km along the road in the study area, and is measured in springs per km². To minimise the contribution of the altitude effect, Zone-2 is further divided into two smaller sub-zones, A (outlined yellow, with high-density stretches) and B (outlined black, with low-density stretches), and thus a balanced data set for each zone was established. The high to very high spring density stretch of Zone-2, i.e., A, is compared with the similar density stretch of Zone-1, and the low spring density stretch of Zone-2, i.e., B, is compared with the same density stretch of Zone-3, for a balanced comparison. The spring density is used to avoid the comparison bias that could occur due to the larger size of Zone-2 compared with Zone-1 and Zone-3. To better explain the isotopic variation of δ18O, the concept of δ18O diversity is introduced for the zones under study. Diversity here means the number of different δ18O classes (see the legend) that a zone holds, as explained in Fig. 3.
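A simplified stand-in for the ArcGIS point-density computation (it grids projected coordinates into 1 km² cells and ignores the 1-km road buffer; the function name and interface are illustrative):

```python
import numpy as np

def point_density(x_km, y_km, cell_km=1.0):
    """Count points per cell_km x cell_km cell, i.e. springs per km^2
    when cell_km = 1. x_km, y_km are projected coordinates in kilometres."""
    xbins = np.arange(x_km.min(), x_km.max() + cell_km, cell_km)
    ybins = np.arange(y_km.min(), y_km.max() + cell_km, cell_km)
    counts, _, _ = np.histogram2d(x_km, y_km, bins=[xbins, ybins])
    return counts
```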
Discussion
The study is a first account that introduces the anomalies of spring density and δ18O variation in "tectonically very high", "high", and "moderately high" active zones [17]. The anomalies are expressed in terms of very high (36-45 springs/km²) to high (27-36 springs/km²) spring densities in Zone-2 compared to Zone-1 and Zone-3, which is attributed to the "very highly tectonically active" [17] stretches in compartment A of Zone-2 in the vicinity of the Ramgarh and Munsiari Thrusts, as shown in Fig. 3, giving rise to greater spring densities and high δ18O variation in the spring water of Zone-2. Compartment A of Zone-2, with very high (36-45), high (27-36), and medium (18-27) spring density stretches, is compared to the spring densities of Zone-1, which according to Shukla et al. (2014) lies in the "highly tectonically active zone" and is dominated by high (27-36) to medium (18-27) spring density stretches. Thus the higher spring density in compartment A of Zone-2 compared to Zone-1 is attributed to the greater dominance of the Ramgarh and Munsiari thrusts in A of Zone-2, due to its "very highly tectonically active" nature, and to the lesser dominance of the Vaikrita and Munsiari thrusts in Zone-1, which is comparatively less tectonically active than Zone-2 as per Shukla et al. [17], shown in Fig. 3. This gives rise to higher δ18O variation in compartment A of Zone-2 compared to Zone-1. Similarly, the dominance of low to medium spring density in compartment B of Zone-2 is due to the lesser dominance of the Ramgarh thrust, but the spring density increases to medium and high as one approaches the North Almora Thrust (NAT), and so does the variation in δ18O, which is indicated by the presence of the highest-variation legend class (turquoise blue) shown in Fig. 4. Similarly, Zone-3 is dominated by low (9-18) to very low (0-9) spring density stretches, with medium (18-27) spring density stretches in the minority. The altitude variation is quite visible among the different zones but is minimised within each particular zone. The spring density in Zone-3 decreases to very low as one moves away from the NAT, compared to medium spring density in the vicinity of the NAT, and hence leads to higher δ18O variation in compartment B of Zone-2 compared to Zone-3. Thus the "highly tectonically active" nature of Zone-2 [17], dominated by the Munsiari (MCT-II) and Ramgarh (MCT-III) thrusts, compared to the less active Zone-1 and Zone-3, containing the Vaikrita thrust, Munsiari thrust, and NAT respectively [17], increases the spring density in Zone-2 compared to Zone-1 and Zone-3, as shown in Fig. 3, and this gives an indication of higher water recharge in Zone-2, which is manifested as the higher δ18O anomaly in Zone-2 than in Zone-1 and Zone-3, shown in Figs. 4 and 5. It is thus hypothesized that the dominance of thrust zones in Zone-2 might have enhanced the permeability and provided pathways that enhance flow from mountain aquifers to downslope springs.
Certainly, the high and focussed precipitation in Zone-2A [17,41] could contribute to the high stable isotopic variations in the spring water, but Zone-2A is also a zone of active convergence [17,41]; the overlap of the zone of active convergence (the highly tectonically active nature of Zone-2A) and focussed precipitation suggests that the δ18O anomaly is not simply a reflection of rainfall variability, but could also be attributed to permeability enhancement by the zone of active convergence providing additional water recharge to the springs. Shivanna et al. [42], using δ18O, showed that spring water reflects contributions from both precipitation and groundwater flowing through weathered and fractured zones, which further validates the influence of tectonics in providing additional water to the springs. Further, the spring density and δ18O anomaly also increase as one approaches the vicinity of thrust zones in Zone-1 and Zone-3, despite these being zones of low precipitation [17], justifying that precipitation alone does not govern the δ18O anomaly in the study area. The hypothesis is consistent with the work of [5], which suggests fracture systems as the dominant pathways for new waters from mountain aquifers, manifested here as the high spring density zones in Zone-2. Also, the higher isotopic variation in the spring water of Zone-2 compared to Zone-1 and Zone-3 is further supported by the fact that foot-spring waters have a more depleted isotopic composition, implying a contribution from higher elevations [5], and thus suggesting additional recharge zones and sources compared to the less tectonically active zones. Thus the stable water isotopic variation that we propose is equivalent to the change in isotopic composition of the spring water, such that the variation in the isotopic composition of spring water indicates the diversity of sources [2,5] that recharge the springs, controlled by the highly tectonically active thrust zones in Zone-2 compared to Zone-1 and Zone-3. Consequently, the higher isotopic variation is shown as a farrago of colour mapping of stable isotopes (Fig. 4) and a higher spread in the boxplots (Fig. 5) for Zone-2. It vindicates the contributions from additional sources [2], which are manifested as a greater spring density and greater δ18O variation in Zone-2 compared to Zone-1 and Zone-3, and can also be accounted for by the large mid-altitude range of sub-zone B of Zone-2. However, unlike [2,5], where temporal permeability enhancement by earthquakes caused the isotopic anomalies in spring water indicating additional sources, the isotopic variation in the present work is a result of spatial permeability enhancement due to the tectonically varied nature of the thrusts [17] in the three zones. The broad range of isotopic composition (more depleted to less depleted) in Zone-2 compared to the other zones (Fig. 5) suggests different sources recharged by meteoric water at different elevations [2]. Possibly some of the sources with evaporation effects and less depleted isotopic compositions reflect dry-season recharge [2], while sources which are isotopically more depleted suggest rainy-season recharge [2], thus validating the prevalence of different recharge zones providing water to the springs and resulting in the high isotopic variation in Zone-2 compared to the others (Fig. 5).
The present account also elucidates the geosystem services provided by thrust zones in delivering water resources to the Himalayan community that depends on springs for its day-to-day water needs. The diversity of the study area, with the presence and absence of a number of thrust lines, culminates in the identification of the geosystem services associated with the thrust zones. Refs. [20,21] have stressed the clear distinction between biotic and abiotic services and have labelled the services provided by geosystems as geosystem services. They define geosystem services as the "goods and services that contribute to human well-being specifically resulting from the subsurface". Brilha et al. use [39,40] to delineate the geosystem services and categorize them into regulating, supporting, provisioning, and cultural services under the ecosystem services, while Gray et al. [20] made the same classification under the abiotic ecosystem services, which Gray [18] reclassified as geosystem services into regulating, provisioning, supporting, cultural, and knowledge services [20,38]. Ref. [20] further compartmentalizes the regulating services into global services (the magnetic field, the gradual release of the Earth's internal heat, the hydrological cycle, and the carbon cycle) and local services, like river flows with input from groundwater to sustain the river flow in drought periods, weathering, erosion, river transport of sediments and nutrients, depuration of water during its underground circulation, and many other terrestrial services [20]. The thrust zones identified here as being associated with geosystem services can be added to the aforesaid criteria of [18,20,38] under regulating services at the regional and local scale, as a contribution to the geosystem services classification and to the crucial, inevitable, and underrated role of geosystem services for society. Also, according to [43], springs are among the most important ecosystems that provide a multitude of goods and services to humans and biodiversity. Thus the springs provide irrigation water, potable water, and livestock watering under provisioning services; cultural services in the form of medicinal services, treatment of diseases (digestive, skin, etc.), and recreation (ecotourism); and supporting services in the form of the stability of living beings, including humans [43]. Albeit small in terms of absolute volumes of fresh water, the springs have been a vital source of water for drinking and sanitation [44]; hence their low volumes account for the higher dependence of communities in the Indian Himalayan Region [45], and they need more attention for conservation. Thus, the springs, as an important fresh water commodity in the Himalayan region, and their anomalous presence in the area controlled by tectonics, provide valuable insights on the need for targeted policy interventions in order to conserve them and access their long-term benefits for society.
Conclusions
We found a greater spread of about 4‰ in the oxygen isotopic compositions of Zone-2 and lower spreads of about 2‰ and 2.8‰ in Zone-1 and Zone-3, respectively. The greater spread may be attributed to the greater spring density and the "highly tectonically active nature" of Zone-2. The greater spring density in Zone-2 is likely due to the permeability enhancement by the "highly tectonically active nature" of Zone-2 [17] compared with Zone-1 and Zone-3, and has been designated a geosystem service of thrust zones. The study suggests the higher variation of stable isotopes of oxygen within the thrust zone (Zone-2) as an indicator of more water recharge into the zone, which is manifested as the dominance of very high spring density stretches in Zone-2 compared with the others. Thus, the presence of the thrust zones in the study area serves as an abiotic asset to the Himalayan community by increasing spring density. These abiotic goods and services are natural capital for the Himalayan populace and are referred to as geosystem services [18-23], and their understanding is imperative to obtain multiple benefits for society in terms of water availability. Thus, we can say that Zone-2 has a higher density of springs and hence benefits most from the geosystem services of thrust dominance. The concept of oxygen isotope variation and its diversity within thrust zones has been introduced and linked to spring density and to the more or less tectonically active nature of the zones. The higher variation shows a higher spread, which in turn indicates additional recharge and different recharge sources for spring water in the vicinity of tectonically active zones compared to tectonically less active zones. Albeit the fruition of geosystem services as the springs in Zone-2 is a prerogative of the local community of the area, their further evaluation will be a pragmatic approach in the event of lean or drought periods. If the geosystem services could continue to provide water, even in the low amounts they receive through precipitation, their role would become more crucial in sustaining the populace of the region. Nevertheless, from the present data and assessments it can be inferred that the permeability enhancements in the thrust zones can be targeted as suitable rainwater harvesting sites, given the need to maintain environmental flows and the immense services of springs for the Himalayan populace. Such a soft engineering practice could be beneficial, so that, apart from the natural regulations, sustainable human interventions can provide viable solutions for the growing water needs in the Himalayan terrains.
"Geology"
] |
The Research of Refugee Quota Plan on the Basis of Grey Model and Multi-Objective Analysis
In recent years, the influx of refugees has brought many challenges for European governments. Hence, we build corresponding models to address this situation. In this paper, we obtain data from the UNHCR statistical yearbook and first make a short-term prediction of the refugee number via the grey GM(1,1) model. Then we select five limiting indexes for the multi-objective analysis, in order to build the optimal quota model and obtain the optimal quota plan. Finally, we put forward some feasible suggestions to the EU about the refugee quota plan on the basis of the preceding analyses.
Introduction
Refugee policies are not only a problem of universal concern to the international community, but also an important area of cooperation in the process of EU integration. The EU [1] and its member states have issued a series of policies on refugee problems and obtained certain achievements. However, the integration of European refugee policies started late, and there are still many crises in this process. As the refugee problems become increasingly complex, the refugee export countries, recipient countries, and transit countries still need to work together [2]. As we know, any policies are the products of a specific era and environment, so existing policies face a big challenge when the historical background and surroundings change dramatically. Europe now faces great pressure because of refugee problems, and the primary causes of the increasing refugee numbers are regional conflicts and natural disasters [3]. However, the international position and domestic situation of the EU have changed a lot since entering the 21st century [4]. At the same time, refugee problems show new features and the existing refugee policies raise many kinds of issues. It is imperative to formulate and improve refugee policies; hence, research on the EU refugee quota policy is greatly needed.
Research Questions
It is necessary to have a continuous understanding of the number of refugees pouring into Europe, especially the numbers for several consecutive years. This builds a good data foundation for the subsequent model development.
(a) The major issue to be addressed is the establishment of a scientific refugee population prediction model [5], in order to reasonably predict the number of refugees in the next few years. We need to learn about the current situation of accepting refugees as well as the difficulties in settling them, and quantify these limiting factors.
(b) According to the quantified limiting factors, we try to build an optimal allocation model to obtain the optimal refugee quota plan.
(c) Evaluate and analyze the deficiencies of the above plan.
(d) Put forward our opinions on the refugee quota plan.
Research Assumptions
(a). Migration occurs yearly.
(b). There are no unexpected disasters that influence the refugees' choices.
(c). Ignore the data of countries whose refugee numbers are small.
(d). The difference of languages does not influence the refugee quota plan.
(e). The refugee mortality in the process of migration is constant.
The Prediction of Refugee Number
Since the refugees entering Europe come mostly from the Middle East and North Africa, we use the number of refugees in the Middle East and North Africa as the raw data to build the Grey Model [6]. Too many factors can influence the refugee number, such as war, politics, and religion, so we do not predict the number for more years, in order to guarantee the accuracy of the figures.
The Building of Grey Model
The numbers of refugees of origin and asylum in the Middle East and North Africa are taken from the UNHCR statistical yearbook, as shown in Table 1. We obtain the flow volume of refugees by subtracting the figures in Table 1, as shown in Table 2. We take this portion of refugees as the number migrating into Europe, since most of them migrate there. Construct the original sequence x^(0) = (x^(0)(1), x^(0)(2), ..., x^(0)(n)), where k in x^(0)(k) stands for the position of each number in the sequence, and its first-order accumulated generating operation (1-AGO) sequence x^(1)(k) = sum_{i=1}^{k} x^(0)(i), k = 1, 2, ..., n, with background values z^(1)(k) = (x^(1)(k) + x^(1)(k-1))/2. Thus the data matrix B and the data vector Y are B = [-z^(1)(2), 1; -z^(1)(3), 1; ...; -z^(1)(n), 1] and Y = (x^(0)(2), x^(0)(3), ..., x^(0)(n))^T. Doing a least-squares estimation of the parameter list a_hat = (a, b)^T then gives a_hat = (B^T B)^{-1} B^T Y, and the time-response (prediction) formula of the GM(1,1) model is x_hat^(1)(k+1) = (x^(0)(1) - b/a) e^{-ak} + b/a.
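A minimal sketch of GM(1,1) fitting and forecasting implementing the formulas above (illustrative Python; the paper's computation was done in MATLAB):

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=3):
    """Fit GM(1,1) to the positive series x0 and forecast n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # 1-AGO sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)   # least squares for a, b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # IAGO: fitted + forecast
```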
Model Test-Residual Test
According to the prediction formula, calculate x_hat^(1)(k); then apply the IAGO (inverse accumulated generating operation) to obtain the restored sequence x_hat^(0)(k) = x_hat^(1)(k) - x_hat^(1)(k-1), which is compared with the original sequence x^(0)(k). Calculate the absolute and relative residual sequences: the absolute residual sequence ε(k) = |x^(0)(k) - x_hat^(0)(k)| and the relative residual sequence Δ(k) = ε(k)/x^(0)(k). The relative residuals are less than 1.19%, thus the accuracy of this model is high.
Model Test-Conduct the Relational Degree Test
Calculate the absolute residual sequence between x^(0) and x_hat^(0), then calculate the grey relational (correlation) coefficient. There are only two sequences (the reference sequence and the comparison sequence), thus we do not calculate the second level of minimum difference and maximum difference.
Model Test-Posterior Error Test
Calculate the standard deviation S1 of the original data and the standard deviation S2 of the residuals, and from them the posterior-error ratio C = S2/S1. Then calculate the probability of small residuals P = P{|ε(k) - ε̄| < 0.6745 S1}. All residual deviations are less than 0.6745 S1, thus the probability of small residuals is P = 1 and the model is qualified.
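A matching sketch of the posterior-error test, assuming the usual grading thresholds (roughly C < 0.35 and P > 0.95 for a "good" model):

```python
import numpy as np

def posterior_error_test(x0, x0_hat):
    """Posterior-error ratio C and small-residual probability P."""
    x0 = np.asarray(x0, dtype=float)
    eps = x0 - np.asarray(x0_hat)[:len(x0)]    # residual sequence
    C = eps.std() / x0.std()                   # C = S2 / S1
    P = np.mean(np.abs(eps - eps.mean()) < 0.6745 * x0.std())
    return C, P
```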
Model Solution
We find that the prediction effect of our model is good after testing it, as shown in Table 1. The difference between the predicted and actual values is not significant, so we can predict the number of refugees for the next three years. Via MATLAB we compute that the refugee population grows to 5,745,596 in 2016, 11,379,385 in 2017, and 22,537,333 in 2018, with the growth ratio becoming larger and larger. Therefore, Europe will encounter a more serious tide of refugees and bear a greater burden in the next few years.
The Calculation of Refugee Quota
Since the number each country can house is limited, a reasonable plan for settling refugees is needed to lighten the load on European countries. Therefore, on the basis of the multi-objective analysis method [7], we build an optimal model to calculate the limited number of refugees entering Europe. We then use the same multi-objective analysis method to distribute the refugees among the recipient countries according to the actual situations in the different countries.
The Building of Quota Model
For the multi-objective analysis, we choose five indicators to analyze the number of refugees Europe can accept: population density, GDP per capita, capacity per capita, personal welfare amount, and the distribution of resources.
The refugees are divided into four types, the elderly, children, young males, and young females, denoted in turn as x1, x2, x3, and x4. For the population density, we choose the population density of the eastern coastal cities in China as the maximum, because most areas in Europe are habitable except for only a few.
For the GDP per capita, we choose 80% of the original level as the minimum, because there are many developed countries in Europe and a moderate fall in GDP will not seriously influence their development; this gives constraint (23).
Due to the mature welfare policies of European countries [8], the refugee settlement costs cannot be covered by arbitrarily decreasing welfare expenditures. Referring to a large amount of information and documents, we find that it is feasible to reduce the welfare portion to 70% of the original level.
The welfare policies for refugee children and the elderly are the same as for natives in Europe, while young males receive 60% and young females 70% of the native level, giving constraint (24). As for refugees, the most important thing is solving the problems of clothing, food, and shelter; as mentioned above, people's daily needs can still be met even if the welfare portion is reduced to 70% of the original level, giving constraint (25).
Some refugees already have skills, so they can contribute to the recipient countries. We assume that the elderly and children do not work, while 70% of the young males and 50% of the young females can work, giving constraint (26).
Aggregating all the above analyses, we obtain the following system of equations:
Model Solution
Substituting the data from the table into Eq. (27) yields Eq. (28).
Solving with LINGO, we obtain the quotas for the four refugee groups, a total of 3.267 million refugees. Following the multi-objective analysis of Europe, we use the same limiting factors and method to analyze individual member states. The constraint conditions on the refugee problem are given for Germany, Hungary, Sweden, Italy and France.
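To illustrate how such a constrained quota model can be solved programmatically (the paper used LINGO), here is a minimal linear-programming sketch with SciPy; every coefficient below is a hypothetical placeholder rather than the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

# x = [elderly, children, young_male, young_female] (millions).
# Objective: maximize the total number of settled refugees.
c = [-1.0, -1.0, -1.0, -1.0]            # linprog minimizes, so negate

A_ub = [
    [1.0, 1.0, 1.0, 1.0],               # population-density cap (cf. Eq. 23)
    [1.0, 1.0, 0.6, 0.7],               # welfare budget at reduced levels (cf. Eqs. 24-25)
    [0.0, 0.0, -0.7, -0.5],             # labour-contribution floor (cf. Eq. 26)
]
b_ub = [4.0, 2.8, -0.9]                 # hypothetical right-hand sides

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, -res.fun)                  # group quotas and total intake
```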
Suggestions
Peace and development remain the main theme of the present era; however, the European migrant crisis [9] has begun to spread at unprecedented speed. Settling refugees on humanitarian principles is a tricky problem that European governments must solve. Combining the realistic factors, we find many other aspects that greatly impact the refugee quota plan [10], such as national policies, external environmental factors and the refugees' own qualities. On the basis of the above research, we integrate all aspects and make the following recommendations to the EU: (a). Release the detailed refugee quota plan publicly and explain how the plan was formulated; (b). Do ideological work with the general public so that they do not conflate terrorists with refugees; (c). Reinforce cooperation with international communities and strive for better solutions; (d). Treat settled refugees fairly, like ordinary citizens; (e). Strengthen basic infrastructure construction and improve the health care system; (f). Supervise the implementation of the refugee quota plan.
Combining the results of our research, we put forward the above suggestions and sincerely hope these proposals can help both the refugees and the EU get through this tough time.
Strengths and Weaknesses
Our model has the following features.
Advantages
(a). Many factors influence the prediction of the refugee population, and the structural relationships between them are complex, so precise calculation is difficult. By using the grey model, we integrate all these factors into the time dimension and downplay each factor's individual influence on the population, which makes the results easier to obtain and the prediction more accurate.
(b). This model can consider many factors at the same time, so we can use it to account fully for each recipient country's own situation and analyze specific issues country by country; thus the optimal allocation obtained after multi-objective optimization analysis is reasonable. It can investigate different objectives separately while respecting their relative priorities, so we obtain weight coefficients that can be used in a comprehensive evaluation analysis model.
Weaknesses
(a). The GM(1,1) model [11] has certain restrictions on its conditions of use. It describes quantities that change exponentially, so it is applicable to prediction systems that develop and change according to an exponential law.
(b). The growth of the refugee population can be greatly affected by a single factor, so it differs from natural population growth. Therefore, the grey model can only make short-term predictions.
(c). Choosing the constraint conditions is important, and further work is needed to discover constraints hidden in the problem. Meanwhile, it would be better to transform the single-objective programming problem into a multi-objective decision-making problem. When analyzing and solving the same problem, the model we can build is relatively fixed, and we cannot analyze and compare it in different ways.
Model Extensions
(a). The grey model can be used in many settings, such as pest control, population growth and the prediction of business outlooks, and it is convenient for users to decide whether to choose this model simply through the accuracy test.
(b). Multi-objective optimization analysis is a reasoned analysis from multiple angles, so it can help us give more comprehensive proposals and account for the complicated conditions of actual situations. This method can also be applied to many problems determined and limited by multiple factors, such as the allocation of water resources. In short, this model is scientific and practical.
Conclusions
Overall, this study first predicts the growth tendency of refugee populations via the grey model and then gives the optimal refugee quota via multi-objective analysis. According to the prediction, the number of refugees will increase from 5.74 million in 2016 to 11.37 million in 2017, reaching a peak of about 22.53 million in 2018. Hence, Europe will face tremendous pressure on refugee problems in the next few years. This is not only a management problem but also a humanitarian trial. The relations between European governments, the local people in Europe and the refugees are complex; once the interests of any party are violated, the issue becomes the focus of the world. So it is high time to give an optimal refugee quota plan that distributes the refugees among the member states so that Europe can develop normally. As OECD Secretary-General Angel Gurria said: "Immigration is far from a burden, but a real treasure." [12] We think the reasonable distribution of refugees would benefit the member states rather than bringing crises. In this paper, we calculate a specific quota plan for five representative countries. Based on our research, we put forward some feasible suggestions to the EU. We sincerely hope this research can help the EU and its member states solve the refugee crisis promptly and reasonably.
Genetic Diversity in the Camel Tick Hyalomma dromedarii (Acari: Ixodidae) Based on Mitochondrial Cytochrome c Oxidase Subunit I (COI) and Randomly Amplified Polymorphic DNA Polymerase Chain Reaction (RAPD-PCR)
Hyalomma dromedarii ticks are important disease vectors to camels in the UAE and worldwide. Ticks can be identified using DNA-based techniques. In addition, such techniques could be utilized to study the intraspecific genetic diversity in tick populations. In this study, the genetic diversity of four H. dromedarii populations was investigated using mitochondrial cytochrome c oxidase subunit I (COI) gene and randomly amplified polymorphic DNA polymerase chain reaction (RAPD-PCR). The results showed that both of the aforementioned techniques produced similar grouping patterns. Moreover, they revealed that the four tick populations had high levels of genetic similarity. However, one population was slightly different from the three other populations. The current study demonstrated that H. dromedarii ticks in the UAE are very similar at the genetic level and that investigating more locations and screening larger numbers of ticks could reveal larger genetic differences.
Introduction
In order to successfully control tick-borne diseases, it is very important to develop good identification techniques that enable reliable separation of the different tick species. RAPD-PCR is a molecular technique that predates the development of DNA sequencing; it has been used for many years by taxonomists successfully and reliably [1] [2] [3]. With the advent of DNA sequencing, however, new molecular identification techniques were developed that use segments of certain genes to provide good species identification. In one study [4], researchers developed a DNA barcoding system for the identification of Ixodid ticks using segments of the 16S and COI genes. The identifications were successful; however, the authors noted serious deficiencies in the available 16S and COI information for some tick species, so additional research was needed to resolve this problem.
In another study, medically important ticks were identified using a 658 base pair DNA barcode sequence from the COI gene [5]. Additionally, a genetic cluster analysis was performed on the barcode sequences of 26 Ixodid species. DNA barcoding was reported to be a very good tool for clinicians seeking correct identification of tick specimens. To date, a COI gene fragment is the standard marker for DNA barcoding, although there are limitations to its use with some species and taxa [6]. To study their relative effectiveness, the COI, 16S ribosomal DNA (rDNA), nuclear ribosomal internal transcribed spacer 2 (ITS2) and 12S rDNA markers were compared for tick species identification. The results showed that the intra-specific divergence of each marker was much lower than the inter-specific divergence. In addition, that study indicated that COI was not significantly better than the other markers in terms of its rate of correct sequence identification; nonetheless, COI should remain the first choice for tick species identification. Moreover, a study reported that DNA barcoding was successfully used in tick identification and that the K2P distances of most interspecific divergences exceeded 8%, while intraspecific distances were usually lower than 2% [7]. However, that study indicated that more research was required to resolve ambiguous species delimitation in some problematic genera. It should also be mentioned that there is a chance of misidentification when using DNA-based techniques: a comparative identification test of ticks occurring in Western Europe and Northern Africa found a sizable misidentification rate, and the recommendation was that a combination of certain genes may be needed to adequately identify the target species [8].
Ticks are vectors of many pathogens and parasites that can cause serious diseases in humans, domestic animals and wildlife [9] [10]. In different parts of the world, such as North Africa, Southern Europe, the Middle East, Central Asia and China, Hyalomma (Acari: Ixodidae) ticks are common and may transmit Crimean-Congo hemorrhagic fever (CCHF), rickettsiae and tropical theileriosis [11] [12]. One of the potentially important parasites for domestic animal health in the Middle East is Theileria annulata, because it infects cattle and is transmitted by Hyalomma spp. [12] [13] [14]. Ornithodoros savignyi collected from camels in Egypt was reported to harbor Rickettsia-like microorganisms [15].
Anaplasma marginale, Coxiella burnetii and Rickettsia aeschlimannii have been reported from Egypt, Somalia and Nigeria in several Hyalomma spp., including H. dromedarii Koch, 1844 [12] [16]. Seventeen species of the genus Rickettsia are categorized within the Spotted Fever Group (SFG) rickettsiae [17], and several have been documented in the Middle East, although some reports are considered questionable [12]. In the UAE, a spotted fever group Rickettsia sp. and Theileria annulata were recorded for the first time in H. dromedarii ticks in the country [18]. Also, a Coxiella-like endosymbiont was reported in argasid ticks (Ornithodoros muesebecki) from a Socotra Cormorant colony in Umm Al-Quwain, United Arab Emirates [19].
The aim of this study was to investigate the intraspecific genetic diversity of four H. dromedarii tick populations from the UAE.
Tick Collection
Camel ticks, H. dromedarii (Figure 1), were collected manually from infested camels; twelve ticks were collected per location.
DNA Extraction and PCR
Genomic DNA of camel ticks was extracted from tick legs using a Maxwell 16 Tissue DNA purification kit (Promega, USA) following the manufacturer's protocol. For the mitochondrial cytochrome c oxidase subunit I (COI) gene, the following primer pair was used: FishF1: 5'-TCAACCAACCACAAAGACATTGGCAC-3' and FishR1: 5'-TAGACTTCTGGGTGGCCAAAGAATCA-3'. The template DNA was amplified in a 25 μl reaction mixture containing 50 ng of tick DNA, 10 pmol of each primer, and 12.5 μl of 2x PCR Master Mix (Promega, USA). Reaction mixtures were preheated at 95˚C for 5 min. Amplifications were carried out for 35 cycles (95˚C for 30 s, 58˚C for 30 s, and 72˚C for 1 min), with a final extension at 72˚C for 5 min, in a Swift MaxPro thermocycler (ESCO, Singapore). For the RAPD-PCR assay, a set of 13 decamer primers (Operon Technologies, USA) was used; their sequences are presented in Table 1. These primers were selected because they generated consistent and reproducible RAPD banding patterns on the agarose gel. A HotStarTaq™ Master Mix kit (Qiagen, USA) was used for the PCR reactions. The reaction mixture included 20 ng template DNA, 12.5 μL HotStarTaq™ Master Mix, and 50 pM primer in a total volume of 25 μL.
The thermal cycling program was as follows: an initial activation step at 95˚C for 5 min followed by 40 cycles of 1 min at 94˚C, 1 min at 36˚C and 2 min at 72˚C, with a final extension step at 72˚C for 5 min. To detect contamination, a negative control containing all PCR components except the DNA template was run in every experiment. To ensure reproducibility of the individual bands, PCR reactions on the extracted DNA were conducted at least three times.
Amplification products were resolved by electrophoresis in a 1.5% agarose gel, which was stained with ethidium bromide and photographed with a UVDI gel documentation system (Major Science, Taiwan).
PCR Amplicon Sequencing and Analysis
In the RAPD-PCR, the banding pattern produced by each primer on the agarose gel was scored as one or zero (one for the presence of a band and zero for its absence). Band scores were analyzed using cluster analysis in the SPSS statistical package (IBM SPSS Statistics for Windows). The PCR amplicons were purified with a PCR purification kit (Qiagen, Germany) following the manufacturer's protocol. The purified PCR amplicons were sequenced (Sanger sequencing) by Source Bioscience (Cambridge, UK). Bases with low-confidence calls were trimmed from the beginning and end of each sequence. Sequences were submitted to GenBank and received accession numbers, and were analyzed with the NCBI BLAST tool (http://blast.ncbi.nlm.nih.gov/Blast.cgi).
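To make the scoring-and-clustering step concrete, here is a minimal Python sketch (the study used SPSS; the band matrix below is a hypothetical illustration, not the actual gel data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Hypothetical 0/1 band-score matrix: rows are the four tick populations,
# columns are bands produced by the random primers (1 = band present).
labels = ["Falaj Al-Mualla", "Bayata", "Al-Wagan", "Al-Samha"]
scores = np.array([
    [1, 1, 0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 0, 1],
    [0, 1, 1, 0, 1, 1, 0],
])

# Jaccard distance on presence/absence data, then average-linkage
# clustering, analogous to the SPSS cluster analysis.
dist = pdist(scores, metric="jaccard")
tree = linkage(dist, method="average")
info = dendrogram(tree, labels=labels, no_plot=True)
print(info["ivl"])   # leaf order reflects the grouping of populations
```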
In addition, sequence alignments and tree generation were conducted in MEGA6 [20].The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) was shown next to the branches.Genetic distances were computed using the Kimura 2-parameter method and were in the units of the number of base substitutions per site.
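For reference, here is a minimal sketch of the Kimura 2-parameter distance itself (a standard formula, not the MEGA6 implementation; the input sequences are hypothetical):

```python
import math

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned sequences.

    P is the proportion of transitions (A<->G, C<->T), Q the proportion
    of transversions; d = -0.5*ln(1-2P-Q) - 0.25*ln(1-2Q)."""
    purines = {"A", "G"}
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and ((a in purines) == (b in purines)))
    transversions = sum(1 for a, b in pairs
                        if a != b and ((a in purines) != (b in purines)))
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# e.g. two COI fragments differing by one transition over 20 sites
print(k2p_distance("ATGGCTACCTTAGGGACCTT", "ATGGCTACCTCAGGGACCTT"))
```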
RAPD-PCR
The analysis of the banding patterns produced by the random primers revealed that the four H. dromedarii populations were grouped into four clusters (Figure 2). In this cluster analysis, Bayata, Al-Wagan and Falaj Al-Mualla were closer to each other than to Al-Samha.
COI Gene
The sequences of the PCR amplicons (≈600 bp) of the mitochondrial COI gene were submitted to GenBank and received accession numbers: Falaj Al-Mualla: MG188798, Bayata: MG188799, and Al-Samha: MG188800. The Al-Wagan sequence, KJ022631, had already been submitted to GenBank in a previous study [18]. The sequences of Falaj Al-Mualla, Al-Wagan and Bayata were 100% identical to each other, while the Al-Samha sequence differed from them at two nucleotides. The intra-specific divergence value was 0.002. In the maximum-likelihood (Figure 3) and neighbor-joining (Figure 4) trees, the four sequences appeared in a cluster of H. dromedarii sequences in which the Al-Samha sequence showed some distance from the other three. There was 99% - 100% similarity and 99% - 100% sequence coverage between the four UAE sequences and other H. dromedarii sequences in GenBank such as KT920181.1, AJ437061.1 and KY548842.1. The inter-specific divergence value was 0.091.
Discussion
RAPD-PCR is a somewhat older molecular technique than Sanger sequencing, yet it is a good tool for studying genetic diversity; one review examined more than 9000 papers that used RAPD [21]. The use of DNA sequencing of the COI gene is a very reliable identification tool for animal species. The current study did not show high levels of genetic diversity among the H. dromedarii ticks collected from the four locations in the UAE. In part, this could be explained by the fact that sampling was limited to a few areas of the UAE over a relatively short period of time. Therefore, future studies should investigate more locations and screen larger numbers of ticks in order to possibly find bigger genetic differences. In this study, the four tick populations showed no intraspecific sequence difference except in one population (Al-Samha), which differed at two nucleotides. Moreover, there was 99% - 100% similarity between the four UAE sequences and other H. dromedarii sequences in GenBank. The maximum-likelihood and neighbor-joining trees produced almost identical topologies, and the branches were supported by high bootstrap values.
The UAE tick sequences appeared in clusters of H. dromedarii on the trees.
Other studies have reported low genetic diversity when using the COI gene. In a study on the molecular identification of necrophagous Muscidae and Sarcophagidae fly species by COI, the data showed very low intraspecific sequence distances and species-level monophyly [22]. The COI gene was also used successfully in an assessment of genetic variability in dog ticks [23]. Studying genetic diversity in H. dromedarii tick populations is very important in case some populations differ by harboring certain disease-causing agents or by carrying acaricide resistance genes. Therefore, future studies should include as many geographical areas of the UAE as possible. This is very important for successful disease and acaricide resistance management programs.
Conclusion
In conclusion, the current study demonstrated that H. dromedarii ticks in the UAE are very similar at the genetic level based on the four sampled locations. Also, the RAPD-PCR and the sequencing of the COI gene produced similar results. To our knowledge, the present study provides the first report of genetic diversity in H. dromedarii in the UAE.
Ticks were collected manually from the infested animals and kept in 50-mL plastic tubes inside a −80˚C freezer. They were collected from four locations: Falaj Al-Mualla, Bayata, Al-Wagan, and Al-Samha. Falaj Al-Mualla and Bayata are located in the Emirate of Umm Al-Quwain, while Al-Wagan and Al-Samha are in the Emirate of Abu Dhabi.
Figure 2 .
Figure 2. Dendrogram of a cluster analysis showing three tick populations with higher DNA similarity versus a fourth population with less similarity (SPSS statistical package, IBM SPSS Statistics for Windows).
Table 1 .
Random primers and number of generated bands of randomly amplified polymorphic DNA polymerase chain reaction (RAPD-PCR) in camel ticks.
Inter- and Intra-Host Viral Diversity in a Large Seasonal DENV2 Outbreak
Background High genetic diversity at both inter- and intra-host level are hallmarks of RNA viruses due to the error-prone nature of their genome replication. Several groups have evaluated the extent of viral variability using different RNA virus deep sequencing methods. Although much of this effort has been dedicated to pathogens that cause chronic infections in humans, few studies investigated arthropod-borne, acute viral infections. Methods and Principal Findings We deep sequenced the complete genome of ten DENV2 isolates from representative classical and severe cases sampled in a large outbreak in Brazil using two different approaches. Analysis of the consensus genomes confirmed the larger extent of the 2010 epidemic in comparison to a previous epidemic caused by the same viruses in another city two years before (genetic distance = 0.002 and 0.0008 respectively). Analysis of viral populations within the host revealed a high level of conservation. After excluding homopolymer regions of 454/Roche generated sequences, we found 10 to 44 variable sites per genome population at a frequency of >1%, resulting in very low intra-host genetic diversity. While up to 60% of all variable sites at intra-host level were non-synonymous changes, only 10% of inter-host variability resulted from non-synonymous mutations, indicative of purifying selection at the population level. Conclusions and Significance Despite the error-prone nature of RNA-dependent RNA-polymerase, dengue viruses maintain low levels of intra-host variability.
Introduction
Dengue is the most frequent arthropod-borne human viral infection in tropical and sub-tropical regions, with one hundred million people at risk annually. In the last two decades, Brazil has been responsible for more than 60% of the total reported dengue fever cases in the Americas, experiencing severe yearly outbreaks of serotypes 1 to 3 [1]. In 2008, serotype 4 re-emerged in the northern region of Brazil [2] and rapidly spread to other regions, causing small outbreaks in at least six different states; today, the country experiences outbreaks of all four serotypes. For this study, samples were collected during a large outbreak in the coastal region of the State of São Paulo during the first semester of 2010, with more than 108,000 reported cases, including 81 deaths associated with severe forms of the disease [3]. This epidemic affected a geographic area near the city of Santos, with the greatest number of cases reported in five nearby cities (Figure 1).
Dengue has limited intra-genotype diversity, which is modulated by two main processes: (i) the host's immune response, which exerts a selective pressure on the virus, and (ii) the bottlenecks at transmission (two per cycle in this case, given the alternating vertebrate-invertebrate cycle) [4]. As a result, a balance between the variability gained through mutations in each host and the maintenance of these mutations through natural selection shapes the viral diversity observed at the population level.
Our current understanding of dengue intra-host diversity has been mostly limited to the analysis of single genes of dengue serotypes 1 to 3 using conventional cloning and Sanger sequencing methods [5][6][7]. Most studies agree that the level of intra-host diversity is relatively high, with mean pairwise differences reaching up to 1.67%. Recently, a study by Thai et al. [8] assessed dengue diversity by sampling a larger number of clones while applying more rigorous methods to validate the mutations, and reported much lower intra-host variability estimates than previously reported (0.008%). In the last decade, deep sequencing technologies have emerged that can sequence individual molecules directly from PCR amplicons without the need for cloning, enabling researchers to study viral quasispecies diversity with unprecedented resolution. Work by our group and others has employed 454 pyrosequencing, a deep sequencing method, for whole-genome shotgun sequencing of RNA viruses causing chronic infections, such as hepatitis C virus (HCV), human immunodeficiency virus (HIV), and simian immunodeficiency virus (SIV) [9][10][11][12][13][14], but studies on the evolutionary dynamics of acutely infecting viruses within individuals are limited [15]. Here, we employed pyrosequencing in combination with a transposon-based fragmentation method to capture the inter- and intra-host diversity of near full-length DENV-2 genomes covering the entire open reading frame (ORF) in representative samples obtained during the 2010 outbreak in cities along the São Paulo coastline [3]. By sequencing viruses sampled from 10 infected individuals, we confirmed that the level of intra-host viral variability is much lower than previously reported, at approximately 0.002%, and determined the level of variability throughout four months of the outbreak.
Ethics Statement
The current project was conducted after Hospital das Clínicas -University of São Paulo's Institutional Review Board (CAPPesq) approval, under protocol #0652/09. Written informed consent was obtained from all participants. Written informed consent was also obtained for children through their parents.
Patients and Samples
Serum samples were obtained from patients with clinically suspected dengue fever treated at the Ana Costa Hospital in Santos, in the coastal region of the State of São Paulo, Brazil, during the 2010 DENV2 epidemic [16]. Dengue infections were confirmed and serotypes determined using commercially available rapid tests (NS1, IgG, and IgM) as well as multiplex reverse transcriptase polymerase chain reaction (RT-PCR) [17]. Patients were selected to be representative of the outbreak; samples were therefore obtained from cases diagnosed from February through May 2010, from both classic dengue fever cases and severe cases.
To determine the viral load, RNA was extracted from 140 μl of plasma using the QIAamp Viral RNA Mini Kit (Qiagen, Valencia, USA) according to the manufacturer's instructions. RT-PCRs were performed in duplicate using the SuperScript III Platinum SYBR Green One-Step RT-qPCR with ROX kit (Invitrogen, Inc., USA) and pan-dengue primers covering all four serotypes [18]. An internal control (Bovine Viral Diarrhea Virus, BVDV, a flavivirus grown in cell culture) was added to the samples before extraction and also subjected to a parallel real-time PCR assay. Supernatant from DENV-3 cell cultures was included as an external control in every RT-qPCR run.
Several samples were also isolated in cell culture using the established C6/36 Aedes albopictus cell line (ATCC number CRL-1660). After the onset of a cytopathic effect (CPE) associated with DENV infection, supernatants were collected and subjected to RT-PCR in order to confirm the serotype. To discriminate between primary and secondary dengue virus infections, the antigen-binding avidity of specific IgG was measured using a modified Dengue ELISA IgG kit (Focus Technologies) [19].
RNA Extraction and Deep-sequencing
500 μl of plasma from acutely ill patients was centrifuged at 3,000 rpm for 5 min to remove cellular debris, followed by centrifugation at 14,000 rpm for 60 min to concentrate the virus. To evaluate whether the viral population diversity found in plasma is maintained after isolation in insect cell culture, we isolated viruses from one acutely infected patient and sequenced them (i) directly from plasma and (ii) after one week of culture in the C6/36 cell line. Viral RNA was extracted from 200 μl of plasma or cell culture supernatant containing the concentrated virus using the QIAamp Viral RNA Mini Kit (Qiagen, Valencia, USA) according to the manufacturer's instructions. Primers were designed to amplify the complete open reading frame in three to six overlapping PCR amplicons of approximately 2-4 kb (see Table S1 for primer sequences), based on conserved regions identified in a multiple sequence alignment of complete DENV2 genomes of the American/Asian genotype. Viral RNA was reverse transcribed and amplified using the SuperScript III high-fidelity one-step reverse transcription-PCR (RT-PCR) kit (Invitrogen, Life Technologies, Carlsbad, CA) with the following conditions: 50˚C for 30 min; 94˚C for 2 min; 40 cycles of 94˚C for 15 s, 55˚C for 30 s, and 68˚C for 2-4 min; and 68˚C for 5 min. Following amplification, PCR amplicon bands were isolated by gel electrophoresis (1% agarose) and purified using a Qiagen MinElute gel extraction kit (Qiagen, Valencia, CA). Each amplicon was quantified using Quant-IT HS reagents (Invitrogen, Life Technologies, Carlsbad, CA), and all amplicons from a single viral genome were pooled at equimolar ratios. Each pool was then quantified, and approximately 50 ng was fragmented using a Nextera DNA sample prep kit (Roche Titanium compatible; Epicentre Biotechnologies, Madison, WI) according to the manufacturer's protocol. Final libraries representing each genome were characterized for average size on a high-sensitivity DNA Bioanalyzer chip on a model 2100 Bioanalyzer (Agilent Technologies, Loveland, CO) and quantified with Quant-IT HS reagents (Invitrogen, Life Technologies, Carlsbad, CA). Libraries were then subjected to emulsion PCR, and enriched DNA beads were loaded onto a picotiter plate and pyrosequenced on a Roche/454 GS Junior sequencer using titanium chemistry (454 Life Sciences, Branford, CT).
Pyrosequencing data were analyzed using CLC Genomics Workbench 5.5 (CLC Bio, Aarhus, Denmark). Initially, reads were trimmed to remove short and low-quality reads. Using a phylogenetically related DENV2 genome as reference (GenBank ID GU131864), reads were assembled and the resulting consensus sequences were then used as each subject's sequence in a reference-guided alignment to assess intra-host variability. The single nucleotide polymorphism (SNP) analysis was performed using CLC's SNP analysis tool with the following parameters: window length = 7, maximum gap and mismatch count = 2, minimum central base quality = 30, minimum average quality for window bases = 25, minimum coverage = 100×, and minimum variant frequency = 1% (defined as the percentage of nucleotides that differ from the reference). Parameters were chosen to include variants more likely to be biologically significant, based on previous studies using Roche/454 data [12,14,20]. The 1% frequency cut-off maximizes the chance of detecting minor variants while reducing the probability of detecting errors generated in vitro [12,14]. Becker and co-workers used this cut-off as well as cut-offs below 1% for comparison and showed that if the coverage is higher than 100×, variants found at 1% frequency are most likely real, and the lower the coverage, the higher the probability that a variant is an artifact [20]. Also, by requiring high quality values for both the central base (the SNP base) and the window bases (bases surrounding the central SNP base, both upstream and downstream), as well as a minimum coverage of 100×, only areas of high quality and high coverage were considered for SNP calling. To avoid erroneous base calls in homopolymer regions, we mapped all homopolymers along the genome by searching for regions of three or more identical consecutive nucleotides, and the deletions and insertions (DIPs) and SNPs that mapped to these regions were dismissed from Roche/454-generated sequences due to the high probability of miscalls.
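A minimal sketch of this filtering logic follows, using the thresholds listed above; the SNP dictionary layout and the example values are hypothetical, not the CLC implementation:

```python
import re

def homopolymer_sites(genome, min_len=3):
    """Mask sites inside runs of >= min_len identical nucleotides,
    mirroring the exclusion of homopolymer regions from 454/Roche
    SNP calls."""
    masked = set()
    for m in re.finditer(r"(A+|C+|G+|T+)", genome):
        if m.end() - m.start() >= min_len:
            masked.update(range(m.start(), m.end()))
    return masked

def keep_snp(snp, masked):
    """Apply the thresholds described above to one called SNP."""
    return (snp["pos"] not in masked
            and snp["central_quality"] >= 30
            and snp["window_avg_quality"] >= 25
            and snp["coverage"] >= 100
            and snp["variant_freq"] >= 0.01)

snps = [{"pos": 13, "central_quality": 35, "window_avg_quality": 30,
         "coverage": 420, "variant_freq": 0.016}]
masked = homopolymer_sites("ACGTAAATTTGGCACGT")
print([s for s in snps if keep_snp(s, masked)])
```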
In addition to the samples sequenced on the Roche/454 platform, one additional plasma sample was sequenced on the Illumina MiSeq by direct sequencing as previously described [14]. Briefly, after isolation of RNA from cell-free plasma and DNase I treatment, double-stranded DNA was generated using the SuperScript double-stranded cDNA Synthesis kit (Invitrogen, Carlsbad, CA, USA) primed with random hexamers, thereby omitting any amplification. Approximately 1 ng of cDNA was then fragmented using the Nextera DNA Sample Preparation kit (Illumina, San Diego, CA, USA), quantified and subjected to deep sequencing on the Illumina MiSeq. Sequencing data were analyzed with the same parameters used for the pyrosequencing data in CLC Genomics Workbench 5.5.
Sanger Sequencing
To compare the variability obtained by deep sequencing methods with traditional cloning and Sanger sequencing, the capsid gene and part of the prM gene of sample ACS538 were subjected to traditional Sanger sequencing. Viral RNA was extracted as described above, and single-stranded cDNA was generated with random hexamers using the High Capacity cDNA Reverse Transcription Kit (Invitrogen, Life Technologies, Carlsbad, CA) followed by amplification with the Taq Platinum Hi Fidelity kit (Invitrogen, Life Technologies, Carlsbad, CA) using the primers described in Table S1. The resulting 615-nucleotide (nt) fragment was cloned into the pCR-TOPO TA vector (Invitrogen, Life Technologies, Carlsbad, CA) and transformed into chemically competent E. coli DH5alpha cells. Thirty-one clones were sequenced on the ABI3100 using BigDye Terminator chemistry (Applied Biosystems, Warrington, UK) with M13 forward and M13 reverse primers. Chromatograms were analyzed in CodonCode Aligner v.3.0 (available at http://www.codoncode.com/) with a Phred quality score of 20 as the cut-off for trimming low-quality sequence, and were manually aligned and inspected using Se-Al (http://tree.bio.ed.ac.uk/software/seal/).
Phylogenetic Analysis
Consensus complete genome sequences obtained on the GS Junior and the MiSeq were aligned to globally sampled DENV2 genomes (see Table S2 for a description of the sequences used) using Muscle [21]. Bayesian phylogenetic trees were constructed for 124 DENV2 genomes using MrBayes v.3.1.2 [22]. Two independent runs of 50 million steps each were performed using the GTR substitution model with gamma-distributed rate variation and a proportion of invariant sites, as suggested by Akaike's information criterion (AICc) in jModeltest [23]. Sampled trees were summarized and the consensus tree was visualized in FigTree v1.3 (http://tree.bio.ed.ac.uk/). Selection pressures were evaluated by comparing the rate of non-synonymous changes per non-synonymous site (dN) to the rate of synonymous changes per synonymous site (dS), inferred for the entire polyprotein as well as for specific sites (by codon) of all consensus genomes by single-likelihood ancestor counting (SLAC) using the open-source HyPhy package [24]. Genetic diversity was also estimated for the consensus complete genomes at both the nucleotide and amino acid levels using Mega 5 [25]. MacClade v4.08 was used to assess nucleotide and amino acid substitutions along the branches [26].
Epidemiological and Evolutionary Patterns of Santos DENV2
Consensus complete genomes of American/Asian DENV-2 were reconstructed from overlapping RT-PCR amplicons deep sequenced on the Roche/454 GS Junior instrument. Additionally, one plasma sample from an individual with classical dengue was sequenced on the Illumina MiSeq by direct sequencing [27]. In total, viral complete genomes were generated from ten infected patients who manifested either classical fever (eight individuals) or severe dengue (two individuals), and also from one sample cultured in the C6/36 cell line. Disease outcome was determined according to the WHO's 2009 Dengue Guidelines [28], with three possible classifications: (i) dengue without warning signals (classical), (ii) dengue with warning signals (classical WS), and (iii) severe dengue. Table 1 summarizes all samples used in this study with key clinical and laboratory features. There was no evident correlation between age, gender or viral load and disease outcome. The complete genomes of all DENV2 viruses were submitted to GenBank under IDs JX286516 to JX286526 (Table 1).
To determine the extent of DENV variability at the population level, we examined the consensus sequences of all viruses. We were not able to identify specific nucleotide or amino acid substitutions that distinguished classical from severe cases. Nucleotide pairwise distances calculated for all consensus genomes showed a high degree of conservation, with the maximum distance observed between DGV37 and DGV69 (0.0044). Selection pressure analysis performed across the whole coding region and also by codon revealed no signs of positive selection (ω = 0.03) and no codon under selection. The mean genetic diversity of the population (π) across the entire polyprotein was 0.002 at the nucleotide and 0.0008 at the amino acid level, indicative of protein conservation. Changes at 84 nucleotide sites were distributed throughout the coding region, and only nine were non-synonymous, three of them found exclusively in ACS542 (prM, NS3 and NS4a) (Figure 2B). Surprisingly, while the envelope gene exhibited low variability at the inter-host level, NS5 showed higher variability among samples (Figure 2A). It is also interesting that the variability among the DENV2 population sampled in this outbreak was much higher than that observed in viruses that circulated in a 2008 epidemic in Ribeirão Preto, a city in the north of the State of São Paulo (light green squares in Figure 2B), even though they belong to the same lineage.
Phylogenetic Analysis
To evaluate the dengue viruses in an epidemiological context, a Bayesian phylogenetic tree was constructed using 124 complete genomes comprising all five DENV-2 human genotypes and the eleven genomes sequenced from the 2010 DENV2 Santos outbreak (Figure 3A) (GenBank accession numbers are listed in Table S2). All genomes sequenced in this study clustered with viruses that circulated in previous outbreaks in the São Paulo and Rio de Janeiro States in 2008, but separately from those that circulated in older epidemics [16,29]. Based on our results, Brazilian DENV viruses are not monophyletic, with viruses from Cuba, the USA and the Dominican Republic clustering among those sequences.
The eleven 2010 DENV-2 genomes sequenced here segregated into two sub-populations, named clade 1 (DGV34, DGV69, DGV106, ACS538, ACS542 and ACS721) and clade 2 (DGV37, DGV91, ACS46p, ACS46sn and ACS380) (Figure 3B). By mapping all changes along the branches leading to them, we found that viruses from clade 2 accumulated eleven nucleotide changes relative to clade 1 (eight C-to-T and two G-to-A transitions and one C-to-A transversion). One of them, a transition from cytosine to thymine at position 180 of the envelope gene, led to an amino acid replacement (Thr to Ile). We also detected variability in NS1, NS3 and NS4b but found no evidence of association with virulence or pathogenicity.
Deep Sequencing of DENV2 Genomes
On average, 44,000 reads per sample (±20,800) were generated, with an average coverage of approximately 1,226 reads per nucleotide on the Roche/454 platform (Table 2). Between 0.1 and 1.7% of the reads could not be mapped to dengue genomes. BLAST analysis revealed that 99% of the unspecific reads derived from bacteria, while none of the remaining reads matched the human genome. Less than 0.001% of the unmapped reads came from DENV.
Homopolymer regions can induce erroneous base calls on the Roche/454 platform; therefore, all SNPs and DIPs that mapped to those regions were disregarded in subsequent analyses. Between 32 and 85 DIPs per sample were recovered by Roche/454 sequencing, and only two DIPs by Illumina. More than 90% of the Roche/454 DIPs mapped to homopolymer regions, and about 80% of them were identified as deletions in homopolymers of A, followed by much lower proportions of Ts, Cs and Gs, respectively (Figure S1). The presence of DIPs was correlated with the size of the homopolymer, and most DIPs occurred in regions of more than 5 nucleotides (Figure S1, top right graph). A total of 624 homopolymers (ranging from 3 to 6 consecutive bases) were found along the DENV2 genome, resulting in 2,178 sites excluded from the analysis of intra-host genetic diversity (Table 2).
The level of intra-host variability obtained by Illumina direct sequencing was also evaluated for sample ACS380 and was comparable to that observed by 454 pyrosequencing (Table 2, Tables S3 and S4).
Sanger Sequencing of Capsid and prM
In parallel, the entire capsid gene and part of the prM gene of ACS538 (400 base pairs) were amplified under similar conditions and 31 clones were sequenced. The resulting 12,378 sequenced nucleotides revealed twelve changes randomly distributed among the clones. One change was observed twice; the remaining eleven were singletons. None of the changes were detected by deep sequencing, and the single variable site in the capsid recovered by deep sequencing was not observed in the Sanger sequences, probably due to its very low frequency (1.6%).
Low Intra-host Genetic Diversity of DENV2
Overall, the two deep sequencing methods revealed between 10 and 44 variable sites per genome, corresponding to approximately 0.002% variability across all sequenced sites (Table 2; see also Table S3 for the frequency and coverage of all SNPs). DGV37, ACS46 and ACS721 showed the highest conservation, with only 10 variable sites each, and ACS380 the lowest, with 44 variable sites. The average viral load of the samples was around 7×10⁶ copies per ml, suggesting that the low level of diversity observed was not a result of low template abundance.
Among all 199 variable sites, 185 mapped to the coding region and 14 were found in either the 5'UTR or the 3'UTR. Among the sites mapping to the polyprotein, 93 (50%) were in the first or second codon position, resulting in non-synonymous changes. High proportions of non-synonymous mutations at the intra-host level have previously been reported for DENV1 [30], consistent with sequences evolving under or close to neutrality. In contrast, only 10% of all changes detected across the consensus sequences were non-synonymous, confirming that distinct selection pressures act on short-term and long-term dengue evolution [30].
Variable sites were randomly distributed across the genes. Although we hypothesized that the envelope gene would accumulate the greatest number of viral variants due to the pressure exerted by the host immune system, our data did not support this hypothesis, since only 13 sites in the envelope region exhibited variability (2.6%). Among the seven non-synonymous variants, four were in Domain II (EDII). With the exception of I129F/V in DGV37, where the variant phenylalanine was at a frequency of 6%, the remaining non-synonymous variants occurred at frequencies below 2%, and none of them mapped to known immunogenic regions or were related to infection. Three nonsense changes were found (capsid, NS3 and NS4b), but all appeared at frequencies below 2% (Table S4; see also Table S3 for the frequency and coverage of all SNPs). Comparison of ACS46 sequenced directly from plasma with the same sample after one week of culture revealed similar levels of genetic variability (4.2×10⁻⁵ and 6.2×10⁻⁵, respectively) (Table 2). Although the consensus genomes were identical, the variable sites within the populations were not the same: only two synonymous variable sites were detected in both virus populations, and a silent variant with a frequency of 36% in the cell-isolated viruses was not detected at all in the plasma population.
Despite such low intra-host variability, particular sites showed variability in distinct viral populations (Table S4). Among the fourteen 'shared' variable positions, seven were non-synonymous, distributed across the capsid, envelope, NS3 and NS5, and only capsid 26V was also variable at the inter-host level.
Discussion
To date, within-host diversity of RNA viruses has been studied most comprehensively in HIV-1 and HCV [11][12][13][14][31]. Here, we analyze the intra- and inter-host diversity of dengue viruses from patients with classical and severe forms of the disease that circulated during the same epidemic.
One of the challenges in studying viral diversity using massively parallel sequencing is discriminating between bona fide and artificial variants, since the type and rate of error introduced by each platform differ. For example, insertion and deletion (DIP) errors occur more frequently on the 454/Roche platform, especially in homopolymer regions, while the Illumina platform is more prone to mismatch errors [32,33]. Furthermore, differences in sample preparation, such as PCR-induced errors, could also affect the results. Despite the 10-fold higher coverage obtained on the MiSeq platform, the number of variable sites and the total variability observed at the intra-host level were comparable between the two methods. Interestingly, almost all DIPs detected by 454/Roche sequencing mapped to homopolymer regions, suggesting that those variants were sequencing artifacts specific to this platform. Unfortunately, we did not have enough plasma from the nine individuals used for 454 sequencing to perform a head-to-head comparison.
Intra-host Genetic Variability
Although the results described in this report are not based on a number of patients large enough for statistical analysis, some considerations can be discussed.
Despite the generally accepted high substitution rate of dengue viruses, estimated at 6.5×10⁻⁴ substitutions/site/year [34,35], our data suggest that DENV-2 intra-host genetic variability is much lower than previously estimated for DENV serotypes 1 and 3 by Sanger sequencing [5,6,36]. Nevertheless, because of the very stringent parameters used to validate variants obtained through deep sequencing, it is possible that we missed some bona fide mutations. Parameswaran et al. also used deep sequencing to evaluate intra-host DENV2 variability in Nicaragua; although they likewise observed low intra-host variability, they found 10 to 100× more variable sites across the genome than we did, with no evidence of mixed infection [37]. However, about 3 of the 9 variable sites described by those authors as putative hotspots map to homopolymer regions and may reflect 454 errors. Considering the number of reads with variability at any given site relative to the total coverage at that site, the mean diversity of the intra-host viral population was 4.8×10⁻⁵, consistent with previous results reporting DENV1 intra-host variability of 7.2×10⁻⁵ [8]. Interestingly, while Thai et al. used cloning and Sanger sequencing to evaluate variability, they also applied rigorous methods to validate variants, eventually leading to results comparable to ours.
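This diversity metric can be made concrete with a short sketch; the site tuples below are hypothetical numbers, not the study's data:

```python
def intra_host_diversity(sites):
    """Mean intra-host diversity: variant-supporting reads summed over
    all sites, divided by total coverage summed over all sites. Each
    site is (variant_reads, coverage); the layout is a hypothetical
    representation of the SNP table."""
    variant_reads = sum(v for v, _ in sites)
    coverage = sum(c for _, c in sites)
    return variant_reads / coverage

# e.g. three variable sites in an otherwise monomorphic genome of
# ~10,700 sites at ~120x coverage (hypothetical numbers)
sites = [(6, 130), (2, 118), (4, 125)] + [(0, 120)] * 10700
print(intra_host_diversity(sites))   # on the order of 1e-5
```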
With few exceptions, most studies estimating DENV variability have used cloning and Sanger sequencing [5][6][7][8][36]. In this study, we compared the variability obtained by deep sequencing with that recovered by cloning and Sanger sequencing. Although an exhaustive comparison is not possible here due to the low number of clones sequenced, the high number of changes observed by Sanger sequencing (~0.1%) resembled previously reported results obtained with the same method, but clearly contrasted with the results seen by deep sequencing here and by others [5,7,8,37]. Importantly, Descloux and colleagues evaluated the sequence variation produced by their in vitro system and found an error frequency of 0.1% [5], which is exactly the SNP frequency found here and in previous Sanger-based studies.
Two SNPs found by Sanger sequencing (both A to G) were also present in 454/Roche reads. However, they mapped to homopolymer regions of 3 and 4 adenines and occurred at frequencies of 0.37% and 0.4%, respectively (data not shown). Thus, according to the parameters adopted here to consider a variant true (i.e., 1% frequency, Phred quality score above 30 and, most importantly, the exclusion of homopolymers), these variants were dismissed. We therefore suspect that many of the changes reported by studies that used Sanger sequencing without subsequent validation of the variants represent artifacts generated during cDNA synthesis and PCR amplification [12,38], which are more readily eliminated in deep sequencing analysis due to the high coverage and filtering parameters. More rigorous experiments must be performed to evaluate how many Sanger-detected variants are due to experimental errors.
Arboviruses usually harbor less nucleotide and amino acid variation than other RNA viruses due to their alternating replication in mammalian and mosquito cells [39][40][41]. This alternation between hosts keeps arboviruses on high-fitness peaks that rarely overlap and tolerate very few changes. In this scenario, genetically stable viruses would have evolutionary advantages over those that rapidly accumulate variants. In our case, the specialization of DENV to a single mammalian environment after urbanization may indeed have led to the selection of very specific genetic traits, further narrowing the optimal fitness peaks. Additional studies are needed, however, to determine how dengue restores adequate levels of diversity.
It is not unusual to use in vitro cell culture to amplify viruses obtained from infected individuals prior to genetic and molecular studies [42,43]. However, although one week of virus culture is probably not sufficient for the emergence of adaptive changes, the amount of variability gained or lost at the intra-population level during in vitro culture is largely unknown. To address this question, we compared viruses deep sequenced directly from human plasma with the same viruses after one week of culture in C6/36 mosquito cells. As expected, the consensus sequences were identical, but the sites showing variability at the intra-host level differed. These findings indicate that different (low) selection pressures are exerted by C6/36 culture [44], which ultimately leads to the generation of distinct viral populations. However, the influence of the distinct selection pressures imposed by mammalian and invertebrate hosts on the generation of dengue diversity needs more investigation.
There is little (and controversial) evidence that intra-host genetic variability is linked to disease severity [5,6,37]. Although the small number of patients analyzed at a single time point presents a limitation to the present study, the amount of intra-host viral variability did not differ between classical and severe dengue cases. Subtle effects of specific variants on disease severity may not be detectable in such a small cohort and larger numbers must be evaluated to address this hypothesis.
Although only a few sites exhibited variability in our study, some were consistently identified among distinct viral populations. Detection of the same variants in viruses circulating during the same epidemic suggests two possible, non-exclusive explanations: (i) common variable sites are a consequence of de novo mutations at less constrained sites (i.e., mutation hotspots). Mutation hotspots often reflect the mechanisms generating mutations at a particular site (e.g., enzyme-induced hypermutation, homopolymers and repeat regions) [45,46]. In viruses, hotspots are usually present at immunogenic sites and enable escape from the host immune system [47,48]. Nevertheless, the shared variable sites in our sequences were not in homopolymer or repeat regions and also appeared at low frequency, suggesting that such mutations were not advantageous, at least in this context. And (ii), minor variants are not immediately eliminated by genetic drift and could be maintained during transmission to a new host. Although the shared variable sites were found as minority populations within the host, it has been demonstrated that minor variants can be passed between individuals regardless of their fitness, sometimes for extended periods [36,49]. Considering that the viruses sequenced in this study were sampled over a four-month period, the maintenance of minor variants during this epidemic is a realistic scenario.
Genetic and Epidemiological Features of DENV2 from 2010 Epidemics
Our phylogeny, which includes globally sampled DENV genomes, confirms the Caribbean origin of the Brazilian American/Asian DENV2 strain [16,29]. In fact, the presence of viruses from Central and North America (Cuba, Nicaragua, Guatemala, Jamaica, Dominican Republic and USA) within Brazilian sequences exemplifies the idea of continual exchange of viruses among countries on the American continent, and not only between Central and South America.
Interestingly, the separation of the Santos viruses into two subclades and the high diversity found among the sequences in comparison to viruses sampled in previous epidemics (Ribeirão Preto, SP, Figure 2B) could be explained by the extent of the two epidemics: around 1,000 cases were reported in Ribeirão Preto in 2008 (http://www.cve.saude.sp.gov.br/htm/zoo/Den_gve08.htm) against more than 33 thousand cases reported in the Santos coastal cities in a short five-month period (http://www.cve.saude.sp.gov.br/htm/zoo/den09_import_autoc.htm). Although the viruses sequenced in this study were all sampled in the city of Santos, it is a regional hub heavily trafficked by residents of neighboring cities.
In sum, the development of deep sequencing technologies provides new opportunities to study viral diversity with unprecedented resolution. By using two different deep sequencing methods, we were able to describe, for the first time, the level of DENV2 variability within and between hosts throughout a large outbreak. The observed low level of variability may be a consequence of different pressures, including the constraints imposed by alternating hosts during virus evolution. Moreover, the putatively intrinsic low rate of change, combined with the short duration of viremia typical of acute infections, may exert less selective pressure on the virus than is commonly observed in chronic RNA virus infections.
A Resource Scheduling Strategy for WSNs with Communication Constraints
Due to losses on the wireless communication link, the remote estimator in a wireless sensor network may receive only part of the observation information, or none at all, which reduces the accuracy of the state estimation. Focusing on this problem, a strategy based on network resource scheduling is proposed to mitigate the impact of link losses on state estimation. The strategy accounts for the quantization of sensor observations and the limited transmission bandwidth. The optimization objective is to minimize the estimation error covariance together with the expected energy consumption of packet transmission, with data rates and time slots allocated to each communication link. Simulation results show that near-optimal state estimation of the physical process can be obtained with a small transmission bandwidth and simple BPSK modulation, and that the energy consumed in transmitting data packets can be effectively reduced.
Introduction
Wireless sensor networks can greatly improve the ability to monitor and control the physical environment; typical applications include environmental control [1], vehicular networking and traffic control [2], inventory tracking [3] and military applications [4]. An important challenge in wireless networks is to improve system performance and reliability under network resource constraints, including energy, transmission bandwidth and data rate.
Two important limitations of wireless communication are limited transmission bandwidth and information loss. In the field of networked state estimation, many studies assume away quantization [5] and consider only link packet loss, while others assume away packet loss and consider only the quantization of observation and control information [6]. For linear systems, the minimum data rate for target state tracking control over a lossy network has been characterized [7]. For remote state estimation of a simple stochastic linear system subject to quantization noise and communication link packet loss, You [8] studied coding and decoding methods.
This paper considers wireless sensor network resource optimization comprehensively. By constructing an objective that combines the estimation error covariance and the packet transmission energy, network resources are scheduled among multiple communication links, and the transmission bandwidth, link data rates and transmission periods are allocated appropriately. This improves the remote estimator's accuracy for the physical process while keeping the packet transmission energy in check. The analysis of the resulting schedules shows that near-optimal state estimation of the physical process can be obtained with a very small transmission bandwidth under simple BPSK modulation, and that the energy of transmitting packets can be effectively reduced.
System model and problem description
Dynamic physical processes and sensor observations
The system structure contains two wireless communication links; the links transmit data independently and share network resources. This paper considers a general multi-input multi-output discrete-time linear system whose observation vector s_k is divided into two parts. Assume that each transmitted packet contains an error-detection code, so that the estimator knows whether a received packet contains errors. When an observation packet is received successfully, state estimation proceeds; when the observation information is lost or corrupted, it is discarded. For simplicity, each link is modeled as a single-hop network with no packet retransmission mechanism. When both signal quantization coding and packet loss are considered, the signal received by the estimator includes the quantization noise n_{i,k} of the observations, assumed to be zero-mean, independent and Gaussian with a known covariance matrix; the equivalent noise of the quantized observation information is likewise zero-mean Gaussian with its own covariance.
Suppose the observation noise $n_{i,k}$ follows a conditional probability density distribution that depends on whether the observation information $y_{i,k}$ is received or lost. According to the Kalman filter principle, the corresponding filtering equations can then be derived: the time update always runs, and the measurement update is performed only when $y_{i,k}$ arrives intact.
Forward fading communication channel
Assume that the forward fading channel between the sensor and the estimator is unstable, so that packets are dropped at random.
Objective function
According to the foregoing description, transmitting one bit of data over the channel consumes a fixed amount of energy; the energy required to transmit a packet consisting of B bits of data is therefore B times the per-bit energy. The goal of this paper is to design an optimal network resource scheduling method that minimizes both the error covariance and the energy required to transmit the packets. This can be stated as an optimization problem over an infinite time horizon.
Network resource optimization
Each link can be assigned a portion of the transmission period. When the sensor uses the adaptive bit rate allocation technique (ad-RA), the probability of packet loss takes an exponential form in the allocated data rate, from which the packet reception rate follows directly. The network resource scheduling optimization can therefore be expressed under the BPSK/QPSK modulation modes and, analogously, in the ad-RA mode.
Analysis of network resource scheduling
This section evaluates the performance of network resource scheduling.
Figure 1 shows the relationship between the estimation error covariance and the packet loss probability. It can be seen that, in the absence of resource scheduling, deteriorating link quality and the resulting loss of observation packets directly reduce the accuracy of the state estimation. It can also be seen that, under BPSK modulation, the estimation error grows with the packet loss probability more slowly than under the ad-RA scheme, so the estimation error covariance is more stable and reliable under BPSK. Figure 2 shows the relationship between packet transmission energy and packet loss probability. The simulation results show that, as the packet loss probability increases, less and less data is sent over the communication link, which directly decreases the data transmission energy. However, this reduction in energy comes at the expense of increased information loss and is not the desired optimal packet transmission energy. Without network resource scheduling, the degraded communication quality caused by packet loss drives the packet transmission energy away from its optimal balance and harms network communication performance. Taken together, Figures 1 and 2 show that both the error covariance and the packet transmission energy are directly tied to the packet loss probability: without network resource scheduling, both degrade as communication quality worsens. It is therefore necessary to consider a combined objective of estimation error covariance and packet transmission energy.
In the simulation of Figure 3, we set the transmission period to $T = 1\,\mathrm{s}$, with the first link at 20 dB and the second at 10 dB. The three modulation schemes, BPSK, QPSK, and ad-RA, use the same simulation parameters and are simulated in the same environment so that their results can be compared directly.
Conclusion
In this paper, to cope with packet loss on the wireless communication link, network resource scheduling is optimized: the network transmission bandwidth, the bit data rate, and the transmission time slots are allocated so that, under the simple BPSK modulation mode, the error covariance of the state estimate and the energy consumption of network communication are minimized together. This network resource scheduling method can effectively reduce the influence of information loss on the communication link on state estimation and compensates for the shortcomings of packet loss in wireless networks.
The packet losses on the links are modeled as independent Bernoulli random variables. Let $p_b$ denote the error rate for sending one bit of data; a packet of $n$ bits is then lost with probability $p = 1 - (1 - p_b)^n$. The calculation of $p_b$ differs across communication channels and digital modulation modes. For example, in the BPSK modulation mode, $p_b = Q\!\left(\sqrt{2E_b/N_0}\right)$, where $N_0$ is the noise energy spectral density and $Q(\cdot)$ is the Gaussian tail function.
Figure 1. The relationship between estimation error covariance and packet loss probability.
Figure 2. The relationship between packet transmission energy and packet loss probability.
Figure 3. The relationship between the joint optimization goal and the network bandwidth. When the bandwidth is large enough, BPSK modulation is better: its packet reception probability exceeds that of the QPSK mode.
| 1,960 | 2018-01-01T00:00:00.000 | [ "Computer Science" ] |
An Evaluation Model of Plant Variety Rights Capitalization Operation Value Based on Big Data Analysis
Plant variety rights are essential for agricultural development and food security. This paper mainly starts with the cost-benefit issue of plant variety rights that enterprises are concerned about, and introduces the cost-benefit analysis method into the research and development, application, and protection of plant variety rights. The research takes new plant variety rights as its object, clarifies the development status of world plant variety rights from the perspectives of time-series development characteristics and spatial distribution pattern, explores the position of variety rights in the world, and uses methods such as knowledge measurement and data mining to carry out in-depth research on the application and authorization of plant variety rights, the scope of protection catalogues, the structure of variety rights, and the applicant units. Finally, the regression model is tested from four aspects: the significance of the regression model, the model's goodness of fit, the Hosmer-Lemeshow goodness-of-fit test, and the overall accuracy of model prediction. The regression coefficient of plant variety rights cooperation in the research process was 1.908, and the statistical test value was significant at the 5% level (P = 0.022 < 0.05). Using system dynamics to evaluate its risks provides new ideas for the risk analysis and prevention of seed industry enterprises' variety rights capitalization operations.
Introduction
In the licensing of variety rights, the licensee of variety rights is highly dependent on the related varieties, and once monopoly occurs, the harm will be huge. In the process of variety rights licensing, the exclusiveness and monopoly of the variety rights make it possible for the variety rights holders to take advantage of this exclusive and monopolistic position to implement behaviors that restrict competition. At the international level, with the opening of China's seed market, foreign large-scale agricultural technology companies with abundant capital and advanced technology have expanded aggressively, threatening the living space of the domestic breeding industry, and challenging the development of China's seed industry and food security.
As a major agricultural country, China produces agricultural scientific and technological achievements in batches every year, and the task of promoting the smooth transformation of these achievements is very important. Doing a good job in evaluating the value of agricultural scientific and technological achievements plays an important role in improving the efficiency of achievement transformation and accelerating the application of transformed achievements.
This paper mainly starts with the cost-benefit issue of plant variety rights that enterprises are concerned about, and introduces the cost-benefit analysis method into the research and development, application, and protection of plant variety rights. Based on the above research, it is found that domestic research on the value evaluation of agricultural scientific and technological achievements, especially methods for evaluating the value of new plant variety rights, is still very much lacking. Most studies are limited to traditional value evaluation methods, which makes the reference significance and accuracy of the evaluation results very low and makes it impossible to evaluate the value of new plant variety rights scientifically, reasonably, and pertinently. For the current trading market in scientific and technological achievements, a reference transaction value is of great significance for promoting successful transactions. Therefore, it is indispensable to study methods for evaluating the value of new plant variety rights.
Related Work
The implementation of the new plant variety protection system has created a suitable legal environment for the healthy development of seed enterprises and scientific research units, promoted the breeding and innovation of new plant varieties, and provided a breakthrough for the seed industry to face the world and improve its core competitiveness.
The concept of an Essentially Derived Variety (EDV) under the 1991 UPOV Convention presents a number of challenges for both UPOV and users of the system. Bostyn believes that the concept of EDV is not only expressed in rather difficult language in the regulations but has proven equally difficult to apply. Furthermore, due to the lack of clarity in the provisions of the UPOV 1991 Convention, reaching a consensus on the exact interpretation of the concept, to be implemented later by courts and guidelines, is equally challenging. He has made novel and original contributions to research in the field of EDV; the approach presented is inspired by other areas of (intellectual property) law. His careful study of the proposed solutions shows that at least some of them can effectively end at least some of the deadlock and legal uncertainty surrounding the concept of EDV [1]. Wijesundara et al. consider Plant Patents (PP) and Plant Breeders' Rights (PBR) as two forms of Intellectual Property Rights (IPR) granted to improved crop varieties. Authoritative national governments issue PPs and PBRs after confirming the uniqueness of the variety identity, which depends on the distinctness, uniformity, and stability of the new variety. In the presence of a large number of closely related varieties as a reference set, morphological, physiological, and biochemical descriptors are less able to establish uniqueness for IPR purposes, but advanced molecular tools, such as DNA fingerprinting and sequencing, have a high potential to detect it. DNA fingerprinting and sequencing have identified varietal traits in many crops, such as rice, apple, wheat, and soybean, revealing the potential for the successful use of molecular descriptors in patenting or PBR. Novelty verification is the first step in the process of granting a patent or PBR; the Patent or Plant Variety Protection Office requires breeders to submit an application that includes all the details of the plant variety [2]. Li et al. applied 15N labeling methods to the leaves of Arabidopsis rosettes to characterize protein degradation rates and understand their determinants. Stepwise labeling of new peptides with 15N and measuring the decrease in abundance over time of more than 60,000 existing peptides allowed them to determine the in vivo degradation rates of 1,228 proteins [3]. The high-yielding Spanish bunch peanut cultivar ICGV 00351 (a cross-derivative of ICGV 87290 × ICGV 87846), developed by Vindhiyavarman et al. at ICRISAT (International Crops Research Institute for the Semi-Arid Tropics), Patancheru, Andhra Pradesh, was evaluated together with six other promising drought-tolerant varieties. The overall average dry pod yield of ICGV 00351 under rainfed conditions was 2,189 kg/ha; this cultivar, with a duration of 105 to 110 days, increased pod yield by 17% and 26% over the region's popular varieties VRI (Gn) 6 and TMV (Gn) 13, respectively [4]. As an important form of agricultural intellectual property, plant variety rights operate and develop worldwide, which can enhance their international competitiveness and the ability to participate in international affairs. The securitization of plant variety rights is inseparable from financial, economic, and social development and is an important way to protect plant variety rights and operate them as assets. Existing research on plant variety rights lacks practical grounding, and this study explores the issue further.
Supply-Side Value Range Evaluation Model.
The construction of the value range evaluation model for the supply side of new plant variety rights should start from the producer's perspective and consider two aspects, R&D and income. First, the supplier of new plant variety rights should not only maintain simple reproduction of the enterprise's products but also provide funds for the enterprise to further expand its production scale. Therefore, the value of the new plant variety rights should not only compensate the supplier's costs but also distribute the benefits brought by the new plant variety rights, so as to provide effective financial support for the enterprise to develop further. Second, the various risks and intangible losses associated with new plant variety rights should be considered in their value. This research is based on agricultural intellectual property rights, a hot issue of general concern, and takes plant variety rights, a unique form of intellectual property in the agricultural field, as its research object. The main line of the cost-benefit analysis of plant variety rights is shown in Figure 1.
The Lower Limit of the Supply-Side Value Assessment Range.
The lower limit of the transfer value for the supplier of new plant variety rights is determined by R, the research and development cost of the new plant variety rights.
The Upper Limit of the Supply-Side Value Assessment Range.
The upper limit of the price determined by the supplier for the transfer of new plant variety rights additionally accounts for the sunk cost S of research and development and the supplier's expected benefit E.
$\Delta M_{kj}$ is the income increment generated by the kth enterprise purchasing the new plant variety right in the jth year [5].
Demand-Side Value Interval Assessment Model.
For the demand side, purchasing a new plant variety right is a productive investment of the enterprise. The expected return from the investment to the demander, after deducting possible risks, must be greater than the average return on social capital; alternatively, if the new plant variety could be developed in-house, the purchase price must be lower than the cost of self-development. The evaluation scheme is shown in Figure 2. The evaluation formula for the value interval of the demand side of new plant variety rights is given in [6], where P is the transfer price of the new plant variety rights. Considering the channel selection and operation strategy of dual-channel retailers when consumers free-ride on pre-sales services and the degree of free-riding changes continuously, the upper and lower limits of the transfer price of the new plant variety right on the demand side follow as in [7].
Model Selection and Variable Description.
In logistic regression, assume that the probability of M = 1 is P. The binary logit model is used, in which the dependent variable takes values in [0, 1] and the regression parameters are estimated by the maximum likelihood method [8]:

$$\ln\frac{P}{1-P} = B_0 + \sum_{i=1}^{p} B_i X_i,$$

where p is the number of independent variables and $B_0$ is the regression intercept.
Determination of the Value Assessment Interval.
In this paper, supply-side and demand-side value evaluation interval models for a single new plant variety right are constructed, respectively [9]. The data come from real cases, and the proposed uncertainty measure helps solve current practical problems. The uncertainty measure of knowledge is a hot issue in the field of artificial intelligence. Building on a review of several classical knowledge uncertainty measurement methods, the relationships and differences between these methods are studied systematically. The results show that measures such as information granularity are equivalent to knowledge granularity, and collaborative entropy can be regarded as a derivation of information entropy, which verifies the correctness of the conclusion.
In the first case, the supply and demand intervals do not intersect; an equilibrium transfer price for the new plant variety rights cannot be formed [10]. In the second case, the intersection interval is [P_{s-max}, P_{d-min}], the lowest price interval acceptable to both the supply and demand sides. In the third case [11], when the new plant variety rights are transferred, it is very easy for the two parties to trade and negotiate. In the fifth case, when the new plant variety right is transferred, an equilibrium price may be formed after negotiation between the two parties.
After using big data to analyze basic information such as business conditions, capital flows, and investor preferences, the credit data, plant variety rights information, and future cash flow rating information of seed industry enterprises can be visualized and analyzed, which provides convenient conditions for enterprises to carry out plant variety rights securitization.
Variable Setting.
The three variables of transaction mode factor, transaction operation factor, and performance factor, and the four variables of the media factor, are assigned as follows: 1 = this factor was selected, 0 = this factor was not selected. The statistics of some variables are shown in Table 1.
Regression Result Processing.
Using SPSS 16.0 statistical analysis software, logit regression analysis was carried out on the factors influencing satisfaction with the market-oriented operation of plant variety rights. Backward elimination was adopted, and insignificant variables were removed in the process [14]. The score function and exact function of the interval-valued intuitionistic trapezoidal fuzzy numbers (IVITFNs) used are given in [15]. As an important strategic resource for society and enterprises, intellectual property rights that cannot be used effectively are merely the subject of legal rights for their owners and do not by themselves generate benefits. For two IVITFNs α1 and α2, a collation (ranking) relation is defined in [16]. The smooth operation of rights securitization creates favorable policy conditions. Participating in plant variety rights securitization through high-tech means such as big data, artificial intelligence, and cloud computing can not only effectively control the infrastructure of securitization operations but also realize profit output with the latest technology [18]. Through the application of big data, cloud computing, and other technologies, plant variety rights securitization operates under the supervision of laws, regulations, and policies; through the smart contract mechanism, network identification and transmission technology implements effective supervision in the form of encrypted code [19]. Therefore, formulating laws and regulations to standardize the application of regulatory technology for plant variety rights securitization is also an issue that needs attention. The income method does not take into account the many uncertainties in the process of achievement development, investment, and commercialization, while the cost method ignores factors such as the value of the achievement after commercialization. To solve this series of difficult problems and promote the smooth transformation of agricultural scientific and technological achievements, it is particularly important to find a value evaluation method suitable for new plant variety rights.
A limiting-set condition with parameter ρ ≠ 0 is introduced in [21]. Internet technology based on big data can solve the problem of information asymmetry in breeding research and development, in the application and authorization of new plant varieties (Table 2), and in plant variety rights securitization transactions and operations (Table 3) [22]. In this paper, the regression model is tested from four perspectives: the significance test of the regression model, the model goodness-of-fit test, the Hosmer & Lemeshow goodness-of-fit test, and the overall accuracy test of model prediction (Table 4) [23].
Regression Model Test.
(1) Significance test of the regression model: the chi-square statistic is the likelihood ratio of the model, and the larger its value, the better; the P value corresponding to Sig should be as small as possible. The comprehensive test results of the model coefficients are shown in Figure 2.
(2) Model goodness-of-fit test: the larger the values of Cox & Snell R-squared and Nagelkerke R-squared, the better the overall fit of the model. SPSS 16.0 statistical analysis software was used to conduct a comprehensive evaluation of prefectural and municipal agricultural academies; principal component analysis was used to extract input-output factors, scientific and technological service contribution factors, and social and economic benefit factors, followed by hierarchical clustering analysis of the samples. The classification and ranking of the innovation ability and comprehensive strength of the prefecture-level agricultural research institutes were analyzed, providing a quantitative basis for objectively evaluating the main bodies of plant variety rights licensing. The goodness-of-fit evaluation of the model is shown in Figure 3. (3) Hosmer & Lemeshow goodness-of-fit test: Hosmer-Lemeshow is a fitting statistic whose null hypothesis is that the equation fits the data well [24]. (4) Overall accuracy test of model prediction: the overall discriminant accuracy of the model for the estimated samples is 78.7%. The final classification of observations is shown in Table 4.
Results of the Evaluation Model for the Capitalization of Plant Variety Rights
(1) From the perspective of the main factors in the marketization of plant variety rights, education level and the nature of the unit influence satisfaction with plant variety rights marketization to different degrees. The regression coefficient of education level is 0.489, and the statistical test value is significant at the 5% level (P = 0.040 < 0.05), indicating that education level has an obvious impact on satisfaction with plant variety rights marketization. The more years of education plant variety rights holders have, the more calmly they can analyze, predict, and make decisions about the plant variety rights market. More importantly, they can accurately evaluate the effect of marketization and try to keep the actual effect consistent with their prediction, so as to ensure high satisfaction with plant variety rights marketization. The regression coefficient of unit nature is −0.383, and the statistical test value is significant at the 5% level (P = 0.032 < 0.05), indicating that unit nature is another significant factor affecting satisfaction with plant variety rights marketization. Different unit natures imply different priorities in the marketization of plant variety rights: seed companies mainly focus on the commercialization and marketization of plant variety rights, while agricultural academies and universities often focus on the research, development, and promotion of new plant varieties.
(2) From the perspective of market-oriented transaction factors, plant variety rights licensing and plant variety rights cooperation have a significant impact on satisfaction. The regression coefficient of plant variety rights licensing is 1.055, and the statistical test value (P = 0.010) is significant at the 1% level, indicating that licensing significantly affects satisfaction with plant variety rights marketization; this benefits from the advantages of licensing itself. Because a license transfers only the right of use, its biggest advantage is that the owner can license the right of use to multiple parties multiple times, which helps the owner recover funds quickly and expand the market-oriented scope of the variety rights, thereby improving satisfaction. With the improvement of China's property rights trading market, plant variety rights licensing will be applied more and more in market-oriented practice. The regression coefficient of plant variety rights cooperation is 1.908, and the statistical test value is significant at the 5% level (P = 0.022 < 0.05), indicating that cooperation has a significant impact on satisfaction with plant variety rights marketization. The reason is closely related to the structure of plant variety rights holders in China: most rights holders are affiliated with agricultural research institutes, colleges, and universities, whose plant variety rights account for 62.8% of total authorizations, while seed industry enterprises account for only 33.1%. In addition, most seed industry enterprises in China are small in scale and lack the conditions necessary for self-implementation; they need to rely on external resources such as capital, technology, and information.
According to the development report of China's seed industry [25], in 2021 the top 10 seed industry companies by sales invested a total of 483.43 million yuan in breeding research and development, an increase of 48% (Figure 3). Figure 4 shows the total number of applications for variety rights, the total number of grants, and the total number of variety rights in force at the end of the period in UPOV member countries.
Only 5.53% of the 4,808 variety rights authorized in 2018 were transferred, and only 3% of the varieties authorized to teaching and scientific research units were implemented. The authorized variety plant types are shown in Figure 5.
However, only 19.40% of the 3,278 variety rights authorized for these five field crops have been promoted, and only 26.95% of the authorized varieties of the commercial crop cotton have been promoted. The authorized promotion of field crops is shown in Figure 6.
Among the surveyed units, there are 23 individual units, 16 private enterprises, 14 collective units, 21 joint-stock units, 12 privately run enterprises, 12 state-owned enterprises, 2 cooperative enterprises, 1 institution of higher learning, and 7 other units, 108 in total. The top three in terms of quantity are individual enterprises, joint-stock enterprises, and private enterprises, accounting for 22%, 19%, and 15% of all units, respectively. The specific distribution is shown in Figure 7. The mean scores from high to low are: the R&D capability of the enterprise, 4.18; the degree of technological innovation, 4.07; the complexity of plant variety rights, 3.96; the enterprise's ability to control R&D costs, 3.94; and the experimental scale of R&D, 3.67. From this it can be inferred that, among all the factors affecting the R&D cost of plant variety rights, the enterprise's R&D capability has the greatest impact, while the experimental scale of R&D has the least. Figure 8 shows the statistics of factors affecting the R&D cost of plant variety rights. The complexity scores of plant variety rights are concentrated at 3, 4, and 5, corresponding to important, very important, and extremely important, as reflected in Figure 8; these three choices account for 97.2% of all selections. The importance evaluation of the scale of plant variety rights R&D trials is shown in Figure 9.
As can be seen from Figure 10, respondents' recognition of the scale of R&D trials is mainly concentrated on "very important." The distribution of the remaining scores is relatively uniform: 10 people consider this factor to have an average degree of influence, 29 consider its influence important, and 17 consider it extremely important. From this it can be inferred that the scale of research and development has a certain impact on the R&D costs of plant variety rights. The technical innovation evaluation of plant variety rights is shown in Figure 10.
Discussion
China's securitization development continues to advance, and the balance structure of the financial market is also under some pressure. Owing to falling interest rates, the financial market under macro-control and government policy cannot reach equilibrium, giving rise to irregular financial institutions such as shadow banking. With investment in the research and development of new plant varieties and the continuous development of high-tech agricultural production, many breeding research institutions and seed industry enterprises have begun to seek ways to absorb social funds for corporate financing, which has driven the development of plant variety rights securitization. Breeding research institutions and seed industry enterprises can cooperate with financial institutions, use computers and the Internet for securitization financing, and introduce concepts such as financial technology, Internet finance, and blockchain smart contracts for project innovation. Through the establishment of an open and transparent trust network among untrusting partners, plant variety rights securitization transactions involving multiple parties can be recognized. In this process, it is necessary to strengthen financial supervision across the entire market and all operational links [26].
From a functional point of view, intellectual property protection and anti-monopoly regulation are complementary. First, intellectual property protection and the adjustment methods of anti-monopoly law complement each other. On the one hand, intellectual property law establishes a series of internal restriction systems that self-regulate from within and balance the relationship between intellectual property rights and the public interest. At the same time, a major difference between plant breeding and industrial invention is that the former is highly dependent on existing reproductive materials and biological genetic information; if subsequent innovators cannot obtain the relevant materials or genetic information, breeding work is fruitless. In addition, beyond the exclusivity, temporality, and territoriality of traditional intellectual property rights, variety rights have the attributes of self-replication and relevance to agriculture, as well as the seasonality and regionality of crop planting. Special consideration should therefore be given to their protection and regulation: exclusive rights should be granted to sustain breeding innovation and profits, but normal competition in the seed market and the supply of agricultural products must also be protected. A general license for the implementation of variety rights means that the rights holder authorizes the licensee to implement the variety within a certain time and territory, while within that scope the breeder may still implement the variety personally or allow others to do so. This model helps rights holders adjust their licensing strategy in real time according to market changes; at the same time, the variety spreads more widely, breeding costs can be recovered quickly, and commercial benefits can be maximized rapidly. Moreover, the dispersion of implementation rights makes it difficult to restrict competition in the market, which is conducive to technological progress and to increasing production and income. The disadvantage of this model is that, if the variety rights are licensed too widely, vicious competition may arise in the relevant market and damage the interests of both parties. The anti-monopoly regulation of variety rights licensing is the specific application of the theory of limitation of rights in the field of variety rights. To maintain breeders' motivation for research and development, the variety rights protection system gives them sufficient exclusive protection, and any use without legal authorization or the consent of the rights holder is an infringement. At the same time, the advancement of breeding technology and the development of the agricultural industry require that relevant plant breeding achievements can be fully disclosed, disseminated, and implemented in agricultural production.
Huge contradictions need to be reconciled by establishing a system grounded in the theory of rights limitation. On the one hand, the variety rights self-restriction system, such as the variety rights licensing system (voluntary and compulsory licenses), regulates the abuse of rights from within; on the other hand, while fully respecting and recognizing variety rights, the anti-monopoly law should restrict them by reasonable means and regulate monopolistic behavior in variety rights. Crop varieties produced, sold, and promoted by breeding companies and agricultural technology companies are often plant varieties with specific traits cultivated through scientific and technological means, such as traits suited to specific soils, specific chemical fertilizers, or specific farming methods. After long-term production and planting, the soil composition becomes essentially fixed, a given variety gradually adapts to the soil environment of the plot, and higher yields can be obtained. Agricultural producers gradually form a heavy dependence on the specific variety, and eventually it becomes difficult to carry on agricultural production without the seeds cultivated and provided by the relevant breeding companies. When the relevant variety rights and crop seeds are controlled by the breeding enterprise and the rights holder, competitors cannot copy or obtain them; if the controller has the ability and conditions to license but refuses to license the relevant rights and seeds, such variety rights and crop seeds should be identified as "essential facilities" and become a factor in deciding whether to regulate the related behavior. Tying in intellectual property licensing means that a rights holder in a dominant market position requires the licensee to accept other unrelated intellectual property rights or products, or to purchase unnecessary goods or services from the rights holder or a third party designated by it. Tying covered by the Anti-Monopoly Law requires the rights holder to have, and abuse, a dominant market position; such a position may be held by the rights holder itself or formed through a horizontal agreement among operators. Secondly, the tied product must be capable of independent licensing or sale: if the rights holder lacks a dominant market position, or the licensed intellectual property rights and products or services are inseparable and cannot be licensed or sold separately, the behavior is not subject to anti-monopoly law.
It should be noted that a balance must be maintained between compulsory licensing and the interests of the variety rights holder, avoiding excessive restriction of the rights holder's interests through abuse of compulsory licenses, so as to remain true to the original intent of the variety rights protection system. The anti-monopoly legal system and the variety rights protection system are legal norms that adjust the rights and obligations among the variety owner, licensees, agricultural producers, and even consumers of agricultural products, balancing the interests of all parties. The anti-monopoly regulation of variety rights licensing has practical significance and an important role in the development of China's agricultural industry and the guarantee of food security; it is manifested mainly in the prevention, regulation, and remedy of monopolistic behavior in variety rights licensing, which in turn promotes the continuous progress of breeding technology, the improvement of consumer welfare, the development of the agricultural industry, and the maintenance of food security. Today, China's breeding technology has made great progress, the proportion of foreign capital in China's breeding industry is increasing year by year, and the impact of variety rights on the economy and trade is becoming ever more important. However, there are few academic studies in this field, leaving many problems in the current anti-monopoly regulation of variety rights licensing that urgently need solving. Therefore, it is necessary to strengthen research on the anti-monopoly regulation of variety rights licensing, analyze and examine the interest relationships in variety rights licensing, and systematically sort out existing problems and propose solutions, so that the functions of preventive regulation and remedy can play their due role and help the development of the seed industry [27].
This topic needs to be studied from a broader perspective. For example, economic growth is one of the purposes of improving the legal system, so it is equally important to use economic indicators to analyze the effectiveness of the legal system and to assist the implementation of the law. When conducting anti-monopoly analysis of competition-restricting behavior in variety rights licensing, economic analysis methods can be used to evaluate the impact of the relevant behavior on competition and innovation. Economic analysis can also play an important role in implementing and remedying the anti-monopoly system for variety rights licensing. Enterprises may face infringement risks, commercial disclosure risks, technical risks, and product counterfeiting risks in the course of intellectual property operations; on this basis, the concept of intellectual property management is proposed to prevent these specific risks and reduce their likelihood. The modernization of agriculture and rural areas is inseparable from science and technology. The continuous progress of breeding technology and the healthy development of the agricultural industry are powerful levers of rural revitalization, and the progress of the seed industry is a basic element in promoting agricultural and rural modernization. To realize technology-led rural revitalization and agricultural and rural modernization, the relationships among variety rights holders, licensees, agricultural producers, consumers of agricultural products, and the public interest should be well balanced. Through an interpretation of the basic issues in the anti-monopoly treatment of variety rights licensing, this paper sorts out the status quo of anti-monopoly regulations on variety rights licensing outside China and, based on existing problems within China, carries out systematic research and demonstration on improving the anti-monopoly regulatory system for variety rights licensing [28].
Conclusion
With the widespread popularity of the Internet, mobile phones, the Internet of Things, and artificial intelligence, information data have grown exponentially, generating massive data (big data). It is very important to quickly and efficiently extract and display the invisible knowledge in such unstructured big data, and visualization technology solves this problem well. The plant variety rights pricing model mainly involves technical risk, value assessment risk, operational risk, the variety rights' own risk, legal and policy risk, and market risk. Risk warning is an important part of risk management: relevant data are collected through big data technology, indicators are gathered and analyzed, potential risks are identified, and risks to the plant variety rights pricing model are warned of and controlled in time. Western countries first put forward the concept of capitalized operation, and intellectual property capitalization is a hot topic, but most work is confined to the field of law.
There is almost no systematic research on the capitalization of plant variety rights. Scholars have mainly discussed three aspects of the capitalized operation of plant variety rights (pledge, capital contribution, and securitization), and few have studied the risks of capitalized operation; there is therefore much room for discussion in this field. From a management perspective, this paper makes a beneficial attempt at analyzing the operational risk of capitalizing plant variety rights. Based on big data, the paper discusses and analyzes the keywords of the classic literature, providing a reference for research on the securitization pricing of plant variety rights. The bibliometric and knowledge-map analysis of securitization pricing, together with financial big data, data collection technology, and learning analysis methods, combined with cross-field and multidisciplinary characteristics, provides a direction for the research and development of plant variety rights securitization, lays a foundation for theoretical discussion of its securitization and pricing, and points the way for securitization and capitalization under big data. Research on the risks of capitalizing plant variety rights can help identify the various risks in such operations, provide advice on risk prevention and control, promote the development of China's variety rights capitalization theory, and guide seed industry enterprises in carrying out capitalization operations; this research therefore has practical value and significance. Given the broad development prospects of plant variety rights, seed industry enterprises and scientific research institutions urgently need to meet their financing needs by developing the economic potential of plant variety rights. Previous studies have mainly focused on the demonstration and analysis of commercial bank financing, listed company financing, government behavioral finance, and securities investment funds for plant variety rights, with no comprehensive analysis of plant variety rights securitization. The risk evaluation indicators selected here for the capitalization operations of seed industry enterprises cannot cover all risk factors in the variety rights capitalization system and carry a degree of subjectivity; future work can study the risk factors further to form more objective, systematic, and comprehensive indicators.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
| 8,346.8 | 2022-07-19T00:00:00.000 | [ "Agricultural and Food Sciences", "Economics" ] |
A Protocol for the Diagnosis of Autism Spectrum Disorder Structured in Machine Learning and Verbal Decision Analysis
Autism Spectrum Disorder is a mental disorder that afflicts millions of people worldwide. It is estimated that one in 160 children has traces of autism, with a prevalence five times higher in boys. The protocols for detecting symptoms are diverse; among the most used are the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5), of the American Psychiatric Association; the Revised Autistic Diagnostic Observation Schedule (ADOS-R); the Autistic Diagnostic Interview (ADI); and the International Classification of Diseases, 10th edition (ICD-10), published by the World Health Organization (WHO) and adopted in Brazil by the Unified Health System (SUS). The application of machine learning models helps make the diagnostic process of Autism Spectrum Disorder more precise, reducing, in many cases, the number of criteria necessary for evaluation, an efficient form of attribute (feature) engineering. This work proposes a hybrid approach based on the composition of machine learning algorithms for knowledge and concept discovery, associated with a multicriteria decision-support method based on Verbal Decision Analysis to refine the results. The study therefore has the general objective of evaluating how the proposed hybrid methodology can make the protocol derived from the ICD-10 more efficient, speeding up the diagnosis of Autism Spectrum Disorder by observing a smaller number of symptoms. The study database covers thousands of cases of people who, once diagnosed, obtained government assistance in Brazil.
Introduction
In a society where competitiveness is a substantial differential in profile and productive capacity, any severe psychological disorder an individual may present becomes a factor in their exclusion from the wealth-generating process. In this context, individuals with Autism Spectrum Disorder (ASD), as well as those with other severe psychological disorders, are regarded with reticence from an early age and often discriminated against in daycare centers, schools, universities, and the companies where they seek a place. A probable cause for this discrimination is the lack of information about ASD and other psychological disorders, as well as the lack of awareness of the potential contribution every human being can make to society, especially when the plurality of ideas and points of view is valued.
According to the World Health Organization (WHO), ASD is a mental disorder that affects more than 70 million people worldwide. Worldwide estimates indicate one child with autism in every 160 [1], with a prevalence five times higher among boys. In Brazil, the ratio is one autistic child for every 360. Although considered underestimated, Brazil's data show a significant demand for specialized care [2]. Currently, the need for effectiveness and efficiency in diagnosing ASD goes beyond health alone, since people diagnosed with this disorder also seek to participate in selection processes for the formation of work teams. These diagnoses can help such people with both educational placement and professional placement in the job market. On the other hand, the globalized world requires competitiveness and sustainability from companies, which depend heavily on people. In their selection processes, companies seek healthy professionals and often exclude those who suffer from specific problems, such as ASD, even though they have distinctive skills.
As a social contribution, the present study is aimed at applying a machine learning model to make the ASD diagnostic process more precise and concise, reducing the number of criteria necessary for the evaluation. To this end, the study proposes a hybrid approach, based on the juxtaposition of machine learning algorithms aimed at knowledge discovery, associated with a multicriteria decision-support method based on Verbal Decision Analysis (VDA). The general objective is to evaluate how the hybrid methodology now proposed can make the protocol derived from the International Classification of Diseases, 10th edition (ICD-10) more efficient, enabling the diagnosis of Autism Spectrum Disorder by observing a smaller number of symptoms. The study sought to link the ICD-10 with the codes and structure of the DSM-5, aiming at expanding the understanding and diagnostic capacity from different clinical points of view. This objective comes from considering that the proposed hybrid model is applied to ASD and that mental disorders are the main focus of the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5).
Moreover, to validate the hybrid model, the study used a database provided by the Social Security Technology and Information Company of Brazil (DATAPREV). These data refer to thousands of cases of people who obtained the assistance benefit related to ASD, as provided for by Brazil's federal legislation. The granting of these benefits involves evaluating each case of a child with ASD to whom the Benefit of Continuous Provision (BPC) program applies. These assessments use an extensive questionnaire divided into a Social Assessment and a Medical-Expert Assessment, which allowed the research to obtain more consistent data on the stages of medical and social evaluation.
This article is divided into five sections. After this introduction, the second section outlines the systematization of the theoretical basis on which the work rests. The third section describes the methodology used, subdivided into four subsections that deal, respectively, with the database, the Random Forest machine learning algorithm, the applied learning framework, and the Verbal Decision Analysis (VDA) method. The fourth section presents the results achieved. Finally, the fifth section concludes the work and proposes actions and perspectives for future work.
Theoretical Reference
This section contains some important topics about Autism Spectrum Disorder and highlights proposed information technology solutions used to support the diagnosis and referral of that disorder.
Highlights about Autism Spectrum Disorder Concepts.
Autism Spectrum Disorder (ASD) consists of a neurodevelopmental disorder characterized by patterns of stereotyped and repetitive behaviors and by difficulty in communication and social interaction. There are several protocols for diagnosing the disorder. The most widely adopted are the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, known as DSM-5, from the American Psychiatric Association; the Revised Autistic Diagnostic Observation Schedule (ADOS-R); the Autistic Diagnostic Interview (ADI); and the International Classification of Diseases and Health-Related Problems (ICD-10), a criterion published by the World Health Organization (WHO) and adopted in Brazil by the Unified Health System (SUS). Figure 1 highlights Autism Spectrum Disorder (ASD) within the categorization of Neurodevelopmental Disorders, as discriminated in the DSM-5.
Scientific investigations of Autism Spectrum Disorder have evolved considerably since the beginning of the last century, in both theoretical and empirical aspects. From this perspective, since the term "autism" was coined by the psychiatrist Eugen Bleuler in 1911, when analyzing symptoms of schizophrenic patients, through the disclosure of other autistic patterns by Hans Asperger, in 1970, research has advanced considerably. With the passage of time and the deepening of knowledge on the subject, the expression Autism Spectrum Disorder (ASD) came to designate a very distinct set of behavioral changes with early onset, chronic course, and variable impact in multiple areas of development [4].
Furthermore, Martone and Santos-Carvalho [5] performed a bibliographic review of the articles published by the Journal of Applied Behavior Analysis (JABA) between the years 2008 and 2012, focusing on the importance of verbal behavior for the detection of the disorder [5]. Cohen et al. [6] present a pioneering study in which it uses a neural network to classify the disorder. The results showed the importance of implementing new technologies in the detection of the disorder and, simultaneously, pointed to the need to better explore the application of neural networks, with all its variations of parameters, to improve the classification of ASD [6].
Information Technology Solutions Used in the Diagnosis of Autism Spectrum Disorder.
Following the technological approach, Yekta et al. [7] proposed an ASD diagnostic system, called the Autism Screening Expert System (ASES), whose main attributes or criteria were chosen by applying machine learning techniques to a base built from questionnaires. In this case, Random Forests and Support Vector Machines were used to select the main diagnostic attributes. The system appears to be very efficient for detecting the disorder in children aged 2 to 6 years [7]. In a similar approach, Khullar et al. [8] developed the Handheld Expert System (HES) to diagnose ASD using artificial neural networks; it obtained 100% (one hundred percent) accuracy in detecting the disorder under the protocol described in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [8]. Also along this line, Nunes et al. [3] present an intelligent solution based on a hybrid model of an expert system associated with the multicriteria method MACBETH (Measuring Attractiveness by a Categorical Based Evaluation Technique) to support the early diagnosis of ASD in children up to two years old [3]. In a similar context, Cohen [9] presents a study of a neural network that simulates and shares formal qualitative similarities with the selective attention and generalization deficits observed in people with autism. In simulations in which the model was taught to discriminate children with autism from children with mental retardation, it was observed that the neuropathologies described in the literature are sufficient to explain some of the unique pattern recognition and learning skills, as well as the associated generalization and acquisition problems [9]. Florio et al. [10] show the importance of a second opinion on the diagnosis of ASD and the use of technology to provide independent clinical evaluation, increasing diagnostic accuracy. Challenges related to using technology to offer independent second opinions include the possibility for individuals and relatives to participate in self-diagnosis activities [10].
Still, from the perspective of intelligent systems, Thabtah [11] points out that ASD has been studied in the behavioral sciences using intelligent methods based on machine learning to speed up screening time or improve the sensitivity, specificity, or precision of the diagnostic process. In this sense, machine learning treats ASD diagnosis as a classification task, in which predictive models are built from historical cases and controls. These models must be connected to a screening tool to achieve one or more of the objectives. That work sought to shed light on recent studies that use machine learning in ASD classification and to discuss their pros and cons. It highlighted a noticeable problem associated with current ASD screening tools: their reliance on the DSM-IV rather than the DSM-5 manual. As a result, it suggested changing the current screening tools to reflect the newly imposed criteria for ASD classification in the DSM-5, particularly the diagnostic algorithms incorporated in these methods [11].
In the same sense, Thabtah [12] states that machine learning is a multidisciplinary research topic that employs intelligent techniques to discover relevant hidden patterns used in forecasting to improve decision-making. Thus, some machine learning techniques, such as Random Forests and others, were applied to datasets related to autism to build predictive models. These models claim to increase clinicians' ability to provide robust ASD diagnostics and prognosis [12].
In a systematic review, Thabtah and Peebles [13] evaluated and critically analyzed 37 different ASD screening tools to identify possible areas that need to be addressed through further development and innovation [13].
In another analysis, Thabtah et al. [14] proposed a new machine learning framework for screening autism in adults and adolescents based on a set of vital characteristics. The results reveal that machine learning was able to generate classification systems with acceptable performance in terms of sensitivity, specificity, and precision, among other measures [14].
In a different approach, Neto et al. [15] presented G-ASD, a game prototype that assists psychology professionals who apply the Applied Behavior Analysis (ABA) methodology to teach children with Autism Spectrum Disorder (ASD) [15]. G-ASD shows how important games are in building the knowledge of children with autism, assisting the professional in teaching colors to autistic children. From a social and pedagogical perspective, Lampreia [16] evaluates the official instruments for the diagnosis of autism and notes that the attempt to unify them weakens the coverage of areas such as social interaction, communication, interests, and developmental activities. Costa et al. [17] address the difficulty of including autistic people in the information technology segment and present information for analysis and discussion regarding the inclusion of autistic people in higher education and the corresponding job market in information technology.
Several approaches have been proposed to study neurological disorders. Maenner et al. [18] investigated the Random Forest algorithm on an autism dataset from the Georgia Autism and Developmental Disabilities Monitoring (ADDM) network, using phrases and words obtained from child development assessments. The dataset consists of 5,396 assessments of 1,162 children, of whom 601 are on the spectrum. The Random Forest classifiers were assessed on an independent test dataset containing 9,811 evaluations of 1,450 children.
The results reported that the Random Forest achieved a predictive value of about 89% [18].
Finally, it is observed that the Random Forest algorithm has been used with notable frequency in decision-making processes in the health area, mainly for ASD diagnosis and treatment. This was one of the main reasons for adopting Random Forest in the present study as part of the hybrid model now proposed.
A Methodology for the Autism Spectrum Disorder Diagnostic Protocol
The present study used the quantitative research method, looking for numbers, frequency of characteristics, and quantifiers in the responses to social and medical assessments of children diagnosed with ASD who were candidates for the Benefit of Continuous Provision (BPC) of Social Security in Brazil. A qualitative approach was also used to interpret the results of the evaluations in the Brazilian medical-social context. The study database was obtained from the Social Security Technology and Information Company of Brazil (DATAPREV). The database has 320,302 (three hundred twenty thousand three hundred two) records referring to the responses in the medical evaluations of 3,861 (three thousand eight hundred sixty-one) children from 0 to 5 years old, from all over Brazil, covering 82 (eighty-two) social, environmental, and clinical variables. This base is fed from evaluation forms, whose items can be transformed into labels for the variables that characterize diseases. By mapping the main characteristics, it is possible to assign a response to each question. The variables include information about the person's identity, the socioenvironmental aspects, and the clinical criteria necessary to identify ASD. The process of preparing the model proposal can be summarized in the following steps:
(i) In the first stage, after collecting the responses in the evaluations, the characteristics were organized with keywords (tags) and related to their respective qualifiers in a data table, the Analytical Base Table (ABT).
(ii) In the second stage, the data were classified using the Random Forest machine learning algorithm, which combines several decision trees to obtain a prediction with greater accuracy and stability.
(iii) The third stage consisted of creating a form with the nine most used characteristics, for application by the decision-makers.
(iv) Finally, in the fourth stage, the ZAPROS-IIIi method, implemented in the ARANAÚ tool [19], performed the Verbal Decision Analysis (VDA), aiming at an order of preference of the main characteristics of ASD.
These characteristics constitute the proposed model. Figure 2 presents a summary of the steps described above, providing a simplified view of the creation of the proposed hybrid model.
Preparation for Processing.
There are two preprocessing steps. The first one was developed to prepare the data for manipulation in the Orange framework, which runs the machine learning model (see steps I and II in Figure 2). The second one was developed, from the first preliminary result, to adjust the information for the Verbal Decision Analysis (see step III in Figure 2).
In the first preprocessing stage, the questionnaire, received as a .txt file, has its questions classified and labeled. After transposition, they are arranged in tabular (panel) form. The answers are mapped and assigned to each item with a specific weight, ranging from 0 to 4. The resulting file has the .csv extension and is consumed by the machine learning framework.
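As a rough sketch of this first preprocessing stage, the snippet below turns a labeled questionnaire file into the weighted tabular form described above and writes it as CSV. The file layout, the semicolon delimiter, and the answer labels are assumptions made for illustration; the actual DATAPREV formats are not public.

```python
import csv

# Hypothetical mapping from qualitative answers to the 0-4 weights
# described in the text; the actual labels used by DATAPREV differ.
ANSWER_WEIGHTS = {
    "no difficulty": 0,
    "mild difficulty": 1,
    "moderate difficulty": 2,
    "severe difficulty": 3,
    "complete difficulty": 4,
}

def questionnaire_to_csv(txt_path: str, csv_path: str) -> None:
    """Transpose 'question;answer' lines into one tabular (panel) row."""
    questions, weights = [], []
    with open(txt_path, encoding="utf-8") as src:
        for line in src:
            line = line.strip()
            if not line or ";" not in line:
                continue
            question, answer = line.split(";", 1)  # assumed delimiter
            questions.append(question.strip())
            weights.append(ANSWER_WEIGHTS.get(answer.strip().lower(), 0))
    with open(csv_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(questions)  # one column per labeled question
        writer.writerow(weights)    # the mapped 0-4 weights
```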
In the second preprocessing step, the imported model result will be the shortest path in the decision tree, whose nodes represent the symptoms of the investigated disorder. This path is again converted into a questionnaire, with each sign becoming a variable that is the subject of a question, with 3 (three) evaluation criteria, each with 3 (three) possible qualitative answers. The result serves as input to the Verbal Decision Analysis method, ZAPROS-IIIi, implemented in the ARANAÚ tool [19].
Random Forest Method
Considerations. Random Forests are ensembles of randomized decision trees capable of predicting expected values in regression and classification. The model was published by Breiman [20,21], and since then, the technique has become an important data analysis tool, quite versatile and with few tuning parameters. Despite its simplicity, the method is generally recognized for its precision and its ability to handle small sample sizes, large feature spaces, and complex data structures [22]. The versatility of the algorithm, along with its small number of parameters to adjust, contributed to the popularity of Random Forests. Random Forests have been successfully applied to many practical problems, including air quality forecasting, chemoinformatics, ecology, 3D object recognition, and bioinformatics, to name a few. Many variations of the original algorithm have been proposed to improve computation times while maintaining good forecasting accuracy. Breiman's Random Forests were also extended to quantile estimation, survival analysis, and ranking prediction [22].
Regarding the theoretical aspect, the analyses are less conclusive: regardless of its extensive use in practical contexts, little is known about the mathematical properties of Random Forests. To date, most studies have focused on isolated parts or simplified versions of the procedure. The most famous theoretical result is that of Breiman [20,21], which offers an upper bound on the generalization error of forests in terms of the correlation and strength of the individual trees. It was followed by a technical note [22], which focuses on a modified version of the original algorithm. A critical step was subsequently taken by Jeon and Lin [23], who established lower bounds for nonadaptive forests, independent of the training set [22]. They also highlighted an interesting connection between Random Forests and a specific class of k-nearest neighbor (KNN) predictors that was further developed [22]. The nature of the procedure, a subtle combination of different components, can explain the difficulty of adequately analyzing Random Forests. Among the essential ingredients of the forest, both bagging (Breiman [24,25]) and the classification and regression trees (CART) criterion (Breiman et al. [26,27]) play a critical role. Bagging, a contraction of bootstrap-aggregating, is a general aggregation scheme that proceeds by generating subsamples of the original dataset, building a predictor for each resample, and averaging the results. It is one of the most effective computationally intensive procedures for improving unstable estimates, especially for large, high-dimensional datasets, where finding a suitable model in one step is impossible due to the complexity and scale of the problem [22]. The influential CART algorithm of Breiman et al. [26,27] originated the CART-split criterion, used in the construction of individual trees to choose the best cuts perpendicular to the axes. In each node of each tree, the best cut is selected by optimizing the CART split criterion, based on the notion of Gini impurity (classification) or squared regression error [22].
However, despite the essentiality of the algorithm components mentioned above, both bagging and the CART criterion are difficult to analyze, which is why theoretical studies have considered simplified versions of the original procedure. This is usually done by simply skipping the bagging step and replacing the CART split selection with a more elementary cutting protocol. Also, in Breiman's Random Forests, each leaf (that is, each terminal node) of an individual tree contains a preestablished fixed number of observations (this parameter is usually chosen between 1 and 5). Thus, authors opt for simplified, data-independent procedures, creating a gap between theory and practice [22].
Motivated by this discussion, Scornet et al. [22] studied some asymptotic properties of Breiman's algorithm [20,21] in additive regression models. These researchers proved the consistency of Random Forests, which provides a first basic theoretical guarantee of efficiency for the algorithm. This was the first consistency result for Breiman's [20,21] original procedure. The approach was based on a detailed analysis of the behavior of the cells generated by the CART-split selection as the sample size increases. The study also showed that Random Forests can adapt to a sparse structure when the dimension is large but only a small number of coordinates carry the information. Figure 3 describes the Random Forest algorithm [22].
The general framework is regression estimation, in which a random input vector X ∈ [0,1]^p is observed, where p is the dimension of the vector. The objective is to predict the square-integrable random response Y ∈ R by estimating the regression function m(x) = E[Y | X = x]. It is assumed that the training sample D_n = ((X_1, Y_1), ..., (X_n, Y_n)) in [0,1]^p × R consists of independent pairs distributed as the prototype pair (X, Y). The objective is to use the dataset D_n to construct an estimate m_n : [0,1]^p → R of the function m. An estimated regression function m_n is said to be consistent if E[m_n(X) − m(X)]^2 → 0 as n → ∞, where the expectation E is over X and D_n [22].
A Random Forest is a predictor consisting of a collection of M randomized regression trees. For the j-th tree in the family, the predicted value at query point x is denoted m_n(x; Θ_j, D_n), where Θ_1, ..., Θ_M are independent random variables, distributed as a generic random variable Θ and independent of D_n. In practice, this variable is used to resample the training set prior to the growing of individual trees and to select the successive directions for splitting. The trees are combined to form the finite forest estimate [22]:
m_{M,n}(x; Θ_1, ..., Θ_M, D_n) = (1/M) Σ_{j=1}^{M} m_n(x; Θ_j, D_n).   (1)
Since one can choose, in practice, M as large as desired, Scornet et al. [22] considered the infinite forest estimate, obtained as the limit of (1) when the number of trees M grows to infinity [22]:
m_{∞,n}(x; D_n) = E_Θ[m_n(x; Θ, D_n)],
where E_Θ denotes the expectation with respect to the random parameter Θ, conditional on D_n. This operation is justified by the law of large numbers, which states that, almost surely, conditional on D_n, the finite forest estimate converges to the infinite one as M grows [22]. See Breiman [20,21] for more details. From now on, to simplify the notation, m_n(x) will be written instead of m_n(x; D_n) [22].
In Breiman's original forests [20,21], each node of a single tree is associated with a hyperrectangular cell. At each stage of the tree's construction, the collection of cells forms a partition of [0,1]^p. The root of the tree is [0,1]^p itself, and each tree is grown as explained in Algorithm 1 in Figure 3. The algorithm has three parameters [22]:
(1) mtry ∈ {1, ..., p}, the number of preselected directions for splitting;
(2) a_n ∈ {1, ..., n}, the number of data points sampled in each tree;
(3) t_n ∈ {1, ..., a_n}, the number of leaves in each tree.
By default, in the original procedure, the parameter mtry is set to p/3, a_n is set to n (resampling is done with replacement), and t_n = a_n. In the analyzed approach, however, resampling is done without replacement, and the parameters a_n and t_n may differ from their default values [22].
The algorithm works by growing M different trees as follows. For each tree, a_n data points are drawn at random without replacement from the original dataset; then, in each cell of each tree, a split is chosen by maximizing the CART criterion; finally, the construction of each tree is stopped when the total number of cells in the tree reaches the value t_n. Therefore, each cell contains exactly one point in the case t_n = a_n [22].
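For readers who want to experiment, the sketch below maps the parameters above onto scikit-learn's forest implementation (a hypothetical correspondence: M → n_estimators, mtry → max_features, a_n → max_samples, t_n → max_leaf_nodes). Note one deliberate difference from the variant analyzed by Scornet et al. [22]: with bootstrap=True, scikit-learn resamples with replacement, whereas the analyzed procedure subsamples without replacement.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                    # X in [0,1]^p with p = 6
y = X[:, 0] + np.sin(np.pi * X[:, 1]) + 0.1 * rng.normal(size=500)

forest = RandomForestRegressor(
    n_estimators=100,     # M trees
    max_features=1 / 3,   # mtry ~ p/3, the usual regression default
    max_leaf_nodes=50,    # t_n, the number of leaves per tree
    bootstrap=True,
    max_samples=250,      # a_n data points drawn for each tree
    random_state=0,       # plays the role of the Theta_j randomness
)
forest.fit(X, y)
print(forest.predict(X[:3]))                      # finite forest estimate
```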
Framework Orange Canvas.
Demsar et al. [28] designed the Orange framework used in the present work. The framework is a set of machine learning and data mining tools for data analysis through Python scripting and visual programming. Orange is intended for experienced users and programmers, as well as data mining students. It is based on the C++ language but allows developers to work in Python.
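A minimal scripting counterpart of the visual workflow (steps I and II) might look as follows. The file name is hypothetical and the evaluation API has changed across Orange 3 releases, so treat this as an outline rather than a drop-in script.

```python
import Orange

# Load the ABT produced in preprocessing; Orange infers the domain from
# the CSV header, where the class attribute must be flagged.
data = Orange.data.Table("abt_asd.csv")  # hypothetical file name

rf = Orange.classification.RandomForestLearner(n_estimators=100)

# 10-fold cross-validation, mirroring the "Test and Score" widget.
results = Orange.evaluation.CrossValidation(data, [rf], k=10)
print("CA :", Orange.evaluation.CA(results))
print("AUC:", Orange.evaluation.AUC(results))
```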
Considerations about the ZAPROS-IIIi Verbal Decision Analysis Method
Verbal Decision Analysis (VDA) assumes that most decision-making problems can be expressed in natural language, and it bases the decision-making process on a corresponding representation of the problem [29]. This methodology addresses unstructured problems, characterized by the absence of logical, well-defined procedures for their resolution [29]. These problems are qualitative and complex to organize, formalize, and measure numerically, and it is not possible to have all the information necessary to resolve them. The analysis process is therefore also subjective, which requires collecting information from the decision-maker. According to Larichev and Moshkovich [29], the methods that make up the Verbal Decision Analysis framework are ZAPROS-III, ZAPROS-LM, PACOM, and ORCLASS, each with its own characteristics and applications. The analysis of large amounts of data by humans has shown that these methods operate correctly as follows [29]:
(1) comparison of two assessments on a verbal scale by two criteria;
(2) assignment of multicriteria alternatives to decision classes;
(3) comparative verbal evaluation of alternatives according to separate criteria.
The mentioned methods are Decision Support Systems (DSS) that help a decision-maker classify alternatives with multiple attributes. The first three methods of the Verbal Decision Analysis framework are aimed at establishing a ranking of the alternatives in order of preference. The last one is the only classification methodology based on the VDA structure. Figure 4 gives an overview of the Verbal Decision Analysis methodologies according to the types of problems.
In this work, the ZAPROS-IIIi method is applied. The method fits the characteristics of the problem addressed: it structures a decision rule used to compare the alternatives, which does not change even if the set of alternatives is modified, and it applies to problems with a large number of alternatives. The ZAPROS-III method is structured in three well-defined main stages: problem formulation, preference elicitation, and comparison of alternatives. As in the main version of the ZAPROS method, the method aims to rank alternatives in scenarios that involve a small set of criteria and criterion values and a large number of alternatives. The relevant criteria, their values for decision-making, and the preference scale based on the decision-maker's preferences are obtained in the first and second stages. In the last stage, the alternatives are compared based on the decision-maker's preferences. These stages are described below.
As part of the Verbal Decision Analysis structure as well, the ZAPROS-III methodology can be considered an evolution of ZAPROS-LM. Similar to the ZAPROS-LM and PACOM methods, this method is aimed at classifying a group of alternatives, from the most preferable to the least preferable.
Although ZAPROS-III applies a procedure similar to that of its predecessor to obtain the preferences, it implements modifications that make it more efficient and more robust against inconsistencies. The number of incomparable alternatives is substantially smaller than in the previous ZAPROS. Figure 5 presents a flowchart with the steps of the VDA ZAPROS-IIIi method. According to this scheme, the application of the method can be divided into three stages: formulation of the problem, elicitation and validation of the decision-maker's preferences, and comparison of alternatives.
Since the alternatives of the problem and the information needed in the preference elicitation process grow exponentially, a disadvantage of the method is that, to control this complexity, the number of criteria and criterion values treated is limited. The ZAPROS-IIIi methodology brings an essential difference in relation to the previous models: the division into different stages. Instead of basing the decision-maker's preferences on the first reference situation and then establishing another preference scale using the second reference situation, the method proposes that the two sub-stages be merged into one. Therefore, the questions asked considering the first reference situation are the same as those asked considering the second reference situation, and both situations are considered when answering each question. The change implies process optimization: dependence between criteria is avoided.
Ultimately, these modifications increased the method's comparability, so that several alternatives defined as incomparable when applying the original ZAPROS method can now be compared directly or indirectly. Furthermore, these changes did not alter the method's computational complexity [30].
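The comparison stage can be illustrated with a toy dominance check. The sketch below compares two alternatives by their vectors of criterion-value ranks (1 = most preferred), sorted from worst to best, and declares them incomparable when neither vector dominates. The real ZAPROS-IIIi procedure works on a Joint Scale of Quality Variations elicited from the decision-maker, so this is a simplification of the underlying idea, not the method itself.

```python
from typing import Optional

def dominance(a: list[int], b: list[int]) -> Optional[str]:
    """Toy pairwise comparison of two alternatives.

    a and b are rank vectors (1 = most preferred value on each
    criterion's ordinal scale). After sorting the ranks from worst to
    best, an alternative dominates if it is componentwise no worse.
    Returns 'a', 'b', or None (incomparable)."""
    sa, sb = sorted(a, reverse=True), sorted(b, reverse=True)
    if all(x <= y for x, y in zip(sa, sb)):
        return "a"
    if all(y <= x for x, y in zip(sa, sb)):
        return "b"
    return None  # incomparable without further preference elicitation

# Example: three criteria, each with ranks 1..3 on the joint scale.
print(dominance([1, 2, 1], [2, 2, 1]))  # 'a' dominates
print(dominance([1, 3, 1], [2, 1, 2]))  # None: incomparable
```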
Considering that the process of ordering preferences is complicated for a decision-maker, Tamanini et al. conducted a study with data from a battery of tests for patients with a possible diagnosis of Alzheimer's disease in order to structure a decision tree based on the characteristics that led to the determination of that disease. In this study, the preference scale was established through the analysis of the resulting tree and, subsequently, the scale was submitted to the ZAPROS method to rank the tests involved in the case studied [31]. This hybrid model shows the potential of integrating VDA with machine learning solutions [32].
A Protocol to Determine the Main Characteristics for Diagnosing Autism Spectrum Disorder
The model was structured using decision trees with the Random Forest to classify the main characteristics of medical evaluations. Then, these characteristics were submitted to the VDA ZAPROS-IIIi method to be placed in the decision-maker's order of preference. The construction of the protocol, structured on machine learning and the ZAPROS-IIIi method of VDA, used criteria defined from the qualifiers of obstacles experienced by individuals with ASD, namely [33]:
Mild/moderate (from 0% to 49% commitment). Has a slight, regular barrier or difficulty.
Severe (from 50% to 95% commitment). Has high or extreme difficulty.
Complete (96% to 100% commitment). Has a barrier or total difficulty.
The evaluations of 3,861 (three thousand eight hundred sixty-one) children aged 0 to 5 years were selected, indicating the classification "F84" and its family in the ICD-10, which characterizes the global developmental disorders; this classification in the ICD-10 corresponds to the code "299.00" in the DSM-5. After data collection, keywords were assigned to each of the characteristics of the social and medical evaluations, aiming at generating the Analytical Base Table (ABT) and executing the Random Forest, using Orange. Figure 6 shows the configuration used for the Random Forest.
[Figure 6: Orange workflow - Analytical Base Table (data) → Random Forest (learner) → Test and Score and Predictions; Model → Tree → Tree Viewer.]
The main characteristics identified were:
(i) Functions of fluency and rhythm of speech (changes in fluency, stuttering, verbiage, dyslalia, tachylalia, and bradylalia, among others), for a definite diagnosis of ASD-b330
(ii) Difficulty in intentionally using the sense of sight (following objects visually, observing people, watching a sporting event, and observing people or children playing, among others), in a manner compatible with the age group-d110 from 1 year
(iii) Difficulty in acquiring and executing necessary skills (using cutlery and pencils, among others) and complex skills (games, sports, using tools, and watching, among others), in a way compatible with the age group-d155 from 2 years
(iv) Sleep functions (start, maintenance, quantity, and quality of sleep), in a way compatible with the age group-b134
Figure 7 shows the resulting tree of the Orange application, with five levels, and the alternatives considered in the assessment. These main characteristics were applied to the ZAPROS-IIIi method, implemented in the ARANAÚ tool, for an ordering from the most preferable to the least preferable characteristic in the answers for cases of ASD diagnosis. Based on the characteristics identified early in Section 4.1, a form was prepared to be filled out by decision-makers in a survey format. Table 1 presents these characteristics in the form of a questionnaire. Each question in this questionnaire had answers under three criteria, discriminated in the rows of Table 1, namely:
A. Voice, mental, or vision functions must present a barrier or difficulty.
B. Learning and the application of knowledge must present a barrier or difficulty.
C. Mobility must present a barrier or difficulty.
In the sequence, each of these criteria was evaluated under the following three alternatives, discriminated and identified in the columns of Table 1:
Mild/moderate (from 0% to 49% commitment). Has a slight, regular barrier or difficulty.
Severe (from 50% to 95% commitment). Has high or extreme difficulty.
Complete (96% to 100% commitment). Has a barrier or total difficulty.
The mentioned questionnaire served as a basis for conducting research with a sample of 91 (ninety-one) professionals with knowledge and experience in the treatment of people with ASD:
(i) 74 (seventy-four) health professionals: twenty doctors, forty physiotherapists, forty nurses, four psychologists, four speech therapists, and two occupational therapists
(ii) 17 (seventeen) professionals in the educational field
After the elicitation phase, the preferences of each professional in the research generated the results shown in Table 1, which were compared with the results obtained through the decision trees. Then, the data with the values of criteria and alternatives acquired in the research were loaded into the ARANAÚ tool to define the order of preference of the main characteristics and the qualifiers most likely to determine a definitive diagnosis of ASD. The table in Figure 8 shows the order of preference of the ASD's main characteristics, established by the ZAPROS-IIIi method implemented in the ARANAÚ tool, after processing the data obtained in the research. These data enabled the combination of vectors formed by "criteria × alternatives," that is, "rows × columns" of Table 1. The graphical representation in Figure 8 makes it possible to observe that the characteristics most preferred by the decision-makers have more outgoing arcs than incoming arcs.
Conclusion and Future Work
The present study made it possible to analyze the questions of social and medical assessments of the Benefit of Continuous Provision (BPC) applied to children with Autism Spectrum Disorder.
The evaluations involve an extensive questionnaire. With the application of decision support methods, it was possible to narrow the choice to specific characteristics for a faster and more accurate diagnosis, taking into account the importance of early diagnosis, since most cases are still detected late. With that, there was a gain in the speed of identifying the disorder, facilitating the physician's decision-making.
This work presented a structured protocol, using decision trees with the Random Forest to classify the main characteristics of the evaluations and the ZAPROS-IIIi method for ordering these characteristics.
The first phase of the study demonstrated that the main symptoms (considered variables or vertices of the tree graph constructed by the Random Forest) are, in summary: speech dysfunction, difficulty in intentionally using vision, trouble in acquiring and executing basic and complex skills, sleep dysfunction, memory dysfunction, global social dysfunctions, difficulty moving, difficulty walking, and difficulty in vision. The next step, based on Verbal Decision Analysis, resulted in the following ordering of the criteria selected in the previous step:
(i) 8. Difficulty in walking (moving on foot, for short or long distances, without the aid of people, equipment, or devices), in a way compatible with the age group-d450 from 2 years old
(ii) 6. Global psychosocial functions (interpersonal skills necessary for the establishment of reciprocal social interactions, in terms of meaning and purpose, adaptability, responsiveness, predictability, persistence, and accessibility, and interpersonal communications, among others), in a manner compatible with the age group-b122 from 2 years
(iii) 7. Difficulty in moving using specific equipment or devices to facilitate movement (walker, wheelchair, crutches, cane, and others), in a manner compatible with the age group-d465 from 3 years
(iv) 1. Functions of fluency and rhythm of speech (changes in fluency, stuttering, verbiage, dyslalia, tachylalia, and bradylalia, among others), for a definite diagnosis of ASD-b330
(v) 4. Sleep functions (start, maintenance, quantity, and quality of sleep), in a way compatible with the age group-b134
(vi) 9. Vision functions (quality, accuracy, light and color perception, monocular and binocular vision, myopia, hyperopia, astigmatism, hemianopsia, presbyopia, color blindness, tunnel vision, central and peripheral scotoma, diplopia, night blindness, and adaptability to light, among others), in a way compatible with the age group-b210
(vii) 2. Difficulty in intentionally using the sense of sight (following an object visually, observing people, and watching a sporting event or children playing, among others), for a definite diagnosis of ASD-d110
(viii) 3. Difficulty in acquiring and executing necessary skills (using cutlery and pencils, among others) and complicated skills (games, sports, using tools, and watching, among others), in a way compatible with the age group-d155 from 2 years old
(ix) 5. Memory functions (recent, remote, and amnestic memory disorders), in a way compatible with the age group-b144 from 3 years old
The overall result suggests that of the more than 80 (eighty) variables evaluated, only 9 (nine) would be sufficient to safely indicate a diagnosis of Autism Spectrum Disorder.
The authors suggest, for future work, the application of a broader approach to the protocol, conducting future research in the universe of children and adolescents from 0 to 18 years old, and a more in-depth analysis of the characteristics and their qualifiers.
A comparative study of the present protocol proposal with another model, using algorithms based on Bayesian networks, logistic regression, or other machine learning techniques, will expand and improve the model now proposed for the decision-making process. At this point in the study, comparisons can be made between the mentioned algorithms [34].
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
"Computer Science",
"Medicine",
"Psychology"
] |
A BIM-Based Approach to Reusing Construction Firm's Management Information
Nowadays most construction firms have begun to use information management systems in their business to work more efficiently. At the same time, a lot of management information is being accumulated, and some of it can be reused to support decision-making. Up to now, this information has not been reused as effectively in construction firms as expected. This paper introduces a new approach to reusing a construction firm's management information, based on BIM (Building Information Modeling) technology. In the paper, the current approaches are reviewed first, the framework of the new approach is then described, the key issues of the new approach are clarified, and finally a use case of the new approach is demonstrated. It is concluded that the new approach can be used in construction firms to better reuse the accumulated management information.
Introduction
Nowadays most construction firms are using information management systems as management tools, and more and more information is being accumulated in these systems over time, related to production elements such as labor, machines, and materials, and covering aspects such as schedule, cost, and quality (Rujirayyanyong 2006). Just as the term "information explosion" indicates, a huge amount of information may thus be accumulated in the systems, because each project leaves several gigabytes of information after completion, while a moderately large construction firm in China may have hundreds of projects each year. The information of reuse value is referred to as information resources in the context of this study, and it is expected that by making full use of information resources, the ability of decision-making could be improved, which would thus enhance the competitiveness of construction firms.
There exist two approaches to reusing information resources. One approach is to manage the major information (e.g., engineering documents) and make it easy to query. For example, Tserng (2004) analyzed five phases of knowledge management, i.e., knowledge collection, knowledge extraction, knowledge storage, knowledge sharing, and knowledge update, and developed a system to support information reuse. The other approach is to extract useful information to build a data warehouse and excavate potential knowledge by using data mining techniques. For example, Soibelman (2004) conducted schedule delay analysis using decision tree and artificial neural network methods to analyze the information accumulated in enterprise databases. Some construction firms in China hired management consulting firms to implement Business Intelligence (BI hereafter for short) tools for them. However, the existing approaches are still not convenient and efficient for construction firms to use.
There are mainly two reasons for this. One is that the huge amount of accumulated information prevents the users from extracting the right information easily and generating valuable information from it. The other is that the existing information does not explicitly include the relationships among the various information items.
BIM (Building Information Modeling) technology makes it possible to express the relationships among various information items explicitly so that the various information items in the lifecycle of construction projects can be shared easily. In this study, the classification and identification of information resource items were carried out so that only information resources are extracted for storage from the information management systems, and BIM technology was used to represent the information resources in order to simplify their extraction for analysis. A new approach based on BIM technology was formulated in order to reuse information resources more efficiently and effectively, and a prototype system was developed to verify it. This paper is intended to present the approach in an integrated way.
A New Conceptual Framework for Reusing Construction Firm's Information Resources
Through investigations with senior managers in construction firms, it is estimated that only about 10 percent of the information accumulated in construction firms' information management systems can be regarded as information resources in the sense that it can be reused in the future. So it is necessary to first extract information resources from the information management systems of construction firms to make efficient use of them. Accordingly, a conceptual framework is established for reusing a construction firm's information resources, as shown in Figure 1 (Ma 2009). According to the framework, information resources need to be extracted from the information management systems after a project is completed. The information resources then should be standardized and added to a dedicated system for information reuse, just like reusable spare parts that are removed from used cars and stored in a warehouse. Whenever support for decision-making is needed, the managers can then analyze the information resources by using the dedicated system. This framework reduces the quantity of information stored in the database of the dedicated system so that the information resources can be reused efficiently.
In order to implement the framework, several key techniques need to be used to clarify the key issues; for example, an expert workshop is used for identifying the information resource items, BIM technology is used for the representation of information resources, an object-oriented database management system is used for managing them, and data mining technology is used for extracting knowledge to support the managers' decision-making.
Research on the Key Issues
Classification and Identification of Information Resource Items
While the classification and identification of information resource items is the basis of the new approach, it is very difficult to carry out because it not only involves many aspects of the management of construction firms but also depends on the experience of the managers. An experienced manager knows what kind of information resources could be reused in his decision-making. In addition, the classification and identification has to be systematic, so that the information resources can be extracted from information management systems without losing the valuable ones.
Hence, the procedure for the classification and identification of information resource items was formulated and implemented in the study as shown in Figure 2. In Step 1, the major decision-making processes in construction firms were summarized based on an extensive literature review. As a result, the decision-making processes were divided into two parts, project-level processes and enterprise-level processes; the former includes 5 project stages such as bidding and contract signing, and the latter includes 8 functional management categories such as planning and production. In Steps 2 and 3, the information resource items required in the processes and their usage were summarized. In total, 63 information resource items were proposed, among which 37 items are related to project-level management and the rest to firm-level management. Then a questionnaire form was formulated for use in the next step. In Step 4, a workshop was held and 5 experts from top construction firms in China were invited to discuss and fill in the questionnaire form to evaluate the reusability of the information resource items. The reusability of the information resource items was divided into three levels, i.e., A-level, B-level, and C-level, where A-level corresponds to the largest reusability, B-level to the medium one, and C-level to the smallest one. As an example, Table 1 shows the information resource items of A-level and B-level reusability. The evaluation results were verified by holding a meeting in a construction firm that had implemented information management systems successfully for several years. Ten high-level managers, including the general manager, attended the meeting and were asked to evaluate the reusability of the information resource items independently. The evaluation was in agreement with that given by the experts. Thus it is concluded that the result of the classification and identification of information resource items is applicable for extracting the information resources from the information management systems of construction firms (Ma 2010b).
Representation of Information Resources
In order to manage the information resources extracted from existing information management systems, it is better to define a neutral format for storing them. In recent years, with the development of BIM technology, the IFC standard has been recognized as a mainstream data standard. This study adopts the IFC standard as the storage format for information resources. There are three reasons for doing this. First, because of the direct or indirect relationship between the construction firms' information and the building products, it facilitates future integration of the information resources with the design information. Second, the IFC standard, which is object-oriented, contributes to classifying and storing information resources semantically and thus makes it more convenient for construction firms to retrieve the information resources they want than other representation methods.
Finally, the IFC standard contributes to better sharing and exchange of information resources since it does not change with information management systems.
However, the IFC standard has not been used in the field of construction management up to now, except for a few aspects such as schedule management and cost estimating. Hence, in order to use the IFC standard in this approach, it has to be expanded.
In this study, the expansion of the IFC standard was carried out for representing the information resource items of A-level reusability. The object-oriented analysis method was used, and eleven classes, including labour and organizations, were identified. By examining the IFC standard, it was found that only three categories of information, related to "data records", "drawings" and "the relationship with drawings", need to be defined as expansions. The hierarchical structure of existing and expanded entities for the above-mentioned categories is shown in Figure 3. In the structure, two entities related to drawings are in the resource layer, so they do not inherit from IfcRoot (Ma 2010a). Thus, the IFC-based model of information resources is formulated as shown in Figure 4. The expanded entities are properly arranged into the domain layer, interoperability layer, core layer, and resource layer of the IFC standard, respectively. Based on the model, the information resources are extracted from the information management systems of a construction firm and transformed into IFCXML files before they are imported into the prototype system developed in this study. The way to extract information resources depends on the information management systems that the construction firm uses. A mapping between the information items in the information management systems and the information resource items needs to be established by an engineer, and a script program needs to be developed based on the mapping in order to extract the information resources from the information management systems. Next, the mapping between the information resource items and the data members of the IFC entities needs to be established by an engineer who knows the IFC standard. This mapping can be established in the dedicated system for reusing the information resources by utilizing the functions provided in the system.
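The flavor of the two mappings can be sketched as follows: a small dictionary stands in for the engineer-defined mapping from ERP fields to IFC-style entities and attributes, and the records are serialized to an XML snippet. The field names and element names are hypothetical and only mimic IFCXML; a real implementation would follow the ifcXML schema and the expanded entities of Figure 4.

```python
import xml.etree.ElementTree as ET

# Hypothetical engineer-defined mapping: ERP field -> (IFC-style entity,
# attribute). Real mappings are system-specific and set up only once.
FIELD_MAP = {
    "material_name": ("IfcMaterial", "Name"),
    "purchase_cost": ("IfcCostValue", "Value"),
    "record_date": ("IfcDateTime", "Value"),
}

def record_to_xml(record: dict) -> ET.Element:
    """Serialize one extracted ERP record as an IFC-flavored XML fragment."""
    root = ET.Element("informationResource")
    for field, value in record.items():
        entity_attr = FIELD_MAP.get(field)
        if entity_attr is None:
            continue  # field not classified as an information resource item
        entity, attr = entity_attr
        ET.SubElement(root, entity, {attr: str(value)})
    return root

rec = {"material_name": "Plywood", "purchase_cost": 123.5,
       "record_date": "2011-06-01"}
print(ET.tostring(record_to_xml(rec), encoding="unicode"))
```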
It deserves note that the quality and consistency of the information resources depend both on the quality of the information in the original information management systems and on the quality of the two mappings established by the engineer in the construction firm. While the former is beyond the control of this study, the latter can be controlled if the construction firm can find an engineer who knows, or can learn, the information items involved in the information management systems, the information resource items, and the IFC standard well. The control of the latter is not a problem, since the engineer needs to do it only once, in a short period, as long as the information management systems are not changed, and it is not difficult to master that knowledge. When making use of the information resources stored in the system, an inverse process needs to be carried out. Since a combination of information resource items may be needed in many cases of decision-making, the mapping between the data members of the IFC entities and the decision-making support process needs to be established before the data are used for analysis. The process of obtaining the data of information resources for decision-making is shown in Figure 5, where a main IFC entity is an IFC entity whose attributive data can be used directly, while a basic IFC entity is one whose attributive data have to be obtained through relation entities before being used.
Management of Information Resources
Since a vast amount of information is anticipated in such a system in order to make use of information resources in construction firms, it is necessary to adopt an efficient database management system. Traditionally, relational database (RDB) management systems are used to manage data in information systems (Demian 2006). Considering that the IFC data are object-oriented, it was expected that an object-oriented database (OODB) management system would be more efficient than a traditional RDB management system. To verify this, comparisons were carried out by using an OODB management system, Versant Object Database 8 (Versant 2011), and a relational database, SQL Server 2005, respectively.
The comparison was carried out in the following way: establishing databases for simple data (e.g., a single IFC entity), combined data (a relation entity is used for combining two IFC entities), and complex data (more relation entities are used for linking more IFC entities), and then manipulating the databases by inserting and retrieving information up to 100,000 times. The results are shown in Figure 6. It is clear that in most cases the OODB management system showed much higher performance than the RDB management system (Ma 2011). Hence, Versant Object Database 8 was used in the development of the prototype of the dedicated system for information reuse (the prototype system hereafter) in this study.
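The benchmarking methodology can be mimicked with a few lines of Python. The harness below times bulk inserts and keyed retrievals against an in-memory SQLite table standing in for the relational side; it is a sketch of the measurement procedure only, since the study itself compared SQL Server 2005 with Versant Object Database 8.

```python
import sqlite3
import time

# Methodology sketch only: N insert/retrieve cycles against an
# in-memory SQLite table playing the role of the RDB under test.
N = 100_000
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, payload TEXT)")

t0 = time.perf_counter()
con.executemany(
    "INSERT INTO entity VALUES (?, ?)",
    ((i, f"IfcEntity#{i}") for i in range(N)),
)
con.commit()
t1 = time.perf_counter()
for i in range(0, N, 1000):  # sample of keyed retrievals
    con.execute("SELECT payload FROM entity WHERE id = ?", (i,)).fetchone()
t2 = time.perf_counter()
print(f"insert: {t1 - t0:.2f}s, retrieve: {t2 - t1:.2f}s")
```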
Reuse of Information Resources
Through investigations it was concluded that statistical analysis methods can satisfy construction firms' daily needs for decision-making support in most cases. So the prototype system developed in the study provides only basic analysis functions, i.e., graphical representation and multiple linear regression analysis. Besides, to satisfy users' needs for complicated analysis, the prototype system facilitates outputting information resources in the form of XLS, CSV, and ARFF files so that users can make in-depth analyses with dedicated software such as BI tools.
Prototype System
Introduction to the Prototype System
Based on the above-mentioned research, the author and colleagues developed a prototype system called the Information Resources Reuse System for Construction Firms (InfoReuse for short) by using the C# language and Versant Object Database 8. Figure 7 shows the system model.
The system has two types of users, i.e., information managers and information analysts. The former are supposed to be engineers in the construction firm who know the information resource items and the IFC standard and are responsible for importing information resources from the information management systems. In addition, they are responsible for setting up departments, users, and user rights; maintaining information resources, decision-making processes, and analytical models; and setting the mapping of IFC entities in the system. The latter can browse, search, use, and output the information resources they want by using the system.
Using the System for Case Studies
In the study, a construction firm that had used an ERP system for about 8 years was selected to test the prototype system. The extracted information resources cover project information, suppliers, materials, WBS, material purchase records, cost records, enterprise cost records, etc. There are about 5,000 information resource instances from 21 projects. The size of the IFCXML files is about 10 MB, involving 30,000 IFC entities. The imported information resources were then used to predict project cost, to assist in material purchasing, and so on. For example, the information resources of wood-based panel factory projects were used to predict the cost of a planned project. It was found that the construction cost of a wood-based panel factory project mainly depends on the output of the factory and the duration of the project. Based on this feature, the cost of the planned project was predicted by using the multiple linear regression analysis method. Due to the shortage of similar data, the system could not be validated by predicting a new case. However, the results were explained to two senior managers of the same construction firm, and they valued the findings very much.
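The regression step can be reproduced with ordinary least squares. In the sketch below, synthetic numbers stand in for the (output, duration, cost) triples of the wood-based panel factory projects, since the firm's actual figures are not published.

```python
import numpy as np

# Synthetic stand-ins for the historical projects (units illustrative).
output   = np.array([ 50,  80, 120, 150, 200], float)  # factory output
duration = np.array([ 10,  12,  16,  18,  22], float)  # project duration
cost     = np.array([4.1, 5.6, 7.9, 9.2, 11.8])        # construction cost

# Multiple linear regression: cost ~ b0 + b1*output + b2*duration.
X = np.column_stack([np.ones_like(output), output, duration])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)

planned = np.array([1.0, 130.0, 17.0])  # hypothetical planned project
print("predicted cost:", planned @ coef)
```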
Conclusions
A new approach for reusing information resources in construction firms was established based on BIM technology in the study. To verify the approach, the related key issues were clarified by using key techniques, and a prototype system was developed and applied. The following summarizes the findings.
(1) This paper reviews the previous undertakings of this study, i.e., the classification and identification of information resource items based on a literature review and an expert workshop, the representation of information resources based on the IFC standard, and the management of information resources by using an object-oriented database management system. These works have laid a sound foundation for the new approach.
(2) The new approach proposed in this study, which is based on BIM technology, can help to achieve higher efficiency in managing and reusing information resources than traditional approaches in order to support the decision-making of construction firms.
(3) As a contribution to the body of knowledge, the integrated use of identifying information resources based on predefined items, representing them by using the IFC standard, and managing them by using an object-oriented database management system provides a more efficient approach for making full use of the information resources of construction firms.
Figure 2 Procedure for the classification and identification of information resource items (Lu 2011)
Figure 3 The hierarchical structure of existing and expanded IFC entities related to "data records", "drawings" and "the relationship with drawings" (Ma 2010a)
Figure 4 Information resources model based on the IFC standard (Lu 2011)
Figure 6 Comparison of OODB and RDB for the efficiency of manipulating IFC data (Ma 2011)
"Computer Science",
"Business"
] |
Non-invasive and continuous monitoring of the sol-gel phase transition of supramolecular gels using a fast (open-ended coaxial) microwave sensor†
An open coaxial re-entrant microwave sensor has been used for the non-invasive and continuous monitoring of the sol-gel transition of physical gels characterized by different gelation mechanisms, solvents, compositions, and stabilities. Comparison with differential scanning calorimetry measurements allowed the identification of the phase transition by a change in the dielectric properties of the material over time.
Self-assembled viscoelastic gels of organic solvents (organogels), water (hydrogels) or water-organic solvent mixtures (aqueous gels) have been recognized as promising materials for bottom-up nanofabrication tools in various fields including biomedicine, sensors, cosmetics, food, catalysis, and environmental remediation. 1-3 As soft materials, gels are continuous in structure and solid-like in rheological behavior. In contrast to chemical gels, 4,5 which are based on covalent bonds (usually cross-linked polymers unable to redissolve), physical (also called supramolecular) gels 6,7 are made of either low-molecular-weight (LMW) compounds or polymers, so-called gelators, through extensive non-covalent interactions, predominantly hydrogen-bonding, van der Waals, dipole-dipole, charge-transfer, donor-acceptor, π-π stacking and metal-coordination interactions. Furthermore, systems based on both types of connections are also known. 8,9 The solid-like appearance of these gels is the result of the entrapment of the liquid (major component) in the interstices of a solid 3D matrix of large surface area (minor component), usually through surface tension and capillary forces. 1,2 Remarkably, many gels can immobilize up to 10^5 solvent molecules per molecule of gelator and increase the viscosity of the medium by a factor of 10^10.
In the case of LMW gelators, the formation of the viscoelastic matrix is a consequence of the entanglement of 1D supramolecular fibers (typically of micrometer-scale lengths and nanometer-scale diameters), which is usually induced by cooling their hot isotropic solutions to room temperature (RT) (Fig. 1A). However, it should be noted that gelation of liquids at RT 10,11 or induced by ultrasound treatment 12 instead of heating-cooling has also been described.
[Fig. 1 (A) The pictures correspond to the material made from compound 1 in acetone (c = 7 g L−1). Bottom: a typical gelation mechanism of physical gels, where (a) is usually a heating step and (b) a cooling step. (B) Portable microwave resonance system used to determine the dielectric properties of sol-gel transitions at microwave frequencies. (C) Smith chart of a microwave measurement of a sol-gel sample. (D) Magnitude representation of the resonance measurement of a sol-gel sample. Inset: picture of the metallogel made from 9 in DMF (vide infra); plots C-D correspond to this gel. At least 30 mm of the test tubes used for the measurements (40 × 7 mm, H × D) should be filled with the material for accurate data acquisition.]
Due to the weakness of the non-covalent interactions that maintain the dynamic supramolecular structure, physical gels are usually thermoreversible. Moreover, the sol-gel (and/or gel-sol) phase transition can also be triggered by other stimuli such as pH, light irradiation or ionic strength if the gelator molecule possesses appropriate structural moieties for recognition. 13,14 It is also important to recognize that the metastable nature of physical gels derives from an elusive equilibrium between dissolution and crystallization, which has stimulated numerous studies and applications in the field of crystal engineering during the last few years. 15,16 Due to the brittleness of these materials, it is usually easier to monitor the gel-sol transition rather than the sol-gel one for the construction of phase diagrams according to both the gel-sol transition temperature (T_GS) and the sol-gel transition temperature (T_SG). Among different techniques, rheology, NMR spectroscopy and conventional differential scanning calorimetry (DSC) are the most common and accurate methods used so far for this kind of study, albeit they normally suffer from the disadvantages of being relatively time consuming and requiring very expensive equipment and trained personnel. Techniques of higher specificity such as ESR, NIR and fluorescence spectroscopy have also been used to characterize the sol-gel transitions of colloids. 17 On the other hand, dielectric measurements have also been used to determine sol-gel transitions, usually below a few kHz. 18,19 At these frequencies the dielectric properties are normally related to the conductive nature of the material, and this quantity becomes less sensitive to the chemical changes that occur at gelation. 20 Dielectric measurements at microwave frequencies, however, are very sensitive to the mobility of molecules in the gel (especially when water dipoles are involved). Therefore, probing the mobility of the molecular structure through dielectric properties provides a direct (and in situ) measurement of the chemical and physical state of the matter. 20,21
Changes in dielectric parameters can be related to critical points in different material processes, such as cure reaction onset, gelation, end-of-cure, build-up of the glass-transition temperature, etc. 21,22 For example, a microwave system designed for adhesive cure monitoring has been previously described by some of us, 23 where in situ dielectric measurements correlate very well with conventional measurement techniques such as DSC, combining accuracy and rate with simplicity and an affordable cost.
This communication presents a microwave non-destructive system for monitoring the sol-gel transition process of supramolecular gels (Fig. 1A). A microwave sensor adapted to a standard pyrex vial containing the precursor isotropic solution allows in situ measurements of dielectric properties in order to distinguish the changes over time and temperature. Fig. 1B shows a picture of the portable microwave device used to conduct the dielectric measurements. The system comprises a microwave sensor, a microwave transmitter and receiver (from 1.5 to 2.5 GHz) and a control unit to provide real-time information about the gelation progress without interfering with the reaction. The precursor isotropic solution is introduced in a pyrex vial and placed inside an open coaxial re-entrant (microwave) cavity sensor. 24,25 When the low-intensity electromagnetic waves penetrate into the material, its molecules tend to orient with the applied external field and the material gains a certain polarization, reflecting part of the microwave signal back from the sensor. This reflected signal is measured continuously to determine the resonance frequency and quality factor of the sensor 23 during gelation in order to monitor the transition process. Fig. 1C and D show a typical response of the reflected signal in the microwave cavity sensor in the imaginary plane (Smith chart) or in magnitude representation for a gelation experiment at a given temperature. We have reported elsewhere the fundamental details of the microwave system with a different sensor head. 23 Fig. 2 shows the library of known gelators that we prepared (ESI†) to test the ability of the microwave sensor to monitor the sol-gel transition of physical gels. The library included single LMW gelators (1-8) as well as bicomponent (9) 26 and multicomponent gelator systems (10). 15 A number of gels with different solvents and compositions could easily be obtained from this library at well-defined concentrations. Moreover, N,N′-dibenzoyl-L-cystine (6) was included in this study for the preparation of aqueous gels. 27 The azobenzene-containing peptide 8 was selected because its phase transition can be triggered either thermally or photochemically. 28 Besides the classical heating-cooling treatment needed for the formation of thermoreversible physical gels made from solid compounds 1-8, gelator systems 9 and 10 enable sol-gel phase transitions at RT and well below RT, respectively. In the case of 9, DMF stock solutions of oxalic acid dihydrate and copper(II) acetate monohydrate were mixed at RT to form the corresponding organogel. Multicomponent solution 10 constitutes a special system used to form organogels at low temperatures upon addition of a small amount of this solution to a suitable organic solvent (ESI†). In contrast to the gels obtained from 1-8, those derived from 9-10 are not thermoreversible despite the non-covalent interactions involved in the gelation process. Moreover, gels made from 10 eventually undergo a subsequent transition to a thermodynamically more stable crystalline phase. 15 This collection of gelators offered a versatile scenario for the proof-of-concept of the detection of the sol-gel transition in physical gels by continuously monitoring the dielectric properties of the materials. The isotropic solutions of the gelators were prepared as previously reported (ESI†).
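To make the resonance readout concrete, the following sketch estimates the resonance frequency and loaded quality factor from a reflection trace such as the one in Fig. 1D. It uses a simplified 3 dB-bandwidth rule on synthetic data; real instruments typically fit the full resonance curve, so this is illustrative only.

```python
import numpy as np

def loaded_q(freq_hz: np.ndarray, s11_db: np.ndarray) -> tuple:
    """Estimate resonance frequency and loaded Q from a reflection dip.

    Simplified textbook estimate: f0 at the minimum of |S11| (in dB),
    and Q_L = f0 / BW with BW the width of the dip 3 dB above its
    minimum. (Real instruments fit the whole resonance curve.)"""
    i0 = int(np.argmin(s11_db))
    f0 = freq_hz[i0]
    level = s11_db[i0] + 3.0
    inside = np.where(s11_db <= level)[0]  # points within the 3 dB dip
    bw = freq_hz[inside[-1]] - freq_hz[inside[0]]
    return f0, f0 / bw

# Synthetic Lorentzian dip around 2 GHz for illustration.
f = np.linspace(1.5e9, 2.5e9, 2001)
dip = -20 / (1 + ((f - 2.0e9) / 5e6) ** 2)
print(loaded_q(f, dip))
```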
Preliminary experiments with solutions prepared at different concentrations of a LMW gelator showed a response of the microwave sensor to viscosity changes of the medium (ESI†). On the basis of this observation, the dielectric properties during the sol-gel transition were continuously monitored at microwave frequencies and the obtained profile was correlated with the actual temperature of the material (ESI†). Moreover, DSC thermograms were recorded separately for model systems in order to draw meaningful comparisons between the change in the dielectric properties of the material and the exothermic effect associated with the sol-gel transition (Fig. 3). The temperature profiles during the sol-gel period were constructed independently by means of a thermocouple probe (0.1 mm) centrally placed inside the mixture. We confirmed that the use of this probe did not affect the gelation kinetics. After each measurement, the state of the material was examined by the "stable-to-inversion" test, and the gel condition of model samples that did not show gravitational flow upon turning the vial upside-down was also confirmed by oscillatory rheological measurements (ESI†). Fig. 3 shows the evolution of the dielectric constant (the real part of the dielectric properties) of the isotropic solution of 3 in toluene during the sol-gel period. The curve displays at least three sections with trend lines of different slopes. The first intersection point between the 1st and 2nd trend lines corresponded to a temperature of 48 °C, which lies in the same region as the single exothermic peak observed by DSC and attributed to the T_SG. However, the visual inspection of the sample vial, as well as in situ monitoring of the temperature profile, showed that complete macroscopic gelation of the solvent (i.e., the material did not flow upon inversion of the vial) was only achieved when the temperature reached ca. 26 °C. This value was in good agreement with the second intersection point between the 2nd and 3rd trend lines. The reason for the slight difference between the estimated temperatures and the actual values probably lies in the procedure used to prepare the sample, which is heated (or cooled in the case of gelator 10) before introducing it into the sensor system, whereas the measurement cell remains at RT. This modifies the temperature of both the sample holder and the material during the measurement of the resonance frequency, causing the observed deviations.
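The trend-line analysis above can be automated: fit a straight line to each quasi-linear section of the dielectric-constant-versus-temperature curve and solve for the crossings of consecutive lines. The sketch below is a minimal illustration with hypothetical segment boundaries; it is not the authors' analysis code.

```python
import numpy as np

def trend_line_crossings(temp, eps, breaks):
    # Fit a straight line to each quasi-linear section of the dielectric
    # constant vs. temperature curve, then return the temperatures at which
    # consecutive trend lines intersect (the estimated transition points).
    fits = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        sel = (temp >= lo) & (temp <= hi)
        fits.append(np.polyfit(temp[sel], eps[sel], 1))  # (slope, intercept)
    return [(b2 - b1) / (m1 - m2)
            for (m1, b1), (m2, b2) in zip(fits[:-1], fits[1:])]

# Usage with hypothetical section boundaries (in degrees Celsius):
# crossings = trend_line_crossings(temp, eps, breaks=[20, 30, 50, 70])
```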
Moreover, appropriate control experiments demonstrated the relationship between the molecular aggregation of the gelator molecules and the dielectric properties of the bulk material as recorded by the sensor (vide infra). These experiments included the investigation of systems displaying a sol-gel transition at RT (e.g., metallogel made from the gelator mixture 9; photoinduced reversible phase transition of the gel made from 8), as well as studies in different solvents and in the absence of gelator molecules (ESI †).
In this context, we have recently demonstrated that gelator 8, a tetrapeptide bearing an azobenzene unit in the side-chain, is able to form a variety of physical organogels mainly through hydrogen bonding and π-π interactions as driving forces.28 Moreover, the gelation can be triggered by thermal, ultrasound or light stimuli. Thus, UV irradiation at λmax = 366 nm of the corresponding gel made in toluene caused the trans-cis isomerization of the azobenzene unit, which was accompanied by a gel-sol transition of the gel to the corresponding solution at RT. This photoisomerization was in agreement with the decrease of the π-π* transition band (λmax = 328 nm) and the simultaneous increase of the n-π* transition (λmax = 439 nm) (ESI†). On the other hand, cis-trans back isomerization and, hence, sol-gel transition could be achieved upon exposure of the sample to room light. As expected, the fibrillar morphology of the gel material disappeared in solution, as evidenced by FESEM. As shown in Fig. 4, cyclic photoinduced phase transitions at RT were also accompanied by a reproducible change in the dielectric properties of the material.
The use of the gelator system 9 constitutes another interesting example of gelation at RT. In this context, oxalic acid has been proven to be the lowest molecular weight organic ligand able to form stable supramolecular metallogel networks in the presence of metal salts.26 Moreover, the formation of organogels well below 0 °C can be achieved by the addition of small volumes of the multicomponent liquid gelator system 10, i.e., a 0.13 M methanolic solution of (1R,2R)-1,2-diaminocyclohexane L-tartrate and 2.4 equiv. of concentrated HCl (37 wt% in aqueous solution), to a variety of organic solvents pre-cooled close to their freezing points.15 This procedure afforded the preparation of isotropic solutions containing the gelator system, which subsequently turned into homogeneous gels while warming up to RT (ESI†). However, in contrast to many organogels, the temporal stability of these gels is very short due to the thermodynamically driven formation of (1R,2R)-1,2-diaminocyclohexane dichloride crystals, which finally causes the complete disruption of the gel matrix.15 Gel-crystal transitions have been previously related to the Ostwald rule of stages.29,30 Herein, the microwave sensor permitted not only the detection of the sol-gel transition at low temperature (T = −31 °C), which matched well the value read by a thermocouple in a separate experiment (T = −33 °C), but also the subsequent gel-crystal transition at RT ca. 2 h after gel formation (Fig. 5). Complete gel formation was verified by visual inspection at ca. −15 °C, which matched well the second intersection point registered at −13 °C. As observed with other gelators, good correlations between measured and actual T_SG values were also obtained for other solvents (ESI†). Fig. 6 shows a comparative plot of the T_SG values obtained for a variety of gels using the microwave sensor against the actual values measured by other techniques as explained above (e.g., DSC and in situ thermocouple measurements).
The results indicated a good correlation between the different techniques in recognizing the sol-gel transition under different conditions (e.g., solvent nature, concentration, and gelator structure). Finally, preliminary experiments have shown that the microwave sensor could also be used to detect the melting (gel-sol) transitions, as we could record the variation of the dielectric properties of the material at single points (upon heating separately) and correlate marked changes with the T_GS determined by DSC or the inverse flow method31 (ESI†).
Conclusions
The foregoing results demonstrate that the sol-gel transition of supramolecular gels can be estimated using a rapid and nondestructive open-ended coaxial microwave sensor, which allows continuous recording of the dielectric properties of dynamic mixtures at microwave frequencies without the need for reference values. The device comprises a microwave sensor, a microwave transmitter and receiver (from 1.5 to 2.5 GHz) and a control unit. When the electromagnetic waves penetrate into the material, its molecules tend to orient with the applied field and the material gains a certain polarization, reflecting back part of the microwave signal from the sensor. The reflected signal is measured continuously to determine the resonance frequency and quality factor of the sensor during the gelation phenomenon. The phase transition is accompanied by a clear modification of the dielectric constant of the bulk material. This technique is compatible with a variety of physical gels characterized by different gelation mechanisms. Not only sol-gel but also gel-crystal transitions can be detected at microwave frequencies. Although the adaptation of the sensor for this particular application is still in its infancy, it has shown easy operability and remarkable sensitivity towards both composition and material stability (or structural change). Besides the application of the sensor to study other gel-like materials (e.g., polymer physical gels, chemical gels, and ionogels), work towards the integration of a broad-range internal precision heating/cooling controller in the microwave sensor, the minimization of the sample volume, and the adaptation to a high-throughput analysis mode is currently underway. | 3,728.2 | 2015-02-18T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Coaxial waveguide mode reconstruction and analysis with THz digital holography
Terahertz (THz) digital holography is employed to investigate the properties of waveguides. By using a THz digital holographic imaging system, the propagation modes of a metallic coaxial waveguide are measured and the mode patterns are restored with the inverse Fresnel diffraction algorithm. The experimental results show that the THz propagation mode inside the waveguide is a combination of the four modes TE11, TE12, TM11, and TM12, in good agreement with the simulation results. In this work, THz digital holography demonstrates its strong potential as a platform for waveguide mode characterization. The experimental findings provide a valuable reference for the design of THz waveguides. ©2012 Optical Society of America
OCIS codes: (110.6795) Terahertz imaging; (230.7370) Waveguides; (090.1995) Digital holography.
References and links
1. G. Gallot, S. P. Jamison, R. W. McGowan, and D. Grischkowsky, "Terahertz waveguides," J. Opt. Soc. Am. B 17(5), 851-863 (2000).
2. M. Nagel, A. Marchewka, and H. Kurz, "Low-index discontinuity terahertz waveguides," Opt. Express 14(21), 9944-9954 (2006).
3. C. H. Lai, Y. C. Hsueh, H. W. Chen, Y. J. Huang, H. C. Chang, and C. K. Sun, "Low-index terahertz pipe waveguides," Opt. Lett. 34(21), 3457-3459 (2009).
4. J. T. Lu, Y. C. Hsueh, Y. R. Huang, Y. J. Hwang, and C. K. Sun, "Bending loss of terahertz pipe waveguides," Opt. Express 18(25), 26332-26338 (2010).
5. K. L. Wang and D. M. Mittleman, "Metal wires for terahertz wave guiding," Nature 432(7015), 376-379 (2004).
6. H. Zhan, R. Mendis, and D. M. Mittleman, "Superfocusing terahertz waves below λ/250 using plasmonic parallel-plate waveguides," Opt. Express 18(9), 9643-9650 (2010).
7. O. Mitrofanov, T. Tan, P. R. Mark, B. Bowden, and J. A. Harrington, "Waveguide mode imaging and dispersion analysis with terahertz near-field microscopy," Appl. Phys. Lett. 94(17), 171104 (2009).
8. M. Rozé, B. Ung, A. Mazhorova, M. Walther, and M. Skorobogatiy, "Suspended core subwavelength fibers: towards practical designs for low-loss terahertz guidance," Opt. Express 19(10), 9127-9138 (2011).
9. A. Thoma and T. Dekorsy, "Influence of tip-sample interaction in a time-domain terahertz scattering near field scanning microscope," Appl. Phys. Lett. 92(25), 251103 (2008).
10. Q. Wu, T. D. Hewitt, and X. C. Zhang, "Two-dimensional electro-optic imaging of THz beams," Appl. Phys. Lett. 69(8), 1026-1028 (1996).
11. Z. P. Jiang and X. C. Zhang, "2D measurement and spatio-temporal coupling of few-cycle THz pulses," Opt. Express 5(11), 243-248 (1999).
12. X. K. Wang, Y. Cui, W. F. Sun, Y. Zhang, and C. L. Zhang, "Terahertz pulse reflective focal-plane tomography," Opt. Express 15(22), 14369-14375 (2007).
13. X. K. Wang, Y. Cui, W. F. Sun, J. S. Ye, and Y. Zhang, "Terahertz polarization real-time imaging based on balanced electro-optic detection," J. Opt. Soc. Am. A 27(11), 2387-2393 (2010).
14. Y. Zhang, W. H. Zhou, X. K. Wang, Y. Cui, and W. F. Sun, "Terahertz digital holography," Strain 44(5), 380-385 (2008).
15. M. S. Heimbeck, M. K. Kim, D. A. Gregory, and H. O. Everitt, "Terahertz digital holography using angular spectrum and dual wavelength reconstruction methods," Opt. Express 19(10), 9192-9200 (2011).
16. M. Ibanescu, Y. Fink, S. Fan, E. L. Thomas, and J. D. Joannopoulos, "An all-dielectric coaxial waveguide," Science 289(5478), 415-419 (2000).
17. F. I. Baida, A. Belkhir, D. Van Labeke, and O. Lamrous, "Subwavelength metallic coaxial waveguides in the optical range: Role of the plasmonic modes," Phys. Rev. B 74(20), 205419 (2006).
18. T. I. Jeon and D. Grischkowsky, "Direct optoelectronic generation and detection of sub-ps-electrical pulses on sub-mm-coaxial transmission lines," Appl. Phys. Lett. 85(25), 6092-6094 (2004).
19. X. K. Wang, Y. Cui, W. F. Sun, J. S. Ye, and Y. Zhang, "Terahertz real-time imaging with balanced electro-optic detection," Opt. Commun. 283(23), 4626-4632 (2010).
20. X. K. Wang, Y. Cui, D. Hu, W. F. Sun, J. S. Ye, and Y. Zhang, "Terahertz quasi-near-field real-time imaging," Opt. Commun. 282(24), 4683-4687 (2009).
21. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996), Chap. 4.
22. N. Marcuvitz, Waveguide Handbook (Peter Peregrinus, London, 1993), Chap. 2.
23. A. Agrawal and A. Nahata, "Coupling terahertz radiation onto a metal wire using a subwavelength coaxial aperture," Opt. Express 15(14), 9022-9028 (2007).
Introduction
With the development of terahertz (THz) technology, the integration and miniaturization of THz systems has become an urgent requirement, which strongly promotes studies of THz waveguides. In recent years, many investigations of THz waveguides have been published [1-4]. THz propagation inside waveguides is determined by specific modes of electromagnetic waves, so the measurement of these modes is a crucial step for studying waveguiding performance, such as the transmission loss and group velocity dispersion. Generally, the mode profile can be detected by imaging the output THz field at the waveguide end [5,6]. In 2009, J. A. Harrington et al. demonstrated that waveguide modes can be detected by using THz near-field microscopy [7]. In 2011, M. Rozé et al. utilized THz near-field microscopy to measure the output mode profiles of two suspended porous-core THz fibers [8]. However, as the reported techniques adopt a raster scan method to build THz images, they need long experimental times and the THz waveform may be distorted in near-field detection [9]. In addition, most of the published results only contain one polarization component of the THz waves, so it is not easy to accurately analyze the waveguide modes. As a novel imaging technique, THz focal plane imaging [10], which was first proposed by X. C. Zhang et al. in 1996, has attracted increasing attention in recent years [11,12]. Recently, the quick and precise acquisition of two-dimensional (2D) THz polarization images has also been reported [13]. Furthermore, it was shown that the influence of diffraction on THz waves can be effectively removed by applying the digital holography algorithm [14,15].
The metallic coaxial waveguide, which can perfectly confine electromagnetic waves between two metallic cylinders, is a typical design for radio frequencies. Compared with unscreened waveguides, such as dielectric waveguides [3], wire waveguides [5], and parallel-plate waveguides [6], the metallic coaxial waveguide can effectively protect propagating signals from outside electromagnetic interference. Additionally, the flexibility of the metallic coaxial waveguide is better than that of metallic hollow waveguides [1]. Therefore, it has obvious advantages for guiding electromagnetic waves. Recently, this kind of waveguide has been introduced into the optical region [16]. Lamrous et al. theoretically discussed the effect of coaxial waveguide plasmonic modes on wave propagation in the visible optical range [17]. In 2004, Grischkowsky et al. first achieved the direct optoelectronic generation of THz pulses in a coaxial waveguide [18]. However, the measurement of the waveguide modes in the THz frequency range has not been reported.
In this paper, a significant application of THz digital holography for the determination of waveguide modes is proposed. The transmitted modes of a metallic coaxial waveguide are detected by a THz digital holographic imaging system and are reconstructed by a digital holography algorithm. The propagation properties of the waveguide modes are experimentally and theoretically discussed. This work is essential for the investigation and design of THz waveguides. In this experiment, a THz balanced electro-optic (EO) holographic imaging system [19] is used to measure the transmitted modes of the coaxial metallic waveguide, as shown in Fig. 1(a). Ultrafast 100 fs, 800 nm laser pulses with a repetition rate of 1 kHz and an average power of 750 mW are divided into the pump beam and the probe beam for generating and detecting THz waves. The pump beam illuminates a <110> ZnTe crystal with a 3 mm thickness to radiate a THz beam with an 8 mm diameter. The THz waves are coupled into the coaxial waveguide, which is held by two apertures to fix its direction. Another <110> ZnTe crystal with a 3 mm thickness is mounted close to the waveguide output end as the detection crystal. The probe beam first passes through a half wave plate (HWP) and a polarizer. Then, it is reflected by a 50/50 beam splitter (BS) to impinge on the detection crystal. The HWP and the polarizer adjust the probe polarization for detecting different THz polarization components [13]. The horizontal component of the THz field can be measured when the probe polarization is parallel to the <001> axis of the detection crystal, while the vertical component of the THz field is measured when the angle between the probe polarization and the <001> axis is 45° [13]. In the detection crystal, the probe polarization is modulated by the THz field via the Pockels effect to carry the 2D THz information. The reflected probe beam is incident onto the imaging part of the system. This part is composed of a quarter wave plate (QWP), a Wollaston prism (PBS), two lenses (L1 and L2), and a CY-DB1300A CCD camera (Chong Qing Chuang Yu Optoelectronics Technology Company). It supplies the balanced EO detection for THz imaging, whose detailed principle has already been discussed in Ref. [19]. The CCD camera is synchronously controlled with a mechanical chopper to acquire the image of the probe beam at a 2 Hz frame rate. The THz information is extracted by the dynamic subtraction technique [12]. Here, 50 frames are averaged to enhance the signal-to-noise ratio of the system. By varying the optical path difference between the pump beam and the probe beam, the THz images at different time-delay points are obtained. The time window is 20 ps and the time resolution is 0.13 ps. The total measurement time is about 2 hours and the average acquisition time of each image is about one minute. The total time consumption can be further reduced to about ten minutes by using a faster CCD camera.
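The dynamic subtraction step can be summarized in a few lines: chopper-synchronized frames alternate between THz-on and THz-off, and the averaged pairwise difference isolates the THz-induced modulation. The sketch below makes an illustrative assumption about the frame ordering and is not the authors' acquisition code.

```python
import numpy as np

def dynamic_subtraction(frames, n_avg=50):
    # Chopper-synchronized frames alternate THz-on / THz-off; the pairwise
    # difference isolates the THz-induced probe modulation, and averaging
    # n_avg difference images improves the signal-to-noise ratio.
    on, off = frames[0::2], frames[1::2]
    return (on[:n_avg] - off[:n_avg]).mean(axis=0)

# frames: array of shape (2 * n_avg, ny, nx) captured at the 2 Hz frame rate.
```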
Experimental setup
The structure of the coaxial waveguide is shown in Fig. 1(c). It is a common commercial cable (Model: SYV-50-3-41), which is widely used in industry and research. The metal core is a solid copper cylinder with a radius of b = 0.45 mm. The outer conductor is a hollow copper wire mesh layer with an inner radius of a = 1.50 mm. Because the arrangement of the wire mesh is quite dense, the layer is considered a solid one. The medium between the inner and outer conductors is a dielectric layer. The outside of the waveguide is wrapped with a rubber layer for protection. A photograph of the waveguide cross-section is presented in Fig. 1(b). First of all, we measured the reference THz signal without any sample and obtained the averaged signal from all pixels. Figure 1(d) presents the two polarization components of the reference signal. The ratio between the THz horizontal and vertical fields is about 45:1, which indicates that the incoming THz waves have a very good linear polarization. Figure 1(e) shows the 2D distribution of the maximum of the temporal signal of the THz horizontal field, which exhibits a planar envelope. Because the size of the waveguide cross-section is less than half of the THz spot diameter, the incident THz waves are considered linearly polarized plane waves. A coaxial waveguide of 30 mm length is selected as the sample, and the transmitted THz fields in both the horizontal and vertical directions are measured. By performing a Fourier transformation of the temporal THz signal on each pixel, 2D distributions of the THz waves at each frequency are obtained. The THz intensity images of the two polarization components at 0.50 THz are shown in Figs. 2(a) and 2(b), respectively. Using the THz holographic imaging system, it takes only about one minute to acquire an image. In conventional THz raster scan imaging, the time consumption is several hours for capturing an image of the same quality. Additionally, the THz holographic imaging system can more precisely reflect the THz field distribution after the introduction of polarization information [13]. Therefore, the superiority of the imaging technique is very significant for measuring waveguide modes. However, the quality of these two images is not ideal. The measured THz images are obviously influenced by diffraction, because in practice there is always a certain propagation distance between the waveguide output end and the measurement plane [20]. Therefore, it is not easy to observe and analyze the waveguide modes from the raw data. To solve this problem, we introduced a digital holography algorithm to reconstruct the distributions of the transmitted THz fields. In the experiment, the diffraction distance of the THz waves reaches several wavelengths, so the process can be treated as Fresnel diffraction and the original THz field can be restored by [21]
Analysis of the mode structures
U_0(x', y') = [exp(-ik d_eff)/(-iλ d_eff)] ∬ U(x, y) exp[-ik((x' - x)² + (y' - y)²)/(2 d_eff)] dx dy, (1)

where U(x, y) and U_0(x', y') are the raw image and the reconstructed field, and (x, y) and (x', y') are the spatial coordinates on the two observation planes, respectively. k is the wave vector of the THz radiation in vacuum and λ is the wavelength. The parameter d_eff is the propagation distance of the THz waves between the waveguide output end and the measurement plane, as shown in the inset of Fig. 1(a). By varying the distance d_eff in the propagation kernel of Eq. (1), it is found that the clearest image is acquired when d_eff is set to 1.0 mm. The first derivative of the reconstructed image is adopted to further check the recovery distance, and it is found that the edge of the waveguide is sharpest for this distance. The reconstructed images for the horizontal and vertical polarization components are shown in Figs. 2(c) and 2(d), respectively. It can be seen that the fine structures of the waveguide modes are clearly presented. For the horizontal component, the THz field exhibits a doughnut shape, but its intensity is not uniform. There are two maxima along the horizontal direction. For the vertical component, there are four symmetric maxima around the inner metal core. The experimental results demonstrate that the THz waveguide modes can be accurately restored and the influence of diffraction can be removed by the digital holography algorithm, which is quite important for the further analysis.
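Numerically, the back-propagation and the sharpness-based choice of d_eff can be implemented with a Fresnel transfer function in Fourier space. The following sketch is a minimal illustration under the assumptions of a uniform pixel pitch and a 0.50 THz wavelength; it is not the authors' reconstruction code.

```python
import numpy as np

def fresnel_propagate(u, wavelength, d, dx):
    # Propagate the complex field u over a distance d with the Fresnel
    # transfer function in Fourier space; a negative d back-propagates
    # the measured field toward the waveguide output plane.
    ny, nx = u.shape
    fx, fy = np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * d * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def autofocus(u, wavelength, dx, distances):
    # Scan candidate distances and keep the one that maximizes the mean
    # gradient magnitude, mirroring the first-derivative sharpness check.
    def sharpness(img):
        gy, gx = np.gradient(np.abs(img))
        return np.mean(np.hypot(gx, gy))
    scores = [sharpness(fresnel_propagate(u, wavelength, -d, dx)) for d in distances]
    return distances[int(np.argmax(scores))]

# Example: at 0.50 THz the wavelength is 0.6 mm; scan d_eff around 1.0 mm.
# d_best = autofocus(u_measured, 0.6e-3, 50e-6, np.linspace(0.5e-3, 2.0e-3, 31))
```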
To analyze the modes, the Cartesian components are converted into the polar components E_r and E_φ. The THz image of E_r exhibits a distribution of a pair of bilaterally symmetric crescent moons around the inner axis. The E_φ component is similar to that of E_r except for its upper-lower symmetry, and its intensity is about half that of E_r. In the THz frequency range, the copper inner and outer conductors can be considered perfect metals. By using the Helmholtz equation and the boundary conditions specific to the coaxial waveguide, the analytical expressions of E_r and E_φ for the TE and TM modes can be obtained [22], and the weighted superposition of four such modes reproduces the measured patterns, as shown in Figs. 3(c) and 3(d). The reason why these four modes are chosen can be explained as follows: the four modes have a more pronounced horizontal linear polarization than the others, so they are easily excited when the incident light is horizontally linearly polarized [17]. To more clearly observe the THz field vector in the waveguide, Figs. 3(c) and 3(d) are vectorially combined to build the electric field pattern, as shown in Fig. 4. In this figure, the red arrows show the direction of the THz field and the blue solid lines give the boundaries of the outer and inner conductors. It can be clearly seen that the whole electric field exhibits a certain unity along the horizontal polarization direction. It shows that the propagation mode is well compatible with a horizontally linearly polarized incoming wave. In addition, the cut-off wavelengths of these four modes can be calculated from the characteristic equations of the coaxial waveguide [22]. Their patterns are almost identical with the experimental results at 0.50 THz and the simulated ones, which indicates that the THz waves at other frequencies also propagate with the four modes in the waveguide. It should still be noted that the TEM mode cannot be effectively excited by a linearly polarized incident beam due to its radial polarization, so it does not appear in the measured modes [22,23].
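For readers who want to reproduce the cut-off estimates, the sketch below solves coaxial-waveguide characteristic equations of the standard form (following Marcuvitz-type handbook expressions, Ref. [22]) with the radii and dielectric index quoted in the text. Treat it as an illustrative calculation rather than the authors' code, since the exact equations and conventions used in the paper are assumptions here.

```python
import numpy as np
from scipy.special import jv, yv, jvp, yvp
from scipy.optimize import brentq

a, b = 1.50e-3, 0.45e-3  # outer / inner conductor radii (m), from the text
n_diel = 1.50            # refractive index of the dielectric layer

def char_te(k, m):
    # TE characteristic equation of a coaxial line (Bessel-derivative form)
    return jvp(m, k * a) * yvp(m, k * b) - jvp(m, k * b) * yvp(m, k * a)

def char_tm(k, m):
    # TM characteristic equation of a coaxial line
    return jv(m, k * a) * yv(m, k * b) - jv(m, k * b) * yv(m, k * a)

def cutoff_wavelengths(char, m, kmax=2.0e4, nscan=4000):
    # Bracket the roots k_c with a sign scan, refine with brentq, and convert
    # to free-space cutoff wavelengths lambda_c = 2*pi*n_diel/k_c.
    ks = np.linspace(1.0, kmax, nscan)
    vals = np.array([char(k, m) for k in ks])
    roots = [brentq(char, k1, k2, args=(m,))
             for k1, k2, v1, v2 in zip(ks[:-1], ks[1:], vals[:-1], vals[1:])
             if v1 * v2 < 0]
    return [2.0 * np.pi * n_diel / kc for kc in roots]

print("TE1p cutoffs (mm):", [round(w * 1e3, 2) for w in cutoff_wavelengths(char_te, 1)[:2]])
print("TM1p cutoffs (mm):", [round(w * 1e3, 2) for w in cutoff_wavelengths(char_tm, 1)[:2]])
```

With these inputs the first roots should land in the same range as the 8.9 mm, 2.8 mm, 3.0 mm, and 1.6 mm values quoted below, up to the idealizations (perfectly solid conductors, uniform dielectric) involved.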
Spatial-temporal analysis of the guided THz waves
Utilizing digital holography, the reconstructed THz fields in both the time and frequency domains can be acquired. On the horizontal axis of the waveguide output plane (the dark dashed line in the inset of Fig. 5(a)), the E_r component of the transmitted THz field is extracted and presented on a space-time map in Fig. 5(a). In this figure, the red and white dashed lines mark the boundaries of the outer and inner conductors. From the space-time map, it can be seen that the THz field is effectively localized in the dielectric layer and most of its energy is confined within a 1.5 ps pulse duration. Compared with the 1.3 ps pulse duration of the reference THz signal, the THz pulse is only slightly broadened after passing through the waveguide. An equivalent space-frequency map of the THz field is shown in Fig. 5(b). Its spectral range is from 0.20 THz to 1.30 THz. Although the propagation of the THz waves involves four waveguide modes, the interference between them is not obvious in this figure. To further uncover the propagation properties of the THz waves, a windowed Fourier transformation with a 1 ps long Gaussian window is performed to extract the temporal distribution of the Fourier component at a specified frequency. The processing result for 0.50 THz is shown in Fig. 5(c). The map in space and time clearly shows that the energy of all of the waveguide modes concentrates in an envelope with a 1.5 ps width. The four modes do not exhibit a separation in the time domain, which demonstrates that their group velocities are almost the same. This is why no interference between the waveguide modes is apparent in Fig. 5(b). Comparing the pulse propagation time in the waveguide and in free space, the group velocity is estimated as 0.65 times the light speed in vacuum. By applying the same procedure, space-time maps for other frequencies are also obtained. The result for 0.70 THz is presented in Fig. 5(d) as an example. This figure shows the same phenomenon as Fig. 5(c), which indicates that all frequency components are contained in the same THz pulse. This implies that the waveguide modes have small group velocity dispersion, which explains why the broadening of the THz pulse after propagating through the waveguide is weak. To further check the conclusions drawn, another coaxial waveguide, of 40 mm length, is chosen and measured using the THz imaging system. The reconstructed images of the transmitted fields are obtained in the same way. Figure 6(a) gives the transmitted modes in polar coordinates (inset) and the space-time distribution of the E_r component at 0.50 THz from the waveguide. From these results, it can be found that the THz fields on the output plane of the waveguide present very similar distributions to those of Fig. 3, and the space-time map is also consistent with Fig.
5(c) except for a 17.6 ps time delay. To determine the dispersion property, the two waveguide sections with 30 mm and 40 mm lengths are used as the launch waveguide and the test waveguide, respectively. From the obtained space-time maps of the two waveguides, two groups of THz temporal signals within the range from r = −0.75 mm to r = −1.30 mm are extracted as the reference data set and the sample data set, respectively. The average refractive index of the waveguide is calculated by

n_ave(ω) = 1 + v Δφ(ω)/(ω L).

It can be seen that the average refractive index of the waveguide remains almost constant at 1.53 from 0.20 to 1.30 THz, which indicates that the dispersion of the waveguide is very low over this frequency range and also explains the weak broadening of the transmitted THz signal from the waveguide. In addition, the refractive index of the dielectric layer is about 1.50 over 0.20-1.30 THz. The average refractive index of the waveguide is slightly larger than the refractive index of the dielectric layer, which reflects that the average group velocity of the THz waves in the waveguide is a little lower than the speed of light in the dielectric layer. These phenomena confirm the accuracy of the above experimental findings.
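A numerical version of this launch/test comparison is straightforward: Fourier-transform both temporal signals, unwrap the spectral phase difference, and apply the relation above. The function name and sign convention below are illustrative assumptions, not the authors' code.

```python
import numpy as np

C = 299792458.0  # v in the text: the light speed in vacuum (m/s)

def average_index(t, e_launch, e_test, delta_L):
    # Apply n_ave(omega) = 1 + v * dphi(omega) / (omega * delta_L), where dphi
    # is the unwrapped spectral phase difference between the launch- and
    # test-waveguide signals; the sign of dphi depends on the FFT convention.
    freq = np.fft.rfftfreq(t.size, t[1] - t[0])
    dphi = (np.unwrap(np.angle(np.fft.rfft(e_launch)))
            - np.unwrap(np.angle(np.fft.rfft(e_test))))
    omega = 2.0 * np.pi * freq
    n = np.full_like(freq, np.nan)
    nz = omega > 0
    n[nz] = 1.0 + C * dphi[nz] / (omega[nz] * delta_L)
    return freq, n

# freq, n = average_index(t, e_30mm, e_40mm, delta_L=10e-3)
```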
Conclusion
In summary, an important application of THz digital holography for the measurement and reconstruction of THz waveguide modes is demonstrated. By applying this technique, the distribution of transmitted THz waves from waveguides can be clearly restored and the THz propagation properties can be accurately analyzed. It is found that the THz propagation inside a metallic coaxial waveguide mainly depends on the superposition of the four modes TE11, TE12, TM11, and TM12.
Fig. 1.
Fig. 1. (a) THz balanced electro-optic (EO) holographic imaging system. The inset shows the distance between the waveguide output end and the measurement plane. (b) and (c) are the picture and the sketch map of the metallic coaxial waveguide cross-section. (d) Reference THz temporal signal without any sample (average of all pixels). (e) Two-dimensional (2D) distribution of the maximum of the temporal signal of the THz horizontal field.
Fig. 2.
Fig. 2. (a) and (b) show measured 2D images at 0.50 THz for the horizontal and vertical polarization components. (c) and (d) are the reconstructed results of (a) and (b) obtained by utilizing the inverse Fresnel diffraction algorithm.
Fig. 3.
Fig. 3. (a) and (b) present two polarization components of the electric field at 0.50 THz in polar coordinates. (c) and (d) are simulation results obtained by weighted stacking of the four waveguide modes TE11, TE12, TM11, and TM12. (e), (f), (g) and (h) are measured polarization images for 0.35 THz and 0.70 THz in polar coordinates, respectively.
E_x and E_y are the THz fields in Cartesian coordinates; E_r and E_φ are the THz fields in polar coordinates. Figures 3(a) and 3(b) show the intensities of E_r and E_φ at 0.50 THz.
Fig. 4.
Fig. 4. Vector electric field pattern of the output mode in the coaxial waveguide converted from Figs. 3(c) and 3(d).
Fig. 5.
Fig. 5. (a) and (b) are measured space-time and space-frequency maps of the E_r component at the position of the dark dashed line in the inset. (c) and (d) are extracted space-time maps of the 0.50 THz and 0.70 THz components obtained by performing a windowed Fourier transformation. In these figures, the red and white dashed lines correspond to the boundaries of the outer and inner conductors.
where Δφ(ω) is the phase difference for each frequency component between the output THz signals from the launch waveguide and the test waveguide, L is the length difference between the two waveguides, ω is the circular frequency, and v is the light speed in vacuum. For comparison, the refractive index of the dielectric layer is also measured by standard THz time-domain spectroscopy. Figure 6(b) shows the experimental results with error bars. The red symbols are the average refractive index of the waveguide, while the blue symbols are the refractive index of the dielectric layer.
Fig. 6.
Fig. 6. (a) shows the space-time distributions of the transmitted electric fields at 0.50 THz from the coaxial waveguide with 40 mm length. The inset shows the measured THz polarization images at 0.50 THz in polar coordinates. (b) shows the calculated average refractive index of the coaxial waveguide and the refractive index of the dielectric layer.
distributions of the THz field for different modes can be calculated. However, it is found that the measured results do not match any single waveguide mode. Because the chosen waveguide has a cross-section on the millimeter scale, it allows the existence of multiple modes. When the four modes TE11, TE12, TM11, and TM12 are selected and weighted-stacked as TE11 + 0.625 TE12 + 0.625 TM11 + 0.625 TM12, the simulation results are consistent with the measured ones very well, as shown in Figs. 3(c) and 3(d).
The refractive index of the dielectric layer at THz frequencies is measured by standard THz time-domain spectroscopy. With these equations, the cut-off wavelengths of TE11, TE12, TM11, and TM12 are obtained as 8.9 mm, 2.8 mm, 3.0 mm, and 1.6 mm, respectively, which are larger than the longest wavelength of the incident THz waves. This means that these modes can appear in the coaxial waveguide and dominate the propagation of the THz waves. The intensities of E_r and E_φ at 0.35 THz and 0.70 THz are also extracted, as shown in Figs. 3(e)-3(h).
The transmitted THz waves are obtained by the digital holography algorithm, and space-time maps for different frequency components are extracted by the windowed Fourier transformation. The spatial-temporal analyses of a specified spectral component show that these modes have similar group velocities and small group velocity dispersion. This work provides a valuable research method and experimental basis for studies and designs of THz waveguides. | 5,765.2 | 2012-03-26T00:00:00.000 | [
"Physics"
] |
Extended supersymmetry with central charges in Dirac action with curved extra dimensions
We discuss a new realization of $\mathcal{N}$-extended quantum-mechanical supersymmetry (QM SUSY) with central charges hidden in the four-dimensional (4D) mass spectrum of higher dimensional Dirac action with curved extra dimensions. We show that this $\mathcal{N}$-extended QM SUSY results from symmetries in extra dimensions, and the supermultiplets in this supersymmetry algebra correspond to the Bogomol'nyi--Prasad--Sommerfield states. Furthermore, we examine the model of the $S^2$ extra dimension with a magnetic monopole background and confirm that the $\mathcal{N}$-extended QM SUSY explains the degeneracy of the 4D mass spectrum.
N = 2 QM SUSY in higher dimensional Dirac action
In this section, we show that the structure of N = 2 QM SUSY is always hidden in the 4D mass spectrum of the (4 + d)-dimensional Dirac action with curved extra dimensions.
A = e^Δ [ −i γ^ŷ e_ŷ^y (∇_y + iqA_y) + W ], (2.14)

Then, by requiring that the KK mode functions satisfy the orthonormal relations, we can obtain the following action:

S = ∫ d⁴x Σ_α [ ψ̄_{L,α}^{(0)} iγ^μ ∂_μ ψ_{L,α}^{(0)} + ψ̄_{R,α}^{(0)} iγ^μ ∂_μ ψ_{R,α}^{(0)} + Σ_{n≥1} ψ̄_α^{(n)} ( iγ^μ ∂_μ − m_n ) ψ_α^{(n)} ],

where ψ_α^{(n)}(x) = ψ_{R,α}^{(n)}(x) + ψ_{L,α}^{(n)}(x) denote 4D Dirac spinors with mass m_n and ψ_{L/R,α}^{(0)}(x) are massless 4D chiral spinors. The expression of the above effective 4D action coincides with the case of flat extra dimensions given in [45], although the effects of the curved spaces and background fields enter the mass spectrum through the definitions of A, A† and (2.16).
Since we have assumed that the KK mode functions f_α^{(n)} and g_α^{(n)} each form a complete set, the orthonormal relations (2.16) lead to the relations (2.19), where the supercharge Q, the Hamiltonian H and the "fermion" number operator (−1)^F are defined accordingly; F_{yy'} is the field strength for A_y and R is the Ricci scalar defined on Ω. Then, we find that the relations (2.19) realize N = 2 supersymmetric quantum mechanics [1,48].5 In this model, the "bosonic" and "fermionic" states which form an N = 2 supermultiplet correspond to the KK mode functions (f_α^{(n)}(y), 0)^T and (0, g_α^{(n)}(y))^T. Before closing this section, we comment on the Hermiticity of the supercharge. From the action principle δS = 0, we obtain a boundary condition on the KK mode functions that must hold for all m, n, α, β, where ∂Ω represents the boundary of Ω and n_y(y) is an orthonormal vector on ∂Ω. We can show that this condition corresponds to the Hermiticity condition for the supercharge. Then, the supercharge Q is Hermitian as long as the action principle is required. Thus, we can conclude that the N = 2 QM SUSY is always realized in the 4D mass spectrum of the higher dimensional Dirac action and that the doubly degenerate states (f_α^{(n)}(y), 0)^T and (0, g_α^{(n)}(y))^T are mutually related by the supercharge Q, except for zero energy states.
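To make the algebraic structure concrete, the following toy computation discretizes a one-dimensional A = d/dy + W(y) (with the illustrative choice W(y) = y on a flat interval, not the paper's curved setup) and checks numerically that Q is conserved, flips the "fermion" number, and pairs the nonzero levels of A†A and AA†.

```python
import numpy as np

# Toy check of the N = 2 algebra: discretize A = d/dy + W(y) on a grid.
N, L = 300, 12.0
y = np.linspace(-L / 2, L / 2, N)
h = y[1] - y[0]

D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)  # antisymmetric d/dy
A = D + np.diag(y)                                                        # A = d/dy + W(y)

Z = np.zeros((N, N))
Q = np.block([[Z, A.T], [A, Z]])                   # Hermitian supercharge
H = Q @ Q                                          # Hamiltonian H = Q^2
F = np.block([[np.eye(N), Z], [Z, -np.eye(N)]])    # "fermion" number operator (-1)^F

print("max |[H, Q]| =", np.abs(H @ Q - Q @ H).max())   # ~0: Q is conserved
print("max |{Q, F}| =", np.abs(Q @ F + F @ Q).max())   # 0: Q flips the grading
e_b = np.sort(np.linalg.eigvalsh(A.T @ A))[:5]         # "bosonic" sector
e_f = np.sort(np.linalg.eigvalsh(A @ A.T))[:5]         # "fermionic" sector
print(e_b, e_f)  # nonzero levels pair up; only zero modes may be unpaired
```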
N-extended QM SUSY with central charges
In the previous section, we described the N = 2 QM SUSY hidden in the double degeneracy of f_α^{(n)}(y) and g_α^{(n)}(y). However, we can expect that further hidden structures exist in the 4D mass spectrum, and this would lead to extra degeneracy associated with the index α in addition to the double one. 5 The N = 2 SUSY algebra {Q_i, Q_j} = 2Hδ_ij (i, j = 1, 2) is obtained with Q_1 = Q and Q_2 = i(−1)^F Q.
In this section, we show that the N-extended QM SUSY with central charges can be constructed from symmetries in the extra dimensions. This QM SUSY can explain the extra degeneracy in the 4D mass spectrum. Then, we clarify the representation of this algebra for the nonzero energy states and it will turn out that the eigenstates become BPS states. This section is devoted to the general discussion, and a concrete example will be given in the next section.
N-extended SUSY algebra with central charges
Here, we discuss a new realization of N-extended QM SUSY from symmetries. First, we consider sets of operators {â_i (i = 1, 2, ..., N_a)}, {b̂_i (i = 1, 2, ..., N_b)}, ..., {α̂_i (i = 1, 2, ..., N_α)}, {β̂_i (i = 1, 2, ..., N_β)}, ..., which are Hermitian, consistent with an imposed boundary condition for the mode functions f_α^{(n)},6 and commute with A†A. Therefore, these operators do not change the mass eigenvalues and should be related to symmetries in the extra dimensions. Furthermore, we require that these operators commute with the ones in the same set and anticommute with the ones in different sets for Roman and Greek letters, and that the operators with Roman letters and the ones with Greek letters commute with each other. Then, we define the extended supercharges and obtain an N = (N_a + N_b + ... + N_α + N_β + ...) SUSY algebra with central charges,7 where H denotes the Hamiltonian given by (2.21) and the central charges are given by (3.11). 6 More precisely, we require that the functions â_i f^{(n)}, ... also satisfy the imposed boundary condition. 7 Although we can also define the supercharges with the roles of the Roman- and Greek-letter operators exchanged, those are essentially the same as the ones given above. Therefore, we can consider that the central charges in this SUSY algebra result from the symmetries in the extra dimensions. If we take the sets of operators to be reflection operators and gamma matrices, this extended QM SUSY corresponds to the one given in the previous papers [45].
For the existence of the extended SUSY with given sets of operators, the metric of the curved space and the background fields are restricted so as to satisfy the condition (3.1). However, it seems difficult to find the constraints without any assumption on the sets of operators.8 Therefore, we will first fix the geometry and the background fields, and then consider the sets of operators consistent with them when we discuss an example in section 4.
It should be mentioned that this central extension is given by direct sums of mutually (anti)commuting N = 2 SUSY algebras, as in the previous paper [45], with different "Hamiltonians" for each of them. In particular, the supercharges Q_i^{(A)} belonging to different sets mutually (anti)commute. This property is important for the discussion of the BPS states in the next subsection.
Representation of SUSY algebra
Then, let us clarify the representation of this algebra for the nonzero energy states. Since the Hamiltonian and the central charges commute with each other, we first look at their simultaneous eigenstates, where n and z indicate the labels of their eigenvalues m_n and z_ij^{(A)},9 and s denotes an extra index that further classifies the eigenstates in the following discussions. For these states, the algebra (3.8) is rewritten into (3.14). Since z_ij^{(A)} is a real symmetric matrix, it can be diagonalized by an orthogonal matrix U_ij^{(A)}. Then, by redefining the supercharges, we can obtain (3.15). It should be noted that, in this algebra, at most one supercharge among Q_i'^{(A)} (i = 1, ..., N_A) can become nontrivial and the others must vanish for any eigenstate. This is because the supercharges satisfy the relation that follows from (3.15) and the commutativity of Q_i^{(A)} and Q_j^{(A)}. Therefore, the number of nontrivial supercharges for the eigenstates is at most the number of sets of operators related to symmetries, and the eigenstates necessarily become BPS states if the sets have more than one operator.
Next, to construct the supermultiplets in this SUSY algebra, we introduce the following operators by means of the nontrivial redefined supercharges for the eigenstates, where A, B, C, and D are different from each other according to the above discussion. These operators commute with each other, and therefore we can further classify the eigenfunctions by them. Since these operators satisfy the corresponding relations for the eigenstates, we can parametrize their eigenvalues as10 s_ij^{(AB)} = ±. Here, we have written the index s as s_ij^{(AB)} s_kl^{(CD)} ···. From the relations s_ij^{(AB)} = ±, s_kl^{(CD)} = ±, ···, we can explicitly construct the supermultiplet from Φ_{++···,z}^{(n)}.
Example
In this section, we confirm that the N-extended QM SUSY given in the previous section can be realized in a higher dimensional Dirac action with curved extra dimensions. As an example, we examine the S² extra dimension with the Wu-Yang magnetic monopole background.
Spin-weighted spherical harmonics
As is well known, the mode functions on S² can be expressed by the spin-weighted spherical harmonics [51-53]. Thus, we briefly review the Newman-Penrose ð (eth) formalism and these functions. First, we consider the rotation of the orthogonal basis e_θ, e_φ defined in the tangent space on S² (4.1). We say that a quantity η has spin weight s if η transforms as η → e^{isα} η under the above transformation. Furthermore, we introduce the operators ð (eth) and ð̄ (eth bar) which act as follows on η with spin weight s: we can show that ðη has spin weight s + 1 and ð̄η has spin weight s − 1. Therefore, ð and ð̄ correspond to the spin weight raising and lowering operators, respectively.
From (4.3) and (4.4), we obtain
The spin-weighted spherical harmonics sY_jm with spin weight s are given as the eigenfunctions of ð̄ð and ðð̄, or equivalently, where the spin weight is given by s = 0, ±1/2, ±1, ±3/2, ···. The index j (= |s|, |s| + 1, |s| + 2, ···) denotes the main total angular momentum quantum number and the index m (= −j, −j + 1, ···, j − 1, j) indicates the secondary total angular momentum quantum number. Since the spin-weighted spherical harmonics sY_jm form a complete set for fixed spin weight s, any function on S² with spin weight s can be decomposed into the sY_jm. The explicit form of the normalized spin-weighted spherical harmonics satisfies the orthonormal relation, and we have chosen the phase of these functions in such a way that they satisfy the stated relations. In the case of s = 0, the spin-weighted spherical harmonics reduce to the spherical harmonics Y_jm. As with the ordinary spherical harmonics, these functions furnish a representation of the su(2) algebra, where L², L_z and L_± are the angular momentum operators for quantities with spin weight s. Furthermore, these functions satisfy the following properties:
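As a concrete check of the eigenvalue relation for s = 0, the sketch below implements ð and ð̄ with finite differences in θ and a spectral derivative in φ, and verifies ð̄ð Y_jm = −j(j + 1) Y_jm away from the poles. It assumes the common convention ðη = −(sin θ)^s (∂_θ + i csc θ ∂_φ)((sin θ)^{−s} η) and the SciPy argument order for spherical harmonics; grid sizes are illustrative.

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuth, polar) in SciPy

j, m = 3, 1
t = np.linspace(0.4, np.pi - 0.4, 400)                  # polar angle, away from poles
p = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)  # azimuth (periodic)
T, P = np.meshgrid(t, p, indexing="ij")
kp = 2.0 * np.pi * np.fft.fftfreq(p.size, d=p[1] - p[0])

Y = sph_harm(m, j, P, T)  # s = 0 spherical harmonic Y_jm

def dphi(f):
    # Spectral derivative along the periodic azimuthal direction.
    return np.fft.ifft(1j * kp * np.fft.fft(f, axis=1), axis=1)

def eth(eta, s):
    # Spin-raising operator acting on a quantity of spin weight s.
    w = np.sin(T) ** (-s) * eta
    return -np.sin(T) ** s * (np.gradient(w, t, axis=0) + 1j * dphi(w) / np.sin(T))

def eth_bar(eta, s):
    # Spin-lowering operator acting on a quantity of spin weight s.
    w = np.sin(T) ** s * eta
    return -np.sin(T) ** (-s) * (np.gradient(w, t, axis=0) - 1j * dphi(w) / np.sin(T))

lhs = eth_bar(eth(Y, 0), 1)   # bar-eth eth acting on a spin-weight-0 function
lam = (np.vdot(Y[5:-5], lhs[5:-5]) / np.vdot(Y[5:-5], Y[5:-5])).real
print("expected:", -j * (j + 1), " measured:", round(lam, 2))  # both about -12
```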
KK mode functions and mass spectrum of the S² extra dimension with magnetic monopole
Next, we discuss the KK mode functions and the mass spectrum of the S² extra dimension with a magnetic monopole. We consider the space M⁴ × S² with radius a, ds² = η_μν dx^μ dx^ν + a²(dθ² + sin²θ dφ²), (4.18) and we choose the basis of the vielbein as (1, 1, 1, 1, a, a sin θ) (K = 0, 1, 2, 3, θ, φ; K̂ = 0̂, 1̂, 2̂, 3̂, θ̂, φ̂). Furthermore, we introduce the Wu-Yang magnetic monopole background, where q is the gauge coupling constant.11 The gauge fields A_N and A_S are defined on the north patch (0 ≤ θ < π, 0 ≤ φ < 2π) and the south patch (0 < θ ≤ π, 0 ≤ φ < 2π) of S², respectively. 11 In the case of n = 0, the Einstein equation leads to a → ∞. However, in the case of n ≠ 0, the radius a is stabilized and given by a² = n²κ²/(8q²), where κ is the 6D gravitational coupling constant [54]. In this paper, we concentrate on the structure of the mass spectrum of this model and will not take into account the stability of a.
Then, we consider the 6D Dirac action with the monopole background and the bulk mass M, where we require that the Dirac fields on the north patch Ψ_N(x, θ, φ) and on the south patch Ψ_S(x, θ, φ) are related by a gauge transformation. Here, we define the internal chiral matrix γ_in, which satisfies the relations below, and we decompose Ψ_{N/S}(x, θ, φ) accordingly, where the upper and lower signs in the exponential denote the north and south patches, respectively. (Hereafter, we use the same notation for the signs in the exponential where they appear.) The operators ð_{s_+} and ð̄_{s_−} represent the spin weight raising and lowering operators for the spin weights s_± = −(n ± 1)/2, and P_± = (1 ± γ_in)/2 are the projection matrices for γ_in. Then, A†A and AA† are given accordingly. Thus, we find from (2.19) and (4.6) that the mode functions and the mass eigenvalues are obtained as follows, where s_±, j and m are given by j = |s_+|, |s_+| + 1, |s_+| + 2, ··· (for α = +) or j = |s_−|, |s_−| + 1, |s_−| + 2, ··· (for α = −), and m = −j, −j + 1, ···, j − 1, j, (4.30) and we define e_± as the 2-component orthonormal vectors which satisfy γ_in e_± = ±e_±, γ_θ̂ e_± = ie_∓, γ_φ̂ e_± = ∓e_∓. (4.31) Then, the degeneracy of the j-th KK level is 2j + 1 for each mode function. 12 The massive mode functions g_α^{(k)N/S} ∝ A f_α^{(k)N/S} are not eigenfunctions of γ_in with eigenvalue α because A and γ_in do not commute with each other.
We can see that the mode functions f_α^{(n)} satisfy properties that are also useful for the construction of the N-extended QM SUSY.
• properties of reflection operators
In the model without the magnetic monopole (n = 0), we can consider the reflection operators with gamma matrices, γ_in γ_θ̂ R_θ and γ_in γ_φ̂ R_φ, where R_θ and R_φ represent the reflections of the coordinates θ and φ. In the case of S² with a monopole (n ≠ 0), the above operators are ill-defined and do not commute with A†A because of the monopole background. However, the operator combining the above reflections with a gauge transformation is Hermitian and well defined for the mode functions and satisfies the required relations. We find that this operator connects the mode functions with each other. Then, we can obtain various kinds of N-extended QM SUSYs from the above operators. As examples, we consider the following two N-extended QM SUSYs. • N = 6 extended QM SUSY with central charges from the angular momentum operator. First, we consider the N = 6 extended QM SUSY with central charges built from the angular momentum operator L_z^{(n)}, where the supercharges Q_k and the nonzero components of the central charges Z_kl are given accordingly. In this QM SUSY, Q_3 and Q_4 commute with each other, as do Q_5 and Q_6.
According to the discussion given in section 3.2, we redefine the supercharges for the eigenfunctions with the eigenvalues H = m_j², Z_34 = m m_j² and Z_33 = (−1 + m²) m_j² (where m indicates the angular momentum number). The SUSY algebra (4.40) is written in the following form for the redefined supercharges, where z_k' (k = 1, ..., 6) is given by (4.51), together with the eigenfunctions Φ^{(jm)} for n < 0 and j = s_+. (4.53) Here z' indicates {z_i' (i = 1, ..., 6)} given in (4.51), and s_12 = ±, s_35 = ± denote the signs of the corresponding eigenvalues. Then we can show that the above eigenfunctions form the supermultiplets and satisfy the same relation as (3.21). In this extended QM SUSY, the four(two)-fold degenerate eigenfunctions with H ≠ M² (H = M²) are related by the supercharges Q_k' with k = 1, 2, 3, 5 (k = 3, 5). Furthermore, an extra (2j + 1)-fold degeneracy exists in the eigenfunctions. This degeneracy originates from the angular momentum number m, which corresponds to the eigenvalues of the central charges.
• N = 6 extended QM SUSY with central charges from the angular momentum operator and the reflection operator. Next, we construct the N = 6 extended QM SUSY with central charges from the angular momentum operator and the reflection operator, where the supercharges Q_k and the nonzero components of the central charges take the corresponding form. We can show that the above eigenfunctions form the supermultiplets and satisfy the same relation as (3.21). The supermultiplets for the BPS states can be constructed by use of the mode functions f_{α,r}^{(jm)N/S} and g_{α,r}^{(jm)N/S} in the same way. In this extended QM SUSY, the eight-fold degenerate non-BPS states are related by the six supercharges Q_k (k = 1, ..., 6). Therefore, we see that the additional two-fold degeneracy can be further explained by the supercharges compared with the previous extended QM SUSY (although the total degeneracy including the BPS states with m = 0 is not changed). This additional degeneracy corresponds to the parity even and odd sectors of the reflection. Since the eigenfunctions with m = 0 exist only for even parity by the definition (4.59), they should become BPS states.
Although we have considered two examples, more extended QM SUSYs can be constructed from the symmetries. For instance, we can obtain them by using the reflection operators γ_in γ_θ̂ R_θ and γ_in γ_φ̂ R_φ in the case without the magnetic monopole, since those operators correspond to symmetries of such a model.
Summary and discussion
In this paper, we have constructed a new realization of the N-extended QM SUSY with central charges hidden in the higher dimensional Dirac action with curved extra dimensions. This extended QM SUSY results from symmetries in the extra dimensions, and the supercharges and the central charges are obtained by use of them. We have also investigated the representation of the SUSY algebra and shown that the supermultiplets become BPS states. In addition, we considered the model of the S² extra dimension with the magnetic monopole as a concrete example. We have confirmed that the KK mode functions properly correspond to the representations in the two types of N-extended QM SUSYs obtained from the rotational and reflection symmetries in the extra dimensions.
The characteristic property of our extended QM SUSY is that certain supercharges commute with each other. Recently, a new generalization of supersymmetry has been proposed, namely Z_2^n-graded supersymmetry, whose supercharges have n degrees [55-57]. For supercharges with different degrees, the algebra may close not in anticommutators but in commutators. Therefore, it is interesting to investigate the relation between this supersymmetry and our QM SUSY.
Our analysis is not complete. As we have seen in section 4, various extended QM SUSYs appear in a model, depending on the sets of symmetries. Thus, we should clarify what kinds of extended QM SUSYs can be obtained from a given model. It is also important to reveal what symmetries exist for a given choice of boundary conditions, extra dimensional spaces and background fields, since they affect the structures of the extended QM SUSYs. Furthermore, it is known that central charges are closely related to topological properties [40,58,59]. Although the central charges given in section 4 might be related to the topology of S² and the magnetic monopole, the detailed structures have not been unveiled. Thus, we should investigate models with nontrivial topology further. We can also expect that further structures are hidden in the 4D mass spectrum. Since our discussion is not completely general, other realizations of extended QM SUSY might be constructed.
Moreover, since we have obtained a new extended QM SUSY, it is fascinating to study new types of exactly solvable quantum-mechanical models. The issues mentioned in this section will be addressed in future work.
"Physics"
] |
Universal preprocessing of single-cell genomics data
We describe a workflow for preprocessing a wide variety of single-cell genomics data types. The approach is based on parsing of machine-readable seqspec assay specifications to customize inputs for kb-python, which uses kallisto and bustools to catalog reads, error correct barcodes, and count reads. The universal preprocessing method is implemented in the Python package cellatlas that is available for download at: https://github.com/cellatlas/cellatlas/.
Introduction
A frequent refrain in single-cell genomics is that "uniform processing" of data is desirable and important. However, when contrasted against the large number of single-cell genomics assays (Klein et al. 2015; Hagemann-Jensen et al. 2020; Zheng et al. 2017; Rosenberg et al. 2018; W. Xu et al. 2022; Gehring et al. 2020; Stoeckius et al. 2017; Schraivogel et al. 2020), and given that ideally analysis methods should be optimized and tailored for the assays to which they will be applied, the question arises: why is uniform processing desirable? For example, in the case of single-cell RNA-seq it may be interesting to study cell-type isoform specificity (Booeshaghi et al. 2021), whereas for single-cell ATAC-seq the notion of isoforms is meaningless because the assay is based on genomic DNA, not cDNA, sequencing (Cusanovich et al. 2015). The reason uniform preprocessing can be important is that, with increasing amounts of data being generated (Svensson, da Veiga Beltrame, and Pachter 2020), datasets are seldom analyzed in isolation, and, to the extent possible, it is desirable to avoid batch effects resulting from assay- or data-specific methods. Thus, uniform processing is increasingly intended to mean uniform preprocessing of data so that meta-analyses can be shielded from batch effects (Zhang et al. 2021; Davis et al. 2018).
Uniform preprocessing of single-cell genomics data is possible because single-cell genomics protocols are all rooted in a handful of technologies. They can be organized according to their physical-isolation and molecular-capture modes: cellular contents can be isolated in wells (Picelli et al. 2014; Gierahn et al. 2017; Hagemann-Jensen et al. 2020), shells (Zheng et al. 2017; Macosko et al. 2015; Klein et al. 2015), or cells, where the cell (or nucleus) itself serves as a container (Rosenberg et al. 2018; Cao et al. 2017; Ma et al. 2020). Physical isolation, whether by wells, shells, or cells, is followed by wet-lab methods to capture molecular features. These features are tagged using barcodes that associate synthetic nucleotide sequences to individual objects such as cells, nuclei, pixels, or beads. These "molecular capture modes", which may include RNA, DNA, proteins, or synthetic tags, are coupled with barcode sequences to produce libraries for sequencing. The physical-isolation and molecular-capture methods used for an assay are reflected in the structure of the reads produced by these methods (Figure 1). The seqspec assay specification (Booeshaghi, Chen, and Pachter 2023) provides a machine-readable format for describing this structure, thereby translating the organizational themes of single-cell genomics into a medium that can, in principle, facilitate automatic and universal preprocessing of single-cell genomics data. While several tools have been developed for preprocessing data from different single-cell RNA-seq assays (He et al. 2022; Battenberg et al. 2022; Melsted et al. 2021), we demonstrate that seqspec (Booeshaghi, Chen, and Pachter 2023), along with kallisto bustools (Melsted et al. 2021), kITE (Sina Booeshaghi et al. 2022) and snATAK (Sina Booeshaghi, Gao, and Pachter 2023), can in principle be used for preprocessing data from any single-cell genomics assay.
Results
To consistently preprocess single-cell genomics data, reads sequenced from libraries produced with different technologies must be cataloged, error-corrected, and counted. Cataloging reads requires determining their sequence of origin by mapping them either to known or unknown categories. For example, single-cell RNA-seq reads might be aligned to a known set of transcripts to count RNA molecules. Similarly, in cases where the set of possible orthogonal barcodes is known a priori, as in PerturbSeq experiments (Schraivogel et al. 2020), cell assignment can be performed based on mapping to known barcode categories. On the other hand, unknown categories may be informed by the sequencing reads themselves, such as in ATAC-seq assays, where accessible chromatin categories can be determined from the reads. The kallisto bustools workflow can build known and unknown categories for a wide range of assays such as RNA-seq, ATAC-seq, and synthetic- and protein-tag-based methods such as CITE-seq (Stoeckius et al. 2017), MultiSeq (McGinnis et al. 2019), and Clicktags (Gehring et al. 2020). Genomic and transcriptomic categories are generated for scRNA-seq libraries, synthetic-tag categories for orthogonal barcode methods, and peak categories are generated from the reads for chromatin accessibility libraries.
Universal preprocessing also requires identifying and handling sequence elements from reads in a consistent manner. Elements such as barcodes, unique molecular identifiers (UMIs), and genomic features such as cDNA must be accurately identified and appropriately parsed by preprocessing tools to ensure that cataloging, error correction, and counting are correctly performed. To enable proper element identification and universal preprocessing, we use the seqspec index command (Booeshaghi, Chen, and Pachter 2023) to extract sequenced elements in a tool-specific manner and perform read cataloging against the previously indexed categories. This indexing strategy correctly identifies and extracts relevant single-cell features in all assay types, a result of the consistent and controlled vocabulary of the seqspec specification. Sequencing machines used to sequence the libraries produced with genomics assays may generate reads with errors in barcode sequences, affecting the ability to register barcodes across modalities in multi-modal assays. Preprocessing tools that apply discordant barcode error correction strategies to each modality may lose barcodes due to mismatches in correction. To address this, universal preprocessing requires consistent error correction strategies. The kallisto bustools workflow achieves a good tradeoff between speed, memory, and barcode recovery with the Hamming-1 distance barcode error correction strategy, which can be consistently applied to barcodes from multiple assays (Melsted et al. 2021).
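As a concrete illustration, a minimal Python sketch of Hamming-1 correction against a barcode onlist follows (illustrative only; bustools implements this step in optimized compiled code, not as shown here):

```python
def hamming1_neighbors(bc, alphabet="ACGT"):
    """Yield all sequences within Hamming distance 1 of barcode bc."""
    for i, base in enumerate(bc):
        for b in alphabet:
            if b != base:
                yield bc[:i] + b + bc[i + 1:]

def correct_barcode(bc, onlist):
    """Return the corrected barcode, or None if no unambiguous match.

    onlist: set of known barcode sequences.
    """
    if bc in onlist:  # exact match: no correction needed
        return bc
    hits = {n for n in hamming1_neighbors(bc) if n in onlist}
    return hits.pop() if len(hits) == 1 else None  # require a unique neighbor

# Example (toy 4 bp barcodes)
onlist = {"AAAC", "GGGT"}
print(correct_barcode("AAAG", onlist))  # -> "AAAC"
print(correct_barcode("AGGC", onlist))  # ambiguous/absent -> None
```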
After read cataloging and error correction, the final step in universal preprocessing is read counting. To quantify the number of features per cell in single-cell genomics assays, molecular counting approaches must be applied in a manner that balances accuracy, speed, and memory while being consistent between assays that do, and do not, have UMIs. We leverage the kallisto bustools counting approach as it performs UMI and read counting in a consistent way. This means that UMIs quantified in scRNA-seq data are resolved in a manner that is consistent with read quantification for scATAC-seq data (Sina Booeshaghi, Gao, and Pachter 2023).
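The distinction between UMI collapsing and plain read counting can be sketched as follows (a simplified, hypothetical stand-in for the counting logic, not the bustools implementation):

```python
from collections import defaultdict

def count_records(records, has_umi=True):
    """Count features per cell from (barcode, umi, feature) records.

    With UMIs, duplicate (barcode, umi, feature) records collapse to one
    molecule; without UMIs (e.g. scATAC-seq), every record counts as a read.
    """
    counts = defaultdict(int)
    if has_umi:
        seen = set()
        for bc, umi, feat in records:
            if (bc, umi, feat) not in seen:
                seen.add((bc, umi, feat))
                counts[(bc, feat)] += 1
    else:
        for bc, _, feat in records:
            counts[(bc, feat)] += 1
    return dict(counts)

recs = [("AAAC", "TT", "geneA"), ("AAAC", "TT", "geneA"), ("AAAC", "GG", "geneA")]
print(count_records(recs))         # {('AAAC', 'geneA'): 2}  two molecules
print(count_records(recs, False))  # {('AAAC', 'geneA'): 3}  three reads
```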
These universal principles guided our development of cellatlas for universal preprocessing of single-cell genomics data. The cellatlas program requires sequencing reads, genomic references, and a seqspec specification file. From there, cellatlas leverages seqspec functionality to auto-generate the kallisto technology string that specifies the 0-indexed positions of the cellular barcodes, molecular barcodes, and genomic features such as cDNA or genomic DNA. It also either points to, or generates, a barcode onlist compatible with the barcodes in the FASTQ files. Thus, cellatlas can be used to process data with a single command. To demonstrate the universality of cellatlas and its application to uniform preprocessing, we applied cellatlas to preprocess nine datasets from three physical isolation methods that capture five different molecular modalities: spatial RNA, nuclear RNA, cytoplasmic RNA, accessible chromatin, synthetic tags, CRISPR guides, and cell surface proteins (see Methods).
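To illustrate how a technology string can be derived from a parsed specification, here is a sketch in which the spec dictionary and helper function are hypothetical, seqspec-inspired constructs (the real seqspec index command parses the full YAML specification); the file,start,end triplet syntax for barcode, UMI, and sequence is kallisto's custom `-x` technology format:

```python
# Hypothetical, seqspec-inspired description of a 10x-v3-like read structure:
# each element records which FASTQ file it lives in and its 0-indexed [start, end).
spec = {
    "barcode": {"file": 0, "start": 0, "end": 16},
    "umi":     {"file": 0, "start": 16, "end": 28},
    "cdna":    {"file": 1, "start": 0, "end": 0},  # 0,0 = use the whole read
}

def technology_string(spec):
    """Build a kallisto `-x` technology string: barcode:umi:seq triplets."""
    order = ["barcode", "umi", "cdna"]
    return ":".join(
        f"{spec[k]['file']},{spec[k]['start']},{spec[k]['end']}" for k in order
    )

print(technology_string(spec))  # -> 0,0,16:0,16,28:1,0,0
```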
Uniform preprocessing makes it straightforward to compare modalities within the same technology. Figure 2 shows a comparison of data from three different DOGMA-seq (Mimitou et al. 2021) modalities as assayed in (Z. Xu et al. 2022), all preprocessed with cellatlas and the tools it uses: snATAK, kallisto bustools, and kITE. Uniform preprocessing also allows for quantifying the efficiency of assaying the different modalities. The fact that the RNA and protein libraries have a similar number of UMIs captured for a given number of reads (Figure 2a,b) is consistent with the similar dilutions of the libraries (1.99 nM and 2.2 nM, respectively), whereas the tag library efficiency (Figure 2c) is distinctly different, likely as a result of dilution to 0.27 nM. While the exact cause of the difference requires further investigation, the control over preprocessing that cellatlas provides points to an experimental cause. Uniform preprocessing also provides a direct comparison of the distinct modalities in terms of their detection efficiency (Figures 2d,e,f). Protein detection saturates because the DOGMA-seq protein modality assayed only 163 proteins. The discrepancy between cells in which protein is detected but not ATAC peaks, versus those in which protein is detected but not RNA, is interesting and warrants further investigation.
Uniform preprocessing also allows for comparing technologies. We compared single-cell RNA-seq and ATAC-seq data produced with DOGMA-seq to data produced with 10x Multiome on PBMCs (see Methods). This cross-assay comparison was possible because we uniformly processed the data using the same read pseudoalignment algorithm, barcode error correction, and UMI/read counting. Figure 3 shows RNA-ATAC count statistics for both datasets, DOGMA-seq alongside 10x Genomics Multiome, with their constituent RNA and ATAC parts preprocessed using cellatlas. The DOGMA-seq assay appears to be more efficient than the 10x Multiome and displays less background noise.
Discussion
The importance of uniform preprocessing of single-cell genomics data has been evident in the development of several workflows for multiple single-cell RNA-seq assays (Battenberg et al. 2022; He et al. 2022; Parekh et al. 2018; Tian et al. 2018). Here we have introduced an approach to uniform preprocessing that generalizes to many more assays and modalities by leveraging the modularity and flexibility of the kallisto and bustools programs, along with the seqspec standard for describing common elements in reads derived from single-cell genomics assays. Crucially, our approach enables quantitative comparisons both of distinct modalities within assays and across technologies.
In addition to facilitating uniform analysis across many data types and studies, we anticipate that universal preprocessing methods will be useful in the development of new single-cell genomics assays, for example by helping to reveal cross-platform tradeoffs (Sina Booeshaghi, Gao, and Pachter 2023).Furthermore, uniform preprocessing will improve reproducibility of studies, because the association of a seqspec with an assay adds transparency to the protocol, and fast re-processing of data with cellatlas will allow users to quickly reproduce work that is currently a bioinformatics bottleneck for many projects.Finally, we envision that cellatlas will be useful for large-scale cell atlas coordination efforts.
Figure 1: Single-cell genomics assays are variations on a theme of physical isolation, molecular capture, and library generation. The read structure shown here is provided as an example; read structure may vary depending on assay type.
Figure 2: Efficiency of UMI detection as measured by relating the number of reads uniquely aligned for each cell to the number of UMIs for the a) RNA, b) protein, and c) tag modalities.
Figure 3: Cross-technology comparison of registered ATAC and RNA quantifications from PBMCs assayed with (a) DOGMA-seq and (b) 10x Multiome.
"Biology",
"Computer Science"
] |
Iron-specific Signal Separation from within Heavy Metal Stained Biological Samples Using X-Ray Microtomography with Polychromatic Source and Energy-Integrating Detectors
Biological samples are frequently stained with heavy metals in preparation for examining their macro-, micro-, and ultra-structure using X-ray microtomography and electron microscopy. A single X-ray microtomography scan reveals detailed 3D structure based on staining density, yet it lacks both material composition and functional information. Using a commercially available polychromatic X-ray source, energy-integrating detectors, and a two-scan configuration labelled by energy as "High" and "Low", we demonstrate how a specific element, here shown with iron, can be detected within a mixture of other heavy metals. With proper selection of the scan configuration, achieving strong overlap of the source characteristic emission lines and the iron K-edge absorption, iron absorption was enhanced, enabling K-edge imaging. Specifically, iron images were obtained by scatter plot material analysis, after selecting specific regions within scatter plots generated from the "High" and "Low" scans. Using this method, we identified iron-rich regions associated with an iron staining reaction that marks the nodes of Ranvier along nerve axons within mouse spinal roots, also stained with osmium, a metal commonly used for electron microscopy.
As researchers are now turning to XRM as a means of tracking ROI from LM to EM, there is a potential demand for a means to track specific metals by XRM 3,4,10-14. Various strategies have been developed for performing material analysis using X-rays. These include methods using monochromatic X-ray sources, either from large synchrotrons 15 or laser-produced plasma 16, and, recently, a method using a polychromatic source and Ross filter pairs 17. While the former methods involve specialized sources, the latter has the advantage of using accessible (laboratory-scale) X-ray tubes. On the other hand, the Ross filter pair approach involves taking four measurements, one for each filter, generating effectively two narrow band-pass filters and resulting in relatively low source power utilization. In essence, all the above-referenced methods apply some form of K-edge imaging principle 18, whereby the element of interest is typically imaged with energy just below and just above the K-edge, using the large abrupt variation in absorption to generate high contrast. Other methods include dual energy as used in medical imaging, where the different contributions of photoelectric absorption and Compton scatter to the total absorption are used (e.g., to separate calcium or bone from iodine 19). Typically, in those cases the minimum X-ray energy of substantial power, due to filtration, is considerably above any low-Z element K-edge energy. It is also worth noting that this method will be further enhanced by employing next-generation photon counting detectors 20-22, now being deployed on a growing number of systems.
In the work presented here we used XRM to detect specifically stained iron regions of micron and sub-micron scale in a biological sample stained with other heavy metals typically used for electron microscopy, such as osmium. We used a two-scan configuration and scatter plots to perform material analysis. The first scan uses the characteristic emission lines of the tungsten source to enhance the absorption of iron, which has a strong K-edge onset at an energy just below the first in a series of emission lines. The second scan is performed after X-ray filtration, which strongly absorbs the characteristic emission lines relative to the higher-energy continuum part of the spectrum, causing a reduction in iron absorption. We initially demonstrated results on a phantom, then theoretically explored the optimal scan configuration, and finally applied this information to identify iron-rich regions associated with the nodes of Ranvier in samples of mouse spinal root stained with iron and osmium. These results could potentially allow following iron-specific biological markers such as ferritin 23 and thus introduce functionality into XRM measurements.
Results and Discussion
Initially, we tested feasibility using a phantom containing microtubes holding either aqueous iron, aqueous uranium, or pure water, as shown in Fig. 1. Uranium was used to represent the heavy metal stains typically used in biological staining, which can include osmium (76 Os), lead (82 Pb), and uranium (92 U). Uranium was chosen over osmium tetroxide (OsO4) as it is considerably less toxic. Iron (26 Fe) was the target metal of interest. The exact phantom composition is detailed in supplemental material S1. The microtubes contained aqueous ferric chloride (FeCl3·6H2O) at 0.2% and 0.1% by weight, or uranyl acetate, UA (UO2(CH3COO)2·2H2O), at 0.02% and 0.01% by weight. A pure water tube was included for calibration. A reconstructed 3D volume rendering of the phantom is shown in Fig. 1(a). Slice images from the two scans, performed at 60 kVp with a 25 μm thick iron filter and at 30 kVp without any filtration, denoted "High" and "Low" respectively, are shown in Fig. 1(b,c). Both have been calibrated using the water and air regions so as to express attenuation (μ) in units of cm−1. Water attenuation was calibrated to be 0.25 cm−1 and 3.3 cm−1 for the "High" and "Low" scans respectively, as estimated theoretically. The grayscale in each slice image was set to the range ±25% of the water attenuation, to enable comparison between the two. The composition of each of the microtubes is marked in Fig. 1(b). Clearly, observing each slice on its own, it is impossible to know which solution is present in each microtube. Close examination of Fig. 1(c) shows visible enhancement of the iron-containing microtubes in the "Low" energy scan as compared to the "High". It is this enhancement that enables material analysis.
Two common methods for performing material analysis are scatter plot material analysis 24 and the related material basis decomposition 19,25,26. Both methods can be applied in the case of K-edge imaging, as shown in Fig. 1(d-f) and (g,h), respectively. In this study, we focus on the scatter plot approach. In scatter plot material analysis, a scatter plot is generated from all pixels in the slice image. Each point derives its (x, y) coordinates from the attenuation values in the "High" and "Low" images, respectively, as shown in Fig. 1(d). For clarity, "point" will be used to refer to elements of the scatter plots and "pixel" to elements of the images. The scatter plot consists of clouds of points, each associated with the material composition in one microtube. Increasing concentration shifts a point to higher attenuation values, i.e., points move toward the top-right of the scatter plot. The exact slope depends on the chemical composition of the material. Slopes of α_Fe = 16.4 ± 1.6 and α_U = 9.5 ± 0.47 were obtained for iron and uranium at this scan configuration, respectively. Material analysis capability depends strongly on three factors. First, noise affects the spread of points in the scatter clouds; denoising methods that preserve resolution are typically applied to confine this spread. Second, concentration magnitude determines the distance from background along a given slope; the smaller the concentration, the more difficult the analysis becomes. Third, chemical composition determines the slope magnitude in the scatter plot; chemically distinct materials have largely different slopes, enabling easier separation. Noise, slope magnitude, and the lower bound for concentration detectability are all strongly influenced by the selected scan configuration. The performed material analysis was based on the mid-slope, by which any point above is considered to be iron, and any point below is considered to be uranium. A calibration based on the known material concentrations was performed to scale either the "High" or "Low" image, obtaining the iron material density image [mg Fe/cc] shown in Fig. 1(e) in a red colormap, and the uranium material density image [mg U/cc] shown in Fig. 1(f) in a green colormap.
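A minimal numpy sketch of this mid-slope classification follows; the slope values are taken from the phantom measurements above, and taking the separation line as the arithmetic mean of the two slopes is our assumption:

```python
import numpy as np

def classify_by_slope(mu_high, mu_low, slope_fe=16.4, slope_u=9.5):
    """Label each pixel as iron or uranium by its position in the
    ("High", "Low") attenuation scatter plot relative to the mid-slope."""
    mid = 0.5 * (slope_fe + slope_u)  # assumed separation line through origin
    slope = np.divide(mu_low, mu_high, out=np.zeros_like(mu_low),
                      where=mu_high > 0)  # per-pixel Low/High ratio
    return np.where(slope > mid, "Fe", "U")

mu_high = np.array([0.02, 0.05])
mu_low = np.array([0.35, 0.50])            # ratios 17.5 and 10.0
print(classify_by_slope(mu_high, mu_low))  # ['Fe' 'U']
```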
Material analysis can also be done by way of material basis decomposition 19,25,26, whereby the absorption of any material can be expressed as a weighted sum of the absorption of two basis materials, iron and uranium in our case. A simplified approach formulates the problem as for two monoenergetic beams. Then, solving the two linear equations

$\mu_M^{Low} = \bar{\mu}_{Fe}^{Low}\,\hat{\rho}_{Fe} + \bar{\mu}_{U}^{Low}\,\hat{\rho}_{U}$  (1)

$\mu_M^{High} = \bar{\mu}_{Fe}^{High}\,\hat{\rho}_{Fe} + \bar{\mu}_{U}^{High}\,\hat{\rho}_{U}$  (2)

yields the two density components. Here $\mu_M^{Low}$ is the attenuation (or absorption, interchangeably) of any material M at the "Low" energy scan configuration, and $\bar{\mu}_{Fe}^{Low}$ is the mass attenuation coefficient of iron at the "Low" scan configuration, i.e., $\bar{\mu}_{Fe}^{Low} = \mu_{Fe}^{Low}/\rho_{Fe}$, with $\mu_{Fe}^{Low}$ the attenuation coefficient of iron and $\rho_{Fe}$ the density of iron. Similarly, $\bar{\mu}_{U}^{Low}$ is the mass attenuation coefficient of uranium. $\hat{\rho}_{Fe}$ and $\hat{\rho}_{U}$ are the material density components for iron and uranium, respectively, which together represent material M. Equation (2) is analogous to eq. (1), but for the "High" scan configuration. As detailed in the supplemental material S2, $\hat{\rho}_{Fe}$ is a weighted subtraction of the "Low" image minus the "High" image, and $\hat{\rho}_{U}$ is a weighted subtraction of the "High" image minus the "Low" image. The generated material basis decomposition images, or simply material density images, for iron and uranium are shown in Fig. 1(g) and (h), respectively.
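Under the two-monoenergetic-beam simplification, equations (1) and (2) form a 2x2 linear system per pixel; a sketch with placeholder mass attenuation values (not the calibrated values used in the study):

```python
import numpy as np

# Rows: ["Low", "High"] scans; columns: [Fe, U] mass attenuation
# coefficients (placeholder values, units cm^2/g).
A = np.array([[80.0, 40.0],
              [10.0, 12.0]])

def decompose(mu_low, mu_high):
    """Solve eqs. (1)-(2) for the (rho_Fe, rho_U) density components."""
    return np.linalg.solve(A, np.array([mu_low, mu_high]))  # units g/cm^3

print(decompose(mu_low=0.9, mu_high=0.12))  # -> [rho_Fe_hat, rho_U_hat]
```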
We used the measured X-ray tube spectra and theoretical elemental mass attenuation (energy) curves from the NIST XCOM database 27, as shown in Fig. 2(a), to simulate the osmium-to-iron absorption ratio, in order to find conditions of maximal iron absorption. We varied the input spectrum peak tube potential, filtration material, and filtration thickness, and accounted for detector stopping power. We found the two scan conditions with maximum and minimum absorption ratio, termed "Low" and "High", to be 30 kVp with no filtration and 30 kVp with 25 μm thick iron foil filtration, respectively. The terms "Low" and "High" stand for the relative value of the mean spectrum energy. Using principles of K-edge imaging 18, we note:

1. Tungsten source characteristic emission L-lines (8.3 keV, 9.6 keV, and 11.3 keV) are dominant at low tube voltage settings; e.g., at 30 kVp peak tube voltage, almost 50% of the total spectrum energy is below 14 keV. The total spectrum energy is defined as the integrated photon energy, $E_{total} = \int_0^{E_{max}} E\, n(E)\, dE$ (3), where $n(E)$ is the number of photons at energy $E$.

2. The iron K-edge absorption onset (7.1 keV) is at an energy just below the source emission lines, resulting in good emission-absorption overlap that enhances iron absorption. The abrupt jump in iron absorption at the K-edge location is approximately 8-fold.

3. In the filtered spectrum (30 kVp with 25 μm thick iron filter) the characteristic emission lines are suppressed, making the continuous emission dominant and shifting the energy spectrum to a higher average value, which reduces iron absorption relative to other metals. Now, less than 9% of the total spectrum energy is below 14 keV.
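A simplified sketch of the spectrum-weighted absorption-ratio figure of merit used in such an optimization (our reading of the procedure; detector stopping power is omitted, and the energy weighting reflects energy-integrating detectors):

```python
import numpy as np

def mean_attenuation_ratio(energies, spectrum, mu_os, mu_fe):
    """Spectrum-weighted osmium-to-iron mass attenuation ratio.

    energies: photon energies [keV]; spectrum: photons per energy bin;
    mu_os/mu_fe: mass attenuation curves sampled at `energies` [cm^2/g].
    """
    w = spectrum * energies  # energy-integrating detector: weight by E
    return np.trapz(w * mu_os, energies) / np.trapz(w * mu_fe, energies)

E = np.linspace(5, 30, 6)
spec = np.array([0.0, 5.0, 9.0, 3.0, 1.0, 0.5])  # toy spectrum
mu_os = np.array([40.0, 25.0, 15.0, 9.0, 6.0, 4.0])  # placeholder curves
mu_fe = np.array([30.0, 35.0, 20.0, 10.0, 6.0, 4.0])  # K-edge bump near 7-10 keV
print(mean_attenuation_ratio(E, spec, mu_os, mu_fe))
```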
Scatter plot slopes for Fe, Pb, Os, and U using the two-scan configuration, assuming 5 μm thick metal, are shown in Fig. 3. The steeper slope for iron signifies its enhancement relative to the other metals, due to the location of its K-edge in a region of high X-ray tube characteristic-emission flux, as discussed in detail above. Based on the scatter plots we defined a quantitative separation capacity measure for any two-metal pair, arranged in a distance-type chart (Table 1). The separation capacity is defined as the perpendicular distance of the mid-slope to either of the two slopes at unit attenuation.
The higher the value, the stronger the separation capability. It is evident that iron has the potential to be separated from the other metal elements under these conditions. On the other hand, separating, for example, osmium and lead would be a very difficult task. As tube output increases with voltage, improving the signal-to-noise ratio, we calculated the impact of increasing the tube voltage for the "High" scan while keeping the existing filtration. At a 60 kVp peak tube potential, for which power is increased by a factor of x2.5, we found the calculated slope and separation capacity to be within 2% of the previously calculated values. And so, we performed the following two experimental scans: a "Low" energy scan at 30 kVp with no filtration and a "High" energy scan at 60 kVp with a 25 μm thick iron filter.

Figure 2 (partially recovered caption): (a) Elemental mass attenuation curves, with the measured X-ray spectra plotted in the lower part of the graph using the right y-axis, showing two scan configurations, "Low" (30 kVp unfiltered) and "High" (30 kVp filtered with 25 μm thick iron foil). The "High" energy spectrum was scaled x1.8. The peak tube potential can be identified by the cut-off energy of the spectrum. (b) Includes the iodine mass attenuation coefficient and a third scan configuration, "High-2" (60 kVp filtered with 25 μm thick tungsten foil). Some of the other curves have been faded out for clarity. The "High-2" energy spectrum was scaled x2. The two scans "High" and "High-2" are pre- and post-iodine K-edge, enhancing iodine contrast, a second potential marker.
Next, we investigated a sample of spinal root extracted from the peripheral nervous system of a mouse, as shown in Fig. 4. The sample was stained with iron, which is known to specifically target nodes of Ranvier 28-30, and subsequently stained with 1.0% osmium tetroxide 31, as detailed in the supplemental material S3. The iron staining protocol was originally used to investigate axonal membrane structure, targeting the nodal axolemma with iron. Osmium is known to dominantly stain the myelin sheaths of the neuronal axon. Figure 4(a,b) show the "Low" and "High" slice images of the spinal root cross-section. Images have been calibrated to express attenuation in units of cm−1. The packed bright circular structures are myelin sheaths surrounding axons, heavily stained with osmium. As can be seen, staining is strongest at the root periphery and reduced towards the center due to limited reagent accessibility. Additional sagittal and coronal views (perpendicular cross-sections of the "Low" scan) are shown in Fig. 4(c,d). Iron-stained nodes of Ranvier appear as bright spots surrounded by a black rim (Fig. 4(a)). The nodes are situated at points along axons where myelin is absent, explaining their characteristic appearance, i.e., a bright spot due to iron surrounded by a dark ring due to the absence of osmium. Other bright spots throughout the volume were found to be associated with iron staining of Schwann cell nuclei.
To analyze material composition within the sample, we generated scatter plots from the "High" and "Low" images of the spinal root, as shown in Fig. 5. All pixels in the slice image shown in Fig. 5(a) were used to generate the scatter plot shown in Fig. 5(b). Because the points crowd and overlap, we now consider the density of points, in units of point count. Air and plastic are both highly abundant and generate high-count regions, marked "Air" and "Plastic".
As expected for calibrated images, the air region is centered at the attenuation value (0,0). Pixels associated with other ROI within the image, as marked in Fig. 5(a), can be mapped within the attenuation scatter plot. For clarity we show only the "Low" image, as it had stronger iron contrast and a better signal-to-noise ratio. As the spinal root center is only lightly stained with osmium, any iron present here builds primarily from the background, which is plastic, as shown in Fig. 5(b). Osmium regions populate the space in the scatter plots extending from plastic to high osmium concentration. As shown in Fig. 5(c), image slice 2 exhibits a node closer to the spinal root periphery, with higher osmium staining. As expected, the iron is co-mingled with osmium, and yet it has a slope characteristic of iron. Thus, it is possible to distinguish ROI containing iron even in the presence of moderate levels of osmium. The locations of slices 1 and 2 can be seen in Fig. 4(c,d). We looked at the average line slopes (α) generated by the iron and osmium material. For slices 1 and 2 we measured (α_Fe^slice1 = 28 ± 21, α_Os^slice1 = 11.5 ± 3.7) and (α_Fe^slice2 = 27 ± 16, α_Os^slice2 = 14.5 ± 5.7), respectively. Though the spread of points can be large, looking at the average value aids the analysis. It is the ability to identify different slopes (i.e., material lines) within the same sample, for iron and osmium in this case, that enables material analysis, as shown in Fig. 6.

Figure 3 (recovered caption): Calculated absorption scatter/slope plot of various metals present in the sample using the "Low" (30 kVp peak tube potential, no filtration) and "High" (30 kVp peak tube potential, 25 μm thick iron filter) scan configurations. The separation capacity is a measure of the ease of separating two elements, proportional to the depicted perpendicular distance between the two slopes. The steeper slope for iron indicates it has the potential to be separated from the other metals.

Table 1 (recovered caption): Separation capacity for metal pairs in a distance-type chart. The separation capacity is defined as the perpendicular distance of the mid-slope to either of the two slopes at unit attenuation, as depicted in Fig. 3. The larger the separation capacity value, the easier it is to separate the two elements. The iron-containing pairs had considerably larger separation capacities than the pairs containing two heavy metals.
To locate the iron-rich regions throughout a slice image, we applied a material separation line to the scatter plot, as shown in Fig. 6(b). To overcome noise and refine the material analysis, we used a scheme that not only looks at the location of a point within the scatter plot but also considers the character of neighboring pixels, as described below. 1) Points above the material separation line, i.e., in the iron-rich region, are initially selected to be iron pixels.
2) For each selected point, we examine the points associated with its neighboring pixels in the slice image within a specified radius (e.g., a radius of 3 pixels). If the centroid of these points is within a given (tunable) distance of the original separation line, this confirms the classification of the initial pixel as iron-rich and suggests its nearest neighbors as possible iron-rich candidates as well. If not, i.e., if the centroid is too close to the osmium-rich region, the pixel is removed. 3) In an iterative manner, the added pixels are checked with the aid of their neighboring pixels to decide whether they should be classified as iron-rich; a code sketch of this scheme follows.
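A compact sketch of the refinement scheme is given below; the radius, tolerance, and iteration count are the tunable parameters mentioned above, and the acceptance test is our simplified reading of the procedure:

```python
import numpy as np

def refine_iron_mask(mu_high, mu_low, mid_slope, radius=3, tol=0.2, iters=3):
    """Iteratively refine an iron mask using neighboring-pixel centroids.

    A pixel stays iron-classified only if the mean (High, Low) point of its
    neighborhood sits on the iron side of (or within tol of) the separation
    line low = mid_slope * high.
    """
    mask = mu_low > mid_slope * mu_high  # step 1: points above the line
    h, w = mask.shape
    for _ in range(iters):               # step 3: iterate
        new_mask = np.zeros_like(mask)
        for y, x in zip(*np.nonzero(mask)):  # step 2: neighborhood test
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            ch = mu_high[y0:y1, x0:x1].mean()
            cl = mu_low[y0:y1, x0:x1].mean()
            if cl >= mid_slope * ch - tol:   # centroid near/above the line
                new_mask[y0:y1, x0:x1] |= mask[y0:y1, x0:x1]
        mask = new_mask
    return mask

rng = np.random.default_rng(1)
mu_high = rng.random((32, 32))
mu_low = 13.0 * mu_high
mu_low[10:13, 10:13] *= 1.6  # a synthetic "iron-rich" patch
print(refine_iron_mask(mu_high, mu_low, mid_slope=13.0).sum())  # -> 9
```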
Nodes of Ranvier can be clearly identified as iron-rich in images analyzed by this method, as shown in Fig. 6(c) and marked N1-3. Surprisingly, other iron-rich regions with a circular morphology were also detected, as shown in Fig. 6(c) and marked S1-4. Examination of these ROI by transmission EM (TEM) revealed these structures to be related to Schwann cell nuclei. The iron image shown in Fig. 6(d) is proportional to iron density and was calculated from the distance of iron points to the material separation line, though it was not quantified due to the lack of calibration.
We used a transmission electron microscope (TEM) equipped with an in-column electron energy-loss spectrometer (EELS) to further validate our findings and to check that iron was indeed localized to the nodes of Ranvier and was not present elsewhere in the sample (Fig. 7). The TEM image of the nodes of Ranvier in a central spinal root region reveals dense staining (Fig. 7a), which is confirmed to be iron by the iron map obtained by energy-filtered TEM (EFTEM) mapping (Fig. 7c). The EELS spectra taken at the node reveal the presence of both iron (Fig. 7b1) and osmium (Fig. 7b2), whose edge onsets occur at 708 eV (L2,3 edge) and 1960 eV (M4,5 edge), respectively 32. Additionally, we performed EELS measurements at a control region in the same sample, i.e., a region away from the node of Ranvier (Fig. 7d). As can be seen from the spectra (Fig. 7e1 and e2), this region had approximately the same amount of osmium but only a negligible amount of iron in comparison with the node of Ranvier. The absence of iron in the control region is further corroborated by the EFTEM iron map (Fig. 7f), which shows only noise. TEM images also revealed that the circular-shaped objects observed in the X-ray images were Schwann cell nuclei. The EELS spectra taken at these locations confirmed the presence of iron (data not shown). Schwann cells are responsible for producing the myelin sheath around the neuronal axons, and staining of the nuclear membrane and heterochromatin was visible.

Figure 5: Scatter plots of the spinal root sample. Scatter plots (b,d) were generated by mapping each pixel by its "High" and "Low" attenuation values. Regions within the scatter plots can be traced to specific regions in the slice images shown in (a,c). Slice 1 (a) exemplifies a node in a low-staining background. Two large point densities in the scatter plot are associated with the air and plastic regions. A node (green circle) and another bright spot (white square) project in the scatter plot at a slope characteristic of iron, as seen when individual pixels associated with ROI are marked similarly in the scatter plot. Background locations with no prominent staining (red diamonds) were taken from the circular dark ring around the node and are associated with plastic. Myelin (black circles) ROI project along a line with a slope characteristic of osmium. Slice 2 (c) exemplifies a node in a region with considerable osmium staining, and so the iron is projected in the scatter plot (d) from a location already exhibiting some osmium concentration.
Conclusions
We show that the methods developed and described here can be used to identify iron-rich regions in a sample stained with other elements (heavy metals) and prepared for X-ray microtomography and EM imaging. By applying a two-scan configuration, iron absorption was selectively either enhanced or suppressed, providing strong contrast due to the differences in the X-ray source spectrum between the two cases. Good overlap of the source characteristic emission lines and the iron K-edge absorption was needed to achieve this contrast.
The results demonstrated here for iron can be extended to other elements in the future by using non-tungsten X-ray sources, e.g., chromium or nickel target X-ray sources, and checking the overlap with the elements' absorption edges. For example, it can be expected that cerium (6.6 keV L1-edge) will present results similar to iron (7.1 keV K-edge) with the current tungsten source.
One limitation concerns the maximum sample size for which reasonable contrast can still be achieved. In this work, we imaged biological samples of physical size on the mm scale while looking at features at sub-micron resolution. Too thick a sample will result in X-ray beam filtration, depleting primarily the low-energy part of the spectrum in a process called "beam hardening". As a result, the source characteristic emission lines will be suppressed, decreasing the iron absorption and resulting in smaller iron contrast. We found that samples with path lengths of approximately 1 mm are free of this limitation. For the same reason, filtered X-ray sources used in medical imaging (filtered to reduce skin dose) will not exhibit the strong iron absorption shown here, due to complete depletion of the X-ray spectrum below 30 keV. A second elemental detection can potentially be achieved by adding a third scan configuration. Targeting iodine (K-edge 33.2 keV), a third post-edge scan configuration, labelled "High-2", together with the previous "High" scan forms a pre/post iodine K-edge pair that achieves strong iodine contrast. Specifically, a source at 60 kVp peak tube potential filtered with 25 μm thick tungsten foil will have good overlap between iodine absorption and the source spectrum to achieve iodine enhancement. The third scan configuration "High-2" and the iodine mass attenuation (energy) curve are shown in Fig. 2(b). The addition of a second element detection to the demonstrated iron detection, to be included in future work, would potentially achieve multi-color X-ray imaging similar to what was demonstrated in EM using lanthanide labels and EELS 5.
In summary, we have shown a method that uses XRM with a polychromatic source to detect a specific element within a heavy-metal-stained biological sample, by maximizing the overlap between X-ray source emission and elemental absorption. Scatter plots were shown to be instrumental in identifying regions of distinct chemical signature, enabling automated material analysis. Using the scheme presented, detection of a second element could be pursued. Correlating the target metals (e.g., Fe) to fluorescent genetic probes, similar to what was demonstrated in multicolor electron microscopy 5, will provide important large-volume functional localization information.

Methods

X-ray microtomography. XRM was performed on a Versa 510 (Carl Zeiss X-ray Microscopy, Pleasanton, CA 94588). The Versa 510 is equipped with a polychromatic tungsten source, tunable in the voltage range 30-160 kVp. Scan configuration: "Low", 30 kVp peak tube potential without filtration, exposure time 20-90 s; "High", 60 kVp peak tube potential, 25 μm thick iron filter, exposure time 10-120 s. Common scan conditions: binning 1, source-rotation axis distance 7.5 mm, detector-rotation axis distance 5 mm, pixel size 0.394 μm, optical magnification x20, rotation angle -180° to +180°, number of views 3201. Iron filtration used a 0.001"x1"x2" foil placed at the X-ray tube exit slit. The foil was of 3N5 purity and sold by ESPI Metals, Ashland, OR 97520.
Staining protocol. The staining protocol is provided as Supplementary Material S3.

Simulation. Material mass attenuation (energy) curves for the different elements were taken from the NIST XCOM database 27 for the energy range 1-160 keV at intervals of 0.5 keV. The X-ray spectra were measured experimentally using an AMETEK Material Analysis Division X-123CdTe X-ray Spectrometer (Amptek, Inc., Bedford, MA 01730). Typical resolution at FWHM was specified to be <1.2 keV at 120 keV. The experimentally measured X-ray spectra for 30 kVp with and without a 25 μm thick iron filter, and for 60 kVp with a 25 μm thick tungsten filter, are presented in Fig. 2.

TEM and EELS measurements. TEM imaging was performed with a JEOL JEM-Z3200EF operating at 300 kV. This TEM has an electron energy-loss spectrometer in the form of an in-column Omega filter. All images, spectra, and maps were acquired using Digital Micrograph software on a Gatan (Pleasanton, USA) US 4000 CCD camera. Prior to imaging, the sample was pre-irradiated at low magnification for about 30 min to minimize the effects of contamination 33.
The iron spectra were acquired with 5 s exposures. EELS signals drop progressively with increasing energy loss, typically reaching negligible levels around and above 2000 eV 34. The energy-loss edge onset for osmium is at 1960 eV, making it particularly challenging to obtain spectra in this energy domain. We overcame this problem by binning the camera pixels by 4 and using the microscope in diffraction mode, so as to boost the signal-to-noise ratio. The osmium spectra were acquired with 360 s exposures. The EFTEM images for computing the iron maps were acquired with 120 s exposures, with the CCD binned by 4 pixels.

Data Availability. XRM image data generated and analyzed in this study will be available to download from the Cell Centered Database and Cell Image Library under the link http://cellimagelibrary.org/groups/50351.
"Biology",
"Materials Science",
"Physics"
] |
Morphological Plant Modeling: Unleashing Geometric and Topological Potential within the Plant Sciences
The geometries and topologies of leaves, flowers, roots, shoots, and their arrangements have fascinated plant biologists and mathematicians alike. As such, plant morphology is inherently mathematical in that it describes plant form and architecture with geometrical and topological techniques. Gaining an understanding of how to modify plant morphology through molecular biology and breeding, aided by a mathematical perspective, is critical to improving agriculture, and the monitoring of ecosystems is vital to modeling a future with fewer natural resources. In this white paper, we begin with an overview of quantifying the form of plants and mathematical models of patterning in plants. We then explore the fundamental challenges that remain unanswered concerning plant morphology, from the barriers preventing the prediction of phenotype from genotype to modeling the movement of leaves in air streams. We end with a discussion concerning the education of plant morphology, synthesizing biological and mathematical approaches, and ways to facilitate research advances through outreach, cross-disciplinary training, and open science. Unleashing the potential of geometric and topological approaches in the plant sciences promises to transform our understanding of both plants and mathematics.
Morphology from the Perspective of Plant Biology
The study of plant morphology interfaces with all biological disciplines (Figure 1). Plant morphology can be descriptive and categorical, as in systematics, which focuses on biological homologies to discern groups of organisms (Mayr, 1981; Wiens, 2000). In plant ecology, the morphology of communities defines vegetation types and biomes, including their relationship to the environment. In turn, plant morphologies are mutually informed by other fields of study, such as plant physiology (the study of the functions of plants), plant genetics (the description of inheritance), and molecular biology (the underlying gene regulation) (Kaplan, 2001).
Plant morphology is more than an attribute affecting plant organization; it is also dynamic. Developmentally, morphology reveals itself over the lifetime of a plant through varying rates of cell division, cell expansion, and anisotropic growth (Esau, 1960; Steeves and Sussex, 1989; Niklas, 1994). Responses to changes in environmental conditions further modulate the abovementioned parameters. Development is genetically programmed and driven by biochemical processes that are responsible for the physical forces that change the observed patterning and growth of organs (Green, 1999; Peaucelle et al., 2011; Braybrook and Jönsson, 2016). In addition, external physical forces affect plant development, such as heterogeneous soil densities altering root growth, or flows of air, water, or gravity modulating the bending of branches and leaves (Moulia and Fournier, 2009). Inherited modification of development over generations results in the evolution of plant morphology (Niklas, 1997). Development and evolution set the constraints for how the morphology of a plant arises, regardless of whether in a systematic, ecological, physiological, or genetic context (Figure 1).
Plant Morphology from the Perspective of Mathematics
In 1790, Johann Wolfgang von Goethe pioneered a perspective that transformed the way mathematicians think about plant morphology: the idea that the essence of plant morphology is an underlying repetitive process of transformation (Goethe, 1790;Friedman and Diggle, 2011). The modern challenge that Goethe's paradigm presents is to quantitatively describe transformations resulting from differences in the underlying genetic, developmental, and environmental cues. From a mathematical perspective, the challenge is how to define shape descriptors to compare plant morphology with topological and geometrical techniques and how to integrate these shape descriptors into simulations of plant development.
Mathematics to Describe Plant Shape and Morphology
Several areas of mathematics can be used to extract quantitative measures of plant shape and morphology. One intuitive representation of the plant form relies on the use of skeletal descriptors that reduce the branching morphology of plants to a set of intersecting lines or curve segments, constituting a mathematical graph. These skeleton-based mathematical graphs can be derived from manual measurement (Watanabe et al., 2005) or imaging data (Bucksch et al., 2010; Aiteanu and Klein, 2014). Such skeletal descriptions can be used to derive quantitative measurements of lengths, diameters, and angles in tree crowns (Bucksch and Fleck, 2011; Raumonen et al., 2013; Seidel et al., 2015) and roots, at a single time point (Fitter, 1987; Danjon et al., 1999; Lobet et al., 2011; Galkovskyi et al., 2012) or over time to capture growth dynamics (Symonova et al., 2015). Having a skeletal description in place allows the definition of orders, in both a biological and a mathematical sense, to enable morphological analysis from a topological perspective (Figure 2A). Topological analyses can be used to compare shape characteristics independently of events that transform plant shape geometrically, providing a framework by which plant morphology can be modeled. The relationships between orders, such as the degree of self-similarity (Prusinkiewicz, 2004) or self-nestedness (Godin and Ferraro, 2010), are used to quantitatively summarize patterns of plant morphology. Persistent homology (Figure 2B), an extension of Morse theory (Milnor, 1963), transforms a given plant shape gradually to define self-similarity (MacPherson and Schweinhart, 2012) and morphological properties (Edelsbrunner and Harer, 2010; Li et al., 2017) on the basis of topological event statistics. In the example in Figure 2B, topological events are represented by the geodesic distance at which branches are "born" and "die" along the length of the structure.

Figure 1: Plant morphology from the perspective of biology. Adapted from Kaplan (2001). Plant morphology interfaces with all disciplines of plant biology (plant physiology, plant genetics, plant systematics, and plant ecology), influenced by both developmental and evolutionary forces.
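The H0 barcode construction described above can be illustrated with a union-find pass over a tree whose vertices carry a function value such as geodesic distance (a minimal sketch; production pipelines operate on voxel skeletons and handle ties and noise more carefully):

```python
def h0_barcode(values, edges):
    """0-dimensional persistence of the superlevel-set filtration of a
    vertex function on a graph (elder rule, union-find).

    values: function value per vertex (e.g. geodesic distance).
    edges:  list of (u, v) vertex-index pairs.
    Returns a list of (birth, death) bars.
    """
    n = len(values)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    birth, bars, added = {}, [], set()
    for v in sorted(range(n), key=lambda i: -values[i]):  # high to low
        added.add(v)
        birth[v] = values[v]  # a new component is born at v
        for u in adj[v]:
            if u in added:
                ru, rv = find(u), find(v)
                if ru != rv:  # merge: the younger component's bar dies
                    old, young = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
                    bars.append((birth[young], values[v]))
                    parent[young] = old
    for r in {find(v) for v in range(n)}:  # surviving components never die
        bars.append((birth[r], float("-inf")))
    return bars

# A "Y"-shaped tree: two branch tips (values 3 and 2) merge at a junction (1)
# above the base (0); the side-branch bar (2, 1) records the shorter branch.
print(h0_barcode([0, 1, 3, 2], [(0, 1), (1, 2), (1, 3)]))
```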
In the 1980s, David Kendall defined an elegant statistical framework to compare shapes (Kendall, 1984). His idea was to compare the outlines of shapes in a transformation-invariant fashion. This concept rapidly infused into biology as morphometrics (Bookstein, 1997) and is increasingly carried out using machine vision techniques (Wilf et al., 2016). Kendall's idea inspired the development of methods such as elliptical Fourier descriptors (Kuhl and Giardina, 1982) and newer approaches employing the Laplace-Beltrami operator (Reuter et al., 2009), both relying on spectral decompositions of shape (Chitwood et al., 2012; Laga et al., 2014; Rellán-Álvarez et al., 2015). Beyond the organ level, such morphometric descriptors have been used to analyze cellular expansion rates of rapidly deforming primordia into mature organ morphologies (Rolland-Lagan et al., 2003; Remmler and Rolland-Lagan, 2012; Das Gupta and Nath, 2015).
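A minimal implementation of the elliptical Fourier coefficients of Kuhl and Giardina (1982) can be sketched as follows (the normalization for rotation and scale used in practice is omitted):

```python
import numpy as np

def efd_coefficients(contour, order=5):
    """Elliptical Fourier descriptors of a closed 2D contour.

    contour: (N, 2) array of (x, y) points tracing the outline.
    Returns an (order, 4) array of [a_n, b_n, c_n, d_n] harmonics.
    """
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)  # close the loop
    dt = np.hypot(d[:, 0], d[:, 1])           # segment lengths
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]                                  # total perimeter
    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        w = 2 * n * np.pi / T
        c1 = np.cos(w * t[1:]) - np.cos(w * t[:-1])
        s1 = np.sin(w * t[1:]) - np.sin(w * t[:-1])
        k = T / (2 * n ** 2 * np.pi ** 2)
        coeffs[n - 1] = [
            k * np.sum(d[:, 0] / dt * c1),  # a_n
            k * np.sum(d[:, 0] / dt * s1),  # b_n
            k * np.sum(d[:, 1] / dt * c1),  # c_n
            k * np.sum(d[:, 1] / dt * s1),  # d_n
        ]
    return coeffs

# A unit circle is captured almost entirely by the first harmonic.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(np.round(efd_coefficients(circle, order=2), 3))
```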
From a geometric perspective, developmental processes construct surfaces in a three-dimensional space. Yet the embedding of developing plant morphologies into a three-dimensional space imposes constraints on plant forms. Awareness of such constraints has led to new interpretations of plant morphology (Prusinkiewicz and de Reuille, 2010; Bucksch et al., 2014b) that might provide avenues to explain symmetry and asymmetry in plant organs (e.g., Martinez et al., 2016) or the occurrence of plasticity as a morphological response to environmental changes (e.g., Royer et al., 2009; Palacio-López et al., 2015; Chitwood et al., 2016).
Mathematics to Simulate Plant Morphology
Computer simulations use principles from graph theory, such as graph rewriting, to model plant morphology over developmental time by successively augmenting a graph with vertices and edges as plant development unfolds. These rules unravel the differences between observed plant morphologies across plant species (Kurth, 1994; Prusinkiewicz et al., 2001; Barthélémy and Caraglio, 2007) and are capable of producing fractal descriptions that reflect the repetitive and modular appearance of branching structures (Horn, 1971; Hallé, 1971, 1986). Recent developments in functional-structural modeling abstract the genetic mechanisms driving the developmental program of tree crown morphology into a computational framework (Runions et al., 2007; Palubicki et al., 2009; Palubicki, 2013). Similarly, functional-structural modeling techniques are utilized in root biology to simulate the efficiency of nutrient and water uptake following developmental programs (Nielsen et al., 1994; Dunbabin et al., 2013). Alan Turing, a pioneering figure in 20th-century science, had a longstanding interest in phyllotactic patterns. Turing's approach to the problem was twofold: first, a detailed geometrical analysis of the patterns (Turing, 1992), and second, an application of his theory of morphogenesis through local activation and long-range inhibition (Turing, 1952), which defined the first reaction-diffusion system for morphological modeling. Combining physical experiments with computer simulations, Douady and Couder (1996) subsequently modeled phyllotactic patterning with a diffusible chemical signal.

Figure 2: Plant morphology from the perspective of mathematics. (A) The topological complexity of plants requires a mathematical framework to describe and simulate plant morphology. Shown is the top of a maize crown root 42 days after planting. Color represents root diameter, revealing topology and different orders of root architecture. Image by Jennifer T. Yang and provided by JPL (Pennsylvania State University). (B) Persistent homology deforms a given plant morphology using functions to define self-similarity in a structure. In this example, a geodesic distance function is traversed to the ground level of a tree (that is, the shortest curved distance of each voxel to the base of the tree), as visualized in blue in successive images. The branching structure, as defined across scales of the geodesic distance function, is recorded as an H0 (zero-order homology) barcode, which in persistent homology refers to connected components. As the branching structure is traversed by the function, connected components are "born" and "die" as terminal branches emerge and fuse together. Each of these components is indicated as a bar in the H0 barcode, and the correspondence of the barcode to different points in the function is indicated by vertical lines, in pink. Images provided by ML (Danforth Plant Science Center).
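The graph-rewriting idea can be illustrated with the simplest case, an L-system that rewrites a string of symbols; interpreting F as growth and brackets as branch push/pop yields the familiar branching skeletons (a minimal sketch):

```python
def lsystem(axiom, rules, depth):
    """Iteratively rewrite a string according to production rules."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic bracketed L-system for a branching shoot:
# F = grow, [ and ] = push/pop a branch, + and - = turn.
rules = {"F": "F[+F]F[-F]F"}
print(lsystem("F", rules, 2))
```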
EMERGING QUESTIONS AND BARRIERS IN THE MATHEMATICAL ANALYSIS OF PLANT MORPHOLOGY
A true synthesis of plant morphology, which comprehensively models observed biological phenomena and incorporates a mathematical perspective, remains elusive. In this section, we highlight current focuses in the study of plant morphology, including the technical limits of acquiring morphological data, phenotype prediction, responses of plants to the environment, models across biological scales, and the integration of complex phenomena, such as fluid dynamics, into plant morphological models.
Technological Limits to Acquiring Plant Morphological Data
There are several technological limits to acquiring plant morphological data that must be overcome to move this field forward. One such limitation is the acquisition of quantitative plant images; many acquisition systems do not provide morphological data in measurable units. Approaches that rely on the reflection of waves from the plant surface can provide quantitative measurements for morphological analyses. Time-of-flight scanners, such as terrestrial laser scanners, overcome unitless measurement by recording the round-trip time of hundreds of thousands of laser beams sent at different angles from the scanner to the first plant surface within the line of sight (Vosselman and Maas, 2010) (Figure 3). Leveraging the speed of light allows calculation of the distance between a point on the plant surface and the laser scanner.
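The distance computation itself is simple; a sketch of the time-of-flight principle (the round-trip time value is illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Distance from a time-of-flight measurement: the beam travels to
    the surface and back, so the round-trip time is divided by two."""
    return C * round_trip_time_s / 2.0

print(tof_distance(66.7e-9))  # ~10 m for a 66.7 ns round trip
```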
Laser scanning and the complementary, yet unitless, approach of stereovision both produce surface samples, or point clouds, as output. However, both approaches face algorithmic challenges when plant parts occlude each other, since both rely on the reflection of waves from the plant surface (Bucksch, 2014). Radar provides another non-invasive technique to study individual tree and forest structures over wide areas. Radar pulses can either penetrate or reflect from foliage, depending on the selected wavelength (Kaasalainen et al., 2015). Most radar applications occur in forestry and are operated from satellites or airplanes. Although more compact and agile systems are being developed for precision forestry above- and belowground (Feng et al., 2016), their resolution is too low to acquire the morphological detail needed to apply hierarchy- or similarity-oriented mathematical analysis strategies.
Image acquisition that resolves occlusions by penetrating plant tissue is possible with X-ray (Kumi et al., 2015) and magnetic resonance imaging (MRI; van Dusschoten et al., 2016). While both technologies resolve occlusions and can even penetrate soil, their limitation is the requirement of a closed imaging volume. Thus, although useful for a wide array of purposes, MRI and X-ray are potentially destructive if applied to mature plant organs such as roots in the field or tree crowns that are larger than the imaging volume (Fiorani et al., 2012). Interior plant anatomy can be imaged destructively using confocal microscopy and laser ablation (Figure 4), or with nano- or micro-CT tomography techniques that are limited to small pot volumes, to investigate the first days of plant growth.
The Genetic Basis of Plant Morphology
One of the outstanding challenges in plant biology is to link the inheritance and activity of genes with observed phenotypes. This is particularly challenging for the study of plant morphology, as both the genetic landscape and morphospaces are complex: modeling each of these phenomena alone is difficult, let alone trying to model morphology as a result of genetic phenomena (Benfey and Mitchell-Olds, 2008;Lynch and Brown, 2012;Chitwood and Topp, 2015). Although classic examples exist in which plant morphology is radically altered by the effects of a few genes (Doebley, 2004;Clark et al., 2006;Kimura et al., 2008), many morphological traits have a polygenic basis (Langlade et al., 2005;Tian et al., 2011;Chitwood et al., 2013).
Natural variation in cell, tissue, or organ morphology ultimately impacts plant physiology, and vice versa. For example, formation of root cortical aerenchyma was linked to better plant growth under conditions of suboptimal availability of water and nutrients (Zhu et al., 2010;Postma and Lynch, 2011;Lynch, 2013), possibly because aerenchyma reduces the metabolic costs of soil exploration. Maize genotypes with greater root cortical cell size or reduced root cortical cell file number reach greater depths to increase water capture under drought conditions, possibly because those cellular traits reduce metabolic costs of root growth and maintenance (Chimungu et al., 2015). The control of root angle that results in greater water capture in rice as water tables recede was linked to the control of auxin distribution (Uga et al., 2013). Similarly, in shoots, natural variation can be exploited to find genetic loci that control shoot morphology, e.g., leaf erectness (Ku et al., 2010;Feng et al., 2011).
High-throughput phenotyping techniques are increasingly used to reveal the genetic basis of natural variation (Tester and Langridge, 2010). In doing so, phenotyping techniques complement classic approaches of reverse genetics and often lead to novel insights, even in a well-studied species like Arabidopsis thaliana. Phenotyping techniques have revealed a genetic basis for dynamic processes such as root growth (Slovak et al., 2014) and traits that determine plant height (Yang et al., 2014). Similarly, high-resolution sampling of root gravitropism has led to an unprecedented understanding of the dynamics of the genetic basis of plasticity (Miller et al., 2007;Brooks et al., 2010;Spalding and Miller, 2013).
The Environmental Basis of Plant Morphology
Phenotypic plasticity is defined as the ability of one genotype to produce different phenotypes based on environmental differences (Bradshaw, 1965; DeWitt and Scheiner, 2004), and it adds to the phenotypic complexity created by genetics and development. Trait variation in response to the environment has classically been analyzed using 'reaction norms', where the phenotypic value of a certain trait is plotted for two different environments (Woltereck, 1909). If the trait is not plastic, the slope of the line connecting the points will be zero; if the trait value varies across environments, the trait is plastic, and the slope of the reaction norm line is a measure of the plasticity. As most responses of plants to their environment are nonlinear, more insight into phenotypic plasticity can be obtained by analyzing dose-response curves or dose-response surfaces (Mitscherlich, 1909; Poorter et al., 2010).
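A reaction norm slope between two environments reduces to a difference quotient; a small sketch with made-up trait values:

```python
import numpy as np

def reaction_norm_slope(trait_env1, trait_env2, env1=0.0, env2=1.0):
    """Slope of the reaction norm line between two environments.

    trait_env1/2: mean phenotypic values of one or more genotypes in each
    environment. A slope of zero indicates no plasticity for that genotype.
    """
    return (np.asarray(trait_env2) - np.asarray(trait_env1)) / (env2 - env1)

# Two genotypes: the first is plastic, the second is not.
print(reaction_norm_slope([10.0, 8.0], [14.0, 8.0]))  # -> [4. 0.]
```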
Seminal work by Clausen et al. (1941), using several clonal species in a series of reciprocal transplants, demonstrated that although heredity exerts the most measurable effects on plant morphology, environment is also a major source of phenotypic variability. Research continues to explore the range of phenotypic variation expressed by a given genotype in the context of different environments, which has important implications for many fields, including conservation, evolution, and agriculture (Nicotra et al., 2010; DeWitt, 2016). Many studies examine phenotypes across latitudinal or altitudinal gradients, or other environmental clines, to characterize the range of possible variation and its relationship to the process of local adaptation (Cordell et al., 1998; Díaz et al., 2016).
Below ground, plants encounter diverse sources of environmental variability, including water availability, soil chemistry, and physical properties such as soil hardness and movement. These factors vary between individual plants (Razak et al., 2013) and within an individual root system, where plants respond at spatio-temporal scales of very different granularity (Drew, 1975; Robbins and Dinneny, 2015). Plasticity at a micro-environmental scale has been linked to developmental and molecular mechanisms (Bao et al., 2014). The scientific challenge here is to integrate these effects at the whole-root-system level and to use information at different scales to understand optimal resource acquisition under resource-limited conditions (Rellán-Álvarez et al., 2016) (Figure 5).
Integrating Models from Different Levels of Organization
Since it is extremely difficult to examine complex interdependent processes occurring at multiple spatio-temporal scales, mathematical modeling can be used as a complementary tool with which to disentangle component processes and investigate how their coupling may lead to emergent patterns at a systems level (Hamant et al., 2008;Band and King, 2012;Jensen and Fozard, 2015). To be practical, a multiscale model should generate well-constrained predictions despite significant parameter uncertainty (Gutenkunst et al., 2007;Hofhuis et al., 2016). It is desirable that a multiscale model has certain modularity in its design such that individual modules are responsible for modeling specific spatial aspects of the system (Baldazzi et al., 2012). Imaging techniques can validate multiscale models (e.g., Willis et al., 2016) such that simulations can reliably guide experimental studies.
To illustrate the challenges of multiscale modeling, we highlight an example that encompasses molecular and cellular scales. At the molecular scale, models can treat some biomolecules as diffusive, whereas others, such as membrane-bound receptors, are spatially restricted (Battogtokh and Tyson, 2016). Separately, at the cellular scale, mathematical models describe the dynamics of cell networks, where the mechanical pressures exerted on the cell walls are important factors for cell growth and division (Jensen and Fozard, 2015) (Figure 6A). In models describing plant development in a two-dimensional cross-section geometry, cells are often modeled as polygons defined by walls between neighboring cells. The spatial position of a vertex, where the cell walls of three neighboring cells coalesce, is a convenient variable for mathematical modeling of the dynamics of cellular networks (Prusinkiewicz and Runions, 2012). A multiscale model can then be assembled by combining the molecular and cellular models. Mutations and deletions of the genes encoding the biomolecules can be modeled by changing parameters. By inspecting the effects of such modifications on the dynamics of the cellular networks, the relationship between genotypes and phenotypes can be predicted. For example, the model of Fujita et al. (2011) integrates the dynamics of cell growth and division with the spatio-temporal dynamics of the proteins involved in stem cell regulation and simulates shoot apical meristem development in wild-type and mutant plants (Figure 6B).
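To make the reaction-diffusion component concrete, here is a minimal 1D activator-inhibitor simulation in the Gierer-Meinhardt family (a generic sketch of the model class, not the Fujita et al. WUS/CLV system; parameter values are illustrative):

```python
import numpy as np

def simulate_turing(n=100, steps=20000, dt=0.01, du=0.02, dv=1.0):
    """Explicit-Euler 1D activator(u)-inhibitor(v) model with periodic
    boundaries: short-range activation (small du), long-range inhibition
    (large dv) destabilizes the uniform state into a periodic pattern."""
    rng = np.random.default_rng(0)
    u = 1.0 + 0.01 * rng.standard_normal(n)  # perturbed steady state u=v=1
    v = np.ones(n)
    for _ in range(steps):
        lap_u = np.roll(u, 1) - 2 * u + np.roll(u, -1)
        lap_v = np.roll(v, 1) - 2 * v + np.roll(v, -1)
        u = u + dt * (u * u / v - u + du * lap_u)
        v = v + dt * (u * u - v + dv * lap_v)
        v = np.maximum(v, 1e-9)  # guard the division in the next step
    return u, v

u, v = simulate_turing()
print(np.round(u[:10], 2))  # a periodic peak pattern emerges in u
```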
Modeling the Impact of Morphology on Plant Function
Quantitative measures of plant morphology are critical to understanding function. Vogel (1989) was the first to provide quantitative data showing how shape changes in leaves reduce drag or friction in air or water flows. He found that single broad leaves reconfigure at high flow velocities into cone shapes that reduce flutter and drag (Figures 7A,B). More recent work discovered that the cone shape is significantly more stable than other reconfigurations such as U-shapes (Miller et al., 2012). Subsequent experimental studies on broad leaves, compound leaves, and flowers also support rapid repositioning in response to strong currents as a general mechanism to reduce drag (Niklas, 1992; Ennos, 1997; Etnier and Vogel, 2000; Vogel, 2006) (Figure 7C). It is the combination of morphology and anatomy, and the resultant material properties, that leads to these optimal geometric reconfigurations of shape.
From a functional perspective, it is highly plausible that leaf shape and surface-material properties alter the boundary layer of a fluid or gas over the leaf surface, or enhance passive movement, and can thereby augment gas and heat exchange. For example, it has been proposed that the broad leaves of some trees flutter for the purpose of convective and evaporative heat transfer (Thom, 1968; Grant, 1983). Any movement of the leaf relative to the movement of the air or water may decrease the boundary layer and increase gas exchange, evaporation, and heat dissipation (Roden and Pearcy, 1993). Each of these parameters may be altered by the plant to improve the overall function of the leaf (Vogel, 2012).

FIGURE 6 | Integration of tissue growth and reaction-diffusion models. (A) Vertex model of cellular layers (Prusinkiewicz and Lindenmayer, 1990). K, l_a, and l_0 are the spring constant, current length, and rest length for wall a; K_P is a constant and S_A is the size of cell A; t is the time step. Shown is a simulation of cell network growth. (B) Reaction-diffusion model of the shoot apical meristem for WUSCHEL and CLAVATA interactions (Fujita et al., 2011). u = WUS, v = CLV, i = cell index; a sigmoid function is applied. E, B, A_S, A_d, C, D, u_m, D_u, D_v are positive constants. Shown are the distributions of WUS and CLV levels within a dynamic cell network. Images provided by DB (Virginia Tech).
The growth of the plant continuously modifies plant topology and geometry, which in turn changes the balance between organ demand and production. At the organismal scale, the 3D spatial distribution of plant organs is the main interface between the plant and its environment. For example, the 3D arrangement of branches impacts light interception and provides the support for different forms of fluxes (water, sugars) and signals (mechanical constraints, hormones) that control plant functioning and growth (Godin and Sinoquet, 2005).
MILESTONES IN EDUCATION AND OUTREACH TO ACCELERATE THE INFUSION OF MATH INTO THE PLANT SCIENCES
Mathematics and plant biology need to interact more closely to accelerate scientific progress. Opportunities to interact include cross-disciplinary training, workshops, meetings, and funding opportunities. In this section, we outline perspectives for enhancing the crossover between mathematics and plant biology.
Education
Mathematics has been likened to "biology's next microscope," because of the insights into an otherwise invisible world it has to offer. Conversely, biology has been described as "mathematics' next physics," stimulating novel mathematical approaches because of the hitherto unrealized phenomena that biology studies (Cohen, 2004). The scale of the needed interplay between mathematics and plant biology is enormous and may lead to new science disciplines at the interface of both: ranging from the cellular, tissue, organismal, and community levels to the global; touching upon genetic, transcriptional, proteomic, metabolite, and morphological data; studying the dynamic interactions of plants with the environment or the evolution of new forms over geologic time; and spanning quantification, statistics, and mechanistic mathematical models.
Research is becoming increasingly interdisciplinary, and undergraduate, graduate, and post-graduate groups are actively trying to bridge the gap between mathematics and biology skillsets. While many graduate programs have specialization tracks under the umbrella of mathematics or biology-specific programs, increasingly departments are forming specially designed graduate groups for mathematical/quantitative biology 1,2 to strengthen the interface between both disciplines. This will necessitate team-teaching across disciplines to train the next generation of mathematical/computational plant scientists.
Public Outreach: Citizen Science and the Maker Movement
Citizen science, which is a method to make the general public aware of scientific problems and employ their help in solving them 3 , is an ideal platform to initiate a synthesis between plant biology and mathematics because of the relatively low cost and accessibility of each field. Arguably, using citizen science to collect plant morphological diversity has already been achieved, but has yet to be fully realized. In total, it is estimated that the herbaria of the world possess greater than 207 million voucher specimens 4 , representing the diverse lineages of land plants collected over their respective biogeographies over a timespan of centuries.
Digital documentation of the millions of vouchers held by the world's botanic gardens is actively underway, allowing researchers and citizens alike to access and study for themselves the wealth of plant diversity across the globe and across centuries (Smith et al., 2003; Corney et al., 2012; Ryan, 2013).
The developmental changes in plants responding to environmental variability and microclimatic changes over the course of a growing season can be analyzed by studying phenology. Citizen science projects such as the USA National Phenology Network 5 or Earthwatch 6 and associated programs such as My Tree Tracker 7 document populations and individual plants over seasons and years, providing a distributed, decentralized network of scientific measurements to study the effects of climate change on plants. Citizen science is also enabled by low-cost, specialized equipment. Whether programming a camera to automatically take pictures at specific times or automating a watering schedule for a garden, the maker movement, a do-it-yourself cultural phenomenon that intersects with hacker culture, focuses on building custom, programmable hardware, whether via electronics, robotics, 3D-printing, or time-honored skills such as metal- and woodworking. The focus on programming is especially relevant for integrating mathematical approaches with plant science experiments. The low cost of single-board computers like the Raspberry Pi, HummingBoard, or CubieBoard is a promising example of how to engage citizen scientists in the scientific process and enable technology solutions to specific questions.
Workshops and Funding Opportunities
Simply bringing mathematicians and plant biologists together to interact and to learn about new tools, approaches, and opportunities in each discipline is a major opportunity for further integration of these two disciplines and for strengthening new disciplines at the interface of both. This white paper itself is a testament to the power of bringing mathematicians and biologists together, resulting from a National Institute for Mathematical and Biological Synthesis (NIMBioS) workshop titled "Morphological Plant Modeling: Unleashing Geometric and Topologic Potential within the Plant Sciences," held at the University of Tennessee, Knoxville, September 2-4, 2015 8 (Figure 8). Other mathematical institutes such as the Mathematical Biology Institute (MBI) at Ohio State University 9 , the Statistical and Applied Mathematical Sciences Institute (SAMSI) in Research Triangle Park 10 , the Institute for Mathematics and Its Applications at the University of Minnesota 11 , and the Centre for Plant Integrative Biology at the University of Nottingham 12 have also hosted workshops for mathematical and quantitative biologists from the undergraduate student to the faculty level.
There are efforts to unite biologists and mathematicians through initiatives brought forth by The National Science Foundation, including Mathematical Biology Programs 13 and the Joint DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical Sciences 14 (DMS/NIGMS). Outside of the Mathematics and Life Sciences Divisions, the Division of Physics houses a program on the Physics of Living Systems. Societies such as The Society for Mathematical Biology and the Society for Industrial and Applied Mathematics (SIAM) Life Science Activity Group 15 are focused on the dissemination of research at the intersection of math and biology, creating many opportunities to present research and provide funding. We emphasize the importance that funding opportunities have had, and will continue to have, in the advancement of plant morphological modeling.
Open Science
Ultimately, mathematicians, computational scientists, and plant biologists must unite at the level of jointly collecting data, analyzing it, and doing science together. Open and timely data sharing to benchmark code is a first step toward uniting these disciplines, along with building professional interfaces that bridge between them (Bucksch et al., 2017; Pradal et al., 2017).
A number of platforms provide open, public access to datasets, figures, and code that can be shared, including Dryad 16 , Dataverse 17 , and Figshare 18 . Beyond the ability to share data is the question of open data formats and accessibility. For example, in remote sensing research it is unfortunately common that proprietary data formats are used, which prevents their use without specific software. This severely limits the utility and community-building aspects of plant morphological research. Beyond datasets, making code openly available, citable, and user-friendly is a means to share methods of analyzing data. Places to easily share code include web-based version-controlled platforms like Bitbucket 19 or Github 20 and software repositories like Sourceforge 21 . Furthermore, numerous academic journals (e.g., Nature Methods, Applications in Plant Sciences, and Plant Methods) already accept publications that focus on methods and software to accelerate new scientific discovery (Pradal et al., 2013).
Meta-analysis datasets provide curated resources where numerous published and unpublished datasets related to a specific problem (or many problems) can be accessed by researchers 22 . The crucial element is that data somehow reflect universal plant morphological features, bridging the gap between programming languages and biology, as seen in the Root System Markup Language and OpenAlea (Pradal et al., 2008). Bisque is a versatile platform to store, organize, and analyze image data, providing open access to data and analyses together with the requisite computation (Kvilekval et al., 2010). CyVerse 23 (formerly iPlant) is a similar platform, on which academic users get 100 GB of storage for free and can create analysis pipelines that can be shared and reused (Goff et al., 2011). For example, DIRT 24 is an automatic, high-throughput computing platform that robustly extracts root traits from digital images (Bucksch et al., 2014a); it is hosted on CyVerse, uses Texas Advanced Computing Center 25 (TACC) resources at UT Austin, and is open to the public. The reproducibility of these complex computational experiments can be improved using scientific workflows that capture and automate the exact methodology followed by scientists (Cohen-Boulakia et al., 2017). We emphasize here the importance of adopting open science policies at the individual investigator and journal level to continue strengthening the interface between plant and mathematically driven sciences.
CONCLUSION: UNLEASHING GEOMETRIC AND TOPOLOGICAL POTENTIAL WITHIN THE PLANT SCIENCES
Plant morphology remains a mystery from both molecular and quantitative points of view, and hence it fascinates mathematical and plant biology researchers alike. As such, plant morphology holds the secret of how variations in organizational patterns emerge as a result of evolutionary, developmental, and environmental responses.
The persistent challenge at the intersection of plant biology and the mathematical sciences might be the integration of measurements across different scales of the plant. We have to meet this challenge to derive and validate mathematical models that describe plants beyond the visually observable. Only then will we be able to modify plant morphology through molecular biology and breeding as a means to develop the needed agricultural outputs and sustainable environments for everybody.
Cross-disciplinary training of scientists, citizen science, and open science are inevitable first steps to develop the interface between mathematically driven and plant biology-driven sciences. These steps will result in new disciplines that will add to the spectrum of researchers in plant biology. Hence, to unleash the potential of geometric and topological approaches in the plant sciences, we need a community familiar with both plants and mathematical approaches to meet the challenges posed by a future with uncertain natural resources as a consequence of climate change.
AUTHOR CONTRIBUTIONS
AB and DC conceived, wrote, and organized the manuscript. All authors contributed to writing the manuscript.
FUNDING
This work was assisted through participation in the Morphological Plant Modeling: Unleashing Geometric and Topological Potential within the Plant Sciences Investigative Workshop at the National Institute for Mathematical and Biological Synthesis, sponsored by the National Science Foundation through NSF Award #DBI-1300426, with additional support from The University of Tennessee, Knoxville, TN, United States.
"Agricultural and Food Sciences",
"Biology",
"Environmental Science",
"Mathematics"
] |
Data Field-Based K-Means Clustering for Spatio-Temporal Seismicity Analysis and Hazard Assessment
Microseismic sensing with sensor networks can remotely monitor seismic activity and evaluate seismic hazard. Compared with experts' manual seismic event clusters, clustering algorithms are more objective and can handle large numbers of seismic events. Many methods have been proposed for seismic event clustering, and the K-means clustering technique has become the most famous one. However, K-means can be affected by noise events (events with large location errors) and by the initial cluster centers. In this paper, a data field-based K-means clustering methodology is proposed for seismicity analysis. Applications to synthetic data and real seismic data show its effectiveness in removing noise events as well as in finding good initial cluster centers. Furthermore, we introduce the time parameter into the K-means clustering process and apply the method to seismic events obtained from the Chinese Yongshaba mine. The results show that time-event location distance and data field-based K-means clustering can divide seismic events by both space and time, which provides a new insight for seismicity analysis compared with event location distance and data field-based K-means clustering. The Krzanowski-Lai (KL) index reaches a maximum when the number of clusters is five, and the energy index (EI) shows that clusters C1, C3 and C5 have very critical periods. In conclusion, the time-event location distance and data field-based K-means clustering provide an effective methodology for seismicity analysis and hazard assessment. In addition, further study can consider time-event location-magnitude distances.
Introduction
Seismic event clustering provides an effective way to understand the underlying information in microseismic events, such as geological structure interpretation and seismic hazard assessment (a detailed review of the uses of seismic event clusters is given in Section 2) [1,2]. This task can be conducted by experts, following a two-step clustering process: selecting the time period of interest and dividing seismic events into clusters visually by event location. However, experts' clusters are subjective, non-quantitative and have trouble handling a large number of seismic events [3]. Motivated by this, various clustering algorithms have been proposed for seismicity analysis (e.g., K-means clustering, hierarchical clustering, and the self-organizing map (SOM)), among which the K-means clustering technique is the most well known. Yet, K-means cannot remove noise events (which have relatively large event location errors) and it uses random initial cluster centers (which have a large influence on cluster results). In addition, no K-means clustering-based methodology previously applied to this problem has considered the time parameter. In fact, seismic events are related not only to the event location but also to the event time [1].
In this work, a data field-based K-means clustering methodology is proposed for spatio-temporal seismicity analysis. This method uses a data field-based threshold value to remove noise and a new distance-based method to select good initial cluster centers for K-means. Furthermore, we introduce the time-event location distance into the K-means clustering process, which provides a spatio-temporal insight into the seismicity analysis. The real seismic data application shows its effectiveness in seismic hazard assessment. In addition, we compare its performance with classical K-means clustering, hierarchical clustering and SOM clustering. The rest of this paper is organized as follows. In Section 2, related works on seismic event clustering are introduced. In Section 3, the data field theory is briefly illustrated and a data field-based algorithm is proposed to better select the K-means initial cluster centers. In Section 4, the proposed data field-based K-means clustering is applied to 1210 microseismic events obtained from the Chinese Yongshaba mine, and the energy index (EI) is selected to assess seismic hazard for the clustering zones. The detailed application and comparisons with classical K-means clustering, hierarchical clustering and SOM clustering are discussed in Section 5. Finally, brief conclusions and future work are presented in Section 6.
Related Works
Seismic event clustering using clustering algorithms has been studied since the 1990s, and the K-means clustering algorithm has become the most famous one. K-means is a hard-partitioning algorithm proposed by Hartigan and Wong [4]. It starts with K randomly selected initial cluster centers and uses an iterative process until no data point changes its cluster assignment. Burton et al. [5] and Weatherill and Burton [6,7] proposed a line-source K-means clustering technique to better identify initial cluster centers, and they applied it to capture the spatial variation of seismicity, interpret fault types and analyze the probabilistic seismic hazard on the island of Java and in the Aegean region. Ramdani et al. [8] took advantage of the final K-means cluster centers to find subduction evidence beneath the Gibraltar Arc and the Andean regions. Rehman et al. [9] utilized the K-means algorithm to identify spatial differences in earthquakes and estimate seismic hazard and risk in Pakistan. Morales-Esteban et al. [10] proposed an efficient adaptive Mahalanobis-based K-means algorithm, which has been applied to study the seismic catalogues of Croatia and the Iberian Peninsula. Shang et al. [11] proposed a combined Krzanowski-Lai and Silhouette index to select the optimum number of clusters for K-means, and they interpreted the geological structure of the Chinese Yongshaba mine. The K-means cluster is useful for large datasets, and some studies have been done to reduce the effect of initial cluster centers. Nevertheless, the above-mentioned methods have not considered the time parameter or the effect of noise events.
Hierarchical clustering is also widely used in seismicity partitioning. It is based on the core idea that an event is more related to a near event than to a far one, and it connects events into clusters based on a preset distance. Hierarchical clustering does not need a preset cluster number (just a preset distance) and it can form clusters of different shapes. Wardlaw et al. [12], Frohlich and Davis [13], Davis and Frohlich [14], Hudyma and Potvin [15], and Hashemi and Mehdizadeh [16] used hierarchical clustering (single-link analysis) to study earthquake catalogues and seismic activities. Notwithstanding, hierarchical clustering has a tendency to form seismic events into linear groups [13], and there may be many clusters with just a few events (see the Discussion).
Another commonly used seismic event clustering method is the SOM [17]. The SOM converts a high-dimensional input space into a low-dimensional one (typically two-dimensional), which makes it well suited for visualization. Zamani and Hashemi [3] compared hierarchical clustering (Ward's method) and SOM clustering with an application to seismic zoning in Iran. Moreover, Zamani et al. [18] applied the SOM to clustering tectonic zones in Iran. The same method was tested in Zamani et al. [19], in which Wilk's Lambda criterion was used to select an optimum number of clusters. Mojarab et al. [20] discussed the effect of SOM input parameters and proposed a combined model to select the cluster number for Iranian seismicity analysis. Besheli et al. [21] performed K-means clustering along with a SOM on an Iranian foreshock database, and they found a foreshock zone that has a very close relation with large earthquakes. Nonetheless, research results [11,22] have shown that SOM clustering may only be valid for high seismic activity areas, while for low seismic activity zones it may produce bad cluster results. Furthermore, SOM clustering may produce discontinuous zones, which makes it hard to interpret the cluster results [11,22].
Some other clustering methods have been proposed for seismicity analysis. Ansari et al. [23], Benitez et al. [24] and Monem and Hashemy [25] used fuzzy clustering, including Gath-Geva clustering, proposed Gath-Geva clustering, fuzzy c-means clustering and Gustafson-Kessel (GK) clustering, to study seismic spatial pattern recognition in Iran, south-west Colombia and the Ghazvin canal irrigation network. Fuzzy clustering can be used for large datasets; however, its results are sensitive to the initial cluster centers. Mukhopadhyay et al. [26] used point density clustering to gain insight into subduction kinematics and seismic potential. Georgoulas et al. [1] modified a density-based clustering (DBC) algorithm and then used a hierarchical agglomerative scheme for seismic clustering. Nanda and Panda [27] proposed two DBC algorithms and compared them with DBSCAN (density-based spatial clustering of applications with noise). DBC can remove noise events, yet when the density varies much it usually obtains bad cluster results. Martínez-Álvarez et al. [22] used a TriGen-based clustering algorithm [28,29] for seismic zoning in the Iberian Peninsula. A comparison of different seismic clustering methods can be found in [11,30,31].
Only a few of the above investigations have considered the time factor. Nevertheless, the time parameter can play an important role in seismic event clustering. The clustering techniques that commonly consider a time parameter are based on the time-event location and time-event location-magnitude distances. Frohlich and Davis [13] and Davis and Frohlich [14] suggested equalizing the time and space intervals using a scaling constant and introduced the time-event location distance $l_{ij} = \sqrt{d_{ij}^2 + (a t_{ij})^2}$, where $d_{ij}$ and $t_{ij}$ are the event location distance and time interval, respectively.
They then used hierarchical clustering for the seismic event clusters. Baiesi and Paczuski [32] defined the time-event location-magnitude distance between events i and j as $n_{ij} = C\, t_{ij}\, l_{ij}^{d_f}\, 10^{-b m_i}$, where $d_f$ is the fractal dimension of epicenters, b is the Gutenberg-Richter b-value and $m_i$ is the event magnitude; a single-link method was used in that work for the seismic event cluster. Zaliapin et al. [33] and Zaliapin and Ben-Zion [34-38] defined $n_{ij}$ by a magnitude-normalized time component $T_{ij} = t_{ij}\, 10^{-q b m_i}$ and space component $R_{ij} = l_{ij}^{d_f}\, 10^{-p b m_i}$, where q + p = 1. Event j is clustered to the main event i if it satisfies the conditions $T_{ij} < T_0$, $R_{ij} < R_0$ and $m_j < m_i$. It is clear that no time parameter has been applied in K-means clustering before. In this work, we introduce the time-event location distance into a K-means clustering algorithm; further study can be done using a time-event location-magnitude distance.
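For reference, these magnitude-normalized components are straightforward to compute. The following is a minimal Python sketch, not taken from the cited papers; the values of b, d_f and q are placeholders that would have to be estimated for a given catalogue.

```python
import numpy as np

def zaliapin_components(t_i, t_j, xyz_i, xyz_j, m_i, b=1.0, d_f=1.6, q=0.5):
    """Magnitude-normalized time (T_ij) and space (R_ij) components between a
    potential parent event i and a later event j, with q + p = 1."""
    p = 1.0 - q
    t_ij = t_j - t_i                               # inter-event time, must be > 0
    l_ij = np.linalg.norm(np.asarray(xyz_j) - np.asarray(xyz_i))
    T_ij = t_ij * 10.0 ** (-q * b * m_i)           # rescaled time
    R_ij = (l_ij ** d_f) * 10.0 ** (-p * b * m_i)  # rescaled distance
    return T_ij, R_ij

# event j is attached to parent i when T_ij < T_0, R_ij < R_0 and m_j < m_i
```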
Preliminary Studies and the Proposed Method
First, to demonstrate the limitations of the K-means technique, a brief study with synthetic data is performed in Section 3.1. Next, the data field theory is introduced in Section 3.2. Then, the data field-based K-means clustering procedure is proposed in Section 3.3. Finally, an application test of the data field-based K-means clustering is shown in Section 3.4.
K-Means Clustering Preliminary Study
The classical K-means algorithm is based on the following steps: take a dataset of points (x_1, x_2, ..., x_n), randomly select K points from the dataset as the initial cluster centers, and allocate each point x_i to the nearest center. Then calculate, for each group, the mean of each parameter to make up a set of new, updated cluster centers. This procedure is repeated until no points change their cluster or the number of iterations reaches a preset maximum.
The most widely used distance is the Euclidean function, which was chosen for this work. The Euclidean distance between events x_i and x_j is defined in Equation (1). Then, the mean of the widely used clustering criterion, the sum of Euclidean distances (SED), is used to evaluate the cluster performance; we call this the MSED index. (The numbers of undenoised and denoised events differ, so we used the MSED instead of the SED to evaluate the clustering performance.) The MSED is defined in Equation (2); the smaller the MSED index of the clusters, the better the clustering results.

$$d_{ij} = \sqrt{\sum_{l=1}^{p} (x_{il} - x_{jl})^2} \quad (1)$$

$$\mathrm{MSED} = \frac{1}{n} \sum_{k=1}^{K} \sum_{x_i \in C_k} d(x_i, m_k) \quad (2)$$

where $d_{ij}$ is the distance between events $x_i$ and $x_j$, p is the dimension of the parameters, $C_k$ is the kth cluster, K is the cluster number, $m_k$ is the cluster center of $C_k$, and n is the number of clustered events. From the principle of K-means clustering, we know that the algorithm may be affected by the initial cluster centers. To prove this issue, a synthetic two-dimensional dataset with four well-differentiated clusters is used to test the effect of the K-means initial cluster centers (Figure 1a). The algorithm was run 15 times using randomly generated initial cluster centers (K = 4 for all experiments), and the resulting MSED indexes are shown in Figure 1b. It is clear that K-means clustering can be heavily affected by the initial cluster centers: the MSED index varies from 0.100 to 0.192. The K-means clustering results for six typical MSED values are shown in Figure 1c-h. For high MSED values, quite a lot of close events fall in different clusters (Figure 1c,h); for relatively high MSED values, many close events fall in different clusters (Figure 1d,g); while for low MSED values, the K-means cluster has good results (Figure 1e,f). In conclusion, the lower the MSED index, the better the clustering result. Therefore, there is a strong need to optimize the K-means initial cluster centers. Furthermore, there are some noise events placed in inter-cluster regions, which should be removed before clustering.
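As a concrete reference, the classical algorithm and the MSED criterion can be written down in a few lines. This is a minimal Python sketch of the procedure described above, not the authors' implementation; it simply keeps an empty cluster's old center rather than handling that edge case more carefully.

```python
import numpy as np

def kmeans(X, K, max_iter=100, seed=None):
    """Classical K-means with randomly chosen initial centers, as described above."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # assign each event to its nearest center (Euclidean distance, Eq. (1))
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its events; keep old center if empty
        new_centers = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centers[k]
            for k in range(K)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

def msed(X, labels, centers):
    """Mean sum of Euclidean distances to the assigned cluster centers (Eq. (2))."""
    return np.linalg.norm(X - centers[labels], axis=1).sum() / len(X)
```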
Data Field Theory
Motivated by physical field theory, Wang et al. [39] proposed a data field theory to describe the interaction among a set of events. The potential value (scalar field strength) of event $x_i$ in the global data field is defined as:

$$\varphi(x_i) = \sum_{j=1}^{n} m_j \, K\!\left(\frac{\lVert x_i - x_j \rVert}{\sigma}\right) \quad (3)$$

where $m_j$ is the mass of event $x_j$, K(x) is the unit potential function, $\lVert x_i - x_j \rVert$ represents the distance between event $x_i$ and event $x_j$, and σ is the impact factor.
The $m_j$ is set to 1 for all j, and the commonly used function $K(\beta) = e^{-\beta^2}$ is selected. Then, the potential value in the global data field for event $x_i$ can be written as in Equation (4), where $\lVert x_i - x_j \rVert$ is the Euclidean distance:

$$\varphi(x_i) = \sum_{j=1}^{n} e^{-\left(\frac{\lVert x_i - x_j \rVert}{\sigma}\right)^2} \quad (4)$$
The impact factor controls the interaction distance between events, and its value can have a heavy impact on the potential values. The relationship between the interaction distance and the potential value of Equation (4) is shown in Figure 2a. For the same impact factor, the longer the distance between events the lower the potential value; and for the same interaction distance, the higher the impact factor the larger the potential value. In other words, for a low impact factor there will be many local maximum potential values, while for a large impact factor there will be few. Some research [39,40] has used the potential entropy to obtain an optimum impact factor: the minimum potential entropy corresponds to the optimum impact factor. The potential entropy is defined as in [39]:

$$H = -\sum_{i=1}^{n} \frac{\varphi(x_i)}{Z} \log \frac{\varphi(x_i)}{Z}, \quad \text{where } Z = \sum_{i=1}^{n} \varphi(x_i) \quad (5)$$

The potential entropy using different impact factors is shown in Figure 2b; it has a minimum value when σ = 0.028. Then, the potential value is calculated for each event using Equation (4). The contour map of the potential values is shown in Figure 2c; the contours agree well with the dataset density. Potential values may provide a good way of clustering (e.g., Figure 2c): a potential threshold value can be used to divide the dataset into small datasets. Notwithstanding, for an irregular dataset it is difficult to produce adequate clusters based on a potential threshold value alone (e.g., Figures 5a and 6). Therefore, instead of using potential values to cluster seismic events directly, we apply potential threshold values only to remove noise events and to better select the K-means initial cluster centers.
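For illustration, Equations (4) and (5) can be computed directly. The following is a minimal Python sketch under the unit-mass, Gaussian-kernel assumptions above; the candidate σ grid is arbitrary and would be adapted to the data at hand.

```python
import numpy as np

def potential_values(X, sigma):
    """Potential value of every event in the global data field (Eq. (4)), unit masses."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2).sum(axis=1)

def potential_entropy(X, sigma):
    """Potential entropy (Eq. (5)); the sigma minimizing it is taken as optimal."""
    phi = potential_values(X, sigma)
    p = phi / phi.sum()
    return -(p * np.log(p)).sum()

# scan an (arbitrary) grid of impact factors and keep the entropy minimizer:
# sigmas = np.linspace(0.005, 0.1, 50)
# sigma_opt = sigmas[np.argmin([potential_entropy(X, s) for s in sigmas])]
```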
Data Field-Based K-Means Clustering Procedure
The data field-based K-means clustering takes advantage of the facts that an event with a high potential value is more likely to be an initial cluster center, and that initial cluster centers should be far apart from each other. There is a low event density around noise events and they have large distances to most events, which usually implies small potential values. A threshold value ϕ_th can therefore be set to remove noise events. The flow diagram of the data field-based K-means clustering process is shown in Figure 3 and contains the following steps:

Step 1: Input the seismic event dataset U_dataset (x_i ∈ U_dataset, i = 1, 2, ..., n).

Step 2: Select the impact factor of the data field from the impact factor-potential entropy curve calculated by Equation (5) and calculate the potential value ϕ(x_i) for each event in U_dataset using Equation (4).
Step 3: Remove noise events and obtain a new dataset U_dataset-noise (x_j ∈ U_dataset-noise, j = 1, 2, ..., n_dataset-noise, n_dataset-noise ≤ n): if a potential value is smaller than a predefined threshold value ϕ_th, its corresponding event is marked as a noise event and removed from U_dataset. If there are no noise events, Step 3 can be skipped, or a ϕ_th smaller than the minimum potential value can be used; ϕ_th = Min-0.1 is used in this paper for no denoising, where Min-0.1 means the minimum potential value minus 0.1.
Step 4: Select the K-means initial cluster centers based on the potential values and a maximum distance-based algorithm (see the sketch after this procedure).
(4-1) Select possible initial cluster centers U_po (x_k ∈ U_po, k = 1, 2, ..., n_po, n_po ≤ n) by choosing events whose potential values are larger than a predefined threshold value ϕ_po.

(4-2) Select the K-means initial cluster centers with a maximum distance-based algorithm, which contains the following steps:

(4-2-1) The event with the maximum potential value in U_po is set as the first initial cluster center m_1.

(4-2-2) Calculate the distances between m_1 and each point in U_po − m_1; the event corresponding to the largest distance is set as the second initial cluster center m_2.

(4-2-3) Calculate the distances between m_1, m_2 and each point in U_po − m_1 − m_2, then select the smallest distance for each event to obtain the distance set V_sd. For example, for event x_i (x_i ∈ U_po − m_1 − m_2) there are two distances, (m_1, x_i) and (m_2, x_i), and Min((m_1, x_i), (m_2, x_i)) is set as V_i_sd. Computing this smallest distance for each event in U_po − m_1 − m_2 makes up the distance set V_sd. The point with the largest value in V_sd becomes the third initial cluster center m_3.

(4-2-4) Repeat Step (4-2-3) to obtain m_4, m_5, ..., m_K.
Step 5: Perform K-means clustering on the dataset U_dataset-noise using m_1, m_2, ..., m_K as the initial cluster centers.
Step 6: Select the optimum number of clusters using the Krzanowski-Lai (KL) index (see Section 4.3.2) and interpret the clustering results.
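The following is a minimal Python sketch of Steps 3 and 4, assuming the potential values ϕ(x_i) have already been computed as in Equation (4); it is an illustration of the procedure, not the authors' code.

```python
import numpy as np

def remove_noise(X, phi, phi_th):
    """Step 3: drop events whose potential value falls below phi_th."""
    keep = phi >= phi_th
    return X[keep], phi[keep]

def select_initial_centers(X, phi, K, phi_po):
    """Steps 4-1 and 4-2: keep candidates with potential above phi_po, then
    pick K centers with the maximum distance-based rule."""
    cand = X[phi >= phi_po]
    cand_phi = phi[phi >= phi_po]
    centers = [cand[np.argmax(cand_phi)]]        # (4-2-1) highest potential first
    while len(centers) < K:
        # (4-2-2)/(4-2-3): each candidate's smallest distance to any chosen center
        d_min = np.min([np.linalg.norm(cand - c, axis=1) for c in centers], axis=0)
        centers.append(cand[np.argmax(d_min)])   # the farthest such candidate wins
    return np.array(centers)
```

Already-chosen centers remain in the candidate set but have zero distance to themselves, so they can never be selected again, which keeps the sketch short without changing the result.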
Application Test
The relationship between the MSED index and the threshold value ϕ_po for data field-based K-means clustering using undenoised data is shown in Figure 2d, where O_1, O_2, ..., O_7 are the first to seventh octiles of the potential values, respectively. It is clear that the data field-based K-means clustering usually obtains a smaller MSED index than the classical K-means algorithm, indicating that it produces better clusters than the classical K-means. A larger ϕ_po achieves a lower MSED index in the application test, which can be interpreted as follows: for a low ϕ_po, the selected possible initial cluster centers may contain noise events, which are usually far away from the final optimum cluster centers. Although we use a largest-distance principle to select the initial cluster centers, noise events may still be selected as initial cluster centers, which can cause bad cluster results. For a large ϕ_po, some good candidate initial cluster centers may be excluded, which can also cause a bad clustering result. The testing dataset is nearly equally distributed, which results in similar local maximum potential values. Furthermore, the MSED index of the data field-based K-means clustering results with 7.5% of the data removed is also shown in Figure 2d; the MSED index of the denoised cluster is much smaller than that of the undenoised cluster. For engineering data, we usually obtain many local maximum potential values with relatively large differences. The effect of ϕ_po on seismic event clusters is discussed in Section 5; the results show that a threshold value ϕ_po between Q1 and the Median obtains a good cluster result (Q1 and Median are the first and second quartiles of the potential values, respectively). In this paper, ϕ_po is set to Q1 for further study. The data field-based K-means cluster result using undenoised data, with ϕ_th = Min-0.1 and ϕ_po = Q1, is shown in Figure 2e; the method obtains good clusters. Furthermore, the results with 7.5% of events removed are shown in Figure 2f: the noise events are well separated and the data field-based K-means clustering obtains good cluster results. Therefore, this clustering methodology is well suited to seismic event clustering.
Seismic Dataset Description
To test the efficiency of the proposed method, we used microseismic events obtained from the Institute of Mine Seismology (IMS) system at the Chinese Yongshaba mine (Figure 4a). The Yongshaba mine is in Guiyang city, Guizhou province (China, 26°38′ N, 106°37′ E), and has an explored phosphate reserve of 40 million tons and an exploitation depth of more than 650 m. The strike and dip angle of the ore body are 280°-293° and 10°-30°, respectively, and the roof of the ore body is very unstable. The mine uses a multi-stage open stope mining method, which results in many mined-out areas. In addition, there are several main faults (F310, F313, F316, F302 and F309) and many small faults (Figure 4a), which may cause seismic hazards such as rock bursts and large areas of rock instability. Motivated by these concerns, the IMS system was built in the mine. The IMS system contains 28 geophones distributed at the 930 m level (12 geophones), the 1080 m level (12 geophones) and the 1120 m level (4 geophones). All geophones have a sampling frequency of 6000 Hz.
We used an absolute-value method (Equation (2) in [41]) based on manually picked P phase arrivals to locate the microseismic events. We set the time period from 1 April 2014 to 30 April 2014, then we used a cuboid (380,600 ≤ X ≤ 382,100 (X = East), 2,995,000 ≤ Y ≤ 2,998,200 (Y = North) and 750 ≤ Z ≤ 1600 (Z = Up)) and a moment magnitude ≥ −1.5 to filter out events with very large location errors or very small moment magnitudes. This left 1210 microseismic events, which are used as the input dataset (Figure 4b). Generally, the microseismic event locations have a good relationship with the geophone distribution. However, some microseismic events are relatively far away from the geophones; these events are thought to be badly located and should be removed before clustering.
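For concreteness, this filtering step can be expressed as below. The file name and column layout are hypothetical; only the cuboid bounds and the magnitude cut come from the text.

```python
import numpy as np

# hypothetical file and column order: X (East, m), Y (North, m), Z (Up, m), Mw, t (days)
events = np.loadtxt("yongshaba_events.csv", delimiter=",", skiprows=1)
X, Y, Z, Mw = events[:, 0], events[:, 1], events[:, 2], events[:, 3]

keep = ((380600 <= X) & (X <= 382100) &
        (2995000 <= Y) & (Y <= 2998200) &
        (750 <= Z) & (Z <= 1600) &
        (Mw >= -1.5))
events = events[keep]   # 1210 events remain for the dataset used in this paper
```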
Noise Event Removal Process
To remove badly located events, the Euclidean distance of the seismic event location parameters (X, Y, Z) is selected to calculate the potential entropy. A microseismic event can have a large location error due to the P phase picking quality, the geophone distribution and the location algorithm, while the event time can be recorded accurately. After calculating the potential entropy of the data studied, we found that it has a minimum value when σ = 22.00. The contour map of the potential values (σ = 22.00) is shown in Figure 5a; the contour map fits well with the seismic event density. We then tried removing the 5%, 7.5%, 10%, 12.5% and 15% of the seismic events with the lowest potential values. For all five denoising ratios, the denoised catalogues still have many events in area C, because the approximately linearly distributed geophones near area C can result in relatively large location errors. With 5% of the seismic events removed, areas A and D still have a few seismic events (Figure S1a); with 7.5% removed, area D still has a few seismic events (Figure S1b); while with 12.5% and 15% removed, some useful seismic events in area B are removed (Figure S1c,d). Therefore, the dataset with 10% of the seismic events removed is selected for further study in this paper.
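A denoising ratio can be realized directly from the potential values. A minimal sketch follows, assuming ϕ has been computed from the event locations as above; the 10% ratio is the one selected in the text.

```python
import numpy as np

def denoise(events, phi, ratio=0.10):
    """Drop the `ratio` fraction of events with the lowest potential values."""
    phi_th = np.quantile(phi, ratio)
    return events[phi >= phi_th]
```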
Time-Event Location Distance-Based Data Field
Most traditional seismic clustering methods only use the seismic event location parameters (X, Y, Z), since the time parameter has a different measurement unit. Frohlich and Davis [13] and Davis and Frohlich [14] suggested equalizing the time and location parameters using the following time-event location distance equation:

$$l_{ij} = \sqrt{d_{ij}^2 + (a\, t_{ij})^2} \quad (6)$$

where $d_{ij}$ (meters) and $t_{ij}$ (days) are the event location distance and the time difference between events, and a is the time-event location distance conversion factor. In this paper, a is set to $d_{mean}/(\sqrt{3}\, t_{mean})$, where $d_{mean}$ and $t_{mean}$ are the mean values of $d_{ij}$ and $t_{ij}$ (i = 1, 2, ..., n_dataset-noise; j = 1, 2, ..., n_dataset-noise), respectively.
We calculated the $d_{mean}$ and $t_{mean}$ of the denoised microseismic events, obtaining 919.31 m and 9.88 d, respectively. The a factor is then 53.72 m/d, i.e., the ratio between $d_{mean}$ and $\sqrt{3}\, t_{mean}$. The potential entropy is calculated by Equation (5) and has a minimum value when σ = 83.50. The contour map of the potential values (σ = 83.50) for the denoised microseismic events using the time-event location distance is shown in Figure 6. Initial cluster centers were then calculated by Step 4 of the data field-based K-means cluster algorithm, and they are also shown in Figure 6. The time-event location distances between the initial cluster centers are listed in Table 1. The initial cluster centers are well distributed, and there are no small time-event location distances between them, which shows that the algorithm can obtain good initial cluster centers.
Cluster Results
There are many indexes in the literature for selecting an adequate number of clusters, among which the Silhouette index [43] and the Krzanowski-Lai (KL) index [44] are the most popular. In this paper, the Krzanowski-Lai index is selected. The KL index compares the traces of the within-cluster dispersion matrices for consecutive numbers of clusters; the higher the KL index, the better the cluster results. The KL index is defined as in [44]:
$$\mathrm{KL}(K) = \left| \frac{\mathrm{DIFF}(K)}{\mathrm{DIFF}(K+1)} \right| \quad (7)$$

where $\mathrm{DIFF}(K) = (K-1)^{2/p} W_{K-1} - K^{2/p} W_K$ and p is the number of clustering parameters in the dataset. In this paper, the seismic parameters (X, Y, Z, t) are used as the cluster input parameters, so p is equal to 4. $W_K$ is the sum of all point-to-point distances between the observations within each cluster, summed across all clusters: $W_K = \sum_{k=1}^{K} \sum_{x_i, x_j \in C_k} d(x_i, x_j)$, where $x_i$ and $x_j$ are two events in cluster $C_k$ and K is the cluster number. The KL index of the time-event location distance and data field-based K-means clustering is shown in Figure 7. As this figure indicates that the KL index has a maximum at K = 5, the number of clusters K = 5 is selected for further study. The data field-based K-means clusters of the denoised events using the time-event location parameters (X, Y, Z, t) and the event location parameters (X, Y, Z) are drawn in Figure 8a,b, respectively. Figure 8a shows that the time-event location distance-based clusters divide the microseismic events by both space and time, while the location distance-based clusters have few XY overlaps and the events in each cluster cover almost the whole event time span (Figure 8b). With the passing of time, events in the same area may differ greatly and should be divided (e.g., Figure 9c). Therefore, time-event location distance-based K-means clusters can provide new insight for seismicity analysis.
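In outline, the KL curve of Figure 7 can be reproduced with a sketch like the following. It is a minimal Python illustration, not the authors' code; `labels_by_K` is a hypothetical dictionary mapping each candidate K (including K = 1, where all events form a single cluster) to the labels returned by the data field-based K-means run, and W_K follows the point-to-point definition given above.

```python
import numpy as np

def within_cluster_distance_sum(X, labels, K):
    """W_K: point-to-point distances within each cluster, summed over clusters."""
    total = 0.0
    for k in range(K):
        Ck = X[labels == k]
        d = np.linalg.norm(Ck[:, None, :] - Ck[None, :, :], axis=2)
        total += d.sum() / 2.0              # each unordered pair counted once
    return total

def kl_index(X, labels_by_K, p=4):
    """KL(K) = |DIFF(K) / DIFF(K+1)|, DIFF(K) = (K-1)^(2/p) W_{K-1} - K^(2/p) W_K."""
    W = {K: within_cluster_distance_sum(X, lab, K) for K, lab in labels_by_K.items()}
    diff = {K: (K - 1) ** (2 / p) * W[K - 1] - K ** (2 / p) * W[K]
            for K in W if K - 1 in W}
    return {K: abs(diff[K] / diff[K + 1]) for K in diff if K + 1 in diff}
```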
(1) Energy index

The commonly used statistical parameter, the energy index (EI), is selected to evaluate microseismic activity. The EI is related to the concentration and accumulation of stress in a rock mass [45] and is defined as the ratio between the radiated energy and the average (expected) energy for a given seismic moment M:

$$\mathrm{EI} = \frac{E}{\overline{E}(M)}, \quad \log \overline{E}(M) = c + d \log M \quad (8)$$

where E is the seismic radiated energy, which is computed from the rock density ρ, the wave propagation speed v, the integral of the squared ground speed J_c, and the radiation pattern coefficient F_c (for a P wave F_c is 0.52, while for an S wave F_c is 0.63); E̅(M) is the average energy derived from the logE vs. logM relation for a given moment M; and c and d are the regression constants [47]. An EI > 1 signifies that more energy has been released than expected, while an EI < 1 signifies that less energy has been released than expected. Research has shown that when EI > 1 the rock mass is accumulating stress, and when the rock mass starts to fail the EI drops below one [46].

(2) Seismicity analysis of clusters

The least squares linear regressions between log(energy) and log(moment) for the five clusters of Figure 8a are shown in Figure S2. We then calculated the EI through Equation (8); the logEI of the five clusters are shown in Figure 9. Some research [48,49] has defined the predictive period and the critical period from the EI time series: when the EI decreases, parts of the rock mass are beginning to enter an unstable status. This period can be seen as a strain-softening stage; it marks the beginning of potential damage and is regarded as a warning indicator, and is called a predictive period. An increasing EI means that the rock mass is entering an unstable state, and this period is called a critical period. The quicker the EI decreases, the larger the risk of a big microseismic event happening. So, when the EI decreases sharply, we can advise the miners to be careful in the predictive period and not to work during the upcoming critical period in the corresponding cluster area.
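The per-cluster logEI series of Figure 9 follow from this regression. Below is a minimal sketch assuming arrays of radiated energies and seismic moments for the events of one cluster; the regression constants c and d are obtained by least squares as described, and the function name is ours.

```python
import numpy as np

def log_energy_index(energy, moment):
    """Per-cluster logEI: fit logE = c + d*logM by least squares, then
    logEI = logE - (c + d*logM) for every event (Eq. (8))."""
    logE, logM = np.log10(energy), np.log10(moment)
    d, c = np.polyfit(logM, logE, deg=1)    # slope d, intercept c
    return logE - (c + d * logM)
```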
For C1, the logEI has a sharp decrease from 5 April to 6 April (a predictive period), so the miners should be careful when mining in this area, and the logEI increases from 7 April to 13 April (a critical period), during which the miners should not work in this area. Then there is a relatively short predictive period (from 13 April to 14 April) and a critical period (15 April). For C2, the logEI slowly decreases from 23 April to 27 April (a predictive period), and the miners should be careful on 28 April when mining. For C3, the logEI quickly decreases on 10 April (a predictive period, and special attention should be paid) and then increases from 13 April to 16 April (a critical period, during which the miners should not work in this area). For C4, the logEI increases and decreases alternately, which means the rock system releases and absorbs energy steadily, and there are no predictive or critical periods. For C5, the logEI is high from 16 April to 19 April, there is a sharp logEI decrease from 25 April to 28 April (a predictive period, during which the miners should be very careful when mining), and the logEI increases from 22 April to 24 April (a critical period, during which the miners should not work in this area). Though Figure 8a shows that C4 and C5 cover a similar location area, C5 has critical periods while C4 has none, which proves that the time parameter can provide new insight for seismic clustering analysis. In conclusion, there are very critical periods for C1, C3 and C5 during which the miners should not work.
Discussion
This paper proposes a data field-based K-means clustering methodology to remove noise events and select initial cluster centers. However, the method can be affected by the threshold value ϕ_po, and its performance with respect to classical K-means clustering using denoised and undenoised events should be discussed. In addition, we chose the commonly used hierarchical clustering and SOM clustering as comparisons. The MSED indexes of the time-event location distance and data field-based K-means clustering (ϕ_po = Min-0.1, Q1, Median and Q3, where Min, Q1, Median and Q3 are the minimum, first quartile, second quartile and third quartile of the potential values, respectively), of classical K-means clustering using denoised and undenoised events (randomly selected initial cluster centers) and of SOM clustering are drawn in Figure 10. For denoised event clustering, the MSED index of the data field-based K-means clustering is usually smaller than that of the classical K-means clusters. For a small cluster number (K = 2 and K = 3), the data field-based K-means (ϕ_po = Min-0.1, Q1 and Median) has an MSED index similar to the classical K-means, because classical K-means clustering has a relatively good global search for a small number of clusters. The data field-based K-means clustering (ϕ_po = Q3) produces better cluster results than the classical K-means algorithm (K = 3 and K = 4), because events with higher potential values are more likely to be final cluster centers. For a relatively large cluster number (K = 5-7), the data field-based K-means clustering usually improves on the classical K-means clustering, which proves the effectiveness of the data field-based algorithm in selecting initial cluster centers. For a large cluster number (K = 8-10), the data field-based K-means clustering (ϕ_po = Q3) can be nearly as bad as the classical K-means clustering. This is caused by the data field-based algorithm using a threshold value ϕ_po to select high potential value events: low potential value events may still provide useful initial cluster centers when the cluster number is large, so randomly selected initial cluster centers at times give a better cluster result. The data field-based K-means cluster (ϕ_po = Q1 and ϕ_po = Median) has good cluster results in general; a threshold value ϕ_po between Q1 and the Median, and a cluster number smaller than 9, are suggested for the data field-based K-means clustering algorithm.
The classical K-means clustering using denoised events usually has a smaller MSED index than that using undenoised events. This is caused by the noise events being relatively far away from most events (Figure 11a). The SOM clustering has a very large MSED index compared with the K-means-based clusters (Figure 10). This is due to the SOM clusters sometimes mixing together (Figure 11b); furthermore, the mixed clusters make it hard to interpret the cluster results. For the hierarchical clustering, we tried different cluster numbers: for a small cluster number, 99% of the seismic events fall in one cluster; for 100 clusters, 36% and 50% of the events fall in two clusters; for 200 clusters, 42% and 28% of the events fall in two clusters; and for 400 clusters, 25%, 12%, 5% and 7% of the events fall in four clusters. For these numbers of clusters, every other cluster contains less than 1% of the events. This is brought about by the hierarchical clustering connecting close events together: although there are some far-away events in the seismic data, every event will usually be assigned to one cluster, or several close events will be connected into one cluster. Therefore, it is hard to obtain a certain number of clusters that each contain a relatively large number of events, and many events will be treated as noise (the clusters with few events). In conclusion, the data field-based K-means clustering gives good cluster results as well as a good interpretation compared with the clusters discussed above.
Figure 1 .
Figure 1.K-means clustering results using different initial cluster centers for a typical dataset.The colored circles are the events and the same colored events correspond to a same cluster.(a) Dataset; (b) MSED indexes for different randomly selected initial cluster centers (K = 4); (c-h) The cluster results for executions 2nd, 3rd, 4th, 9th, 11th and 12th of the K-means clustering process (K = 4).
Figure 1 .
Figure 1.K-means clustering results using different initial cluster centers for a typical dataset.The colored circles are the events and the same colored events correspond to a same cluster.(a) Dataset; (b) MSED indexes for different randomly selected initial cluster centers (K = 4); (c-h) The cluster results for executions 2nd, 3rd, 4th, 9th, 11th and 12th of the K-means clustering process (K = 4).
Figure 2 .
Figure 2. Data field application (a-c) and K-means clustering results based on the data field (d-f); (a) Potential values based on different impact factors; (b) Relationship between potential entropy and the impact factor; (c) Contour map of potential values (
Figure 2 .
Figure 2. Data field application (a-c) and K-means clustering results based on the data field (d-f); (a) Potential values based on different impact factors; (b) Relationship between potential entropy and the impact factor; (c) Contour map of potential values (σ = 0.028); (d) The MSED indexes based on different potential threshold values ϕ po , where Min, O 1 , O 2 , . . .., O 7 are the minimum, first, second, . . ., and seventh octiles of potential values, respectively; (e) The data field-based K-means clustering results using undenoised data when the potential threshold value ϕ po is equal to the second octile; (f) The data field-based K-means clustering results with 7.5% data removed and ϕ po is equal to the second octile.
Figure 3. The flow diagram of the data field-based K-means clustering.
The MSED index as a function of the potential threshold value ϕ_po for the data field-based K-means clustering using undenoised data is shown in Figure 2d, where O1, O2, ..., O7 are the first, second, ..., and seventh octiles of potential values, respectively. It is clear that the data field-based K-means clustering usually obtains a smaller MSED index than the classical K-means algorithm. This indicates that the data field-based K-means clustering obtains better clusters than those produced by the classical K-means algorithm.
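The potential values and the octile-based threshold ϕ_po used above can be sketched as follows. This is an illustrative implementation assuming the commonly used Gaussian-form data field potential, φ(x) = Σ_i exp(−(‖x − x_i‖/σ)²); the paper's exact normalization and its entropy-based choice of the impact factor σ are not reproduced here.

```python
import numpy as np

def potentials(events: np.ndarray, sigma: float) -> np.ndarray:
    """Data-field potential of each event, assuming a Gaussian-form field."""
    # pairwise squared distances between all events
    d2 = np.sum((events[:, None, :] - events[None, :, :])**2, axis=-1)
    return np.exp(-d2 / sigma**2).sum(axis=1)

def denoise(events: np.ndarray, sigma: float, octile: int = 2) -> np.ndarray:
    """Drop events whose potential falls below the given octile threshold
    (phi_po); isolated noise events have low potential and are removed."""
    phi = potentials(events, sigma)
    threshold = np.percentile(phi, 12.5 * octile)  # octile 2 -> 25th percentile
    return events[phi >= threshold]
```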
Figure 4. The location, geological structures, geophone distribution and microseismic event dataset of the Chinese Yongshaba mine. (a) The location and geological structures of the Chinese Yongshaba mine; (a1,a2) the location of the Chinese Yongshaba mine [42]; (a3) geological structures of the Chinese Yongshaba mine. The solid lines represent determined faults, while the dashed lines represent faults inferred by mine geologists; (b) The geophone distribution and microseismic event dataset of the Chinese Yongshaba mine. The triangle and circle represent the geophone location and the microseismic event location, respectively. The color of events indicates their Z location.
Figure 5. Contour map of potential values (σ = 22.00) for the seismic dataset based on event location distance, and seismic event locations with 10% of the data removed. (a) Contour map of potential values for the seismic dataset; (b) Seismic event locations with 10% of the data removed. The red circles mark typical areas used to remove noise events.
Figure 6. Contour map of potential values (σ = 83.50) for the denoised seismic events based on the time-event location distance, and the initial cluster centers selected using the maximum distance-based algorithm. (a) Potential values in Y-X-Z views; (b) Potential values in Y-Z-t views.
Figure 7. The KL index of the time-event location distance and data field-based K-means clustering for the denoised microseismic data.
Figure 8. The cluster results (K = 5) of data field-based K-means clustering for the denoised microseismic data. (a) Time-event location distance-based clusters; (b) Event location distance-based clusters.
Figure 9. LogEI of the five clusters in Figure 8a.
Figure 10. The MSED indexes of time-event location distance and data field-based K-means clustering, classical K-means clustering using denoised and undenoised events, and SOM clustering using denoised events. Min, Q1, Median and Q3 correspond to the minimum, first quartile, second quartile and third quartile of potential values, respectively.
Table 1. Distances (m) between the initial cluster centers in Figure 6. | 10,865.4 | 2018-03-15T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Geology"
] |
Implication of Quadratic Divergences Cancellation in the Two Higgs Doublet Model
With the aim of exploring the Higgs sector of the Two Higgs Doublet Model (2HDM), we have chosen the exact and soft $Z_2$ symmetry breaking versions of the 2HDM with non-zero vacuum expectation values for both Higgs doublets (Mixed Model). We consider two SM-like scenarios: with a 125 GeV $h$ and with a 125 GeV $H$. We have applied the condition for cancellation of quadratic divergences in the type II 2HDM in order to derive the masses of the heavy scalars. Solutions of the two relevant conditions were found in the considered SM-like scenarios. After applying the current LHC data for the observed 125 GeV Higgs boson, the precision electroweak data test and the lower limits on the mass of $H^+$, the allowed region of parameters shrinks strongly.
I. INTRODUCTION
The SM describes the physics of elementary particles with very good accuracy [1]. Experiments have confirmed its predictions with remarkable precision. One of the most precisely tested aspects of the model is associated with the Higgs sector. However, the SM is not a completely perfect model, since it is unable to provide adequate explanations for many questions in particle physics and astrophysics [2][3][4][5][6][7]. One of the problems of the SM is the naturalness of the Higgs mass. From experimental data, we know that the Higgs boson mass (125 GeV) is of the order of the electroweak scale, but from the naturalness perspective one would expect it to be much larger than the electroweak scale. This is because of the large radiative corrections to the Higgs mass, which imply an unnatural tuning between the tree-level Higgs mass and the radiative corrections. These radiative corrections diverge, showing a quadratic sensitivity to the largest scale in the theory [8]. Solutions to this hierarchy problem imply new physics beyond the SM, which must be able to compensate these large corrections to the Higgs boson mass. This goal can be achieved with the presence of new symmetries and particles. Veltman suggested that the radiative corrections to the scalar mass should vanish (or be kept at a manageable level) [9]. This is known as the Veltman condition.
In this paper, we apply the Veltman condition to the 2HDM to predict the masses of the additional neutral scalars in two possible scenarios, with the SM-like h and the SM-like H bosons. The reader can find similar discussions in [10][11][12][13][14][15]. In the 2HDM Lagrangian, four different types of Yukawa interactions arise with no FCNC at tree level [16,17]. In type I, all the fermions couple to the first doublet Φ1 and none to Φ2. In type II, the down-type quarks and the charged leptons couple to the first doublet, and the up-type quarks to the second doublet. In type III, or the flipped models, the down-type quarks couple to Φ1, and the up-type quarks and the charged leptons couple to Φ2. In type IV, or the lepton-specific models, all quarks couple to Φ1 and the charged leptons couple to Φ2. Among fermions, we include only the dominant top and bottom quark contributions and neglect leptons. Therefore type I and type IV become identical; the same statement is true for type II and type III. In the 2HDM type I, vanishing quadratic divergences are possible only for negative scalar quartic couplings, but this solution contradicts the condition of positivity of the Higgs potential [15,18,19]; therefore we concentrate on the 2HDM type II.
II. MIXED MODEL WITH A SOFT Z2 SYMMETRY BREAKING
The Higgs sector of the 2HDM consists of two SU(2) scalar doublets, Φ1 and Φ2. The 2HDM potential depends on quadratic and quartic parameters, respectively m²11, m²22, m²12 and λi (i = 1, ..., 5), from which five Higgs boson masses come up after the spontaneous symmetry breaking (SSB). The most general SU(2) × U(1) invariant Higgs potential for two doublets is built from these parameters; a standard form is sketched below. All parameters are assumed to be real, so that CP is conserved in the model. In order to have a stable minimum, the parameters of the potential need to satisfy the positivity conditions, leading to a potential bounded from below. This behaviour is governed by the quartic terms, which have to obey the positivity conditions of [20]. The high energy scattering matrix of the scalar sector at tree level contains only s-wave amplitudes that are described by the quartic part of the potential. The tree level unitarity constraints require that the eigenvalues of this scattering matrix, |Λi|, be less than the unitarity limit [21,22]. This means that the requirement |Re(a0)| < 1/2 (or |a0| < 1, with a0 being the 0th partial s-wave amplitude for the 2 → 2 body scatterings) corresponds to |Λi| ≤ 8π. Finally, one can also impose harder constraints on the parameters of the potential based on arguments of perturbativity, by demanding that the quartic Higgs couplings fulfill |λi| ≤ 4π [21][22][23][24][25][26].
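The explicit potential and positivity conditions are not legible in this excerpt. For reference, a standard form of the CP-conserving 2HDM potential with softly broken Z2 symmetry (the exact Z2 limit corresponds to m12² = 0) and the usual bounded-from-below conditions, consistent with the parameters m²11, m²22, m²12 and λ1, ..., λ5 named above, read:

```latex
\begin{align}
V &= m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2
   - m_{12}^2\left(\Phi_1^\dagger\Phi_2 + \mathrm{h.c.}\right)
   + \tfrac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^2
   + \tfrac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2 \nonumber\\
  &\quad + \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
   + \lambda_4\left|\Phi_1^\dagger\Phi_2\right|^2
   + \tfrac{\lambda_5}{2}\left[\left(\Phi_1^\dagger\Phi_2\right)^2 + \mathrm{h.c.}\right],\\[4pt]
&\lambda_1 > 0,\qquad \lambda_2 > 0,\qquad
 \lambda_3 > -\sqrt{\lambda_1\lambda_2},\qquad
 \lambda_3 + \lambda_4 - |\lambda_5| > -\sqrt{\lambda_1\lambda_2}.
\end{align}
```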
There are five Higgs particles: the CP-even neutral bosons h and H (with MH ≥ Mh), the CP-odd boson A and the charged pair H±. The masses of A and H± follow directly from the potential parameters, while the two remaining masses squared are the eigenvalues of the mass matrix M², where tan β = υ2/υ1; this matrix can be written in terms of the masses squared of the physical particles h and H and the mixing angle α (standard expressions are collected below). The ratios of the couplings (gi) of the neutral Higgs bosons to the corresponding SM couplings, called the relative couplings, are summarised in Table I (see e.g. reference [28]). One sees that all basic couplings can be represented by the coupling to the gauge boson V, χV = sin(β − α) (cos(β − α)) for h (H), and the tan β parameter. So, we consider two cases which define our SM-like scenarios. In both cases, we identify a SM-like Higgs boson with the 125 GeV Higgs particle observed at the LHC. Therefore, in the SM-like h scenario, the neutral Higgs partner (H) can only be heavier, while in the SM-like H scenario the partner particle h can only be lighter than ∼125 GeV.
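The mass expressions dropped from the paragraph above are standard tree-level results; in the conventions of the potential sketched in this section (with v² = υ1² + υ2², sβ = sin β, cβ = cos β and λ345 ≡ λ3 + λ4 + λ5) they read:

```latex
\begin{align}
M_A^2 &= \frac{m_{12}^2}{s_\beta c_\beta} - \lambda_5 v^2, \qquad
M_{H^\pm}^2 = \frac{m_{12}^2}{s_\beta c_\beta} - \frac{(\lambda_4+\lambda_5)\,v^2}{2},\\[4pt]
\mathcal{M}^2 &=
\begin{pmatrix}
\dfrac{m_{12}^2}{s_\beta c_\beta}\, s_\beta^2 + \lambda_1 v^2 c_\beta^2 &
\left(\lambda_{345}\, v^2 - \dfrac{m_{12}^2}{s_\beta c_\beta}\right) s_\beta c_\beta \\[6pt]
\left(\lambda_{345}\, v^2 - \dfrac{m_{12}^2}{s_\beta c_\beta}\right) s_\beta c_\beta &
\dfrac{m_{12}^2}{s_\beta c_\beta}\, c_\beta^2 + \lambda_2 v^2 s_\beta^2
\end{pmatrix},
\end{align}
```

with the masses squared of h and H being the eigenvalues of M².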
III. CANCELLATION OF THE QUADRATIC DIVERGENCES
The cancellation of the quadratic divergences at one loop applied to the 2HDM with Model II Yukawa interactions leads to a set of two conditions, Eqs. (10) and (11) of [10]. It should be noted that Eqs. (10) and (11) do not depend on the choice of a gauge parameter, and the cancellation of the quadratic divergences in the tadpole graphs ([29]) does not give additional independent conditions [30]. We include only the dominant top and bottom quark contributions (mD → mb, mU → mt). Expressing the λ parameters by masses and the mass parameter m12, we obtain conditions involving χV (the relative coupling to the gauge bosons W and Z) and χu (the relative coupling to up-type quarks). In the following section, we solve Eqs. (12), expressing the condition for cancellation of the quadratic divergences, to derive the masses of the partner Higgs particles.
IV. APPROXIMATE SOLUTION OF THE CANCELLATION CONDITIONS
It is useful to look first at the approximate solution, which can be obtained analytically. In the 2HDM with the soft Z2 symmetry breaking we have, for the strict SM-like (alignment) scenarios with sin(β − α) = 1 or cos(β − α) = 1, simple expressions for λ1 and λ2; these formulas follow directly from equations (7) and (8).
The difference of λ1 and λ2 is given in terms of tan β, v and the mass of the neutral partner of the SM-like h or H particle, meaning respectively the H or the h boson. From the difference of equations (10) and (11) we found a relation which, combined with equations (19), (20) and (21), gives expressions for the masses squared of the partner of the SM-like h or H Higgs particle. Obviously, the above prediction for the mass of the partner Higgs particle has been obtained without additional constraints. We have plotted M versus tan β as given by this expression.
V. SOLVING THE CANCELLATION CONDITIONS
Here we present the results of numerical solutions of Eqs. (12) for the SM-like scenarios, as described above. We apply the positivity and the perturbative unitarity constraints on the parameters of the model. We have performed three scans for the three considered SM-like scenarios (SM-like h, SM-like H+ and SM-like H−), with a mass window for the SM-like Higgs of 124-127 GeV and the relative coupling to gauge bosons χV between 0.90 and 1.00, in agreement with the newest LHC data, which are presented in Table II. We assume v to be bounded to the region 246 GeV < v < 247 GeV and use the following regions of the parameters of the model. Note that very recently a new lower bound on MH± has been derived, much higher than the 360 GeV used in our scan [31], namely MH± > 570−800 GeV [32]. In the calculations, we use mt = 172.44 GeV, mb = 4.18 GeV, MW = 80.38 GeV, MZ = 91.18 GeV [33,34]. Solutions of Eqs. (12) were found using Mathematica and, independently, by a C++ program written by us. Performing our scanning, we found no solution for the SM-like H scenario in both cases cos(β − α) ∼ ±1, for a mass of the charged Higgs boson larger than 360 GeV [31].
For the SM-like h scenario there are solutions only for positive m12, in the region 200-400 GeV. Figure 2 shows the correlation between m12 and Mh (panel (a)) and between m12 and MH (panel (b)). In both panels, the first region from the left (the lower m12 region) is obtained for large tan β (above 40), while the right one corresponds to low tan β (below 5). Below, we look closer at the obtained results by confronting the obtained predictions for observables with experimental data. We propose 5 benchmarks for the SM-like h scenario, which will be discussed in section VII, in agreement with theoretical constraints (positivity and perturbative unitarity) and experimental constraints, mainly coming from the measurements of the SM-like Higgs boson. In addition, the properties of the partner particle (H) are checked, which is important for future searches. In these calculations the 2HDMC program was used [36].
VI. EXPERIMENTAL CONSTRAINTS
We apply experimental limits from the LHC for the SM-like Higgs particle h and solve the cancellation conditions by scanning over Mh and the mixing parameters (α, β), keeping the values of the mass and the coupling to the gauge bosons (i.e. sin(β − α)) within the experimental bounds. We confront the resulting solutions with existing data for the 125 GeV Higgs boson, in particular the experimental data on Higgs boson couplings (χV, χt and χb) and Higgs signal strengths (RZZ, Rγγ and RZγ) from ATLAS [37] and CMS [38], as well as the combined ATLAS+CMS results [39]. Also the total Higgs decay width measured at the LHC is an important constraint, see Tables II and III.
We also keep in mind other existing limits on additional Higgs particles which appear in the 2HDM, such as the lower mass limit on H+ taken to be 360 GeV (based on the earlier analysis [31]), and we check whether the obtained solutions are in agreement with the oblique parameter (S, T, U) constraints, which are sensitive to the presence of the extra (heavy) Higgses contained in the 2HDM.
VII. BENCHMARKS
Results from the scan lead to five benchmark points h1-h5, presented in Table II. The results of the scanning show that for the SM-like h scenario, solutions in agreement with existing data exist only for small tan β (0.45-1.07). Values of observables for the SM-like Higgs particle h for the five benchmark points h1-h5 are presented in Table III. The large tan β solutions (above 42) exist, however they lead to too large Rγγ (above 2). For convenience, we add the experimental data to both tables (with 1σ accuracy from the fit assuming |χV| ≤ 1 and B_SM ≥ 0).
The newest combined data can limit our benchmarks to (h3, h4) only. There is a small tension, at the 2σ level, for all benchmarks in the hbb coupling with the newest combined LHC result [39], which has a surprisingly small uncertainty of 0.16 (the earlier individual results were ATLAS 0.61 +0.24 −0.26 [37] and CMS 0.49 +0.26 −0.19 [38], in perfect agreement with all our benchmarks). Also, benchmarks (h2, h3) correspond to a slightly too small mass of the h in light of the new combined CMS and ATLAS value of 125.09 ± 0.24 GeV [35].
We compare our benchmarks to the experimental data on the 2HDM (II) in the tan β versus cos(β − α) plane, see Figure 3. Benchmark h4 is very close to the best fit point found by ATLAS. We would like to point out that our benchmarks result from the cancellation of the quadratic divergences and no fitting procedure has been performed. Note that the h4 benchmark corresponds to heavy and degenerate A and H+ bosons, with mass ∼650 GeV, while H is even heavier, with mass ∼830 GeV.
It is worth looking at the properties of the heavy neutral Higgs boson H, the partner of the SM-like h boson. The corresponding observables are given in Table IV; here the relative couplings are calculated with respect to the couplings of a would-be SM Higgs boson with the same mass as H. The coupling to the t quark is negative, which is easy to understand by looking at Table I; its absolute value is enhanced as compared to the SM value, and the corresponding ratio varies from 1.1 to 2.30. One observes a huge enhancement in Rγγ: it is from 50 to 153 times larger than the SM one, while at the same time the Zγ decay channel looks modest (0.31-1.44). Also, the total width is similar to the one predicted by the SM: the corresponding ratio varies from 0.3 to 1.09. For a possible search for such a particle, the γγ channel would be the best. The ZZ channel is hopeless, but the H decays to ZA and H+W−, governed by the sin(β − α) coupling, may be useful. Note that all heavy Higgs bosons have masses below 850 GeV, in the energy range currently being probed by the LHC.
VIII. SUMMARY AND CONCLUSION
In this paper, we have investigated the cancellation of the quadratic divergences in the 2HDM, applying positivity conditions and perturbativity constraints. We have chosen the soft Z2 symmetry breaking version of the 2HDM with non-zero vacuum expectation values for both Higgs doublets (Mixed Model) and considered two SM-like scenarios, with a 125 GeV h and a 125 GeV H.
We have chosen 5 benchmarks in agreement with experimental data for the 125 GeV Higgs particle from the LHC and checked that our benchmark points are in agreement with the oblique parameters S, T and U at the 3σ level.
We compare our benchmarks to the experimental constraints for the 2HDM (II) in the tan β versus cos(β − α) plane. All benchmarks are close to or within the allowed 95% CL region; in particular, our benchmark h4 is very close to the best fit point found by ATLAS. We would like to point out that our benchmarks result from the cancellation of the quadratic divergences and no fitting procedure has been performed. Note that the h4 benchmark corresponds to heavy and degenerate A and H+ bosons, with mass ∼650 GeV, while H is even heavier, with mass ∼830 GeV. | 3,668 | 2017-09-21T00:00:00.000 | [
"Physics"
] |
Nanoscale Control of Amyloid Self-Assembly Using Protein Phase Transfer by Host-Guest Chemistry
Amyloid fibrils have recently been highlighted for their diverse applications as functional nanomaterials in modern chemistry. However, tight control to obtain a targeted fibril length with low heterogeneity has not been achieved because of the complicated nature of amyloid fibrillation. Herein, we demonstrate that fibril assemblies can be homogeneously manipulated with desired lengths from ~40 nm to ~10 μm by a phase transfer of amyloid proteins based on host-guest chemistry. We suggest that host-guest interactions with cucurbit[6]uril induce a phase transfer of amyloid proteins (human insulin, human islet amyloid polypeptide, hen egg lysozyme, and amyloid-β 1–40 & 1–42) from the soluble state to the insoluble state when the amount of cucurbit[6]uril exceeds its solubility limit in solution. The phase transfer of the proteins kinetically delays the nucleation of amyloid proteins, while the nuclei formed in the early stage are homogeneously assembled to fibrils. Consequently, supramolecular assemblies of amyloid proteins with heterogeneous kinetics can be controlled by protein phase transfer based on host-guest interactions.
Amyloid-β 1-42 peptide was purchased from AnyGen (Jangheung, Republic of Korea). A protein stock solution was prepared at a concentration of 100 μM in 0.1% FA, and a 50 mM CB[6] stock solution was prepared in 50% FA. We prepared all solutions under acidic conditions because the net charges of amyloid proteins are positive at low pH, and CB[6] preferentially interacts with positively charged molecules. 1 In addition, INS is highly soluble under acidic conditions and is converted from multimeric states (dimer, tetramer, and hexamer) to the monomeric state. 2 Stock solutions were transferred to 4-mL borosilicate glass vials and diluted to a final volume of 1 mL. Protein concentrations were adjusted to 50 μM for INS and LYZ and 10 μM for amyloid-β and human islet amyloid polypeptide to prevent aggregation prior to incubation in the 5% FA solution. All protein solutions were incubated at 50 °C for four days. Agitation was performed at 200 rpm. When we examined CB[6]-mediated fibrillation at a high temperature (60 °C), CB[6] was not effective for controlling the fibrillation process (Supplementary Fig. S15). In contrast, at a low temperature (40 °C), the fibrillation process itself was not promoted. The fibrillation process using CB[6] was best controlled at 50 °C. Polydispersity was calculated using the mathematical definition of PDI described in the Supplementary Text (vide infra).
Thioflavin T (ThT) assays. ThT assays were performed to obtain information about the fibrillation kinetics. A ThT stock solution (1 mg/mL) was prepared. An aliquot of incubated protein solution (25 μL), ThT stock solution (25 μL), and 5% FA (450 μL) were mixed prior to the fluorescence measurements. The excitation wavelength of ThT was set to 452 nm and the emission was scanned from 460 to 490 nm. The emission at 482 nm was used for monitoring the fibrillation kinetics. Since the fluorescence intensity of ThT increases upon complexation with CB[6], the baseline was corrected using a ThT-CB[6] solution without proteins. All data points were normalized using the height of the stationary phase.
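Normalized ThT traces of this kind are commonly summarized by fitting a sigmoidal curve to extract the half-time and lag time. The sketch below assumes the standard Boltzmann sigmoid form; it is illustrative and not a fit function stated in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, y0, ymax, t_half, tau):
    """Sigmoidal ThT fluorescence curve: baseline y0, plateau ymax,
    half-time t_half and apparent growth time constant tau."""
    return y0 + (ymax - y0) / (1.0 + np.exp(-(t - t_half) / tau))

def fit_tht(t, f):
    """Fit a normalized ThT intensity trace f(t); returns (t_half, lag_time)."""
    p0 = [f.min(), f.max(), t[np.argmax(np.diff(f))], (t[-1] - t[0]) / 10]
    (y0, ymax, t_half, tau), _ = curve_fit(boltzmann, t, f, p0=p0)
    lag_time = t_half - 2.0 * tau  # a common operational definition of the lag phase
    return t_half, lag_time
```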
Isothermal titration calorimetry (ITC). ITC experiments were performed using a VP-ITC calorimeter (MicroCal, Worcestershire, UK) to obtain an equilibrium association constant (Ka) for the interaction between INS and CB[6]. INS and CB[6] solutions were freshly prepared and degassed prior to each measurement. The reference cell was filled with 50% FA, and an INS solution in 50% FA (100 µM) was loaded into the ITC cell. A CB[6] solution (1 mM) was injected 30 times through the ITC syringe. The interval between injections was 3 min. The ITC cell was stirred at 502 rpm and maintained at 25 °C. The heats of dilution by CB[6] were subtracted for analysis. All results were treated using the 'single-set-of-sites' model internally stored in Origin 7.0 (associated with the VP-ITC calorimeter).
Ultraviolet-visible spectroscopy. Protein solutions with and without CB[6] were freshly prepared prior to the measurements.
Infrared spectroscopy. Infrared (IR) spectroscopy was performed using a Nicolet iS10 spectrometer (Thermo Scientific, Waltham, MA, USA). A total of 128 spectra for each sample were obtained with a resolution of 4 cm−1, which were averaged to generate a single spectrum. Fibril samples were prepared following the procedure described in the 'fibril sample preparation' section above. Unwashed samples were directly cast onto a silicon wafer and fully dried for a day under air. Washed samples were loaded into an Amicon 0.5-mL 100k centrifugal filter (Millipore, Billerica, MA, USA) and then washed three times with 5% FA to remove the excess CB[6] and non-fibrillar species in the samples 6 before being loaded onto silicon wafers. All IR spectra were normalized using the highest peak intensity in the range of 1500-1900 cm−1.
Solution Small-Angle X-ray Scattering (SAXS). All SAXS measurements were performed at the 4C SAXS II beamline of the Pohang Accelerator Laboratory (PAL) at Pohang University of Science and Technology (POSTECH). The concentration of INS was measured as 0.76 mg/mL in 1% FA. The sample-to-detector distance was set to 2 m, and the temperature was fixed at 20 °C. Each measurement was performed four times using fresh INS samples. The scattering patterns were recorded six times (5 s each, with 1-s intervals). SAXS measurements of standard proteins (trypsin, human serum albumin, and concanavalin A) were performed following the same procedures. After the measurements, the concentrations of the standard proteins were also measured. The radius of gyration (Rg) and the molecular weight (Mw) calibration using the zero-angle scattering intensity (I(0)) were estimated using a procedure from the literature. 3
Supplementary text
Polydispersity index. The polydispersity index (PDI) 4 is the ratio of the mass-average degree of polymerization ($\overline{DP}_w$) to the number-average degree of polymerization ($\overline{DP}_n$), i.e., $PDI = \overline{DP}_w/\overline{DP}_n$. If the polymer is ideally monodisperse, PDI = 1. The PDI value is usually larger than 1 because of the heterogeneous distribution of the polymers. In the present study, the length of the amyloid fibril was adopted as the degree of polymerization because its length was linearly proportional to the number of protein monomers within the fibril. Temperature is crucial for the kinetics of amyloid fibrillation, in that the activation energy for protein folding/unfolding is related to the melting temperature of the proteins (the melting temperature of INS is approximately 60 °C). 8 When we examined CB[6]-mediated fibrillation at a high temperature (60 °C), CB[6] was not effective for controlling the fibrillation process. In contrast, at a low temperature (40 °C), the fibrillation process itself was inhibited. This seems to be because CB[6] cannot kinetically modulate the self-assembly if the nucleation rate is too fast; if the nucleation rate is too slow, the fibrillation process itself is inhibited. | 1,566.6 | 2017-07-18T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Development of Composite, Reinforced, Highly Drug-Loaded Pharmaceutical Printlets Manufactured by Selective Laser Sintering—In Search of Relevant Excipients for Pharmaceutical 3D Printing
3D printing by selective laser sintering (SLS) of high-dose drug delivery systems using pure brittle crystalline active pharmaceutical ingredients (API) is possible but impractical. Currently used pharmaceutical grade excipients, including polymers, are primarily designed for powder compression, ensuring good mechanical properties. Using these excipients for SLS usually leads to poor mechanical properties of printed tablets (printlets). To overcome this issue, composite printlets consisting of sintered carbon-stained polyamide (PA12) and metronidazole (Met) were manufactured by SLS. The printlets were characterized using DSC and IR spectroscopy, together with an assessment of mechanical properties. Functional properties of the printlets, i.e., drug release in USP3 and USP4 apparatus together with flotation assessment, were evaluated. The printlets contained 80 to 90% of Met (therapeutic dose ca. 600 mg), had hardness above 40 N (comparable with compressed tablets) and were of good quality, with an internal porous structure which assured flotation. The thermal stability of the composite material and the identity of its constituents were confirmed. The elastic PA12 mesh maintained the shape and structure of the printlets during drug dissolution and flotation. Laser speed and the addition of a small amount of an osmotic agent influenced drug release while virtually not changing the composition of the printlet; the time to release 80% of Met varied from 0.5 to 5 h. Composite printlets consisting of an elastic insoluble PA12 mesh filled with a high content of crystalline Met were manufactured by 3D SLS printing. Dissolution modification by the addition of an osmotic agent was demonstrated. The study shows the need to define the requirements for excipients dedicated to 3D printing and to search for appropriate materials for this purpose.
Introduction
Application of Floating Drug Delivery Systems (FDDS) is the primary method of increasing gastric residence time [1]. Therefore, FDDS are suitable for the treatment of local gastric conditions. The aims of this study were: (1) to develop a highly drug-loaded, polymer-reinforced composite formulation, using FDDS as a working example; (2) to evaluate the properties of nylon as an inert 3D SLS-printable excipient for Met incorporation; (3) to investigate the extended in vitro drug release behavior and to assess possible degrees of freedom for dissolution tuning.
Materials and Methods
Metronidazole (Hubei Hongyuan Pharmaceutical Technology Co., Ltd. Fengshan, China), commercial, carbon-stained nylon 12 for 3D printing (PA12 Powder, Sintratec AG, Brugg, Switzerland) and sodium chloride (Merck KGaA, Darmstadt, Germany) were used in the study. All other applied materials were of analytical grade.
3D Printing
Initially homogenized through a 160 mesh sieve, powders of fine, pure crystals of Met, PA12 or sodium chloride (optional) were placed volumetrically (without tapping) in a glass measuring cylinder (1 L). The measured volumes were then weighed. The mixtures were prepared by mixing volumetric ratios of components in a polyethylene bag for 5 min. The volumetric and corresponding weight compositions of the printed tablets (printlets) are presented in Table 1. The capsule-shaped templates for printing were prepared in STL format with Autodesk Inventor 2020 (Autodesk Inc., San Rafael, CA, USA). The printlets were 1.8 cm long, and the diameter of the capsules was 1 cm (Figure 1a). SLS was performed using a Sintratec Kit 3D (Sintratec AG, Brugg, Switzerland) 3D printer equipped with a blue laser (2.3 W, 445 nm wavelength), operating under Sintratec Central software (v. 1.2.5, Sintratec AG, Brugg, Switzerland). The initial printing parameters were tuned experimentally, and the following temperature targets were applied: powder surface temperature 115 °C and chamber temperature 100 °C. The thickness of a powder layer was 150 µm, and the distance between two consecutive laser scans in the layer (hatch spacing) was 250 µm for all formulations. Laser speed was in the range of 75-150 mm/s depending on the particular formulation, according to Table 1. Nine printlets per print job were manufactured.
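A useful way to relate the listed process parameters is the volumetric energy density commonly used in SLS process design. The formula below is a general rule of thumb rather than one stated in the paper; it uses the laser power, scan speed, hatch spacing and layer thickness given above.

```python
def energy_density(power_w: float, speed_mm_s: float,
                   hatch_mm: float, layer_mm: float) -> float:
    """Volumetric energy density in J/mm^3: E = P / (v * h * t)."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Parameters from the study: 2.3 W laser, 0.25 mm hatch spacing, 0.15 mm layer.
for speed in (75, 100, 150):
    print(f"{speed} mm/s -> {energy_density(2.3, speed, 0.25, 0.15):.2f} J/mm^3")
# Lower laser speed delivers more energy per unit volume, consistent with
# Formulation D (only 10% v/v PA12) requiring the slowest speed, 75 mm/s.
```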
Scanning Electron Microscopy (SEM)
The powder microstructure and surface of printed samples were observed using a scanning electron microscope JSM-6610LV (JEOL GmbH, Tokyo, Japan). The tests were conducted using a low vacuum with a backscattered electron-composition image (BEC). The accelerating voltage was 15 kV. The chemical analysis was performed with energy dispersion spectroscopy (EDS) using the X-Max detector, equipped with Aztec v.2.1 software (Oxford Instruments, Oxford, UK).
Differential Scanning Calorimetry (DSC)
Differential scanning calorimetry measurements were performed using the DSC822e cell with IntraCooler and STARe software v.8.10 (Mettler-Toledo, Columbus, OH, USA). Samples were weighed as received without preparation. About 10 mg of a studied sample was weighed into the standard aluminum crucible (40 µL). The crucible was hermetically sealed, and the lid was perforated before measurement. The samples were heated from 25 °C to 200 °C at 10 °C/min under a nitrogen atmosphere with 60 mL/min flow. The following temperature program was also used: (1) heating from 25 °C to 200 °C at 10 °C/min; (2) 5 min at 200 °C; (3) cooling from 200 °C to −60 °C at 50 °C/min, then heating from −50 °C to 200 °C at 10 °C/min.
IR Measurements
The infrared spectra were recorded on the Nicolet iS10 FT-IR spectrometer (Thermo Scientific, Waltham, MA, USA) using an ATR sampling module, on diamond crystal, in the spectral range from 4000 to 650 cm −1 and a spectral resolution of 4 cm −1 . For one spectrum, 200 scans were recorded.
Mechanical Parameters of the Printlets and Apparent Density of Printlets
The hardness of the printlets was evaluated according to European Pharmacopoeia 2.9.8 Resistance to crushing General Monograph. The test was carried out on 10 tablets in hardness tester YD-3 (Hinotek Group Ltd., Ningbo, China).
The apparent density of the printlets was calculated by dividing the weight of the printlets of a particular formulation by the volume of the printout. For this purpose, 10 printlets of each formulation were weighed accurately and the apparent density was calculated. Due to the printing precision, the volume of the printlets was treated as constant (1.152 cm³).
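A minimal sketch of the apparent density calculation described above, treating the printlet volume as the constant 1.152 cm³ stated in the text:

```python
def apparent_density(mass_g: float, volume_cm3: float = 1.152) -> float:
    """Apparent density (g/cm^3) of a printlet: mass over printout volume."""
    return mass_g / volume_cm3

# Example with the mean printlet weight of 800 mg reported in this study:
print(apparent_density(0.8))  # ~0.69 g/cm^3, below 1 g/cm^3, so the printlet floats
```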
Drug Release Studies and Buoyancy Observation
Met release studies were performed using USP Apparatus 3 and Apparatus 4, according to The European Pharmacopoeia (Ph. Eur.).
The USP Apparatus 3 dissolution system consisted of a BIO-DIS Reciprocating Cylinder Apparatus G7970-64011 in closed on-line configuration with Agilent 89092-60002 Multichannel Pump paired with an Agilent Cary 8454 UV-Vis spectrophotometer and controlled by UV-Visible GLP Chem Station Software v.B.05.04 (all from Agilent Technologies Inc., Penang, Malaysia). Seven vessels filled with 250 mL of 0.01 M HCl pH 2.0 dissolution medium were placed at 37 ± 0.5 °C. The apparatus was set at 15 dips per minute (DPM).
The release of Met in Apparatus 4 was carried out in a closed-loop, semi-automated flow-through cell dissolution system (SOTAX AG, Aesch, Switzerland). The dissolution system consisted of a SOTAX CE 7 smart unit with a set of seven 22.6 mm internal diameter cells, SOTAX CP 7-35 piston pump, and single-beam Agilent Cary 8454 UV-Vis spectrophotometer controlled by WinSOTAX Plus Dissolution Software v.2.60.4.1. As a dissolution medium, 0.01 M HCl, pH 2.0 in a volume of 250 mL was used.
Results and Discussion
Formulation A (80/20/0/100) was the basic composition. This formulation was modified to analyze the influence of several factors on the properties of the FDDS. The possibility of introducing an osmotic substance into the matrix was tested with Formulation B (80/18/2/100). In the case of Formulation C (80/20/0/150), the influence of laser speed was tested. Formulation D (90/10/0/75) was prepared to test the possibility of decreasing the PA12 content. In this case, the laser speed was reduced to obtain an acceptable printout.
The Characteristics of the Printlets
The technical drawing of the model and a photo of the printlet manufactured by SLS are presented in Figure 1a,b, respectively. All formulations obtained with this technique had regular shapes without damage or chipping. The surface of the printlets was rough, which suggested that the final formulation should be coated or encapsulated to enable swallowing. The mean weight of the tablets was 800 mg (n = 10), and the weight distribution was in the range ±5%. The thermal studies of raw materials applied for printlet preparation (Figure 3) showed that the melting point of Met was about 160 °C, while PA12 had a melting point of about 186 °C. The PA12 particles absorbed the laser energy more effectively as they contained dye, and the sintering process can be described in two steps. (1) Since the glass transition temperature (Tg) of PA12 is about 50 °C [42], it is believed that the laser energy kept the amorphous regions of PA12 in a viscous state. Sintered PA12 formed the mesh (continuous structure). (2) Thermal energy was also transferred to Met crystals; smaller ones (microscopic observations confirmed their presence) could be sintered together with adjacent Met crystals and PA12. Some non-sintered grains were visible in the pore space of Formulations A and D, suggesting that energy transfer was inefficient.
The Effect of Sintering on the Phase Transitions of Printlet Components
DSC and IR techniques were used to evaluate the phase transitions of nylon and Met during sintering. PA12 exists in four crystal forms: α, α′, γ and γ′ [42][43][44][45]. Both the γ and γ′ forms exhibit a single melting peak, at 177.5 °C and 176 °C, respectively. In the case of the α form, two melting effects, at 168.5 °C and 177 °C, are observed in a DSC curve. Studies performed by Dadbakhsh et al. [44] have shown that the crystal structure of the powder changes from the α- to the γ-form after SLS melting. The change is associated with a drastic drop in crystallinity, as observed using wide-angle X-ray scattering. However, Martynková et al. [45] have proved that laser sintering decreases the lamellar crystallite size of PA12, calculated from powder diffractograms, because the partial phase transformation from γ to α may reduce the lamellar crystallite size due to disruptions caused in the γ lamellar stacking continuum.
The thermal studies of the components of the printlets were in agreement with literature data. The DSC curves of Met and PA12 were characterized by single endothermic peaks (Figure 3). Met melted at 159.63 °C (ΔH = −195.61 J/g), which was in agreement with the data published by Agafonova et al. [46]. PA12 melted at 179.79 °C (ΔH = −101.66 J/g), which agreed with the melting characteristics of the γ-form [42,44].
DSC curves of printlets are presented in Figure 4. They differed from the DSC curves of the mixtures by a small additional endothermic effect on the left arm of the main melting effect of Met. The melting temperatures and enthalpies for Formulations A (80/20/0/100) and D (90/10/0/75) were different due to different Met/PA12 ratios (Figure 3). Thermal parameters for Formulation A (80/20/0/100) printlets were 148 °C, −15 J/g for the first effect and 159 °C, −156 J/g for the second effect. For Formulation D, the first effect was observed at 153 °C, with enthalpy −8 J/g, and the second effect was detected at 159 °C, −177 J/g. It was supposed that the first melting effect originated from phase transitions between the γ and α forms of PA12, with accompanying changes in crystallinity [45]. In ATR-IR spectra, Met was easily identified by the intensive bands at 3209 cm−1 (from O-H stretching vibrations), 3100 cm−1 (from C-H stretching vibrations), 1534 cm−1 (from -NO2 asymmetric stretching vibrations), 1472 cm−1 and 1427 cm−1 (from C-H deformation vibrations), 1367 cm−1 (from -NO2 symmetric stretching vibrations), 1186 cm−1 (from C-O stretching vibrations) and 1073 cm−1 (from the ring breathing mode) [47].
PA12 forms were well distinguished by means of IR spectroscopy. The most significant differences between the forms were in the amide II band. The origin of this band in secondary amides is a mixed vibration, between the N-H in-plane bending and the C-N stretching vibration. This band appeared at 1561 cm−1 in γ-PA12, 1557 cm−1 in γ′-PA12, and at 1545 cm−1 in α-PA12 [43].
Nylon PA12 was characterized by the intensive bands at 3290 cm−1 (from N-H stretching vibrations), 2917 cm−1 and 2848 cm−1 (from C-H asymmetric and symmetric stretching vibrations), 1637 cm−1 (from C=O stretching vibrations, amide I band), and a broad band from 1569 to 1540 cm−1 (from the N-H in-plane bending and the C-N stretching vibration, amide II band), indicating the presence of a mixture of PA12 forms [42,44]. In the spectra of both mixtures, mainly Met bands were observed, as well as two characteristic bands at 2971 cm−1 (Figure 6a) and 1637 cm−1 (Figure 6b) from PA12.
The IR spectroscopy confirmed the presence of Met and PA12 in the mixtures and printlets. However, due to the small percentage of PA12, only two weak bands were visible in the IR spectra of the mixtures and tablets (Figure 7). As a result, there were no visible differences or band shifts between the IR spectra of the mixtures and the printlets, indicating the lack of interaction between Met and PA12 in the printlet. The results presented in this section, together with literature data, gave strong premises that a composite material was obtained, i.e., an elastic insoluble PA12 mesh filled with brittle crystalline Met.
Drug Release Studies and Floating Properties
The drug release studies were carried out using two different pharmacopeial methods: reciprocating cylinder (Apparatus 3 Ph. Eur./USP) and flow-through cell (Apparatus 4 Ph. Eur./USP). Combining both methods allowed for a comprehensive assessment of the printlets' properties. In the reciprocating cylinder apparatus (Apparatus 3), the repeating movements of the cylinders during the dips exerted additional effects on the printlets to simulate the mechanical stress in the stomach [48,49], but the flotation of the printlets was not possible to observe. The flow-through cell apparatus was applied to assess the influence of the formulation on drug dissolution in mild conditions, as well as the floating behavior of the printlets.
For all formulations, the mean apparent density was lower than 0.8 g/cm³ (Table 2). The porous structure of the printlets ensured their immediate flotation after the filling of the flow-through cells. Due to the low content of insoluble PA12, the matrices were gradually fragmented and, in most cases, the printlets sank within the first hour of the experiment. However, in the case of Formulation C (80/20/0/150), which had the lowest apparent density, flotation of the matrices was observed for 10 h (Figure 9). The observations confirmed that the porous structure of the printlets allowed their flotation. This feature could be optimized in further studies.
Table 2. Mean apparent density of formulations obtained by SLS (n = 10) and results of buoyancy observations in Apparatus 4.
Drug dissolution in USP Apparatus 4 was relatively slow. The release profiles of Met are presented in Figure 8. The time required to release 80% of the drug substance was longer than 12 h, i.e., 13 h for Formulation C (80/20/0/150), 17 h for Formulation B (80/18/2/100) and 21 h for Formulations A (80/20/0/100) and D (90/10/0/75).
Met release in USP Apparatus 3 was substantially faster than in USP4 for all formulations due to different hydrodynamic conditions (Figure 10). Dissolution from Formulation A (80/20/0/100) was biphasic. Up to 4 h it was almost linear; Formulation A released 80% of the API at 5 h, with full drug release taking longer than 10 h.
Regarding Formulation C (80/20/0/150), the standard composition containing 80% v/v of Met printed with a higher laser speed (150 mm/s) resulted in a much faster Met release: about 80% of the dose was released during the first 30 min, and complete release occurred at 2.5 h. An interesting modification of the 80/20 formulation was replacing 2% of PA12 with the same volume of sodium chloride as an osmotic agent in Formulation B (80/18/2/100). The presence of sodium chloride in the matrix modified Met release from the printlets. They released about 80% of the API within 1.5 h, and total Met release was observed at 2.5 h. While the melting point of NaCl is high (801 °C) compared with the other components, the addition of NaCl is unlikely to affect the sintering process. The explanation of the faster release could be an increase in the osmotic pressure of the solution present in the porous printlet structure and, consequently, more effective water penetration into the matrix. For the formulation containing only 10% v/v of PA12, Formulation D (90/10/0/75), almost 80% of Met was released during the first 2 h. However, in this case, the faster release than for the 80/20 formulation was probably caused by the weaker composite structure due to the lower PA12 content.
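The times to release 80% of the dose quoted in this section are typically read off the measured dissolution profiles by linear interpolation between sampling points; a minimal sketch follows. The profile values in the usage example are hypothetical placeholders, not data from the study.

```python
import numpy as np

def t_release(times_h, released_pct, target: float = 80.0) -> float:
    """Time at which cumulative release first reaches `target` percent,
    by linear interpolation between measured points."""
    t = np.asarray(times_h, dtype=float)
    r = np.asarray(released_pct, dtype=float)
    if not np.any(r >= target):
        raise ValueError("profile never reaches the target release")
    i = int(np.argmax(r >= target))  # first sampling point at/above target
    if i == 0:
        return float(t[0])
    frac = (target - r[i - 1]) / (r[i] - r[i - 1])
    return float(t[i - 1] + frac * (t[i] - t[i - 1]))

# Hypothetical profile shaped like Formulation C in Apparatus 3:
print(t_release([0, 0.25, 0.5, 1.0, 2.5], [0, 55, 81, 95, 100]))  # ~0.49 h
```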
General Discussion
The current study combined the very early concepts in DDS manufacturing by SLS, where a water-insoluble (nylon or polycaprolactone) matrix has been used [50][51][52][53][54][55], and the idea of manufacturing highly drug-loaded printlets with SLS, which was presented for paracetamol printlets by Kulinowski et al. [36]. For example, fine nylon powder has been sintered into matrices, which have been soaked with dye (methylene blue) as a drug model [51]. In another study, biodegradable polycaprolactone matrices loaded with progesterone featured zero-order kinetics [54,55]. The drug release from such DDS has been slow, with full drug release taking from several to dozens of days. Such structures can be useful when a long release time is required (e.g., implants) but are inadequate as oral drug delivery systems. On the other hand, obtaining a product with good mechanical properties by sintering pure, stained API is a very demanding process [36]. Putting a high content of API into an insoluble elastic mesh with low polymer content resulted in a composite material of improved mechanical properties and printability.
The potential of high drug loading, i.e., 80% w/w and more, which could be obtained without sacrificing printability and mechanical properties, cannot be overestimated. The work by Thakkar et al. shows the mechanical properties of the printlets manufactured using pharmaceutical grade polymer and containing 5-10% w/w of API [56]. The best tensile strength obtained in their study is comparable with composite Met/PA12 formulations containing up to 90% v/v (more than 90% w/w) of the brittle API.
Nylons are good candidates as potential pharmaceutical excipients due to their biocompatibility (see the extensive review article by Shakiba et al. [38]), printability and good mechanical properties. At this stage of DDS development, using technical grade (carbon-stained nylon) had no negative impact on the presentation of the general idea of the study and even on further developments. Such an approach was justified as no pharmaceutical-grade nylons are optimized for SLS printing.
Conclusions
Classical oral dosage forms achieve mechanical strength due to powder compression. As a result, excipients for pharmaceutical oral dosage forms are designed primarily for powder compression. Using pharmaceutical-grade excipients frequently leads to poor mechanical characteristics of the printlets manufactured with SLS technology. Therefore, there is an unmet need to seek a new set of dedicated materials matching 3D printing technologies. In the presented study, a composite material was obtained by SLS, consisting of an elastic insoluble PA12 mesh filled with a high content of brittle (crystalline) Met. As the mesh constituted only 10 or 20% of the printlet, a high therapeutic API dose could be incorporated, i.e., ca. 600 mg of Met. Despite the high content of brittle drug substance, good mechanical properties were obtained. Together with the potential gastroretentive properties of the printlets, the obtained drug delivery systems have potential for H. pylori treatment. Differences in thermal characteristics between powder mixtures and printlets were identified by DSC experiments. It was shown that the drug-excipient mixture could generate complicated thermal patterns in terms of DSC curves, which should be accounted for when selecting printing parameters. An additional potential degree of freedom for shaping/tuning drug release profiles was introduced: the addition of a low amount of an osmotic agent, in our case 2% v/v of NaCl, could influence drug dissolution from composite PA12/Met printlets. The developed ideas can be used to design other printlet formulations.
The properties of standard pharmaceutical excipients applied for tableting result from decades of experiments and experience gathered during the optimization of the manufacturing processes of drug delivery systems. The study shows the need to define the requirements for excipients dedicated to 3D printing and to search for appropriate materials for this purpose.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ma15062142/s1, Figure S1: SEM with identified spots, where EDS spectra were recorded, Figure S2: The EDS spectrum recorded at spot 4, Figure S3: The EDS spectrum recorded at spot 5, Figure S4: The EDS spectrum recorded at spot 6, Figure S5: The EDS spectrum recorded at spot 7, Figure S6: The EDS spectrum recorded at spot 8, Figure S7: The EDS spectrum recorded at spot 9, Figure S8: DSC curves from thermal loop of Metronidazole (blue), PA12 (green), mixture (black) and printed tablet (red), Figure S9: DSC curves from thermal loop of Metronidazole, segments 1 and 4-melting, Figure S10: DSC curves from thermal loop of PA12, segments 1 and 4-melting, Figure S11: DSC curves from thermal loop of mixture, segments 1 and 4-melting, Figure S12: DSC curves from thermal loop of printed tablets, segments 1 and 4-melting.
Data Availability Statement: The processed data required to reproduce these findings are available from the authors upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,357.4 | 2022-03-01T00:00:00.000 | [
"Materials Science"
] |
Histone-like nucleoid-structuring protein (H-NS) regulatory role in antibiotic resistance in Acinetobacter baumannii
In the multidrug resistant (MDR) pathogen Acinetobacter baumannii, the global repressor H-NS was shown to modulate the expression of genes involved in pathogenesis and stress response. In addition, H-NS inactivation results in an increased resistance to colistin, a hypermotile phenotype and an altered stress response. To further contribute to the knowledge of this key transcriptional regulator in A. baumannii behavior, we studied the role of H-NS in antimicrobial resistance. Using two well characterized A. baumannii model strains with distinctive resistance profiles and pathogenicity traits (AB5075 and A118), complementary transcriptomic and phenotypic approaches were used to study the role of H-NS in antimicrobial resistance, biofilm and quorum sensing gene expression. An increased expression of genes associated with resistance to β-lactams, aminoglycosides, quinolones, chloramphenicol, trimethoprim and sulfonamides was observed in the Δhns mutant background. Genes coding for efflux pumps were also up-regulated, with the exception of adeFGH. The wild-type transcriptional level was restored in the complemented strain. In addition, the expression of biofilm related genes and biofilm production were lowered when the transcriptional repressor was absent. The quorum network genes aidA, abaI, kar and fadD were up-regulated in Δhns mutant strains. Overall, our results show the complexity and scope of the regulatory network controlled by H-NS (genes involved in antibiotic resistance and persistence). These observations bring us one step closer to understanding the regulatory role of hns to combat A. baumannii infections.
Scientific Reports (2021) 11:18414 | https://doi.org/10.1038/s41598-021-98101-w
H-NS regulates the expression of genes involved in a variety of biological processes, such as transport (e.g., autotransporters, type-VI secretion systems) and metabolism (e.g., the phenylacetic acid degradation pathway) 7. Interestingly, H-NS inactivation increases the level of resistance to colistin 8, an antibiotic currently used as a last resource to treat multidrug resistant (MDR) A. baumannii infections. H-NS is also responsible for protecting against stress; it alleviates the deleterious effects produced by toxic metallo-β-lactamase precursors and protects against DNA-damaging agents such as mitomycin C and levofloxacin 10. A recently described function of H-NS is the repression of the synthesis of the exopolysaccharide poly-N-acetylglucosamine and of the expression of a protein belonging to the CsgG/HfaB family 11. The expression of the A. baumannii H-NS is down-regulated in environments containing human serum albumin (HSA) or HSA-containing fluids 12,13. In sum, the regulatable nature of H-NS strongly suggests that it plays a fundamental role in A. baumannii's responses and adaptation to the conditions within the human body.
In this study, we demonstrate the role of H-NS in regulating expression of genes related to antibiotic resistance, biofilm formation, and quorum sensing in the multidrug-resistant, hypervirulent A. baumannii AB5075 and the susceptible, moderately virulent A. baumannii A118 strains.
Results and discussion
Transcriptome analysis of A. baumannii strain AB5075 and AB5075 Δhns revealed a differential expression profile of 183 genes with an FDR-adjusted P-value < 0.05 and log2 fold change > 1. Among the differentially expressed genes (DEGs), we identified genes associated with motility, biofilm formation, quorum sensing, antibiotic resistance, metabolism, and stress response (Supplementary Table S1). As the aim of the present work is to identify the possible role of H-NS in antibiotic resistance, only those genes directly or indirectly associated with antibiotic resistance were examined (Supplementary Table S2).
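A minimal sketch of this selection criterion is given below; the table layout, column names and values are illustrative, not the authors' data, and the fold-change threshold is applied in absolute value so that down-regulated genes also qualify:

```python
import pandas as pd

# Toy differential-expression table (invented values).
de_table = pd.DataFrame({
    "gene":   ["csuE", "abaI", "adeB", "recA"],
    "log2FC": [-2.3,    1.8,    1.2,    0.1],
    "padj":   [0.001,   0.004,  0.030,  0.900],
})

# DEG criterion stated above: FDR-adjusted P < 0.05 and |log2FC| > 1.
degs = de_table[(de_table["padj"] < 0.05) & (de_table["log2FC"].abs() > 1)]
print(degs)
```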
H-NS plays a role in modulating the expression of antibiotic resistance-associated genes. Analysis of the RNA-seq data showed that the expression of genes related to resistance to β-lactams, aminoglycosides, quinolones, chloramphenicol, trimethoprim, sulfonamides, and colistin was up-regulated in the AB5075 Δhns strain (Fig. 1A). On the other hand, genes related to tetracycline resistance were down-regulated in the Δhns strain compared to the parental strain. To confirm the RNA-seq results, transcript analysis of A. baumannii AB5075, AB5075 Δhns and AB5075 Δhns pMBLe-hns cells was performed, showing enhanced expression of seven out of eight genes (bla OXA-23, bla OXA-51-like, bla ADC, bla GES-14, carO, pbp1, and advA) associated with β-lactam resistance in the AB5075 Δhns mutant strain (Fig. 1B and Supplementary Fig. S1A). In addition, the increased expression of all the selected genes associated with aminoglycoside, quinolone, chloramphenicol, trimethoprim and sulfonamide resistance in the AB5075 Δhns strain was confirmed by quantitative RT-PCR (qRT-PCR) assays (Fig. 1B and Supplementary Fig. S1A). Consistently, the mutant strain containing the plasmid pMBLe-hns, which expresses a wild-type copy of hns under the control of its own promoter, restored the expression of all resistance-related genes tested to wild-type levels (Fig. 1B). These results provide evidence indicating that H-NS is involved in modulating the expression of resistance-related genes.
In order to further expand our knowledge of antibiotic resistance regulation, qRT-PCR experiments were carried out using the susceptible and less virulent A. baumannii A118 strain. This isolate was selected since it is a representative A. baumannii strain that we and other authors have used as a model strain in previous works [11][12][13][14][15][16][17][18][19][20][21][22][23][24]. This strain also belongs to a different lineage than AB5075: A118 is a sporadic clone, while AB5075 is a representative strain of one of the most prevalent clones (GC1). We observed that four out of five genes associated with β-lactam resistance (bla OXA-51-like, pbp1, pbp3, and carO), as well as the fluoroquinolone resistance gene fusA, were up-regulated (Supplementary Fig. S1B).
To test the effect of H-NS on the antibiotic susceptibility profile of A. baumannii strains, disk diffusion and minimal inhibitory concentration (MIC) measurements were performed (Supplementary Tables S3 and S4 and Supplementary Fig. S2). In the disk diffusion assays, an increase in the inhibition halo of 4 mm or more was observed for meropenem, imipenem, amikacin and gentamicin for both strains, considered significant according to CLSI breakpoints (Supplementary Table S3). MICs revealed twofold, threefold, threefold and fivefold decreases in AB5075 Δhns for meropenem, imipenem, amikacin and gentamicin, respectively (Supplementary Table S4 and Supplementary Fig. S2), changing the classification from resistant to susceptible for gentamicin and amikacin, and from resistant to intermediate for imipenem. In the highly susceptible A118 strain, slight changes in MIC values were observed in the Δhns strain for the carbapenem antibiotics (Supplementary Table S4). However, significant fold decreases were observed for the amikacin and gentamicin MICs (Supplementary Table S4). Changes in MIC were not observed for ceftazidime, ciprofloxacin, norfloxacin, and colistin for either strain (Supplementary Table S4).
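The MIC comparisons above can be sketched as a simple ratio-plus-classification computation; note that the breakpoints below are placeholders rather than the CLSI values used in the paper, and that the paper's "twofold/threefold" decreases may count doubling-dilution steps rather than plain ratios:

```python
# Illustrative MIC comparison between wild type and Δhns mutant.
# MICs (µg/mL) and breakpoints are invented for illustration.

def fold_decrease(mic_wt: float, mic_mut: float) -> float:
    """How many times lower the mutant MIC is than the wild-type MIC."""
    return mic_wt / mic_mut

def classify(mic: float, s_breakpoint: float, r_breakpoint: float) -> str:
    if mic <= s_breakpoint:
        return "susceptible"
    if mic >= r_breakpoint:
        return "resistant"
    return "intermediate"

mics = {  # drug: (wild-type MIC, Δhns MIC, S breakpoint, R breakpoint)
    "gentamicin": (32.0, 2.0, 4.0, 16.0),
    "imipenem":   (16.0, 4.0, 2.0, 8.0),
}
for drug, (wt, mut, s_bp, r_bp) in mics.items():
    print(f"{drug}: {fold_decrease(wt, mut):.0f}x lower MIC, "
          f"{classify(wt, s_bp, r_bp)} -> {classify(mut, s_bp, r_bp)}")
```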
Overall, an increase in expression of the analyzed antibiotic resistance genes was observed in the Δhns mutant strains, suggesting a pleiotropic modulation of gene expression. The increased expression of some genes cannot be correlated with the respective phenotype, such as the increased expression of aphA6 and aadB despite the increased susceptibility to amikacin and gentamicin, respectively. However, because antibiotic resistance mechanisms are multifactorial and other uncharacterized or unknown genes might be involved, H-NS could be only partially responsible for the observed phenotypes.
A previous analysis using metallo-β-lactamases (MBLs) and DNA-damaging agents as stressors 9 showed that the AB5075 Δhns strain has a diminished ability to overcome stress. Moreover, imipenem MICs of strains expressing the different MBLs showed a detrimental impact on the antibiotic resistance phenotype in the Δhns background 9. In addition, the increased susceptibility to imipenem observed in both mutant strains can be explained in part by the significant increase in expression of carO (Supplementary Table S4, Fig. 1 and Supplementary Fig. S1); a correlation between the expression of CarO and outer membrane permeation to imipenem in A. baumannii has been reported 25.
[Figure 1 caption (fragment): qRT-PCR of genes associated with antibiotic resistance in the AB5075, AB5075 Δhns and AB5075 Δhns pMBLe-hns strains; expression levels and fold changes determined by the double ΔCt analysis; at least three independent samples with three technical replicates each; statistical significance by ANOVA followed by Tukey's multiple comparison test (*P < 0.05, **P < 0.01, ***P < 0.001); different patterns indicate different groups of resistance-associated genes; figure prepared with GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA, https://www.graphpad.com/).]
The reduced MICs observed in the present work for carbapenems and aminoglycosides in the mutant background could be related to the role of H-NS in the stress response. In accordance with this result, RNA-seq analysis of the AB5075 mutant strain showed the regulation of genes that encode proteins involved in counteracting stressful conditions, such as the cspE, cspG, ywrO, hsp40, slt, mltB, katE, soxR and soxS genes (Supplementary Table S1). Diverse stress conditions, such as oxidative stress, mediated by global regulatory genes like SoxR (a global repressor protein involved in multidrug resistance in Gram-negative bacteria), cause the overexpression of efflux pumps, contributing to the multidrug-resistance phenotype 27. A recent in vitro study showed that under consecutive imipenem stress, A. baumannii evolved the ability to reduce its susceptibility to various antimicrobials, including carbapenems, through overproduction of an RND-superfamily efflux pump 28. In other Gram-negative bacteria, such as Vibrio cholerae, it was shown that the loss of H-NS induces endogenous envelope/oxidative stress 29.
To further determine whether changes at the transcriptional level of efflux pumps could contribute to the altered antibiotic resistance phenotype in the Δhns mutant background, the transcriptomic data of AB5075 Δhns vs. AB5075 were examined. Our results showed that the expression of genes encoding AdeABC and AdeIJK, as well as the efflux pump EmrAB, was enhanced in AB5075 Δhns; however, the expression of the genes encoding AdeFGH was decreased (Fig. 2A and Supplementary Table S2). qRT-PCR experiments further confirmed the up-regulation of efflux pump genes in both A. baumannii Δhns mutant strains when compared with the otherwise isogenic wild-type strains (Fig. 2B and Supplementary Fig. S3). Complementation in trans restored the expression levels of these genes in the AB5075 strain. The up-regulation of efflux pumps can be related to the wide regulatory effects of H-NS, causing transcriptional changes in genes involved in A. baumannii's adaptive response.
We observed a wide transcriptional response in the Δhns mutant, pointing to the central role of H-NS in modulating the expression/repression of a wide variety of antibiotic resistance genes. Our results confirm the pleiotropic role of H-NS as a transcriptional regulator that plays a critical part in reprogramming the A. baumannii transcriptome with respect to antibiotic resistance and stress response.
[Figure 2 caption: Genetic analysis of efflux pump coding genes. (A) Heatmap outlining the differential expression of efflux pump genes in A. baumannii AB5075 Δhns vs. the parental strain; asterisks mark DEGs (adjusted P < 0.05, log2 fold change > 1). (B) qRT-PCR of efflux pump genes in AB5075, AB5075 Δhns and AB5075 Δhns pMBLe-hns; expression levels and fold changes determined by the double ΔCt analysis; at least three independent samples with three technical replicates each; significance by ANOVA followed by Tukey's multiple comparison test (*P < 0.05, **P < 0.01, ***P < 0.001); prepared with GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA, https://www.graphpad.com/).]
Biofilm and quorum sensing networks are also affected by H-NS. The transcriptomic comparison of A. baumannii AB5075 Δhns against the parental strain showed that expression of the csu operon, which is essential for A. baumannii biofilm formation 30, was significantly down-regulated (Fig. 3A and Supplementary Table S2). The differential expression of csuA/B and csuE was further confirmed to be down-regulated by qRT-PCR analysis in AB5075 Δhns (Fig. 3B), while AB5075 Δhns pMBLe-hns restored the wild-type level of expression of both csu genes analyzed. In addition, the changes in expression of these genes correlated with the associated phenotype, since biofilm formation was significantly lower in AB5075 Δhns with respect to the parental strain (Fig. 3C). The rescued strain showed a slight increment in biofilm formation compared to the Δhns strain; however, statistically significant differences were not observed between the mutant and complemented strains (Fig. 3C). Decreased biofilm formation was also seen in A118 Δhns, and a decrease in the expression level of csuE was also observed in the mutant background (Supplementary Fig. S4A,B). Levels of expression of ompA, which encodes the OmpA protein, partially involved in abiotic biofilm formation and known to be a virulence factor 31, were up-regulated in both mutant strains (Fig. 3B and Supplementary Fig. S4A). It is known that within a biofilm, microorganisms reside and communicate with each other through quorum sensing mechanisms 32. AB5075 transcriptomic data analysis revealed an up-regulation of the lactonase-encoding aidA gene and of genes encoding enzymes that participate in acyl homoserine lactone (AHL) synthesis (kar, acdA, fadD, abaI, abaR and abaM) 33,34 in the mutant strain compared to the wild-type strain (Fig. 3A and Supplementary Table S2). The quorum network genes aidA, abaI, kar, and fadD were confirmed to be up-regulated in the AB5075 Δhns mutant strain by qRT-PCR, and the levels of expression were restored in the hns-complemented strain (Fig. 3B). Moreover, the expression of the aidA and abaI genes was also up-regulated in the susceptible model strain A118 (Supplementary Fig. S4A). In addition, we assessed the levels of AHL using Agrobacterium tumefaciens-based solid plate assays 35,36.
The supernatants from cultures of AB5075 and AB5075 Δhns produced similar intensities of color, likely caused by an increase in both lactonase activity (quorum quenching) and AHL synthesis (quorum sensing) in the mutant strain (Fig. 3D). The lower AHL production observed in AB5075 Δhns pMBLe-hns can be explained by the decreased expression of genes associated with AHL synthesis (abaI, kar, and fadD) and the higher expression of the lactonase responsible for quorum-quenching activity (aidA); therefore, in this strain the degradation of AHL is more evident. Finally, AHL analysis of A118 Δhns showed a lower intensity of blue color compared to the A118 wild-type strain (Supplementary Fig. S4C). This result suggests the production of a lactonase activity that is able to counteract the levels of long-chain AHLs synthesized by this strain under the assayed conditions.
The genome-wide role of H-NS in A. baumannii was first studied in a hypermotile derivative of the reference strain ATCC 17978 7. Transcriptomic analysis of ATCC 17978 Δhns (strain 17978hm) identified 152 DEGs (91 genes up-regulated and 61 down-regulated) when compared to the parental strain. The most striking up-regulation was observed in the quorum sensing genes encoding AbaI and the regulator AbaR. On the other hand, phenotypic and genetic analysis of biofilm coding genes showed no major differences between the wild-type and mutant strains 7. Our transcriptomic and phenotypic data, compared to those of ATCC 17978, demonstrate that the components of the H-NS regulon may vary in a strain-specific manner, consistent with the fact that different A. baumannii isolates can display significant variations in phenotypes known to be controlled by H-NS (i.e., cell surface hydrophobicity, adherence and motility) 37.
Conclusion
Our present work further contributes to deciphering the role of the H-NS global regulator in the control of transcription of antibiotic resistance-related mechanisms. Using distinct A. baumannii model strains, an extensive H-NS-controlled transcriptional response was observed. Our results reinforce the complexity of the regulatory network controlled by the transcriptional repressor H-NS, which regulates the expression of genes involved in antibiotic resistance and persistence. Overall, H-NS exerts a pleiotropic role in A. baumannii, influencing the expression of a wide variety of genes related to antibiotic resistance and stress response, suggesting that it is a good candidate for the development of alternative treatment approaches to combat A. baumannii infections.
Materials and methods
Bacterial strains. The multidrug-resistant and hypervirulent AB5075 strain and its isogenic hns mutant (AB5075 Δhns) were used in the present study. AB5075 Δhns was obtained from the A. baumannii AB5075 transposon mutant library 38. In addition, to extend the analysis of the role of H-NS in the response of A. baumannii, the susceptible model strain A118 and its isogenic mutant A118 Δhns were used. A. baumannii A118 Δhns was obtained as previously described 39. Briefly, chromosomal DNA of AB5075 Δhns (hns::TET R) was used to naturally transform the A118 strain, and mutant cells were selected on Luria-Bertani (LB) agar plates supplemented with 30 µg/mL tetracycline (TET). The A118 Δhns mutant strain was confirmed by automated DNA sequencing. Growth curves confirmed no differences in growth between the mutant and complemented strains with respect to the parental strains (Supplementary Fig. S5).
Transcriptomic analysis and qRT-PCR. Analysis of the quality of the Illumina reads, trimming of low-quality bases and removal of Illumina adapters were performed as described previously 17. Reads were aligned to the genome of A. baumannii AB5075 using the Burrows-Wheeler Alignment (BWA) software (v0.7.17) and visualized using the Integrative Genomics Viewer (IGV). Read counts per gene were calculated using FeatureCounts 40, and differential expression analysis was performed using DESeq2. Differentially expressed genes (DEGs) were defined as those displaying an FDR-adjusted P-value < 0.05 and log2 fold change > 1. The RNA-seq data generated as a result of this work have been deposited to SRA with the accession GSE167117. qRT-PCR was performed to confirm and analyze the expression of antibiotic resistance-associated genes. cDNA was prepared using the iScript Reverse Transcription Supermix for qRT-PCR (BioRad, Hercules, CA, USA) and quantitative PCR was performed using iQ™ SYBR Green Supermix (BioRad, Hercules, CA, USA) per the manufacturer's recommendations. Results were analyzed using the 2^-ΔΔCt method, in which recA 13 acted as the control gene (a worked sketch of this calculation follows the biofilm assay description below). Experiments were performed in technical and biological triplicates. The results were statistically analyzed (t test) using GraphPad Prism (GraphPad software, San Diego, CA, USA). A P-value < 0.05 was considered significant.
Biofilm assays. Biofilm assays were performed as described in previous work 13. A. baumannii cells were cultured in LB broth with agitation for 18 h at 37 °C. Overnight cultures were centrifuged at 5000 rpm at 4 °C for 5 min. Pelleted cells were washed twice with 1X phosphate-buffered saline (PBS) and then re-suspended in 1X PBS. The optical density at 600 nm (OD600) of each culture was adjusted to 0.9-1.1; the samples were then vortexed and diluted 1:100 in LB broth in technical triplicates, and the inoculated 96-well polystyrene microtiter plate was incubated at 37 °C for 24 h without agitation. The following day, the OD600 (ODG) was measured using a microplate reader (SpectraMax M3 microplate/cuvette reader with SoftMax Pro v6 software) to determine the total biomass. Wells were emptied using a vacuum pipette, washed three times with 1X PBS and stained with 1% crystal violet (CV) for 15 min. Excess CV was removed by washing three more times with 1X PBS, and the biofilm-associated CV was solubilized in ethanol acetate (80:20) for 30 min. The OD580 (ODB) was measured using a microplate reader and results were reported as the ratio of biofilm to total biomass (ODB/ODG). Experiments were performed in triplicate, with at least three technical replicates per biological replicate.
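A minimal worked sketch of the 2^-ΔΔCt calculation referenced above, with recA as the reference gene; the Ct values are invented for illustration and are not data from the study:

```python
# Relative expression by the 2^-ΔΔCt method, reference gene recA.

def ddct_fold_change(ct_target_test: float, ct_ref_test: float,
                     ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of a target gene in a test sample vs a control sample."""
    dct_test = ct_target_test - ct_ref_test   # normalize to recA (test)
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize to recA (control)
    return 2.0 ** (-(dct_test - dct_ctrl))

# e.g. a hypothetical target gene in the Δhns mutant vs wild type:
fc = ddct_fold_change(ct_target_test=22.0, ct_ref_test=18.0,
                      ct_target_ctrl=25.0, ct_ref_ctrl=18.5)
print(f"fold change ~ {fc:.1f}")  # ~5.7-fold up-regulated
```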
N-acyl homoserine lactone (AHL) detection. Agrobacterium tumefaciens-based solid plate assays were carried out to detect N-acyl homoserine lactone (AHL) production 42, as described in previous work 36. Initially, 500 µL of the homogenate were loaded into a central well of 0.7% LB agar plates supplemented with 40 µg of 5-bromo-3-indolyl-β-d-galactopyranoside (X-Gal) per mL, together with 250 µL (OD = 2.5) of an overnight culture of the A. tumefaciens biosensor. The presence of AHL was determined by the development of blue coloring 35. As a positive control, 100 µL of N-decanoyl-DL-homoserine lactone (C10-AHL) at 12.5 mg/mL was utilized. Quantification of 5,5′-dibromo-4,4′-dichloro-indigo production under the different conditions was determined using ImageJ software (NIH) by measuring the intensity of each complete plate and subtracting the intensity measured in the negative control. The values were normalized to the positive control, which received the arbitrary value of 100. Experiments were performed in triplicate, with at least three technical replicates per biological replicate.
[Figure 3 caption: Genetic and phenotypic analysis of biofilm and quorum sensing coding genes. (A) Heatmap of the differential expression of biofilm and quorum sensing genes in A. baumannii AB5075 Δhns vs. the parental strain; asterisks mark DEGs (adjusted P < 0.05, log2 fold change > 1). (B) qRT-PCR of biofilm and quorum sensing genes in AB5075, AB5075 Δhns and AB5075 Δhns pMBLe-hns; fold changes determined by the double ΔCt analysis; significance by ANOVA followed by Tukey's multiple comparison test (*P < 0.05, **P < 0.01, ***P < 0.001). (C) Biofilm assays in AB5075, AB5075 Δhns and AB5075 Δhns pMBLe-hns, represented by OD580/OD600; statistical analysis by t test, P < 0.05 considered significant. (D) A. tumefaciens agar plate assay for AHL detection; AHL indicated by development of blue color; 5,5′-dibromo-4,4′-dichloro-indigo estimated as a percentage relative to the C10-AHL standard, measured with ImageJ (NIH); mean ± SD reported. Prepared with GraphPad Prism 9 (GraphPad Software, San Diego, CA, USA, https://www.graphpad.com/).]
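The plate-intensity normalization described above reduces to a single linear rescaling; a minimal sketch follows (the intensity values are hypothetical ImageJ readouts, not measurements from the paper):

```python
# Normalize plate intensities: subtract the negative control and scale
# so the C10-AHL positive control equals the arbitrary value 100.

def normalize_ahl(intensity: float, negative: float, positive: float) -> float:
    return 100.0 * (intensity - negative) / (positive - negative)

negative, positive = 12.0, 180.0  # hypothetical ImageJ plate intensities
for strain, raw in [("AB5075", 150.0), ("AB5075 dhns", 145.0)]:
    print(strain, round(normalize_ahl(raw, negative, positive), 1))
```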
Growth of A. baumannii strains.
To assess the growth of the A. baumannii A118 and AB5075 wild-type and derivative strains used in this work, 1/100 dilutions of overnight cultures grown in LB were inoculated in fresh LB.
Cultures were subsequently grown with agitation at 37 °C. Optical density at 600 nm was measured using a microplate reader (SpectraMax M3 microplate/cuvette reader with SoftMax Pro v6 software).
Statistical analysis.
Experiments performed at least in triplicate were statistically analyzed by ANOVA followed by Tukey's multiple comparison test using GraphPad Prism (GraphPad software, San Diego, CA, USA). A P-value < 0.05 was considered significant. All procedures performed in this study were in accordance with the CSUF Institutional Biosafety Committee approval plan (DBH117-01) and comply with NIH, CDC, OSHA and other environmental and occupational regulations.
"Medicine",
"Biology"
] |
Participation and receptivity in the art museum: a phenomenological exposition
There is a powerful trend in museums today of asking visitors to participate in the exhibitions, co-create content, and to be active and engage with one another in the museum space. While welcoming the participatory agenda as an initiative of democratizing art museums, we argue in this paper that the rise of the participatory agenda also redefines the purpose of the art museum in a way that risks overlooking the kinds of experiences people undergo in art museums. Based on qualitative and phenomenologically inspired interviews with museum visitors, we present a sketch of a class of aesthetic experiences that ought to be taken into consideration in curatorial practices. Developing a picture of the phenomenology of aesthetic experience, we argue that such experiences should be taken into account when considering the question of the purpose of the art museum.
INTRODUCTION
There is a powerful trend in museums today asking visitors to participate in the exhibitions, co-create content, and to be active and engage with one another in the museum space (Black 2012; Eriksson et al. 2019; McSweeney and Kavanagh 2016; Simon 2010, 2016). This "participatory agenda" is a response and solution to challenges facing art museums, the most important one being that they are still today mostly visited by the well-educated public (Kulturstyrelsen 2017). As early as 1969, Pierre Bourdieu and Alain Darbel demonstrated in their study The Love of Art that visits to the art museum are closely connected to one's educational and cultural background (Bourdieu and Darbel 1990). This issue persists today across the Western world and art museums are still struggling: a large percentage of the population does not consider the art museum a place where they belong or feel comfortable (DCMS 2018; Kulturstyrelsen 2017; Simon 2016), not least due to its alienating rules and conventions (Duncan 1995; Samis and Michaelson 2016). Therefore, the critique of the art museum as elitist is still as relevant as ever. While welcoming the participatory agenda as an important initiative for democratizing art museums and making them more inclusive, we argue in this paper that the rise of the participatory agenda also redefines the purpose of the art museum in a way that risks overlooking the various kinds of experiences people undergo. While art has been known to challenge our perceptual capacities (see e.g. Arnheim 1954/1974, 1969/2004), provoke our moral sensibilities (Jørgensen 2003, 2006) and lead us to reflect upon societal and political matters (Adorno 1970/1997; Rancière 2004/2013; Roald and Køppe 2015), it also has a power to evoke intense, existential, meaningful, and even life-changing experiences. The nature of these specific experiences can be studied phenomenologically and psychologically, which in turn reveals that their significance for human flourishing cannot be overstated. The participatory agenda, however, does not seem to embrace this type of aesthetic experience. In fact, we will argue that the very nature of the participatory approach to museums is at odds with certain types of aesthetic and existentially laden experiences. Based on thorough qualitative and phenomenologically inspired interviews with museum visitors, we present a sketch of a class of aesthetic experiences that ought to be taken into consideration in curatorial practices. In other words, we develop a picture of the phenomenology of aesthetic experience and argue that the existence of such experiences should inform the question of the purpose of the art museum in the same way as the participatory agenda.
The paper is structured as follows: we begin with a presentation of Nina Simon's conception of the participatory cultural institution and show that considerations of what kinds of experiences occur in the art museum are largely absent. We in turn provide such considerations ourselves in three steps: firstly, by briefly accounting for the methodology of phenomenologically inspired interviews; secondly, by presenting excerpts of some of the interviews we have conducted in art museums; thirdly and finally, by identifying the central experiential structures within these interviews in dialogue with an account of aesthetic experience from the philosopher Mikel Dufrenne. This leads us to propose that the possibility of having aesthetic, life-changing experiences in the museum should play a greater role in informing curatorial practices in particular, and in questioning the purpose of the art museum in general.
THE PARTICIPATORY MUSEUM
As an authority in the participatory trend, Nina Simon argues that cultural institutions should become more participatory and involve their audiences in co-creation and encourage different forms of expression. In The Participatory Museum she defines a participatory cultural institution as: a place where visitors can create, share, and connect with each other around content.
Create means that visitors contribute their own ideas, objects, and creative expression to the institution and to each other. Share means that people discuss, take home, remix, and redistribute both what they see and what they make during their visit. Connect means that visitors socialize with other people-staff and visitors-who share their particular interests.
Around content means that visitors' conversations and creations focus on the evidence, objects, and ideas most important to the institution in question. (Simon 2010, III) Today the museum is reconceived as a social space in which our focus is not only on the objects on display, but also on personal expression and on sharing with others in the context of, or around, content. Overall one can say, with the words of the museologist Steven Weil, that the museum has gone from "being about something to being about someone" (Weil 1999), and one could add, "to being with someone", as participatory practices become ever more fundamental to museum work (Rung 2018). The art institution is not only a place where visitors can perceive art, but also a site where they discuss, contribute and create. For instance, exhibitions are often co-created with users, and interpretative material is generated on the basis of a diverse range of dialogical and interactional practices such as user feedback, games, discussion sessions and so on (Black 2012; Eriksson et al. 2019; McSweeney and Kavanagh 2016; Simon 2010).
There are many reasons why the participatory agenda has developed into one of the key strategies of the museum today. The educational role of the museum has, since the birth of museums in the 1800s, been part of the rationale on which the museum is built (Bennett 1995). Learning theory has therefore played an important role in museological theory. In particular, social constructivism and situated learning have, for the past decades, challenged the traditional transmission model of learning and have explained how social and active participation stimulates learning (Bakhtin 1981; Dewey 2007; Hein 1998; Wenger and Lave 2003). In this regard, John Dewey's (2007) concept of learning as socially situated, as well as his thoughts about inquiry-based education, remains central. Incorporating these thoughts into the interpretative and educational practices of the museum has been a way of professionalizing museum work and of developing more meaningful and educational experiences for museum visitors. However, these perspectives are not the only motivation behind the participatory agenda: the Danish cultural theorist Anne Scott Sørensen explains how participation within the field of cultural politics has been described as an "obscene blend of an economic (liberal), a social-integrative (corporate) and a political (democratic) rhetoric" (Sørensen 2015, 2). She argues that the participatory agenda is politically motivated because participation is thought to be what cultural institutions need in order to engage a larger proportion of the public and thereby generate income to become financially sustainable. At the same time, Sørensen maintains, it is also argued that participation enables a broader and more diverse user group, thus introducing new audiences to cultural life (Sørensen 2015, 2). Nina Simon's participatory proposal echoes these issues, as it is meant to respond to the following two problems: (1) there aren't enough visitors in the art institutions, and (2) art institutions are irrelevant to many social groups. Arguing for the first problem, she presents a 2009 report from the National Endowment for the Arts showing that attendance at art institutions was declining (Simon 2010, I). In a more recent European context, however, the general trend is one of visitor increase. In Denmark, for instance, Louisiana, AROS, and the National Gallery of Art (SMK) have celebrated consecutive records in visitor numbers over the last decade (Danmarks Statistik 2015a). Visitor numbers, however, do play an increasingly important role, as the continuous decrease of state funding demands that museums generate a higher percentage of their income through ticket sales. Simon is therefore right to argue that art museums need to attract more visitors. However, in the Danish case this is not because visitor numbers are declining, but because attracting visitors is increasingly central to meeting budgets.
The second problem, that art museums are irrelevant to many social groups, also holds true in Europe. Evidence from both Denmark and the UK shows that museum visitors are mainly white and well-educated. In 2017, 47% of the visitors to SMK belonged to the eight percent of Danes holding a lengthy college education (Danmarks Statistik 2015b; DCMS 2018; Kulturstyrelsen 2017). Simon addresses such issues in her recent book, The Art of Relevance (2016), which provides a detailed analysis of this problem and its possible solutions. One of these leads us back to the idea of participation. Remember Simon's premise that increased participation is a way to address the problem of the museum's narrow social appeal. She claims that: "Visitors expect access to a broad spectrum of information sources and cultural perspectives. They expect the ability to respond and be taken seriously. They expect the ability to discuss, share, and remix what they consume. When people can actively participate with cultural institutions, those places become central to cultural and community life." (2010, II) It may be that this approach can make art museums more relevant for both new and old user groups and in turn "de-elitesize" the art museum. However, Simon does not mention the perspective of the kinds of aesthetic experiences that can arise in art museums (or their relation to the participatory museum), perhaps because she refers mainly to libraries (Biblioteek Haarlem, Simon 2010, 7), history and science centers (Minnesota Historical Society's History Center, ibid.; Ontario Science Centre, ibid., 86; Anne Frank Museum, ibid., 92), nature centers or natural parks (The Wild Center, ibid., 14; Yellowstone National Park, 2016, 61), as well as digital, social technologies ("social technographics", 2010, 8; YouTube, ibid., 10). It is evident how and why participation in such places will be of mutual benefit, but while art museums share characteristics with these institutions, they also differ from them.
PARTICIPATION AND SUBJECTIVITY IN THE MUSEUM
Concepts such as experience, subjectivity and aesthetics traditionally belong to the domain of philosophy, phenomenology and psychology. Within psychology, different theories deal with aesthetic experience, relying on different fundamental assumptions (see Allesch 2006; Funch 1997; Roald 2015; Roald and Køppe 2015 for overviews). However, only the phenomenological traditions examine how aesthetic experience appears to the subject herself and focus on describing this appearance. Here, Mikel Dufrenne has provided the most detailed account from a theoretical point of view, while Clark (2006), Funch (1997) and Roald (2015) have begun the work of describing how experiences with visual art appear empirically. Still, as Cupchik (2016) argues, descriptions of lived experiences with art are to a great extent missing. Apart from "experience", Simon does not consider such terms in her presentation of the participatory museum. The kinds of art institutions that Simon refers to all display different kinds of objects, but from the perspective of aesthetic phenomenology (Bertram 2015; Dufrenne 1973; Roald 2015), encountering a piece of art is not first and foremost the mere seeing of an object, but rather an interaction with another expressive quasi-subject (Dufrenne 1973, 393) through which one can acquire a different relationship to oneself (e.g. Clark 2006; Funch 1997; Roald 2015). In other words, an aesthetic experience concerns a subjective, or intersubjective, process and not an objective property; better yet, it is a relation of meaning encompassing both subject and object. Henceforth, we call this emotionally and existentially heightened experience "aesthetic experience". 1 For Simon, however, museum content is "consumed" (2010, 2), experience is "designed" (ibid., 25), and museum visitors are "users" (ibid., 2). This perspective does not consider the experience of museum visitors from their first-person perspective, and we present the following phenomenological analysis to counter-balance the participatory agenda.
PHENOMENOLOGICALLY ORIENTED QUALITATIVE INTERVIEWS: METHODOLOGY
Before engaging the phenomenology of aesthetic experience, we here present the method used to generate the data analysed to ground this phenomenology. Since their inception, fields such as anthropology, ethnography and psychology have been engaged in the question of how to reliably interview others for the purposes of theory construction. Within the field of psychology, inspired by Edmund Husserl, Amedeo Giorgi (e.g. 1975; 2009) was the first to develop a methodology in which such interviews could target the structure of lived experience. Carl Rogers (e.g. 1945; 1961) and Steinar Kvale (1996) also put down foundational stones for phenomenologically oriented interviewing, and since then there has been a proliferation of phenomenologically inspired interview techniques, especially as applied in healthcare and therapy (Englander 2019; Van Manen 1990; Smith, Flowers, and Larkin 2009). The use of such methods has recently gained great traction in phenomenology and cognitive science with the invention of neurophenomenology (Varela 1996) and microphenomenology (Petitmengin et al. 2019). These various approaches discuss, also among themselves and against each other (Van Manen 2017; Smith 2018; Zahavi 2018; Englander 2019), the kinds of questions science is generally concerned with: how exactly is the method implemented? Are its results reliable? Are they reproducible? How does the inclusion of experience in our ontology and metaphysics affect our conceptualization of nature and science? To discuss such questions is far beyond the scope of the current paper; they are mentioned only to illustrate that working with knowledge and data in the second person (Høffding and Martiny 2016) through interviews cannot be considered arbitrary or unscientific, contrary to the claims of more naturalistically inclined researchers (Schwitzgebel 2002). Instead of trying to avoid subjectivity in favour of objectivity, in phenomenologically informed research we foreground our subjectivity and use it together with logical reasoning in order to reach valid and general knowledge claims.
The interviews that inform this paper were conducted in keeping with best practices from phenomenological psychology (Kvale 1996) and ethnography (Denzin and Lincoln 2011; Hammersley and Atkinson 2007; Ravn 2016). As a rule of thumb, these semi-structured interviews encourage the interviewees to describe a past experience in as great detail as possible, while being steered away from theories, explanations, and opinions. Once the interview is concluded, it is transcribed and subjected to several close readings to identify central structures and categories of the experiences presented. The interviews used for this paper are part of a research project in phenomenological psychology: for more than 15 years, this group has been investigating the relation between visual art and subjectivity based on people's descriptions of their own experience, as published in Funch (1997) and Roald (2007; 2008). More recently, the group has focused on the role of pre-reflection in aesthetic experience. From March 2017 to March 2018, Høffding and Roald (2019) conducted 20 interviews at Esbjerg Art Museum, the National Gallery of Denmark, and the department of psychology, University of Copenhagen. In Esbjerg, the people recruited for these interviews were invited by the museum administration, whereas in Copenhagen they were invited through the newsletter of the National Gallery or simply approached directly by the researchers in the museum. Participants were of mixed backgrounds, but all belonged to the middle class: teachers, university students, unemployed and retired people. Those who accepted our invitation, as mediated by the museum outreach, could be considered typical core customers (simply in virtue of being contactable by the museum), and those directly approached would match an idealized typical visitor, lingering for long stretches at individual paintings. Given museum statistics, for instance from the Danish National Gallery, showing that 88% of visitors go to the museum with someone else (Kulturstyrelsen 2017), the selected interviewees to be presented are atypical, as they came on their own and spent longer at the museum than the average visitor.
A phenomenologically inspired qualitative interview works on validity criteria that are different from the statistical significance of, for instance, a questionnaire survey (see Flyvbjerg 2011 for a general discussion). A good interview that achieves sufficiently detailed descriptions easily lasts 45 to 90 minutes, and among the 20 conducted interviews, five had sufficient length to provide this degree of detail. Being willing to engage in such a long interview about one's aesthetic experiences in itself demonstrates an unusual commitment to the perceived importance of such experiences. Hence, the interviews to be presented are not typical or representative of most museum visitors' experience, and we do not claim this. Our claim is rather that they can be used to present accounts of intense aesthetic experience that are valuable to us as human beings and which art museums should consider protecting, even if they represent only a minority of visitors. Our normative stance is that thinking curatorially about how to cater for such experiences could make them available to a greater number of visitors. Generally speaking, although our selected interviewees have different expectations of the museum, when they relate their most significant aesthetic experiences, most have some common characteristics, such as being surprising, overwhelming, and coming "out of the blue." They begin passively and affectively, become existentially dense, and usually take place while alone. In the following, we present three interview excerpts with Karin, Sabine, and Louise. 2 These were selected because of the density and nuance of their descriptions, in other words, because they most clearly illustrate the phenomenon in question.
EXPERIENCES OF AESTHETIC ABSORPTION
Karin, for instance, travelled across Denmark in order to describe her experiences with art to one of us. These experiences are intensely meaningful to her, she says, and occur almost every time she visits an art museum, although the experience changes a little each time, depending on the work of art in question. She describes these experiences as immediate: "I know it as soon as I see it [the work of art], almost before I am aware that I have seen it. . . It happens with the speed of light. I think the thoughts come afterwards." She continues: "I think it is a mixture of shock and falling in love the very first time I see it." This immediacy refers not only to instantaneity, but also to minimal mediation: "I try hard just to let my body and my senses experience it without [. . .]" So her experiences are surprising, intense and instantaneous. She is struck by the work of art, which works irrespective of her own volition. For Karin such immediate experiences are linked to bodily feelings of various kinds: "It can both be a feeling of being more grounded, but there can also be butterflies in the stomach, yes, almost the opposite of anxiety, you know, if one is scared, a pressure in the chest or having to breathe faster, but in this case, it just goes the opposite way. It is much more relaxing". This kind of experience is not only important to Karin, it also changes her: for instance, at AROS, a modern art museum in Aarhus, Denmark, she describes her experience with a pink room installation to which she keeps returning: "Something happens to me when I am there and I feel calm." Visiting art museums in general makes Karin calm, also by clearing up her thinking: "I feel as if some bricks are physically moved around in my head or body. . . And it feels strange to say, but it is as if a cleaning takes place or that some issues are more solidly put in place." These are existential thoughts that are hidden in everyday life, but here they surface and stay with her for a long time. Karin also says that: "I know I am saying thoughts, but it is non-verbal." These non-verbal thoughts are inner expressions of emotional states and changes. Something similar happens when she sees Edvard Munch's Måneskinn (Moonlight), which she has "been in love with" since the very first time she saw it: "I guess I have created my own room with this picture and [I] block out all other disturbances, but yes, so I have just been standing and looking at this picture." Both works not only elicit feelings, but also enable her to cope with her own emotional life: It belongs to that side of things that we cannot fully verbalize, but which comprises some kind of insight, a kind of knowledge or feelings. . . Both works [Munch and the pink room AROS installation] make a lot of feelings present at once and make me have them. . . make me able to deal with them. It [the work] can bring forth a lot of feelings, but just in a way that makes them okay. . . it is not because they [the feelings] disappear, but they are reduced in strength, perhaps, in comparison to how large a negative influence they could otherwise have.
In other words, she describes her experiences with works of art as immediate, with intense bodily and affective feelings, and with existential significance in the sense that her own emotional life becomes more tangible and manageable. Like Karin, Sabine reports that such experiences are deeply satisfying and give her a kind of existential feeling. It affects her in the diaphragm and satisfies a thirst for colour: "[I]t affects me here in the diaphragm, . . . no it is not, I don't really get physical sensations, except, haha, you know, I did not grow up with the Nordic winter, and as November goes into January, I feel actually thirsty for color, so then when I see color, the feeling of thirst is removed, you know I feel, I once bought a red painting in November, it's just red and then there is two red cherries on, and I felt warm."
With regard to a painting by Cézanne, she says that it alters her way(s) of being in the world, where she finds or resonates with herself. Here is an excerpt from the dialogue: 1. It gives me a feeling of being myself, it gives me, it contributes to my sense of identity in fact, actually.
2. Oh, that's very interesting. What does that mean?
3. That's also very difficult to describe, because a sense of identity is difficult to describe.
4. Yeah, do you mean your sense of identity as a woman, with a particular history you have, living in a particular place that you do, with a particular occupation that you have, or is it something more personal, existential?
5. It's more existential, this is me, this is who I am, and that might also be related to the fact that, because we lived removed, you change friends, you change cultures, and so your sense of identity can be a little fluid, so everything that gives me this feeling of, strengthens my sense of identity.
As we can see, Sabine has intense experiences with works of art that target her very subjectivity, including her sense of self or sense of identity. Louise describes a similar situation. She talks about how pictures provide her life with an existential kind of content. They are a source of happiness and fulfilment, without which she can feel ill. She describes these effects while talking about an experience at Esbjerg Art Museum: "[I]t is psychological. The only option is to become happy. . . The artists have dedicated their lives to create an expression which they have given to us. It is fantastic. . . They make one calm and content. And it is beautiful. When one sees something beautiful one becomes calm inside."
For Dufrenne, imagination (1973, 361) and reflection (ibid., 389, 393) are to be held at bay, as aesthetic perception must concern perceiving the work of art as it is; it is about perceiving the art object as art, that is, as a grasped or felt meaning. Although imagination fulfils an important function, it needs to be restrained for aesthetic experience to develop. Imagination must adhere to the form and content of the art object, and not follow other associations that can lead the experience astray from the work. Rather than concerning imagination and reflection, aesthetic experience begins with a bodily resonance, an immanently meaningful presence experience. The meaning here is not "detached," but one that "concerns and determines me, resonating in me and moving me" (ibid., 336). It is an immense and powerful experience, yet it has a strong element of "passivity" characterized by feeling: it is passive in the Husserlian sense 4 not only in so far as it depends on the efficacy of the work of art, but also in that it cannot be brought about by any act of volition: mental acts of analytic reflection or object-detached imagination diminish or lead away from the aesthetic experience. In other words, passivity in this context does not mean "static" or "lifeless," as if the subject does not participate in or affect the experience. Rather, it means that the participation is not initiated by an egoic act of "wanting to do", but by pre-reflective and receptive acts of "being willing to submit". As mirrored in the descriptions provided by Karin, Louise, and Sabine, we can with Dufrenne say that the aesthetic object "imposes its presence on us" (ibid., 427) and that it "causes us to yield to it, rather than accommodating itself to us" (ibid.). In other words, it is a particular kind of object, a "privileged" (ibid., 388) and unique object, or even a "quasi subject" (ibid., 393) "overwhelming us with its imperious presence" (ibid., 388), that causes what Dufrenne calls an aesthetic feeling. Such aesthetic feeling consists in a certain receptivity to sensuousness which prominently marks it as a form of existential communication: "Aesthetic feeling is deep because the object reaches into everything that constitutes me" (ibid., 404). Feeling is about being receptive and open to the work of art, creating and sustaining intimacy, immediacy, and depth: to understand a work is to be assured that it cannot be otherwise than it is. This is no tautology, since this assurance can come to us only when we are infused with the work to such an extent that we allow it to develop and to affirm itself within us, discovering in this intimacy with the work the will to seek out its meaning within it.
For, to repeat, existential necessity cannot be recognized from the outside or be experienced except in myself, insofar as I am capable of opening myself up to this necessity. Such is the necessity of the aesthetic object, which I must at the same time recognize in myself (ibid., 396).
To perceive the work of art as aesthetic, an opening, a receptivity, an "aesthetic attitude," as Dufrenne calls it, is necessary. This receptivity is not an intellectual or critically reflective one, 5 but enables one to be "struck," to come into contact with the aesthetic object. According to Dufrenne, one has to free oneself from the practical attitude, set the humdrum of everyday life aside, and become open to the feeling, and thereby the meaning, the artwork elicits through its affective resonance.
THE PURPOSE OF THE ART MUSEUM: CONCLUSION
The participatory agenda is to some extent at odds with the account of aesthetic experience presented here: if the art museum is primarily intended as a social platform in which we actively discuss ideas, express our creativity and share this with others (all expressive activities), then it will be difficult for us to be sufficiently open, receptive and passive for the emergence of these kinds of intense aesthetic experiences. From our interviews, we know how fragile these experiences can be. If you are overly concerned with talking to other people or with thinking about how you can express, create and share, it is not likely to afford the essential shift away from the everyday and practical toward the aesthetic. Simon's suggested kind of participation is likely to overshadow the subtle changes in the passive and pre-reflective features of consciousness that Karin expresses as the "cleaning [that] takes place" or the "bricks [that] are physically moved around in my head or body". However, it is also clear that in order to become sufficiently receptive to aesthetic experience you need to feel comfortable and relaxed when visiting an art museum. This brings us back to the outset of this article and the problem of the museum visitors' narrow social background (Bourdieu and Darbel 1990). Aesthetic experiences might have universal structures according to Dufrenne, and might therefore in principle be accessible to everyone, but whether these are actualized in the individual entering the museum is entirely contingent: you might be alienated from the museum if you are uncertain of its rules and conventions (Duncan 1995; Samis and Michaelson 2016). As Samis and Michaelson state: "Museums are intimidating spaces with a language all their own" (Samis and Michaelson 2016). The question is therefore this: how can we curate for these fragile aesthetic experiences in a manner where the visitors who do not belong to the cultural elite of museum regulars feel included, comfortable, and welcome? How do we square the immense value of aesthetic experiences, such as those of Karin, Sabine, and Louise, with the societal challenges facing the museum as pointed to by people such as Simon? On the one hand, we need to open up the art museum such that it addresses more than a small privileged elite. On the other hand, we have to acknowledge that if the art museum only functions as a participatory and interactive space, we risk jettisoning its ability to facilitate the emergence of the intense kinds of aesthetic experience we have discussed.
Is it possible to structure the art museum such that different ways of engaging with artworks are taken into account? To work reflectively and carefully with zones or areas that prioritize either active engagement or more "passive" reception? And finally, can all of this be done without promoting elitist narratives or compromising current, successful use patterns? We do not possess full answers to these kinds of questions, nor to the general challenge of how and whether to reconcile phenomenological ideas with participatory and political agendas. Rather, this paper has demonstrated the existence of a class of aesthetic experiences of great emotional and existential significance to the museum visitors who undergo them. Further, it has shown this perspective to be somewhat in contrast to the participatory agenda, which has informed debates about the purpose of the art museum in today's society. Likewise, the presentation of the phenomenology of aesthetic experience justifies our assertion that the potential of the art museum to evoke aesthetic experiences should be centrally considered when curating exhibitions and displays. 6
"Art",
"Philosophy"
] |
The prerequisites of forming a risk management system in the design of space application facilities
The problem of increasing the term of active existence of space-use equipment is relevant. Application of a risk management system in the design and manufacture of space-use equipment is a promising approach to increasing the resiliency and reliability of spacecraft. This paper discusses the preconditions of a risk management system based on the use of critically small data samples on the state of the control objects. A technique for statistical processing of risk data, based on the additive approximation of standard statistical distributions, is presented. A generalized structure of an adaptive system of statistical diagnostics of the risk of abnormal conditions in space application equipment is proposed.
Introduction
In any project to develop space application objects (SAO) there are many uncertainties. Whenever the process of creating a new system departs significantly from usual practice, the result of the development becomes unpredictable. An important task of systems engineering is to develop the management system so that uncertainty is eliminated as early as possible [1]. Any surprise in the later stages of system development can cost significantly more than its detection in the early stages.
The problem of risk assessment and mitigation is, at its core, the task of identifying and eliminating uncertainties at all stages of the SAO lifecycle. Such problems can be solved by analysis, simulation, and full-scale tests, which allow the critically important characteristics of the system to be diagnosed and quantified.
A promising direction for increasing the fault tolerance and reliability of SAO is the creation and implementation of an adaptive risk management system for the design and manufacture of SAO. This system will make it possible to detect, localize and remove the causes of defects at all stages of the SAO lifecycle. The testing of system fragments is the means for collecting important data on the behavior of the entire system and its components under controlled conditions.
Materials and methods
A detailed description of risk management is given in [2]. The main disadvantage of risk management as used today is that it relies only on a qualitative scale of the probability and potential importance of different sources of risk.
The proposed risk management system for the design of SAO makes it possible to quantify the significance of risks. It is based on statistical diagnosis, adaptive to the actual condition of the equipment. To this end, the SAO is put into a non-standard mode, which causes random excursions of the basic and additional parameters outside the permissible area. Assessment of SAO risk levels is based on the analysis of the level and duration of emissions of the monitored parameters.
The risk management system is implemented at three levels. The first level provides a qualitative (rough) assessment. At the second level, an approximate continuous quantitative assessment of the actual state is produced. At the third level, an accurate calculation of the quantitative characteristics of reliability is made and risk management decisions are adopted.
The first level of SAO diagnostics solves the problem of admission control. At this level a preliminary assessment of the actual condition of the equipment is given in the form of "fit / not fit". If any of the monitored parameters has gone beyond its tolerance limits, the risk level is classified as high.
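As a rough illustration of this first, admission-control level, the following minimal sketch flags a "not fit" verdict whenever any monitored parameter leaves its tolerance band; the parameter values and limits are hypothetical, not taken from the paper.

```python
import numpy as np

# First-level (admission control) check: "fit / not fit" against tolerance
# limits; any violation classifies the risk level as high.
def admission_control(values, lower, upper):
    values, lower, upper = map(np.asarray, (values, lower, upper))
    violations = np.where((values < lower) | (values > upper))[0]
    return ("not fit", violations.tolist()) if violations.size else ("fit", [])

# Hypothetical monitored parameters and tolerance bands.
verdict, bad = admission_control(values=[3.1, 0.92, 47.0],
                                 lower=[2.5, 0.90, 40.0],
                                 upper=[3.5, 1.10, 45.0])
print(verdict, bad)  # -> not fit [2]: the third parameter is out of tolerance
```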
The second level of SAO diagnostics is invoked if the result of admission control was "fit". At this level an approximate quantitative estimate of the risk is produced, based on a continuous model of the monitored parameters.
At the third level of risk control, an accurate calculation of the quantitative characteristics of reliability is made for an inhomogeneous Markov model. The diagnostic system is adaptive to the structure and amount of available statistics through the use of special algorithms for processing critically small samples. Deciding on the potentially defective SAO fragments and generating control actions is based on the application of the principles of fuzzy logic and is carried out at all steps of the SAO lifecycle.
Results
Figure 1 shows a generalized structure of the adaptive system of statistical diagnostics of the risk of abnormal modes of SAO operation with regard to the actual state of the equipment [3]. The principle of operation of this system is as follows.
The signals of the controlled parameters of the SAO (unit 1) are input to the threshold elements (TE) 2_1-2_N. Here, emissions of the controlled parameters from the tolerance zone are recorded. In unit 3 the parameters with emissions from the tolerance zone are detected, with identification of the addresses of the respective channels. In unit 4 the emission magnitudes are measured by quantizing the tolerance ranges into q levels, where k is the number of parameters with emissions from the tolerance zone and N is the number of emissions. The value q is determined by the specified reliability of identification of the random process x_i(t). Similarly, in unit 5 the emission durations W_ij of the k parameters are measured through quantization in time of the random process x_i(t).
The amplitudes and durations accumulated during diagnosis are recorded as emission samples in units 6 and 7, respectively. The samples of amplitudes and durations are passed in parallel to units 11 and 12. Unit 11 implements the method of additives for constructing the empirical density function of the distribution. Unit 12 implements an algorithm of kernel estimation of the empirical data. The inputs of units 11 and 12 are also connected to the output of the generator of deposit distribution functions, which is necessary for the implementation of the method of processing arrays of small samples. In unit 13, identification of the distribution functions of the amplitudes and durations of emissions of the SAO diagnostic parameters, received from units 11 and 12, is performed. This procedure is carried out by sequentially testing against a set of the most common probability distributions (equiprobable, exponential, normal, Rayleigh, Weibull et al.) from the bank of distributions (unit 10). Calculations of the corresponding values of the goodness-of-fit criterion are performed using the bank of adequacy criteria (unit 9). The construction of a ranked series, on the basis of which a decision on the conformity of the empirical data to one of the theoretical distributions is made, is performed in unit 13.
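The "method of additives" in unit 11 builds an empirical density by summing standard symmetric distributions ("deposits") centered on the few observed emissions. A minimal sketch with Gaussian deposits follows; the bandwidth rule and the sample values are our assumptions, not taken from the source.

```python
import numpy as np

# Additive ("deposit") construction of an empirical density from a
# critically small sample: each observation contributes one symmetric
# Gaussian deposit; their average approximates the density function.
def additive_density(sample, grid, h=None):
    sample = np.asarray(sample, dtype=float)
    if h is None:  # rule-of-thumb bandwidth (an assumption)
        h = 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)
    deposits = np.exp(-0.5 * ((grid[:, None] - sample[None, :]) / h) ** 2)
    deposits /= h * np.sqrt(2 * np.pi)
    return deposits.mean(axis=1)

amplitudes = np.array([1.2, 1.5, 1.4, 2.0, 1.7])  # five emission amplitudes
grid = np.linspace(0.5, 3.0, 200)
density = additive_density(amplitudes, grid)
print(round(np.trapz(density, grid), 3))  # close to 1: a valid density
```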
The validity of the estimated statistical models of the individual parameters is assessed in unit 14, according to the results of unit 13. In doing so, we calculate the conditional distribution density of the emissions of each controlled parameter. In accordance with reliability theory, the emission distribution density is equal to the failure frequency of the i-th parameter; the reliability (probability of failure-free operation) of the test piece with respect to the i-th parameter is then estimated from this density. In unit 15, a decision about stopping the control process or continuing it is made, based on the calculation and analysis of the trajectory of the loss functional and the terminal benefit [4]. If unit 15 detects inadequate reliability of the statistical models of some parameters, then the mode-setting unit 19 generates a signal to continue the monitoring process. If unit 15 decides to move to the statistical diagnostic mode, then the unit of diagnostics and assessment of the SAO state is initiated.
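The formula itself is not reproduced in the source; under the stated identification of the emission distribution density with the failure frequency f_i(t), the standard reliability-theory relation (our hedged reconstruction) would be

```latex
P_i(t) \;=\; 1 \;-\; \int_0^{t} f_i(\tau)\, d\tau ,
```

where P_i(t) is the probability of failure-free operation with respect to the i-th parameter up to time t.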
In unit 16 (construction of partial models of parameters), continuous models of efficiency for the individual SAO parameters are formed on the basis of small samples and probability models in the form of two-dimensional density distribution functions of the emission characteristics. Thereafter, the intensities of gradual failures are calculated and inhomogeneous Markov models with three states are constructed. Based on the results of unit 16, the failed element of the SAO is diagnosed in unit 17, which implements a technique for localizing faults under uncertainty.
Unit 18 uses the results of units 16 and 17 to produce three types of control action for unit 19: the use of the SAO for its intended purpose in normal operation; regulation of unstable parameters and bringing the SAO to a normal operating mode; or replacement of the failed unit (or fragments) of the SAO.
Discussion
Full-scale simulation and testing of object fragments (and subsequently of the facility as a whole) is applied based on the threshold principle of formation of parameter emissions from the acceptable zone. As a result, the monitored values are presented as a series of random levels of emissions of the object parameters [3]. Statistical processing of critically small volumes of emissions is based on the additive principle: the emissions are represented as a set of standard symmetric distributions (deposits), or as a set of arrays of random values generated around the point of the emission parameter.
Each mode of full-scale simulation is represented by the numerical statistical characteristics of the corresponding random processes.
Fig. 1. Generalized structure of the adaptive system of statistical diagnostics of risk for abnormal modes of object operation with regard to the actual state of the equipment.
Equation (1) gives a quantitative description of the significance of different sources of risk. | 2,069.2 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
Review of the millipede genus Eutrichodesmus Silvestri, 1910 (Diplopoda, Polydesmida, Haplodesmidae), with descriptions of new species
Eutrichodesmus, the largest genus in the Oriental family Haplodesmidae, is reviewed and shown to encompass 24 recognizable species, all keyed, including the following nine new species: E. regularis sp. n., E. aster sp. n., E. filisetiger sp. n., E. curticornis sp. n., E. asteroides sp. n. and E. griseus sp. n. from Vietnam, E. distinctus sp. n. from China, E. multilobatus sp. n. from Laos, and E. reductus sp. n. from Sulawesi, Indonesia. This genus is slightly redefined as follows: Gonopod coxae usually abundantly setose ventrolaterally; telopodite usually slender, not enlarged towards end of femorite, but typically with a more or less distinct process or outgrowth laterally, opposite recurvature point of seminal groove; solenomere thereafter usually comprising most of telopodite, sometimes elaborate; seminal groove normally terminating distally to subapically, with or without a hairpad; acropodite normally small to nearly absent. ZooKeys 12: 1-46 (2009), doi: 10.3897/zookeys.12.167
Introduction
The millipede family Haplodesmidae Cook, 1895, which has only six component genera basically occurring (except for a few pantropical introductions) in East and Southeast Asia, as well as the southwestern Pacific region, has recently been reviewed (Golovatch et al. 2009).
The present paper records another nine new species of Eutrichodesmus, thus improving our knowledge of the diversity of this Oriental genus. The descriptions below are arranged by countries in a more or less north-south direction.
Material and methods
The material serving as the basis for the present contribution derives mainly from subterranean collections made in Vietnam, China, Laos, and Indonesia by Anne Bedos and Louis Deharveng (MNHN). The bulk of this material, including most of the holotypes, has been deposited in MNHN, with two holotypes and a few paratypes from China and Indonesia shared between the collections of SCAU and MZB, respectively, and some further paratypes deposited in the collection of ZMUM, as indicated hereafter. The terms "doratodesmid" or "haplodesmid" are used hereafter only in their vernacular meaning, in order to concisely characterize a body shape, i.e. capable or nearly capable of volvation in the former informal group versus vermiform and definitely incapable of volvation in the latter.
SEM micrographs were taken using a JEOL JSM-6480LV scanning electron microscope. After examination, SEM material was removed from stubs and returned to alcohol, all such samples being kept at MNHN.
Name: To emphasize the obvious distinctions from E. latus and E. similis, these being the only congeners hitherto known from Guangxi Province (Golovatch et al. 2009).
Diagnosis: Differs from all other congeners in the especially distinct metatergal tuberculation, coupled with the lack of a tergal trichome, as well as only a few minor details of gonopod structure (in particular, the shape of the telopodite). In addition, it can be separated from the other species known from the same province, E. latus and E. similis, by the apparently perfect volvation (due to much shorter and more strongly declivous paraterga).
Description: Length of adults of both sexes ca 8.0-8.5 mm, width 1.35-1.40 mm, body broadest at segment 3 or 4. Holotype ca 8.0 mm long and 1.4 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 1).
Eutrichodesmus regularis
Name: To emphasize the highly regular, isostictic (i.e. in regular rows not only transversely, but also longitudinally) and almost undifferentiated pattern of metatergal tuberculation.
Diagnosis: Differs from congeners by the perfect volvation, coupled with faint metatergal lobulations, the especially regular, nearly undifferentiated and isostictic pattern of metatergal tuberculation, the peculiar, phylloid tergal setae and a few minor details of gonopod structure (in particular, the shape of the telopodite and distofemoral process).
Description: Length of adults of both sexes ca 9.0-10.0 mm, width 1.7-1.8 mm, body broadest at segment 3 or 4. Holotype ca 10 mm long and 1.7 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 4).
Remarks: This pallid species shows highly peculiar, phylloid but very short, tergal setae and undifferentiated metatergal tubercles. It is also a typical "doratodesmid", possibly another troglobite. Diagnosis: Differs from all congeners, except E. macclurei and E. reclinatus, by the extremely high mid-dorsal crests on metaterga 5-19. In addition, it can be distinguished from E. macclurei and E. reclinatus by the presence of a mid-dorsal projection on metatergum 4 and the slightly more strongly declivous and quadrilobate paraterga. The new species differs from all other species of the genus in minor details of gonopod structure (in particular, the shape of the telopodite and distofemoral process). Description: Length of adults of both sexes ca 12-14 mm, width 2.1-2.3 mm, body broadest at segment 3 or 4. Holotype ca 12 mm long and 2.2 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 8).
Remarks: This pallid species shows peculiarly high mid-dorsal projections, coupled with short, distally flattened tergal setae. It is also a typical "doratodesmid", possibly another troglobite. Name: To emphasize the peculiar, filiform but rather short tergal setae.
Eutrichodesmus filisetiger
Diagnosis: Differs from congeners by the perfect volvation, coupled with the filiform tergal setae, evident metatergal lobulations, a regular pattern of tergal tuberculation and a few minor details of gonopod structure (in particular, the shape of the telopodite and a rudimentary distofemoral process).
Description: Length of adults of both sexes ca 12-13 mm, width 2.5-2.6 mm, body broadest at segment 3 or 4. Holotype ca 12 mm long and 2.5 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 12).
Remarks: This rather large, pallid species shows filiform tergal setae and only poorly differentiated metatergal tubercles. It is also a typical "doratodesmid", possibly still another troglobite. Name: To emphasize the unusually short antennae. Diagnosis: Differs from congeners by the perfect volvation, coupled with the very short antennae, legs and paraterga, the contiguous paramedian tubercles above the antennal sockets, the regular mixostictic pattern of metatergal tuberculation and a few minor details of gonopod structure (in particular, the shape of the telopodite and distofemoral process).
Legs very short and stout, barely reaching edge of paraterga (Fig. 18F). Gonopods (Figs 18G, H) relatively complex. Coxae abundantly micropapillate and setose, with a usual, apicolateral lobe. Telopodite elongate, slender, only very slightly arcuate, with a marked, denticulate distofemoral process (dp) at about midway and an evident distolateral tooth; seminal groove terminating without hairpad at base of the tooth.
Remarks: This pallid species shows short antennae, legs and paraterga, all these traits apparently being related to the small body size. It is also a typical "doratodesmid", possibly yet one more troglobite. Name: To emphasize the nearly star-shaped but broad body of the volvated animal.
Eutrichodesmus asteroides
Diagnosis: Differs from congeners by the peculiar, subtriangular, rather high, mid-dorsal crests on metaterga 4-18, coupled with 19 body segments, the relatively wide paraterga and a few minor details of gonopod structure (in particular, the shape of the telopodite and a rudimentary distofemoral process).
Description: Length of adults of both sexes ca 8.0-8.5 mm, width 2.1-2.5 mm, body broadest at segment 3 or 4. Holotype ca 8.0 mm long and 1.9 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 19).
Diagnosis: Differs from congeners by the peculiar, grey to blackish coloration, coupled with the relatively short and high paraterga (thus the body apparently not capable of complete volvation) and a few minor details of gonopod structure (in particular, the shape of the telopodite and distofemoral process). Description: Length of adults of both sexes ca 6.0-6.5 mm, width 0.7-0.75 mm, body broadest at segment 3 or 4. Holotype ca 4.5 mm long and 2.5 mm wide. Coloration of vertex, collum, following metaterga, and of dorsal and lateral parts of telson greyish to dark grey. Clypeolabral part of head, venter and legs contrastingly pallid (Fig. 23). Holotype light grey.
Legs relatively short and stout, barely reaching edge of paraterga (Figs 25C-E).
Remarks: Because this pigmented species obviously shows incomplete volvation, it is not a typical "doratodesmid" and is definitely epigean. Moreover, because of the elevated anterior row of tubercles on the collum, superficially it slightly resembles certain Pyrgodesmidae. Name: To emphasize the mostly 5-lobulated paraterga.
Eutrichodesmus multilobatus
Diagnosis: Differs from congeners by the peculiar, evidently and only laterally 5-lobulated paraterga, coupled with very distinct but only slightly differentiated metatergal tuberculation and a few minor details of gonopod structure (in particular, the shape of the telopodite and distofemoral process).
Description: Length of adults of both sexes ca 6.0-6.5 mm, width 1.1-1.2 mm, body broadest at segment 3 or 4. Holotype ca 6.0 mm long and 1.1 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 27).
Name: To emphasize the strongly underdeveloped paraterga 2 and lack of metatergal tuberculation.
Diagnosis: Differs from congeners, except E. communicans Golovatch, Geoffroy, Mauriès & VandenSpiegel, 2009, by the strongly underdeveloped paraterga 2, coupled with 19 body segments and the absence of metatergal tuberculation; from E. communicans by a very short tergal trichome, and from it and other congeners in a few minor details of gonopod structure (in particular, the shape of the telopodite). Description: Length of adults of both sexes ca 4.0-4.2 mm, width 0.45-0.5 mm, body broadest at midbody segments. Holotype ca 4.0 mm long and 0.45 mm wide. Coloration uniformly pallid, shown pinkish because of a photographic artifact (Fig. 31).
Legs relatively long and slender, evidently surpassing edge of paraterga (Fig. 33B). Gonopods (Fig. 33F, G) relatively complex. Coxae abundantly micropapillate, but only with a few macrosetae. Telopodite elongate, only slightly arcuate, with a large, papillate, peculiar, distofemoral outgrowth (dp) in distal one-third and a slender but short solenomere bearing a few small setae (but no pad!) subapically at base of a very
Discussion
Now that Eutrichodesmus has grown even more speciose, containing 24 species and no doubt with more still remaining to be discovered, it seems appropriate to redefine it and provide an updated key. The definition of the genus (cf. Golovatch et al. 2009) is therefore amended as follows: Eutrichodesmus Silvestri, 1910. Type-species: Eutrichodesmus demangei Silvestri, 1910, by original designation.
New diagnosis
Body usually "doratodesmid", with or without mid-dorsal projections; conglobation usually complete, only seldom imperfect due to underdeveloped or hypertrophied paraterga. Collum and metaterga often (micro)setose, usually tuberculate, only rarely smooth. Gonopod coxae usually abundantly setose ventrolaterally; telopodite usually slender, not enlarged towards end of femorite, but typically with a more or less distinct process or outgrowth laterally, opposite recurvature point of seminal groove; solenomere thereafter usually comprising most of telopodite, sometimes elaborate; seminal groove normally terminating distally to subapically, with or without a hairpad; acropodite normally small to nearly absent.
Conclusion
As stated recently (Golovatch et al. 2009), there seem to be almost no coherent patterns in the distribution of the various non-genitalic and gonopodal characters in Eutrichodesmus. The same concerns geographic patterns. Only a few pairs of particularly similar species can be distinguished in this speciose genus: e.g. E. macclurei and E. reclinatus, E. latus and E. similis, E. demangei and E. arcicollaris, and E. communicans and E. reductus sp. n. However, it is still too early to attempt to discriminate species groups, because many more new species of Eutrichodesmus can be expected to occur at least in East and Southeast Asia, as well as in the Indo-Australian archipelago, which seem to represent the centre of diversity of this genus (Fig. 34), and of the entire family Haplodesmidae.
That most of the Eutrichodesmus species have been found and described from cave material alone does not necessarily mean their obligate cavernicoly, even though many of them exhibit troglomorphic features. Most of the caves are simply better explored than the adjacent, much less prospected, epigean habitats. | 2,760.8 | 2009-06-18T00:00:00.000 | [
"Biology"
] |
A polarization-based image restoration method for both haze and underwater scattering environment
Existing polarization-based defogging algorithms rely on the polarization degree or polarization angle and are not effective enough in scenes with little polarized light. In this article, a method of image restoration for both haze and underwater scattering environments is proposed. It is based on the general assumption that the gray variance and average gradient of a clear image are larger than those of an image in a scattering medium. Firstly, based on this assumption, polarimetric images with the maximum variance (Ibest) and minimum variance (Iworst) are calculated from the four captured polarization images. Secondly, the transmittance is estimated and used to remove the scattered light of the background medium from Ibest and Iworst. Thirdly, the two images are fused to form a clear image and the color is also restored. Experimental results show that the proposed method obtains clear restored images both in haze and underwater scattering media. Because it does not rely on the polarization degree or polarization angle, it is more universal and suitable for scenes with little polarized light.
Zhenming Dong 1,2, Daifu Zheng 1,2, Yantang Huang 1,2, Zhiping Zeng 2, Canhua Xu 1,2* & Tingdi Liao 3,4
In traffic monitoring, remote sensing and ocean exploration, scattering media such as fog, haze and turbid water reduce the visibility and contrast of images and blur their details [1][2][3] . Usually there are two types of imaging methods to enhance image clarity 1 . One is built on image enhancement algorithms, such as histogram equalization 4 and the Retinex algorithm 5 . The other is image restoration methods, which obtain unscattered object light based on specific physical models or a priori hypotheses, such as the dark channel prior method, Schechner's polarization defogging method, and Tan's single image defogging method. The image enhancement methods are straightforward and very effective at improving the contrast of blurred images. However, because these methods do not take the image degeneration process into consideration, they usually induce more image distortion and recover fewer details than image restoration methods. In addition, defogging methods based on deep learning technologies have become prevalent in recent years 6,7 .
In this article, we present a defogging algorithm in the scope of image restoration methods. Plenty of research on image restoration through scattering media has been conducted over the past decades. Classic methods are listed as follows. In 2001, Schechner et al. [8][9][10] proposed an image defogging method based on polarization difference. They assumed that the atmospheric light is partially polarized and the object light is unpolarized. The difference between two polarization images ("I max " and "I min ") is regarded as atmospheric polarized light, and an Atmospheric Propagation Model is adopted to restore the image. In the next few years this method was extended to the field of underwater imaging experiments 11 . In 2008, Tan 12 proposed a single image defogging method based on two assumptions: that clear images have larger contrast than foggy images, and that the airlight tends to be smooth in foggy weather. In the same year, Fattal 13 formulated a refined image formation model. They introduced the new variable "surface shading" and assumed that "surface shading" and transmission functions are locally statistically uncorrelated; defogging is then conducted with the transmission function calculations. The well-known image dehazing method using the dark channel prior 14 followed. In very recent years, Zhu et al. 15,16 proposed a fast single image dehazing algorithm based on artificial multi-exposure image fusion, and an image dehazing method using artificial image fusion based on adaptive structure decomposition. Vazquez-Corral et al. 17 proposed a method of physically based optimization for non-physical image dehazing methods. In 2021, by using dual self-attention boost residual octave convolution, Zhu et al. 18,19 developed a defogging method suitable for remote sensing imaging. In general, benefiting from the additional information obtained from multiple different polarization images, restoration methods based on polarization imaging have more advantages and recover better details than single image dehazing methods. This technology has been intensively researched in past decades in the aspects of visibility enhancement 20 , active imaging systems 21,22 , underwater target detection [23][24][25] , real-time measurement 26 , long-range polarization imaging 27 , and utility of the polarization angle [28][29][30] . Moreover, Shao et al. 31 proposed a hazy image restoration method based on atmospheric light polarization tomography. Liu et al. 32 used the Wavelet Transform to stratify images and remove haze. Fang et al. 33 proposed an image dehazing method using the polarization effects of objects and airlight. In 2021, Liang et al. 34 proposed a low-pass-filtering-based polarimetric dehazing method for dense haze removal. So far, plenty of methods have been used to restore images effectively in specific scenes. The restoration effect depends on the accurate estimation of polarization degrees or polarization angles. However, it is difficult to calculate the polarization parameters exactly in a strongly scattering scene with very weak polarized light, and large errors in polarization parameter estimation result in unsuccessful image restoration in these scenes. In comparison, calculation of the gray variance is more accurate and practical in various conditions. Based on the general assumption that the gray variance and average gradient of a clear image are larger than those of an image through scattering media, a universal image restoration method is proposed in this article.
Firstly, Stokes parameters are used to find the polarimetric images with the maximum variance (I best ) and minimum variance (I worst ). Secondly, the transmittance is estimated and used to remove the scattered light of the background medium from I best and I worst . Thirdly, the two images are fused to form a clear image and the color is also restored. Experimental results show that the proposed method is applicable both in atmospheric and underwater scattering media. Moreover, it is still very effective in scenes with little polarized light.
Technical background
Stokes parameters. The polarization properties of light can be described by the Stokes parameters, denoted [S 0 , S 1 , S 2 , S 3 ] T , where S 0 is the total light intensity, S 1 is the light intensity difference between the 0° and 90° polarization directions, S 2 is the light intensity difference between the 45° and 135° polarization directions, and S 3 is the light intensity difference between left-handed and right-handed circularly polarized light. Generally, there is little circularly polarized light in the natural environment, so S 3 is ignored in this article. By measuring the intensities of light along three or four different angles, the first three components of the Stokes parameters can be obtained. For example, measuring the intensities of light along the four directions 0°, 45°, 90°, and 135°, respectively expressed by I 0 , I 45 , I 90 , and I 135 , the first three components of the Stokes parameters can be calculated as 29:

S_0 = I_0 + I_90 = I_45 + I_135, S_1 = I_0 − I_90, S_2 = I_45 − I_135. (1)

By multiplying with the Mueller matrix of a linear polarizer with its axis at angle θ, the intensity image at any angle (denoted by I_θ) can be derived from the first three components of the Stokes parameters as Eq. (2) 30, from which the polarimetric images with the maximum and minimum variance can be selected:

I_θ = (S_0 + S_1 cos 2θ + S_2 sin 2θ) / 2. (2)
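A minimal sketch of Eqs. (1) and (2), computing the Stokes components from the four captured polarizer angles and synthesizing the intensity image at an arbitrary angle; the array names and random stand-in images are illustrative only:

```python
import numpy as np

# Stokes components from four polarizer-angle captures, Eq. (1).
def stokes_from_four(i0, i45, i90, i135):
    s0 = i0 + i90            # total intensity (equivalently i45 + i135)
    s1 = i0 - i90            # 0° vs 90° difference
    s2 = i45 - i135          # 45° vs 135° difference
    return s0, s1, s2

# Intensity image behind a linear polarizer at angle theta, Eq. (2).
def intensity_at(theta, s0, s1, s2):
    return 0.5 * (s0 + s1 * np.cos(2 * theta) + s2 * np.sin(2 * theta))

rng = np.random.default_rng(0)
imgs = [rng.random((4, 4)) for _ in range(4)]          # stand-in captures
s0, s1, s2 = stokes_from_four(*imgs)
i_30deg = intensity_at(np.deg2rad(30), s0, s1, s2)     # image at 30°
```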
Atmospheric scattering model and underwater scattering model. In a scattering environment, the total light captured by a camera, I_total, is the sum of the attenuated object light L·t and the backscattered light A, where L is the clear object image and t is the transmittance of the scattering medium. The transmittance t is generally defined by Eq. (3), which relates the backscattered light A reaching the camera to the backscattered light from infinite distance, A_∞:

t = 1 − A / A_∞. (3)

So the restored image can be expressed by Eq. (4) 35:

L = (I_total − A) / t. (4)

There are only two unknowns, t and A_∞, which need to be determined in the following restoration methods.
Image restoration methods based on polarization imaging
In order to overcome the difficulty of polarization parameter estimation in a strongly scattering scene with very weak polarized light, we adopted a restoration process based on the variance prior, illustrated in Fig. 1. At first, the polarimetric images containing the most and the least object light were selected among the calculation results of Eq. (2) with a variance prior, and named I best and I worst ; then we calculated the backscattered light A w from I worst using a wavelet transform, and obtained the degenerated object light D b from the difference between I best and A w multiplied by several polarimetric factors; furthermore, the transmittance t of the media was calculated from the combination of D b , I best , I worst and the corresponding backscattered light; and the undegenerated object light in I best was calculated as L b = D b / t. A similar process was used to calculate the undegenerated object light in I worst , named L w . In the end, the sum of L b and L w was taken as the final restored image in our method. The detailed restoration algorithm is described in the following.
Find I best and I worst . The total light intensity received by the camera sensor is defined as I total , which is composed of the light from the object (defined as D) and the light from the background scattering medium (defined as A). This process is illustrated in Fig. 2.
In general, D and A have different polarization degrees and polarization angles. A linear polarizer is installed in front of an ordinary camera; rotating the polarizer to different angles changes the visibility of the image. At an angle θ, the light intensity can be written as the sum of the object light and the backscattered light at that angle, I_θ = D_θ + A_θ. The angle at which the image contains the most object light is defined as θ best , and the corresponding polarization intensity is defined as I best . The angle at which the image contains the least object light is defined as θ worst and the corresponding intensity is I worst . At angle θ best , the value of D θ /I total is maximum; at angle θ worst , the value of D θ /I total is minimum. In general, θ best is perpendicular to θ worst . In previous work, θ best and θ worst were usually determined by the polarization property of a specific region: for example, using the brightest angle 8-10 , the direction perpendicular to the incident surface 11,22 , the polarization angle of the sky region 28 , or two reference objects 36 to calculate θ best . In order to establish a universally applicable algorithm for various scattering media, we use a variance prior based on the fact that clear images usually have larger gray-scale variance and larger average gradient than images through scattering media 12 . The scattering medium overwhelms the details and reduces the contrast, gray-scale variance and average gradient 29 . Therefore, the gray-scale variance is taken as the standard to find θ best and θ worst . Because the gray-scale variance does not depend on the polarization properties of a specific region of the image, it can be universally applied to various scattering media. In the specific calculation, the polarization image at any angle can be obtained by using Eq. (2). Each image at a different angle is divided into 10 × 10 blocks. The angle with maximum gray-scale variance in most regions is selected and regarded as θ best ; θ worst is perpendicular to θ best . After θ best and θ worst are selected, I best and I worst can be calculated by Eq. (2). The θ best which maximizes the gray-scale variance of the image is automatically determined by the algorithm.
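A sketch of this variance-prior search under the block scheme just described; the angular step size is our assumption, not specified in the source:

```python
import numpy as np

# Synthesize I_theta over a set of angles, split each into 10 x 10 blocks,
# and pick the angle that wins the per-block gray-variance vote.
def find_theta_best(s0, s1, s2, n_angles=180, blocks=10):
    h, w = s0.shape
    bh, bw = h // blocks, w // blocks
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    block_var = np.empty((n_angles, blocks * blocks))
    for k, th in enumerate(thetas):
        img = 0.5 * (s0 + s1 * np.cos(2 * th) + s2 * np.sin(2 * th))
        for r in range(blocks):
            for c in range(blocks):
                blk = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                block_var[k, r * blocks + c] = blk.var()
    winners = block_var.argmax(axis=0)          # best angle index per block
    votes = np.bincount(winners, minlength=n_angles)
    theta_best = thetas[votes.argmax()]
    return theta_best, theta_best + np.pi / 2   # theta_worst: perpendicular
```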
Obtain the object light (degenerated) of I best .
Usually, the light intensity in a scattering medium consists of object light D and backscattered light A. In Eqs. (6) and (7), b and w are the subscripts representing the images at θ best and θ worst , respectively. After selecting the minimum-variance region as the background region, the polarization degree P can be derived from the background regions of I best and I worst 9 .
From Eq. (8) we obtain Eq. (9), and by combining Eqs. (6) and (9) we obtain Eq. (10) for D b . A w and P are required to calculate D b . A w is the backscattered light of I worst . Because background scattering media are relatively smooth compared with objects 12 , object light has a much higher frequency than the backscattered light, so the low-frequency component of I worst (denoted by A′ w ) can be treated as an approximation of the backscattered light A w . The Wavelet Transform is adopted to obtain the low-frequency component of I worst in this article. For example, we use the wavelet transform (basis function "db8") to take the fourth layer of low-frequency components of the image I worst as A′ w . There is still some object light that has not been removed in A′ w . Two correction factors were introduced to estimate the backscattered light more accurately.
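Eqs. (6)-(10) themselves are not reproduced in this text. Under the standard polarization-difference model, and consistent with the relation y = (1 − P)/(1 + P) used below, a plausible reconstruction (an assumption, not the authors' verbatim formulas) is:

```latex
I_b = D_b + A_b \quad (6), \qquad I_w = D_w + A_w \quad (7),\\
P = \frac{A_b - A_w}{A_b + A_w} \quad (8), \qquad
A_b = A_w\,\frac{1+P}{1-P} \quad (9), \qquad
D_b = I_b - A_w\,\frac{1+P}{1-P} \quad (10).
```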
First correction factor (denoted by y1). For each channel, the correction factor component y i (i = r, g, b) is obtained per pixel from A′ w and A w ∞ , where A w ∞ can be derived from the background region (average value of the minimum-variance region) of I worst ; note that A w ∞ is the backscattered light of I worst from an infinite distance.
The correction factor y1 is a matrix which is used to estimate the backscattered light of each pixel in A′ w . It can be described simply as the reciprocal of the difference between a pixel and A w ∞ . Less backscattered light in a pixel corresponds to a smaller y1.
Second correction factor (denoted by y2). When the polarized light is weak, the result of Eq. (10) is near zero. In order to avoid this unreasonable result, we introduce a correction factor y2 (also a matrix). In order to make our method applicable to more scenes and not only dependent on the polarization degree P, we set y = (1 − P)/(1 + P); P is in the range [0-1], thus y is in the same range [0-1]. In the actual algorithm, the image with maximum average gradient is found by searching for the exact y in the range [0-1]. The parameter y which maximizes the average gradient of the image is automatically determined by the algorithm.
In this section, the wavelet transform and two correction factors were used to calculate D b .
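A sketch of the wavelet step using PyWavelets, keeping only the level-4 'db8' approximation of I_worst as A′_w; the exact reconstruction details of the authors' implementation are not given, so treat this as one reasonable reading:

```python
import numpy as np
import pywt  # PyWavelets

# Estimate the backscattered light A'_w as the low-frequency part of
# I_worst: decompose with 'db8' to four levels and discard all detail
# sub-bands, keeping only the coarse approximation.
def low_frequency(i_worst, wavelet="db8", level=4):
    coeffs = pywt.wavedec2(i_worst, wavelet, level=level)
    coeffs = [coeffs[0]] + [
        tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]
    ]
    a_w = pywt.waverec2(coeffs, wavelet)
    # Reconstruction can be one pixel larger than the input; crop it back.
    return a_w[: i_worst.shape[0], : i_worst.shape[1]]
```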
Calculate the transmittance. Because I best and I worst are two images of the same scene at two angles perpendicular to each other, we assume that they have the same transmittance t. According to Eq. (4), where A b ∞ and A w ∞ can be respectively derived from the background regions (average value of the minimum-variance region) of I best and I worst , and A w is unknown, t can be calculated from A b and A b ∞ . However, I best contains most of the object light and a small part of the scattered light, while I worst contains a small part of the object light and most of the scattered light; A b ∞ is small, and as a denominator it causes instability of the result. As an alternative, t is calculated by Eq. (17).
As mentioned above, D w is much smaller than D b and can be neglected in Eq. (17). So after the calculation of I best and I worst from the captured polarization images, A b ∞ and A w ∞ from the background region, and D b from Eq. (14), the transmittance of the scattering media can be computed.
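Eq. (17) is not reproduced in this text; since both images are assumed to share the same transmittance, one reading consistent with the surrounding description (our reconstruction, not the authors' verbatim formula) combines the two channels to avoid the small denominator A_b∞:

```latex
t = 1 - \frac{(I_b - D_b) + (I_w - D_w)}{A_{b\infty} + A_{w\infty}}
  \;\approx\; 1 - \frac{I_b + I_w - D_b}{A_{b\infty} + A_{w\infty}} \quad (17)
```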
Obtain the object light (undegenerated) and fuse the two images. Now we can obtain the object light L b of the image I best , which has not been degraded, as L b = D b / t.
And the object light L w of the image I worst can be derived from Eq. (3) as L w = (I w − A w ) / t, with A w = A w ∞ (1 − t). The fused object light is then obtained as L = L b + L w . We have used the transmittance t to obtain the object light L. In the last step of our method, the normalized first component of the Stokes parameters, S 0 , is used for color restoration: L is multiplied by the gray value of S 0 to correct the intensity of each color channel.
Experimental results and discussion
We used a polarization camera (Lucid PHX050S, Canada) based on the technology of division of the focal plane, which is capable of capturing four polarization images at different angles (0°, 45°, 90°, 135°) at the same time and can be applied to a moving object. The polarization camera is shown in Fig. 3(a). The smallest periodical cell in the camera focal plane is shown in Fig. 3(b): a micro-polarizer array and a color filter array are in front of the sensor. The underwater experiments were arranged in a square glass container, and the background behind the glass container was arranged in black. An LED white light source was used for illumination. The intensity images (S 0 ), the results of dark channel prior dehazing, the results of DehazeNet 6 , the results of Schechner's method, the results of Ren's method 34 and the results of our proposed method are given in Fig. 4. Groups 1-4 are images of outdoor buildings and hills in foggy weather; the distance between the scene and the camera is within a few thousand meters. Groups 5-9 are indoor underwater experiments in milk solutions with different concentrations. The imaged objects were put in a transparent water tank made of glass, and a 5 W unpolarized white LED light was taken as the illumination source in front of the tank. In Fig. 4, in groups 1-4, the objects are buildings and hills in foggy weather: (a) is the intensity image, and (b)-(f) are the corresponding restored images using dark channel prior, DehazeNet, Schechner's method, Ren's method, and our method, respectively.
As we can see, in group 1, the building in the intensity image (a) is very blurry, while the buildings in images (b), (c) and (f) are clearer. For more details, we choose the same region (black frame) in images (a-f) and enlarge it in Fig. 5. Obviously, the proposed method obtains the clearest restored image with the most details: it causes less noise and restores more details in the image. The same results can be found in groups 1-3, even though the buildings in group 3 are much farther away from the observer than those in groups 1 and 2. Long-distance scattering in fog causes depolarization of the light, which makes restoration difficult for a method using a polarization prior. For comparison, images of the polarization degree (P = (S 1 2 + S 2 2 ) 0.5 /S 0 ) are displayed in Fig. 6. As shown in Fig. 6, the object light is polarized and the airlight is not, so Schechner's polarization assumption is not accurate here. Images (d) in groups 1-3 in Fig. 4 show that Schechner's method does not work very well. Our method does not depend on a specific state of polarization and is more universal. In group 4, hills are shrouded in fog. The color in image (f) is closest to the real scene: dark green, with little fog on the hill in front of the scene.
The concentration of the milk powder increases gradually from group 5 to 7 and from group 8 to 9. In groups 5 (10 mL milk in 6 L water), 6 (12.5 mL milk in 6 L water) and 7 (15 mL milk in 6 L water), from images (c) and (f) we find that the results of DehazeNet and of the proposed method are much better than the intensity image (S 0 ). A severe blue shift happens in image (b), while the true color of the image is kept by our method. By comparing restored images with different concentrations of milk, we found that Schechner's method is better than the intensity image (S 0 ) but not very effective with little polarized light. Especially in high-concentration scattering media, the degree of polarization is very small, but the proposed method still works well under such strong scattering conditions.
The histograms provide a more accurate description of the spectrum and details in an image. We calculated the histograms of the images in groups 5-7 in Fig. 4 and plotted them in Fig. 7. A wider histogram relates to an image with higher quality and more details. We find that the histograms of both DehazeNet and our proposed method in groups 5-7 are wider than the others in Fig. 7.
In order to observe the details, we choose the same region (black frame) and enlarge it in Fig. 8. In such strong scattering, the intensity image (a) in group 9 is very blurry and we cannot see any details in it. The restored images with Schechner's method and with Ren's method are better than the intensity image, but still blurry. The restored images with the dark channel method, with DehazeNet and with the proposed method are much better.
Next, we evaluate the experimental results by four objective evaluation indexes: contrast 1 , gray standard deviation 37 , average gradient 38 and information entropy 7 . Contrast reflects the difference between adjacent pixels in the image; the better image has larger contrast. Gray standard deviation reflects the degree to which the gray values deviate from the average value; the better image has a larger standard deviation. Average gradient reflects the clarity of details and edges in the image; the better image has a larger average gradient. Information entropy reflects the amount of information contained in an image; an image with larger information entropy contains more information.
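For reference, common forms of these four indexes are sketched below; formula variants differ across papers, so these definitions are assumptions rather than the exact expressions of refs. 1, 7, 37 and 38:

```python
import numpy as np

# Common definitions of the four evaluation indexes for a grayscale
# image with values in [0, 255]; variants exist, these are assumptions.
def gray_std(img):
    return float(img.std())

def average_gradient(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def information_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def michelson_contrast(img):
    lo, hi = float(img.min()), float(img.max())
    return (hi - lo) / (hi + lo + 1e-9)

img = np.random.default_rng(1).random((64, 64)) * 255
print(gray_std(img), average_gradient(img),
      information_entropy(img), michelson_contrast(img))
```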
In Table 1, the bold data indicate the maximum value in every group, and the percentages in brackets indicate the improvement of the proposed method compared with the intensity image. As can be seen from Table 1, compared with the intensity image, the contrast, gray standard deviation, average gradient and information entropy of the restored images are improved evidently. In general, compared with intensity images, the standard deviation and average gradient of images restored by the proposed method are enhanced by 50-100%, the information entropy is improved by 5-10%, and the contrast is enhanced by 100-300%. So far, our defogging algorithm has performance as good as DehazeNet, both in foggy weather and in underwater experiments. DehazeNet is a recently developed dehazing method using a convolutional neural network 6 . It has extremely high restoration accuracy, but requires network training based on a large number of precollected image libraries. Our method can defog a single capture directly, so it is more practical in application to various scattering environments.
Conclusion
In this article, we proposed an image restoration method based on the gray variance and average gradient assumption to overcome the difficulty of polarization parameter estimation under strong scattering conditions. This technique is performed with a single-shot polarization camera based on the technology of division of the focal plane. Experimental results show that the proposed method obtains clear restored images, and the same improvements can be found in the evaluation indexes for our method. Therefore, compared with previously reported methods, our method has the advantages of more restoration details, less noise, and more general applicability. It is independent of a specific polarization state in various scenes and remains effective even in a scene containing only little polarized light.
"Physics"
] |
“3D Counterpart Analysis”: A Novel Method for Enlow’s Counterpart Analysis on CBCT
The aim of this study was to propose a novel 3D Enlow's counterpart analysis traced on cone-beam computed tomography (CBCT) images. Eighteen CBCT images of skeletal Class I (ANB = 2° ± 2°) subjects (12 males and 6 females, aged from 9 to 19 years) with no history of previous orthodontic treatment were selected. For each subject, a 2D Enlow's counterpart analysis was performed on lateral cephalograms extracted from the CBCT images. The following structures were identified: mandibular ramus, middle cranial floor, maxillary skeletal arch, mandibular skeletal arch, maxillary dento-alveolar arch, mandibular dento-alveolar arch. The differences between each part and its relative counterpart obtained from the 2D analysis were then compared with those obtained from a 3D analysis traced on the CBCT images. A Student's t-test did not show any statistically significant difference between the 2D and 3D measurements. The landmarks proposed by this study identified the cranio-facial structures on the 3D images in a way that could be superimposed on those described by Enlow in his analysis performed on 2D lateral cephalograms.
Introduction
The study of lateral cephalograms is absolutely common in daily practice among orthodontists and oro-maxillofacial surgeons, and it is fundamental for a correct diagnosis as well as for drafting a correct treatment plan. Lateral radiographs can be used to evaluate dentofacial and craniofacial proportions and even growth changes. However, a cephalometric study traced on 2D images has intrinsic limitations due to magnification, geometric distortion and, above all, superimposition of anatomical structures. Three-dimensional (3D) diagnostic imaging can overcome these limitations of 2D imaging [1], even with a low dose of radiation absorption for the patient [2,3]. Three-dimensional imaging is now widespread in orthodontics, with devices ranging from intraoral scanners to radiology systems such as cone-beam computed tomography (CBCT) machines. Three-dimensional imaging has several advantages compared with a medical CT: lower radiation dose exposure, compact dimensions that allow devices to be installed in a common dental office, and no requirement for the patient to lie down for the scan.
The need to overcome the limits of 2D cephalometric analysis has led clinicians to develop different CBCT-based methods of 3D cephalometric analysis in the last few years. Swennen and Schutyser [4] described the potential of CBCT 3D cephalometry. Gateno et al. [5] proposed a new 3D cephalometric analysis for measuring the size, shape, position and orientation of different facial units and introduced a novel method to measure asymmetries. Farronato et al. [1] proposed a 10-point 3D analysis of CBCT images which showed that the small number of points to be selected drastically reduces human error as compared with the 2D Steiner cephalometric method. Furthermore, Farronato et al. [1] concluded that the information provided by the graphical representation facilitated orthodontic diagnosis, making it more reliable and repeatable. Perrotti et al. [6] proposed a novel 3D cephalometric analysis with relative normal values of reference.
Some authors have also proposed non-radiographic 3D methods. Juerchott et al. investigated the reliability of 3D cephalometric landmarks traced on magnetic resonance imaging (MRI) records [7]. Castillo et al. demonstrated that jaw relation and incisor orientation could be predicted from 3D photogrammetry measurements [8].
However, two recent systematic reviews by Pittayapat et al. [9] and Smektała et al. [10] concluded that 3D cephalometry still has limited evidence of diagnostic efficiency. Furthermore, the use of CBCT images in orthodontics is not widespread, due to the greater exposure to radiation as compared with teleradiographs, despite the existence of low-dose image acquisition protocols. Indeed, some studies have also proposed cephalometric analyses obtained with a reduced field of view (FOV) in order to combine the advantages of three-dimensional imaging with a further reduction in radiation exposure. A reduced FOV extending from the Frankfurt plane to the lowest point on the chin symphysis (Me), as compared with a large FOV, demonstrated the reliability of the measurements performed while reducing radiation exposure [11,12].
Several cephalometric methods have been published in the literature to analyze the cranio-mandibular structures on lateral cephalograms and, currently, even some automated landmark detection and tracing systems have been proposed [13].
Donald H. Enlow in his "counterpart analysis" [14] proposed a method to correlate vertical and horizontal dimensions that lead to a balanced or unbalanced model of the face. The counterpart analysis examined the shape, location, and development of each structural component within the framework of the cranio-mandible-cervical whole, comparing each structure with its corresponding counterpart using the "millimeter difference method", in which a difference of 0 to 2.5 mm between part and counterpart was considered harmonic and ideal.
In an era of increasingly digital orthodontics and three-dimensional orthodontic analysis and design, it is possible to overcome the difficulties of a two-dimensional cephalometric analysis and review the old cephalometric notions in light of modern technologies. Therefore, the aim is to increase the precision of the analysis without overlapping of anatomical structures or points constructed radiographically.
The purpose of this study was to present a new 3D Enlow's horizontal counterpart analysis that used anatomical points traced on CBCT images, in order to identify the different horizontal dimensions of Enlow's 2D counterpart analysis.
Materials and Methods
Eighteen CBCT images of skeletal Class I (ANB = 2° ± 2°) subjects (12 males and 6 females, aged from 9 to 19 years) with no history of previous orthodontic treatment were selected from the archives of the Unit of Orthodontics, Department of Innovative Technologies in Medicine and Dentistry, University of Chieti, Chieti, Italy. The radiographic images were obtained by using a low-dose CBCT machine (Vatechlpax 3D PCH-6500, Fort Lee, NJ, USA) and processed using the Ez3D Plus Software (Vatech, Global Fort Lee, NJ, USA). The scanning procedure has been previously described by Moscagiuri et al. [15]. For each subject, lateral cephalograms were extracted from the CBCT image and evaluated with Enlow's counterpart analysis [16,17] by using the OrisCeph3 (OrisLine; Elite Computer Italia S.r.l., Milano, Italy) software, and the CBCT scans were evaluated with a 3D cephalometric analysis by using the Materialise Mimics software (Materialise NV, Leuven, Belgium). Enlow's horizontal counterpart analysis was performed on both the lateral cephalograms and the CBCT images.
2D Horizontal Counterpart Analysis
Enlow's counterpart analysis was performed on lateral cephalograms, identifying the landmarks described in Table 1 (Table 1: Landmarks and lines identified in the simplified 2D Enlow's analysis).
3D Horizontal Counterpart Analysis
Before tracing, the CBCT images were orientated according to three reference planes identified by selecting three points on the most external slice of the CBCT scan, respectively, for every single view (coronal, axial, and sagittal), as previously done by Perrotti et al. [6] (Figure 4). Planes identified during the 3D cephalometric analysis were traced according to the following geometric rules of plane construction:
• Planes passing through 3 points;
• Planes passing through 2 points normal to a plane;
• Planes passing through 1 point parallel to a plane.
By using these planes, the distances between a point and a plane were automatically measured by the software with the function "measure and analyze" from the software's tool menu.
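To make the plane rules concrete, here is a minimal sketch, with purely illustrative coordinates, of constructing a plane from three landmarks and measuring a point-to-plane distance in the way the software's "measure and analyze" function reports it:

```python
import numpy as np

# Build a plane from three landmark points and measure the perpendicular
# distance from a fourth point to that plane.
def plane_from_points(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # plane normal
    return n / np.linalg.norm(n), p1    # unit normal and a point on the plane

def point_plane_distance(q, normal, point_on_plane):
    return abs(np.dot(np.asarray(q) - point_on_plane, normal))

# Illustrative example: three points spanning the y = 0 plane; the distance
# from a landmark at y = 5 reduces to its y-coordinate. The same formula
# covers oblique planes such as one through the two lingual tuberosities.
n, p0 = plane_from_points([0, 0, 0], [1, 0, 0], [0, 0, 1])
print(point_plane_distance([2.0, 5.0, 1.0], n, p0))  # -> 5.0
```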
The landmark coordinates used in the 3D analysis are explained in Table 2.
The anatomical lingual tuberosity (Figure 5) was located on the 3D images according to Enlow's references in the book "Essentials of Facial Growth" [14] and anatomical references from the books "Gray's Anatomy for Students" [17] and "Sicher's Oral Anatomy" [18]. As Enlow explained, the lingual tuberosity represents "the effective boundary between the two basic part of the mandible: the ramus and corpus. The inaccessibility of the lingual tuberosity for routine cephalometric study is a great loss" [14]. This is due to the overlapping by the ramus in the lateral headfilm, because the lingual tuberosity lies toward the midline from the ramus. The book "Gray's Anatomy for Students" defines the area posterior to the last molar as the "retromolar triangle" [17]; according to "Sicher's Oral Anatomy", this triangle has "the medial and the lateral margins continuous with the alveolar bone, respectively, on the buccal and the lingual side, of the last molar" [18]. Based on these anatomic references, the position of the lingual tuberosity was defined, and the two lingual tuberosity points define a plane that separates the mandible into two portions: the anterior representing the mandibular corpus and the posterior representing the mandibular ramus.
Maxillary and Mandibular skeletal arches
o The maxillary skeletal arch (Figure 6) was measured from point A to a plane passing through the PNS and parallel to the coronal plane.
o The mandibular skeletal arch (Figure 7) was measured from point B to a plane passing through the two lingual tuberosity points and normal to the axial plane.
Maxillary and mandibular dento-alveolar arches
o The maxillary dento-alveolar arch (Figure 8) was measured from point SPr to a plane passing through the PNS and parallel to the coronal plane.
o The mandibular dento-alveolar arch (Figure 9) was measured from point IPr to a plane passing through the two lingual tuberosity points and normal to the axial plane.
Middle cranial floor and mandibular ramus
o The middle cranial floor (Figure 10) was determined by the distance from Ba to a plane passing through the two anterior clinoid processes and normal to the axial plane. The anterior clinoid processes were chosen as they represent the most posterior point of the anterior cranial floor and the most anterior point of the middle cranial floor.
o The mandibular ramus was measured from the right and left condylion to a plane passing through the right and left lingual tuberosity points and normal to the axial plane (Figure 10).
Statistical Analysis
To validate the proposed landmarks for the "3D horizontal counterpart analysis" (Figure 11), the average measures obtained from the discrepancy between each part and its counterpart on the 2D and 3D images were compared, to test the null hypothesis of no statistically significant difference between the analysis methods used on the 2D and the 3D images. To evaluate the discrepancy between the middle cranial fossa and the mandibular ramus on the 3D images, the values obtained from the measurement of the right ramus were conventionally chosen.
Statistical analysis was performed using the Prism-GraphPad software (GraphPad Software, LLC, San Diego, CA, USA). The Kolmogorov-Smirnov normality test was applied to each variable to check whether the data were normally distributed. Since the data were normally distributed, the paired Student's t-test was performed. The level of statistical significance was set at 0.05.
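A minimal sketch of this statistical pipeline, using SciPy and synthetic discrepancy values standing in for the study data (the real values come from Table 3):

```python
import numpy as np
from scipy import stats

# Synthetic paired discrepancies (mm) standing in for the n = 18 scans.
rng = np.random.default_rng(42)
disc_2d = rng.normal(1.2, 0.8, 18)            # 2D part-counterpart discrepancies
disc_3d = disc_2d + rng.normal(0.0, 0.3, 18)  # 3D discrepancies, same subjects

# Kolmogorov-Smirnov normality check on each standardized variable.
for label, x in (("2D", disc_2d), ("3D", disc_3d)):
    z = (x - x.mean()) / x.std(ddof=1)
    stat, p = stats.kstest(z, "norm")
    print(f"{label}: KS statistic = {stat:.3f}, p = {p:.3f}")

# Paired Student's t-test at the 0.05 significance level.
t, p = stats.ttest_rel(disc_2d, disc_3d)
verdict = "no significant difference" if p > 0.05 else "significant difference"
print(f"paired t = {t:.3f}, p = {p:.3f} -> {verdict}")
```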
Results
A total of eighteen cephalograms were extracted from the CBCT images and evaluated. Enlow's 2D counterpart analysis and the 3D analysis were traced on lateral cephalograms extracted from CBCT and on the CBCT images, respectively.
Horizontal Counterpart Analysis
The discrepancy values between the three variables according to Enlow's counterpart analysis were calculated. Data were obtained by subtraction between parts and counterparts according to Enlow's analysis. Table 3 shows the means and standard deviations of each discrepancy, first in 2D and then in 3D: maxillary and mandibular skeletal arch, maxillary and mandibular dento-alveolar arch, and middle cranial floor and mandibular ramus. Table 3. Comparing discrepancies of the horizontal counterpart analysis between 2D and 3D parameters. After the Kolmogorov-Smirnov test verified that the sample was normally distributed, the paired Student's t-test was performed.
The gap between the discrepancy values of each variable was small and no statistically significant difference was highlighted by the paired Student's t-test (p > 0.05), confirming the null hypothesis and supporting the validity of the proposed new method for 3D horizontal counterpart analysis.
Discussion
The aim of this study was to present a 3D cephalometric analysis based on the craniofacial growth theories and the related cephalometric analysis described by D.H. Enlow [14]. The idea of performing a 3D cephalometric analysis stems from the possibility for modern orthodontists to obtain three-dimensional images of maxillofacial structures through technologies and protocols, such as low-dose CBCT, that significantly reduce the patient's radiation exposure [2,3,19]. Given the lack of standardized protocols in 3D cephalometry, this preliminary study aims to provide an easily reproducible method of analysis that can contribute to a 3D orthodontic diagnosis.
One of the main advantages of using CBCT images to run a cephalometric analysis is the possibility of avoiding splits or superimpositions of the images and of bilaterally evaluating the symmetrical structures. For instance, 3D imaging can be used to identify the anatomical point describing the lingual tuberosity, considered to be the connection point between the body and the mandibular ramus, an area that is not visible in common teleradiographs as it is covered by the mandibular ramus.
Moreover, our cephalometric analysis was performed solely with anatomical points, without radiographically constructed points, which are the most frequently misidentified.
All the measurements of the 3D analysis of the horizontal counterparts were obtained as a distance between a point and a plane. This method represents a different approach from those described in the literature, where the length of maxillofacial structures is usually measured as a distance between two or three points. In our analysis, the ability to use planes provides several benefits. The first is the possibility of dividing the skull into different portions. Indeed, the presence of planes allows us to divide the mandible into mandibular body and mandibular ramus and, consequently, to measure these two portions separately; it also allows us to identify a repeatable division zone at the level of the skull base that separates the middle cranial floor from the anterior cranial floor. The second advantage of using planes is the possibility of standardizing the measurement between points that do not lie on the same plane, for instance, in the evaluation of the mandibular ramus, where the condylion and the lingual tuberosity points are located at different heights. Finally, the method proposed in this study to construct the reference planes seems to be easier and less operator-dependent compared with other methods described in the literature [1,5,20-23]. Selecting three points outside the skull to orientate the images could also avoid the displacement of these landmarks due to local bone remodeling [6]. Furthermore, the lower number of points to select and the greater ease in their identification make the 3D analysis more accurate, with a lower risk of operator-dependent errors.
Moreover, the transition from 2D to 3D cephalometry generates a large amount of data that can appear complex to clinicians. One of the approaches described to prevent this "information overload" is to create 3D average hard and soft tissue models, but attempts to generate accurate average 3D skull models remain challenging because of the complex anatomy [24]. This problem can be overcome with our analysis, since each subject is compared with themselves and not with average values from the population, reducing the amount of information to combine for the diagnosis.
The results obtained from the statistical analysis showed no statistically significant differences between the 2D and 3D measurements obtained by the difference between each part and its relative counterpart, as described by D.H. Enlow [16]. The landmarks proposed by this study identified craniofacial structures on 3D images in a way that could be superimposed on those described by Enlow in his analysis performed on 2D lateral cephalograms. Due to the anatomical positions of the points identified in this 3D analysis, a large FOV is necessary and a reduced FOV cannot be used. With current radiographic acquisition protocols, this analysis may not be usable in routine orthodontic diagnosis due to radiation exposure. It could be useful for more complex cases, situations where teleradiographs could show superimpositions (i.e., skeletal asymmetries), patients who will undergo orthognathic surgery, or cases where CBCT is already prescribed for other clinical reasons (i.e., dental inclusions).
Another limitation is that the sample investigated in our study is composed exclusively of skeletal Class I subjects. Future investigations should verify the reproducibility of the points proposed in this analysis in subjects with different sagittal and vertical skeletal relationships.
Future investigations could be performed on the basis of this preliminary study to identify all the landmarks of the neutral track and vertical counterpart analysis proposed by D.H. Enlow and to reach a complete 3D counterpart analysis.
Conclusions
The landmarks proposed by this study to identify on 3D images the mandibular ramus, middle cranial floor, maxillary skeletal arch, mandibular skeletal arch, maxillary dento-alveolar arch, and mandibular dento-alveolar arch are valid and superimposable on those described by Enlow in his analysis performed on 2D lateral cephalograms.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki. The study did not require Ethics Committee approval as it was conducted retrospectively on radiographic reports saved in the radiology unit archives of the dental clinic at the University "G. d'Annunzio" of Chieti-Pescara.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,096 | 2022-10-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Echoes of 2HDM inflation at the collider experiments
We study the correlation between the constraints on the general two Higgs doublet model from Higgs inflation and from collider experiments. The parameter space receives meaningful constraints from direct searches at the Large Hadron Collider and from flavor physics if $m_H$, $m_A$, and $m_{H^\pm}$ are in the sub-TeV range, where $H$, $A$, and $H^\pm$ are the CP even, CP odd, and charged Higgs bosons, respectively. We find that in the parameter region favored by Higgs inflation, $H$, $A$, and $H^\pm$ are nearly degenerate in mass. We show that such near degeneracy can be probed directly in the upcoming runs of the Large Hadron Collider, while future lepton colliders such as the International Linear Collider and the Future Circular Collider would provide complementary probes.
I. INTRODUCTION
The cosmic inflation [1][2][3] in the early universe is a well established paradigm which can successfully explain the horizon, flatness, and exotic-relics problems, and can provide the initial condition for the hot big bang via the reheating process in the early Universe [4]. The slow-roll inflation [5][6][7] can seed the primordial density fluctuations [8,9] which eventually evolve into the large scale structure that we observe today in cosmic microwave background (CMB) anisotropies [10]. Despite its prevalent success, the underlying mechanism behind the inflationary dynamics still remains unknown. In the simplest inflationary scenario a slowly rolling scalar field (inflaton) can account for the nearly scale-invariant density fluctuation observed in the CMB. In the Standard Model (SM), the only available scalar field is the Higgs boson, which has a quartic potential. However, it alone, when used in chaotic inflation, cannot support the observed scalar spectral index and tensor-to-scalar ratio [10].
The Higgs inflation [11][12][13][14][15][16][17][18][19] is one of the best fit models to the CMB data, and is testable due to its connection to the Higgs physics at the Large Hadron Collider (LHC) and beyond. In the SM Higgs inflation, the Higgs doublet Φ is assumed to couple with gravity via the Ricci scalar R as $\xi \Phi^\dagger \Phi R$, where ξ is a dimensionless nonminimal coupling of order $10^4$-$10^5$. Successful Higgs inflation requires the stability of the Higgs potential up to at least $M_P/\xi$. Even if we demand stability up to $M_P$, the required upper bound on the pole mass of the top quark is $m_t^{\rm pole} \lesssim 171.4$ GeV [21], which is consistent within 1.4σ with the current value $172.4 \pm 0.7$ GeV [22]. The Higgs inflation is also possible in models with an additional Higgs doublet. After the discovery of the Higgs boson h of mass 125 GeV [20], it is conceivable that the Higgs field has an extra generation, since all the known fermions in the SM come in more than one generation. The general two Higgs doublet model (g2HDM) is one of the simplest renormalizable extensions of the SM, where the scalar sector (Φ) is extended by one extra doublet (Φ′). The g2HDM would share the same virtue of being one of the best fit inflationary models to account for the CMB data if one has sufficiently large nonminimal couplings to Φ and/or Φ′.
In this article we study the possibility of slow-roll inflation with nonminimal Higgs couplings in the general two Higgs doublet model and its implications at the collider experiments. In general we have three nonminimal couplings between the Higgs fields and the Ricci scalar in g2HDM. As a first step, we study two different scenarios in this article. In Scenario-I we switch on only the nonminimal coupling of Φ, while in Scenario-II we switch on only that of Φ′. In both scenarios we find parameter space for inflation satisfying all observational constraints from Planck 2018 [10].
Without the presence of a discrete symmetry in g2HDM, at tree level both scalar doublets couple to both the up- and down-type fermions. After diagonalizing the fermion mass matrices, two independent Yukawa coupling matrices $\lambda^F_{ij}$ and $\rho^F_{ij}$ emerge, where F denotes leptons (L), up-type quarks (U), and down-type quarks (D). The $\lambda^F_{ij}$ matrices are real and diagonal and responsible for the mass generation of the fermions, while the $\rho^F_{ij}$ are in general complex and non-diagonal matrices. The parameter space for inflation receives constraints from several direct and indirect searches, in particular from the LHC and Belle experiments. We show that the extra Yukawa couplings $\rho^U_{tt}$ and $\rho^U_{tc}$ can provide a unique test of the parameter space for inflation at the LHC. Discoveries are possible at the LHC or future lepton colliders such as the International Linear Collider (ILC) and the Future Circular Collider (FCC-ee), depending on the magnitude of the extra Yukawa couplings $\rho^U_{tt}$ and $\rho^U_{tc}$. We also show that $B_s$ and $B_d$ mixing data as well as future measurements of B meson decay observables would provide a sensitive probe of the inflationary parameter space.
In the following we outline the g2HDM framework in Sec. II, followed by the formalism for inflation in Sec. III. The scanning and the parameter space for inflation are summarized in Sec. IV, and direct and indirect constraints are discussed in Sec. V. We discuss our results with some outlook in Sec. VI.
II. MODEL FRAMEWORK
Here we outline the framework of the g2HDM following the notation of Refs. [31,32]. In the Higgs basis, the most general two Higgs doublet potential can be written as [32,33]

$$V(\Phi, \Phi') = \mu_{11}^2 |\Phi|^2 + \mu_{22}^2 |\Phi'|^2 - \left(\mu_{12}^2\, \Phi^\dagger \Phi' + {\rm h.c.}\right) + \tfrac{1}{2}\eta_1 |\Phi|^4 + \tfrac{1}{2}\eta_2 |\Phi'|^4 + \eta_3 |\Phi|^2 |\Phi'|^2 + \eta_4 |\Phi^\dagger \Phi'|^2 + \left[\tfrac{1}{2}\eta_5 (\Phi^\dagger \Phi')^2 + \left(\eta_6 |\Phi|^2 + \eta_7 |\Phi'|^2\right)\Phi^\dagger \Phi' + {\rm h.c.}\right], \quad (1)$$

where the vacuum expectation value v arises from the doublet Φ via the minimization condition that fixes $\mu_{11}^2$, and the $\eta_i$'s are quartic couplings. A second minimization condition, $\mu_{12}^2 = \tfrac{1}{2}\eta_6 v^2$, removes $\mu_{12}^2$, and the total number of parameters is reduced to nine. For the sake of simplicity, we assume a CP-conserving Higgs sector. The mixing angle γ between the CP even scalars h and H satisfies relations that follow from diagonalizing the mass-squared matrix. The alignment limit corresponds to $c_\gamma \to 0$ with $s_\gamma \to -1$, where we use the shorthand $c_\gamma = \cos\gamma$ and $s_\gamma = \sin\gamma$. The current LHC data suggest [34] that $c_\gamma$ is small, i.e. the so-called approximate alignment [31]. The physical scalar masses can be expressed in terms of the parameters in Eq. (1). We then express the quartic couplings $\eta_1$, $\eta_{3-6}$ in terms of the physical masses and $\mu_{22}^2$ (all normalized to v), and the mixing angle γ [31,33]. The quartic couplings $\eta_2$ and $\eta_7$ enter neither the scalar masses nor the mixing angle γ. Therefore in our analysis we take v, $m_h$, γ, $m_A$, $m_H$, $m_{H^\pm}$, $\mu_{22}$, $\eta_2$, and $\eta_7$ as the nine phenomenological parameters. The scalars h, H, A, and $H^\pm$ couple to fermions via the Yukawa interactions of Refs. [32,33] (Eq. (11)), where $P_{L,R} \equiv (1 \mp \gamma_5)/2$, $i, j = 1, 2, 3$ are generation indices, V is the Cabibbo-Kobayashi-Maskawa matrix, and $U = (u, c, t)$, $D = (d, s, b)$, $L = (e, \mu, \tau)$, and $\nu = (\nu_e, \nu_\mu, \nu_\tau)$ are vectors in flavor space. The matrices $\lambda^F_{ij}\,(= \sqrt{2}\, m^F_i/v)$ are real and diagonal, whereas the $\rho^F_{ij}$ are in general complex and non-diagonal. In the following we drop the superscript F. For simplicity, we assume all $\rho_{ij}$ are real in our analysis. It is likely that the $\rho_{ij}$ follow a similar flavor organizing principle as in the SM, i.e. $\rho_{ii} \sim \lambda_i$ with suppressed off-diagonal elements of the $\rho_{ij}$ matrices [31]. Therefore $\rho_{tt} \sim \lambda_t$, $\rho_{bb} \sim \lambda_b$, etc., while, as we show below, the flavor changing neutral Higgs coupling $\rho_{tc}$ could still be large. In the following, for simplicity, we assume $\lambda_t$, $\rho_{tt}$, and $\rho_{tc}$ to be nonzero and set all other $\lambda_i$ and $\rho_{ij}$ couplings to zero; their impact will be discussed in the later part of the paper.
For the inflationary dynamics we chose $m_H$, $m_A$, and $m_{H^\pm}$ between 200 and 800 GeV. This is primarily because of our aim to find signatures at the collider experiments, in particular at the LHC. In general lighter masses are possible; however, they would be subject to severe bounds from flavor physics as well as direct searches. We remark that heavier masses are also possible for the inflationary dynamics. The potential for discovery or probing, however, becomes limited for heavier masses due to the rapid fall in the parton luminosity. Thus we focus on the sub-TeV mass range and restrict ourselves below 800 GeV. As discussed earlier, it is likely that $\rho_{ii} \sim \lambda_i$. However, as we shall see below, for the bulk of the 200-800 GeV mass range $\rho_{tt} = \lambda_t$ is excluded by various direct and indirect searches. We therefore set $\rho_{tt} = 0.5$ at the low scale. Furthermore, we take $\rho_{tc} = 0.2$, which is still allowed by current data and can have exquisite signatures at the LHC.
III. INFLATIONARY DYNAMICS
To study the inflationary dynamics we first write down the action in the Jordan frame, where $\xi_{11}$, $\xi_{22}$, and $\xi_{12}$ are dimensionless nonminimal couplings; $g^{\mu\nu}$ and g are the inverse and determinant of the metric, respectively; and $M_P$ is the reduced Planck mass ($\approx 2.4 \times 10^{18}$ GeV), which we set to $M_P = 1$ in what follows. The action in Eq. (12) can then be written in the Einstein frame. For the inflationary dynamics we choose the Higgs fields in the electromagnetism-preserving direction. The Einstein-frame action in terms of the fields $\phi_I = \{\rho_1, \rho_2, \chi\}$ involves the field-space metric $S_{IJ} = \delta_{IJ}/F + 3k\, F_I F_J/(2F^2)$ with $F_I = \partial F/\partial\phi_I$; $k = 1$ and $0$ correspond to the metric and Palatini formulations, respectively. The potential $V_E(\phi_I)$ can be written in terms of $c_\chi = \cos\chi$ and $c_{2\chi} = \cos 2\chi$, where we have only taken into account the quartic terms of the Jordan-frame potential V, discarding the quadratic terms, as we are interested in the inflaton dynamics for very large field values. The $\tilde\eta_i$'s denote the quartic couplings in Eq. (1) at the inflationary scale.
As we will see below, one nonminimal coupling is sufficient to account for all the observational constraints on the Higgs inflation. Therefore in the following we turn on only one nonminimal coupling at a time. In particular we primarily focus on the scenarios where either $\xi_{11}$ or $\xi_{22}$ is nonzero, while $\xi_{12} = 0$ throughout, and denote them as Scenario-I and Scenario-II, respectively. The impact of nonzero $\xi_{12}$ will be briefly discussed in the latter part of the paper.
As in the previous section, we minimize the ϕ-independent part of the potential (23) numerically. Again, there exist three sets of minima: $c_\chi = \pm 1$, $c_\chi = 1$, and $c_\chi = -1$. After stabilizing the potential at the minimum $(\rho_0, c_{\chi 0})$, the potential for single-field Higgs inflation is obtained, where $\eta_{\rm eff}$ is calculated at the minimum $(\rho_0, c_{\chi 0})$. As before, this is required to be positive for positive potential energy during inflation.
C. Kinetic mixing
If there exists kinetic mixing, the heavy state needs to be integrated out during inflation to obtain an effective theory [35][36][37][38], such that the ϕ-independent parts of Eq. (19) or Eq. (23) induce slow-roll inflation for the light state (ϕ) while the mass of the heavy state is exponentially suppressed. Let us elaborate on this.
The kinetic terms of the Lagrangian are not canonically normalized, as is clear from Eq. (27): there exists kinetic mixing between ϕ and ρ. To find canonically normalized kinetic terms, we closely follow the prescription laid out in Ref. [25]. For a finite value of the Higgs ratio ρ, we consider a perturbation around the minimum $\rho_0$ as $\rho = \rho_0 + \tilde\rho$, and the kinetic terms of ϕ and $\tilde\rho$ can be rewritten accordingly. The potentials of Eq. (21) and Eq. (25) can be expanded around the minima, with the curvatures controlled by the quantities $A_I$ in Scenario-I and $A_{II}$ in Scenario-II. Both $A_I$ and $A_{II}$ are required to be positive; this is an additional requirement on top of the conditions for potential minimization described earlier. We can now diagonalize the kinetic terms via a rotation by an angle θ. The eigenvalues of the kinetic terms can be identified as $\lambda_\pm$, while the potential can be re-expressed in terms of the new variables. By the further field redefinitions $\bar\phi = \sqrt{\lambda_+}\,\phi$ and $\bar\rho = \sqrt{\lambda_-}\,\tilde\rho$, the kinetic terms become canonically normalized. After diagonalizing the resulting mass matrix for $\bar\phi$ and $\bar\rho$ we get the two eigenvalues $m^2_{\rm light} = 0$ and $m^2_{\rm heavy} = A\left(\sin^2\theta/\lambda_+ + \cos^2\theta/\lambda_-\right)$.
The ϕ-dependent part of Eq. (21) or Eq. (25) induces slow-roll inflation for the massless mode $m_{\rm light}$, while the $m_{\rm heavy}$ mode is exponentially suppressed. In Scenario-I (Scenario-II), for large $\xi_{11}$ ($\xi_{22}$), the mass of the heavy state becomes $m^2_{\rm heavy} \sim A_I/\xi_{11}$ ($\sim A_{II}\,\rho_0^4/\xi_{22}$). This is much larger than the Hubble parameter $H^2 \sim \eta_{\rm eff}/\xi_{11}^2$ ($\sim \eta_{\rm eff}/\xi_{22}^2$), and the heavy state can be integrated out. To find the parameter space for inflation, along with all the aforementioned conditions, for both scenarios we additionally demanded $m^2_{\rm heavy} > H^2$ in our numerical analysis.
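The following toy numerical sketch mirrors this procedure for an illustrative 2×2 system: the kinetic matrix is diagonalized and rescaled to canonical form, after which the mass matrix yields one massless and one heavy eigenstate. The matrices are illustrative, not the actual g2HDM expressions.

```python
import numpy as np

# Toy 2x2 example of the canonical-normalization step described above.
K = np.array([[1.0, 0.3],
              [0.3, 0.5]])          # kinetic matrix with off-diagonal mixing
M = np.array([[0.0, 0.0],
              [0.0, 2.0]])          # mass matrix in the original field basis

k, R = np.linalg.eigh(K)            # K = R diag(k) R^T, eigenvalues lambda_+-
A = R @ np.diag(1.0 / np.sqrt(k))   # fields phi = A chi give canonical kinetic terms

m2 = np.linalg.eigvalsh(A.T @ M @ A)
print("canonical check:", np.allclose(A.T @ K @ A, np.eye(2)))
print("mass-squared eigenvalues:", m2)   # one massless (inflaton), one heavy
```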
A. Inflationary observables
Let us spell out our notation for the basic quantities. The dimensionless slow-roll parameters $\epsilon_\phi$ and $\eta_\phi$, which measure the slope and curvature of the potential, are defined in the standard way. The quantities $n_s = 1 + 2\eta_\phi - 6\epsilon_\phi$ and $n_t = -2\epsilon_\phi$ are the scalar and tensor spectral indices, respectively, while $A_s = V/(24\pi^2 \epsilon_\phi)$ and $A_t = 2V/(3\pi^2)$ are the scalar and tensor amplitudes, respectively, to first order in the slow-roll approximation.
B. Observational Constraints on Inflation
For a consistent inflationary model, the observational constraints from the Planck 2018 results [10] must be satisfied by $A_s^*$, $n_s^*$, and $r_\phi^*$: the scalar amplitude, the scalar spectral index, and the tensor-to-scalar ratio, respectively, evaluated at $\phi = \phi_*$. The value of $\phi_*$ is obtained by solving for the number of e-foldings N, where $\phi_*$ corresponds to the value of the inflaton field when the number of e-foldings is N = 60, and $\phi_{\rm end}$ denotes the end of the slow-roll approximation, defined by $\epsilon_\phi(\phi_{\rm end}) := 1$. If we approximate $\phi_{\rm end} = 0$, from Eq. (41) one finds $\phi_*$, while $A_s^*$, $n_s^*$, and $r_\phi^*$ follow with $\xi = \xi_{11}$ or $\xi_{22}$. Solving Eq. (42) for N = 60 we find $\phi_* \approx 5.45$. Correspondingly, $n_s^* \approx 0.9675$ and $r_\phi^* \approx 3.03 \times 10^{-3}$, which are within the limits obtained by Planck 2018 [10], as can be seen from Eq. (39) and Eq. (40). Moreover, the scalar amplitude $A_s^*$ of Eq. (43) needs to satisfy the constraint in Eq. (38).
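As a cross-check, these numbers can be reproduced numerically assuming the standard large-ξ attractor form of the Einstein-frame potential, $V(\phi) \propto (1 - e^{-\sqrt{2/3}\,\phi})^2$ in $M_P = 1$ units; this is a sketch under that assumption, not the full two-field computation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a = np.sqrt(2.0 / 3.0)  # M_P = 1 units

# Einstein-frame potential of large-xi Higgs inflation (attractor form).
V = lambda p: (1.0 - np.exp(-a * p)) ** 2
dV = lambda p: 2.0 * a * np.exp(-a * p) * (1.0 - np.exp(-a * p))
d2V = lambda p: 2.0 * a**2 * np.exp(-a * p) * (2.0 * np.exp(-a * p) - 1.0)

eps = lambda p: 0.5 * (dV(p) / V(p)) ** 2   # slow-roll epsilon
eta = lambda p: d2V(p) / V(p)               # slow-roll eta

phi_end = brentq(lambda p: eps(p) - 1.0, 0.1, 5.0)   # eps(phi_end) = 1

def efolds(phi_star):
    """Number of e-folds between phi_star and phi_end in slow roll."""
    return quad(lambda p: V(p) / dV(p), phi_end, phi_star)[0]

phi_star = brentq(lambda p: efolds(p) - 60.0, phi_end + 0.1, 20.0)
n_s = 1.0 + 2.0 * eta(phi_star) - 6.0 * eps(phi_star)
r = 16.0 * eps(phi_star)
print(f"phi* = {phi_star:.2f}, n_s = {n_s:.4f}, r = {r:.2e}")
```

Running this gives $\phi_* \approx 5.4$-$5.5$, $n_s \approx 0.967$, and $r \approx 3 \times 10^{-3}$, in line with the values quoted above.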
C. Scanning and parameter space
At the low scale ($\mu = m_W$), the dynamical parameters in Eq. (1) need to satisfy the unitarity, perturbativity, and positivity constraints, for which we utilized 2HDMC [39]. To save computation time we generated the input parameters within the ranges discussed above, with $m_h = 125$ GeV. We call these parameter points and fed them into 2HDMC for scanning in the Higgs basis. The input parameters in 2HDMC [39] are $\Lambda_{1-7}$ and $m_{H^\pm}$ in the Higgs basis, with v being an implicit parameter, and we identify $\Lambda_{1-7}$ with $\eta_{1-7}$. To match the convention of 2HDMC, we take $-\pi/2 \leq \gamma \leq \pi/2$. For more details on the convention and parameter counting we redirect readers to Refs. [32,40]. One also has to consider the oblique T parameter constraint [41], which restricts the hierarchical structure of the scalar masses [42,43], and therefore the $\eta_i$'s. We utilize the expression given in Ref. [43]. The parameter points that passed the unitarity, perturbativity, and positivity conditions from 2HDMC are further required to satisfy the T parameter constraint within the 2σ error [44].
We shall see shortly that $\eta_{\rm eff} \lesssim 1$ is favored by the inflationary constraints, which implies that $\xi_{11}$ and $\xi_{22}$ should be $\mathcal{O}(10^4)$ to generate the observed spectrum of CMB density perturbations. On the other hand, unitarity is broken at momentum scales $\mu \gtrsim M_P/\xi_{11}$ ($M_P/\xi_{22}$) for scattering around the electroweak vacuum in Scenario-I (Scenario-II), and one might expect that higher dimensional operators are suppressed only by $M_P/\xi_{11}$ ($M_P/\xi_{22}$) rather than by $M_P$. We may either assume that the coefficients of the higher dimensional operators have extra suppressions, or introduce additional scalars at the inflationary scale to restore unitarity, as discussed in Refs. [23][24][25]. The RGE above the unitarity scale depends on the ultra-violet (UV) completion of the model [45], and we only perform the RGE computation of the parameters of g2HDM up to the unitarity scale. As for the unitarity scale, we take $y \approx 26$, corresponding to the scale $\mu = 1.6 \times 10^{13}$ GeV with $y = \ln(\mu/m_W)$, such that unitarity is maintained for the ballpark nonminimal couplings $\mathcal{O}(10^4)$ and $\eta_{\rm eff} \sim 1$. Later we also call this scale the high scale. The high scale parameters are denoted with a tilde in order to differentiate them from the corresponding low scale parameters in Eq. (1) and Eq. (11).
For the RGE of the parameters in Eq. (1) as well as $\rho^F$ and $\lambda^F$ in Eq. (11), from the low scale y = 0 to the high scale y = 26, we utilized the $\beta_x$ functions ($\beta_x = \partial x/\partial y$) for g2HDM given in Ref. [46]. The parameter points that survive the low scale constraints from unitarity, perturbativity, positivity, and the T parameter are entered in the RG equations. At the high scale we check perturbativity (i.e. couplings being within $[-\sqrt{4\pi}, \sqrt{4\pi}]$) for the $\tilde\lambda_i$'s, $\tilde\rho_{ij}$'s, and $|\tilde\eta_i|$, as well as positivity $\tilde\eta_{1,2} > 0$. We found that parameter points with $|\eta_i| > 1$ at the low scale generally get excluded after imposing the perturbativity and positivity criteria at the high scale. Therefore, with limited computational facility, to save time while generating parameters at the low scale we more conservatively demanded $|\eta_i| \leq 1$. At the high scale, for each parameter point, we require all the necessary conditions described in the previous section to be satisfied, such as $\eta_{\rm eff}$ and the potential energy $V_0$ being positive at the potential minima. Finally, the points need to satisfy the inflationary constraints of Eqs. (38)-(40). In Scenario-I, we find that $\eta_{\rm eff} \lesssim 3.5$ with $\xi_{11} \lesssim 5 \times 10^4$, as can be seen from the left panel of Fig. 1. The scanned points are mostly concentrated around $\xi_{11} \sim 10^4$. This can be understood easily from Eq. (38): for $\phi_* \approx 5.45$ and $\eta_{\rm eff} \lesssim 3.5$ one finds $\xi_{11}$ to be $\mathcal{O}(10^4)$. A similar pattern is found for Scenario-II. The corresponding minima $(\rho_0, c_{\chi 0})$ in Scenario-I (Scenario-II) are found to be in the range $0 \lesssim |\rho_0| \lesssim 1$ ($1 \lesssim |\rho_0| \lesssim 100$), while $c_{\chi 0}$ is either 1 or −1, as can be seen in Fig. 2. Note that there exist no minima for $-1 < c_{\chi 0} < 1$ in either scenario. In most cases $\rho_0$ is found to be complex for $-1 < c_{\chi 0} < 1$. Indeed, there exist some real ρ and $c_\chi$ that solve $\partial V/\partial\rho = 0$ and $\partial V/\partial c_\chi = 0$ simultaneously; however, the determinant and/or the trace of the covariant matrix $X_{ij}$ are found to be not positive in such cases.
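The qualitative effect of the $|\eta_i| \leq 1$ requirement can be seen in a schematic one-coupling toy running; the β-function and its coefficient below are illustrative stand-ins for the full coupled g2HDM β-functions of Ref. [46].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic one-coupling running from y = 0 (mu = m_W) to y = 26
# (mu ~ 1.6e13 GeV); beta ~ c*eta^2 mimics the quartic self-coupling growth.
def beta(y, eta, c=0.02):
    return c * eta**2

for eta0 in (0.5, 1.0, 1.5):
    sol = solve_ivp(beta, (0.0, 26.0), [eta0], rtol=1e-8)
    eta_high = sol.y[0, -1]
    ok = abs(eta_high) < np.sqrt(4.0 * np.pi)   # perturbativity window
    print(f"eta(0) = {eta0:.1f} -> eta(26) = {eta_high:.2f}, perturbative: {ok}")
```

With this toy coefficient, couplings starting at or below unity stay inside the perturbativity window at y = 26, while larger starting values run out of it, mirroring the exclusion pattern described above.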
Let us take a closer look at Fig. 3 and Fig. 4. We find that at the low scale the parameter space for inflation requires $m_H$, $m_A$, and $m_{H^\pm}$ to be nearly degenerate, with $m_h = 125$ GeV. This behavior can be traced back to our choices of parameters at the low scale. For inflation, one requires perturbativity and positivity at the high scale. At the low scale, while scanning, we demanded all $|\eta_i| \leq 1$; this is driven by the fact that parameter points with $|\eta_i| > 1$ at the low scale tend to become nonperturbative at the high scale. Such choices severely restrict the mass splittings between $m_H$, $m_A$, and $m_{H^\pm}$, which are primarily determined by the magnitudes of the $\eta_i$. This can be understood easily from Eqs. (3)-(5) and Eqs. (6)-(10). With the common $\mu^2_{22}$ term in $m_H$, $m_A$, and $m_{H^\pm}$, the mass splittings are restricted because we require all $|\eta_i| \leq 1$. Thus we conclude that the parameter space favored by inflation requires $m_H$, $m_A$, and $m_{H^\pm}$ to be nearly degenerate, as reflected in Figs. 3 and 4. In what follows we shall show that these mass ranges receive meaningful direct and indirect constraints and may have exquisite signatures at the LHC, ILC, FCC-ee, etc. In addition, we show that such near degeneracy can be directly probed at the LHC.
A. Constraints
Having already found the parameter space for inflation, we now turn our attention to the constraints on it. The couplings $\rho_{tt}$ and $\rho_{tc}$ receive several direct and indirect search limits, particularly in the sub-TeV mass ranges of $m_A$, $m_H$, and $m_{H^\pm}$. We now summarize these constraints in detail. In particular we will show that the parameter space chosen for scanning in Sec. IV is allowed by current data.
First we focus on the h boson coupling measurements by ATLAS [48] and CMS [49]. The results are provided as ratios of the observed to SM production and decay rates of h, called signal strengths. For nonvanishing $c_\gamma$ the extra Yukawa couplings $\rho_{ij}$ modify the couplings of h to fermions (see Eq. (11)). Therefore, the parameter space for inflation would receive meaningful constraints from such measurements. The ATLAS results [48] are based on Run-2 ($\sqrt{s} = 13$ TeV) 80 fb$^{-1}$ data, while CMS [49] utilized only up to 2016 Run-2 data (35.9 fb$^{-1}$). Both collaborations measured the signal strengths $\mu_i^f$ and the corresponding errors for different production and decay chains $i \to h \to f$. The signal strengths are defined as [48,49] $\mu_i^f = (\sigma_i\, \mathcal{B}^f)/(\sigma_i^{\rm SM}\, \mathcal{B}^f_{\rm SM})$, where $\sigma_i$ and $\mathcal{B}^f$ are the production cross section for $i \to h$ and the branching ratio for $h \to f$, respectively. ATLAS and CMS considered the gluon-fusion (ggF), vector-boson-fusion (VBF), Zh, Wh, and $t\bar t h$ production processes (denoted by the index i) and the γγ, ZZ, WW, ττ, $b\bar b$, and μμ decay modes (by f). For simplicity we utilized the leading order (LO) $\mu_i^f$ and followed Refs. [50][51][52][53] for their explicit expressions. In our analysis, we focus particularly on the ggF and VBF production modes because they put the most stringent constraints. In the ggF category we find that the most relevant signal strengths are $\mu_{\rm ggF}^{WW}$, $\mu_{\rm ggF}^{\gamma\gamma}$, $\mu_{\rm ggF}^{ZZ}$, and $\mu_{\rm ggF}^{\tau\tau}$, whereas in the VBF category they are $\mu_{\rm VBF}^{\gamma\gamma}$, $\mu_{\rm VBF}^{WW}$, and $\mu_{\rm VBF}^{\tau\tau}$. Further, we have also considered the Run-2 flagship observations of the top Yukawa ($ht\bar t$) [54,55] and bottom Yukawa ($hb\bar b$) [56,57] couplings by ATLAS and CMS. We call all these measurements together the "Higgs signal strength measurements". Under the assumptions on the couplings in Sec. IV, the flavor conserving coupling $\rho_{tt}$ receives meaningful constraints for $c_\gamma \neq 0$. Allowing 2σ errors on each signal strength measurement, we find that $|\rho_{tt}| = 0.5$ is still allowed by the Higgs signal strength measurements for $c_\gamma = 0.05$. While finding the upper limit, we assumed $m_{H^\pm} = 200$ GeV, which enters the $h\gamma\gamma$ coupling only at one-loop level. The constraints have a very mild dependence on $m_{H^\pm}$ and the results remain practically the same for the entire $m_{H^\pm} \in [200, 800]$ GeV range.
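A minimal κ-framework sketch shows how such LO signal strengths respond to a shifted effective coupling; the branching ratios and the size of the shift below are illustrative, not fitted values.

```python
# LO narrow-width signal strength mu_i^f = kappa_i^2 * kappa_f^2 / kappa_h^2,
# with kappa_h^2 = sum_j kappa_j^2 * BR_SM(j). All numbers are illustrative.
br_sm = {"bb": 0.582, "WW": 0.214, "gg": 0.082, "tautau": 0.063,
         "cc": 0.029, "ZZ": 0.026, "gamgam": 0.0023, "other": 0.002}
kappa = {mode: 1.0 for mode in br_sm}   # start from the SM point
kappa["gg"] = 1.05                      # e.g. ggF loop shifted by rho_tt for c_gamma != 0

kappa_h2 = sum(kappa[j] ** 2 * br_sm[j] for j in br_sm)  # total-width modifier
mu = {f: kappa["gg"] ** 2 * kappa[f] ** 2 / kappa_h2 for f in ("WW", "ZZ", "gamgam")}
for f, val in mu.items():
    print(f"mu_ggF^{f} = {val:.3f}")
```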
For nonzero $\rho_{tt}$, the charged Higgs and W boson loops with the t quark modify the $B_q$-$\bar B_q$ ($q = d, s$) mixing amplitudes $M_{12}^q$. The constraint is stringent especially for sub-TeV $m_{H^\pm}$. Recasting the type-II 2HDM expression for the $B_q$-$\bar B_q$ mixing amplitude [58], Ref. [59] found that $M_{12}^q$ can be written in terms of the loop functions $I_{WW}$, $I_{WH}$, and $I_{HH}$, where $m_t$ and $m_W$ are the top quark and W boson masses; the respective expressions are given in Ref. [59]. For the quantity $C_{B_q} e^{2i\phi_{B_q}} := M_{12}^q/M_{12}^{q,\rm SM}$, one simply has $C_{B_q} = M_{12}^q/M_{12}^{q,\rm SM}$ for real $\rho_{ij}$ couplings. The summer 2018 results of UTfit [60] found $C_{B_d} = 1.05 \pm 0.11$, with a corresponding result for $C_{B_s}$. Nonvanishing $\rho_{tt}$ can induce $V_{tb}$-enhanced $bg \to \bar t H^+$ and $gg \to \bar t b H^+$ processes (charge conjugate processes are implied). The processes $pp \to \bar t (b) H^+$ followed by $H^+ \to t\bar b$ are the conventional search program for the $H^\pm$ and are covered extensively by ATLAS [61] and CMS [62,63]. The ATLAS search [61] provides model independent 95% CL upper limits on the cross section times branching ratio, $\sigma(pp \to \bar t b H^+) \times \mathcal{B}(H^+ \to t\bar b)$, based on its $\sqrt{s} = 13$ TeV 36 fb$^{-1}$ dataset for $m_{H^\pm} = 200$ GeV-2 TeV. Likewise CMS set 95% CL upper limits on $\sigma(pp \to \bar t H^+) \times \mathcal{B}(H^+ \to t\bar b)$, based on its $\sqrt{s} = 13$ TeV 35.9 fb$^{-1}$ dataset for $m_{H^\pm}$ between 200 GeV and 3 TeV, in the semileptonic t decay [62] and in the combination of semileptonic and all-hadronic final states [63]. We first extract these $\sigma \times \mathcal{B}$ upper limits [64] from Refs. [61][62][63] in the mass range $m_{H^\pm} = 200$-800 GeV. In order to estimate the constraints, we determine the cross sections $\sigma(pp \to \bar t (b) H^+) \times \mathcal{B}(H^+ \to t\bar b)$ at LO for the reference value $|\rho_{tt}| = 1$ for $m_{H^\pm} = 200$-800 GeV via the Monte Carlo event generator MadGraph5_aMC@NLO [66] with the NN23LO1 PDF set [67]. To obtain the respective 95% CL upper limits on $|\rho_{tt}|$, these cross sections are then rescaled by $|\rho_{tt}|^2$, simply assuming $\mathcal{B}(H^+ \to t\bar b) \approx 100\%$. We find that the upper limits from the ATLAS search [61] are in general much weaker than those of the CMS searches [62,63]. The upper limits from the CMS semileptonic final state [62] are mildly weaker than those from the combined semileptonic and all-hadronic final states [63]. Hence in Fig. 5 we only provide the regions excluded by the CMS search of Ref. [63], shown as the green shaded region. While finding the excluded regions we assumed $\rho_{ij} = 0$ except for $\rho_{tt}$, for the sake of simplicity. In general nonzero $\rho_{ij}$ couplings would turn on other decay modes of $H^+$, leading to even weaker upper limits on $\rho_{tt}$. E.g., the $\rho_{tc}$ coupling induces the $V_{tb}$-proportional $H^+ \to c\bar b$ decay; for $\rho_{tc} = 0.2$ such an additional decay mode can suppress $\mathcal{B}(H^+ \to t\bar b)$ by 20-30% for $m_{H^\pm} = 200$-800 GeV. While finding these upper limits, we implemented the effective model in FeynRules [68]. The searches for a heavy Higgs via $gg \to H/A \to t\bar t$ by ATLAS [69] and CMS [70] are relevant to constrain $\rho_{tt}$ for $m_A/m_H > 2m_t$. The ATLAS search [69] set exclusion limits on $\tan\beta$ vs $m_A$ (or $m_H$) in the type-II 2HDM framework starting from $m_A$ and $m_H = 500$ GeV for two different mass hierarchies: $m_A = m_H$, and mass-decoupled $m_A$ and $m_H$. The search is based on $\sqrt{s} = 8$ TeV (Run-1) 20.3 fb$^{-1}$ data. CMS has performed a similar search [70] but with Run-2 35.9 fb$^{-1}$ data, and provided 95% CL upper limits on the coupling modifier (see Ref. [70] for the definition) for $m_A$ ($m_H$) from 400-750 GeV, based on different values of the width-to-mass ratio $\Gamma_A/m_A$ ($\Gamma_H/m_H$), assuming $m_H$ ($m_A$) is decoupled.
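The $|\rho_{tt}|^2$ rescaling step can be summarized in a few lines; the cross section and limit below are placeholder numbers, not the actual values extracted from Refs. [61-63].

```python
import numpy as np

# Rescale a reference LO cross section (computed at |rho_tt| = 1) against
# an experimental 95% CL upper limit; all numbers are illustrative.
sigma_ref_pb = 0.75   # hypothetical sigma(pp -> tbH+) x B(H+ -> tb) at |rho_tt| = 1
limit_pb = 0.30       # hypothetical 95% CL upper limit at the same m_H+

rho_tt_max = np.sqrt(limit_pb / sigma_ref_pb)   # sigma x B scales as |rho_tt|^2
print(f"|rho_tt| < {rho_tt_max:.2f} at 95% CL")
```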
After reinterpreting the ATLAS results for $m_A = m_H$, which are provided only for the three benchmark points 500, 550, and 600 GeV [69], we find the red shaded exclusion region in Fig. 6. Note that we utilized the ATLAS $m_A = m_H$ result (and not the mass-decoupled $m_A$ and $m_H$ scenario) primarily because most scanned points in Figs. 3 and 4 roughly resemble the $m_A \approx m_H$ pattern. We remark that the actual constraints would be mildly weaker, depending on the value of $|m_A - m_H|$ for the respective scanned points. The limits for the mass-decoupled scenario are much weaker and not shown in Fig. 6. CMS [70] provides limits only for the mass-decoupled scenario, shown as the blue shaded regions in Fig. 6. The limits are weaker than those from ATLAS even though the latter used only Run-1 data. It is reasonable to assume that the constraints could be stronger if CMS [70] provided results for the $m_A = m_H$ scenario. A CMS analysis with mass degeneracy and the full Run-2 dataset would be welcome.
Moreover, $\rho_{tt}$ also receives a constraint from the CMS search for SM four-top production [71] with the 13 TeV 137 fb$^{-1}$ dataset. Apart from measuring SM four-top production, the search also set 95% CL upper limits on $\sigma(pp \to t\bar t A/t\bar t H) \times \mathcal{B}(A/H \to t\bar t)$ for 350 GeV $\leq m_{A/H} \leq$ 650 GeV. The search also included the subdominant contributions from $\sigma(pp \to tWA/H,\, tqA/H)$ with $A/H \to t\bar t$. To find the constraint on $\rho_{tt}$ we generate these processes at LO with MadGraph5_aMC@NLO for a reference value of $|\rho_{tt}|$, setting all other $\rho_{ij} = 0$, and finally rescale by $|\rho_{tt}|^2$ assuming $\mathcal{B}(A/H \to t\bar t) \approx 100\%$. We find that the constraints from $\sigma(pp \to t\bar t A) \times \mathcal{B}(A \to t\bar t)$ are mildly stronger than those from $\sigma(pp \to t\bar t H) \times \mathcal{B}(H \to t\bar t)$. The region excluded by the former process is shown as the cyan shaded region in Fig. 6. We stress that for simplicity we assumed $\mathcal{B}(A/H \to t\bar t) \approx 100\%$. As we chose $\rho_{tc} = 0.2$, which will induce $A/H \to t\bar c + \bar t c$ decays, $\mathcal{B}(A/H \to t\bar t)$ would be suppressed, hence the limits will be weaker than the shaded regions in Fig. 6. Note that, as before, while setting the upper limits CMS [71] assumed that A (or H) is decoupled from H (or A), which is not the case for the scanned points in Fig. 3 and Fig. 4. We remark that the actual limit could possibly be stronger.
The coupling $\rho_{tc}$ receives constraints from the $\mathcal{B}(t \to ch)$ measurement. For nonzero $c_\gamma$, $\rho_{tc}$ can induce the flavor changing neutral current (FCNC) coupling htc (see Eq. (11)), which can induce the $t \to ch$ decay. Both ATLAS and CMS have searched for the $t \to ch$ decay and provided 95% CL upper limits on $\mathcal{B}(t \to ch)$. The ATLAS upper limit is $\mathcal{B}(t \to ch) < 1.1 \times 10^{-3}$ [72], while the CMS one is weaker, $\mathcal{B}(t \to ch) < 4.7 \times 10^{-3}$ [73]. Both results are based on 13 TeV ~36 fb$^{-1}$ datasets. For $c_\gamma = 0.05$, the largest value considered while scanning, $|\rho_{tc}| \gtrsim 1.8$ is excluded at 95% CL. The constraint is weaker for smaller $c_\gamma$.
The constraint on $\rho_{tc}$ from the $\mathcal{B}(t \to ch)$ measurement is rather weak. However, it has been found [74,75] that $\rho_{tc}$ receives a stringent constraint from the CMS search for SM four-top production [71] (based on the 13 TeV 137 fb$^{-1}$ dataset), even when $c_\gamma$ is small. The search provides the observed and expected numbers of events for different signal regions, depending on the number of charged leptons and b-tagged jets, with at least two same-sign leptons as the baseline selection criterion [71]. It has been shown [75] that the CRW [71], i.e. the control region for the $t\bar t W$ background, defined to contain two same-sign leptons and two to five jets with two of them b-tagged (see Ref. [71] for details), is the most relevant one to constrain $\rho_{tc}$. Ref. [71] reported 338 events observed in the CRW, whereas the total expected number of events (denoted as SM expected events) is 335 ± 18 [71]. Induced by the $\rho_{tc}$ coupling, the processes $cg \to tH/tA \to ttc$ (charge conjugate processes always implied) with semileptonically decaying same-sign top quarks have similar event topologies and contribute abundantly to the CRW. However, there is a subtlety: if the masses and widths of A and H are degenerate, the $cg \to tH \to ttc$ and $cg \to tA \to ttc$ contributions interfere destructively, leading to an exact cancellation between the amplitudes [74,75]. We first estimate the $cg \to tH/tA \to ttc$ contributions for a reference $\rho_{tc} = 1$ assuming $\mathcal{B}(H/A \to tc) = 100\%$. Following the same event selection criteria described for the CRW analysis [71], we rescale these contributions by $|\rho_{tc}|^2$ and demand that the sum of the events from the $cg \to tH/tA \to ttc$ contributions and the SM expected events in the CRW agree with the number of observed events within the 2σ error bars of the SM expectation. We find that $\rho_{tc} \gtrsim 0.5$ is excluded at 2σ for the scenario $|m_H - m_A| \approx 20$ GeV, whereas $\rho_{tc} \gtrsim 0.4$ is excluded for $|m_H - m_A| \approx 50$ GeV. Due to the smaller mass splitting, and therefore the larger cancellation between the amplitudes, the constraint is weaker for the $|m_H - m_A| \approx 20$ GeV case than for the $|m_H - m_A| \approx 50$ GeV case. Here we simply assumed Gaussian [76] behavior for the uncertainty of the SM expected events. Note that nonzero $\rho_{tc}$ will also induce $cc \to tt$ via t-channel H/A exchange, which we also included in our analysis. The events are generated at LO by MadGraph5_aMC@NLO interfaced with PYTHIA 6.4 [77] for showering and hadronization, and then fed to Delphes 3.4.2 [78] for fast detector simulation with the CMS-based detector card. For matrix element and parton shower merging we adopted the MLM scheme [79].
For heavier $m_H$ and $m_A$, we find that the constraint on $\rho_{tc}$ from the CRW becomes weaker. This is simply because the $cg \to tH/tA \to ttc$ cross sections drop rapidly due to the fall in the parton luminosity. In finding the constraint we assumed $\mathcal{B}(H/A \to tc) \approx 100\%$. However, this assumption is too strong given $c_\gamma = 0.05$: for nonzero $c_\gamma$ one has the $A \to Zh$ decay for $m_A > m_Z + m_h$ (or the $H \to hh$ decay for $m_H > 2m_h$), which will weaken the constraint further. In addition, as we assumed $\rho_{tt} = 0.5$, for the scanned points in Fig. 3 and Fig. 4 where $m_H/m_A > 2m_t$, $\mathcal{B}(H/A \to tc)$ will be diluted further by the large $\mathcal{B}(H/A \to t\bar t)$.
In this regard we also note that ATLAS has performed a similar search [80]; however, we find the limits are weaker due to differences in event topologies and selection cuts. In addition, ATLAS has performed a search [81] for R-parity violating supersymmetry with similar event topologies. The selection cuts, however, are still too strong to give meaningful constraints on $\rho_{tc}$. Furthermore, $B_{s,d}$ mixing and $\mathcal{B}(B \to X_s\gamma)$, where $\rho_{tc}$ enters via the charm loop through the $H^\pm$ coupling [82], can still constrain $\rho_{tc}$. A reinterpretation of the result from Ref. [82] finds that $|\rho_{tc}| \gtrsim 1$ is excluded by $B_s$ mixing for the ballpark mass range of $m_{H^\pm}$ considered in our analysis. These constraints are weaker than those from the CRW region.
Before closing we remark that $\rho_{tt} \sim 0.5$ and $\rho_{tc} \sim 0.2$ are still allowed by the current direct and indirect searches for all scanned points in Fig. 3 and Fig. 4. So far, for simplicity, we set all other $\rho_{ij}$ to zero; however, there exist searches that can also constrain the parameter space if some of them are nonzero. E.g., the most stringent constraint on $\rho_{bb}$ arises from the CMS search for heavy H/A production in association with at least one b-jet and decaying into a $b\bar b$ pair, for $m_H/m_A$ from 300 GeV to 1300 GeV [83]. Following the same procedure as before and utilizing $\sigma(pp \to bA/H + X) \cdot \mathcal{B}(A/H \to b\bar b)$, we find that $|\rho_{bb}| \sim 0.2$ is still allowed at 95% CL for all scanned points with $m_H/m_A > 300$ GeV. ATLAS performed a similar search [84] but the limits are somewhat weaker. The CMS search for light resonances decaying into $b\bar b$ [85] provides limits covering also $m_H/m_A = 200$ GeV; however, the constraints are weaker than those of Ref. [83] for all scanned points. This illustrates that the current exclusion limits are much weaker than our working assumption $\rho_{bb} \sim \lambda_b$. The same is also true for $\rho_{\tau\tau}$, i.e., all scanned points are allowed if $\rho_{\tau\tau} \sim \lambda_\tau$. Moreover, nonvanishing $c_\gamma$ may induce $H \to ZZ$, $H \to W^+W^-$, $H \to \gamma\gamma$, $A \to \gamma\gamma$, etc.; however, we have checked that such decays are doubly suppressed by $c_\gamma \in [0, 0.05]$ and the large $\mathcal{B}(H/A \to t\bar t)$ and $\mathcal{B}(H/A \to t\bar c + \bar t c)$. In general, we assumed the off-diagonal $\rho_{ij}$'s to be much smaller than the diagonal elements of the corresponding ρ matrices; however, $\rho_{tu}$ could still be large, with $\mathcal{O}(0.1-0.2)$ values still allowed for $m_A/m_H \gtrsim 200$ GeV [86]. Furthermore, if both $\rho_{tu}$ and $\rho_{\tau\tau}$ are nonzero, the $B \to \tau\nu$ decay could provide a sensitive probe, which could be measured by the Belle-II experiment [87]. We leave a detailed analysis turning on all $\rho_{ij}$'s simultaneously for the future. We conclude that there exists sufficient room for discovery in the near future, while non-observation would lead to more stringent constraints on the parameter space.
B. Probing near mass degeneracy at the LHC
In this subsection we discuss how to probe the near degeneracy of $m_H$, $m_A$, and $m_{H^\pm}$ favored by inflation at the LHC. As discussed earlier, there exists an exact cancellation between the $cg \to tH \to ttc$ and $cg \to tA \to ttc$ amplitudes if the masses and widths are degenerate [74,75]. The cancellation is reduced if the mass splittings are larger, as can be seen from the previous subsection. For the allowed values of $\rho_{tc}$ and $\rho_{tt}$ discussed above, the decay widths of H and A are also nearly degenerate. Therefore the cancellation could be significant for the scanned points in Fig. 3 and Fig. 4. With the semileptonically decaying same-sign top signature, $cg \to tH/tA \to ttc$ can be discovered at the LHC, even with the full Run-2 dataset, unless such a cancellation exists [74,75]; a schematic illustration is given below.
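A schematic Breit-Wigner toy model makes the cancellation explicit; the masses, widths, relative coupling sign, and probe scale below are illustrative only, not the full amplitude computation.

```python
import numpy as np

# Toy illustration of destructive H/A interference in cg -> tH/tA -> ttc:
# with opposite-sign contributions, degenerate masses and widths give exact
# cancellation, while a finite mass splitting revives the rate.
def bw(s, m, gamma):
    """Breit-Wigner propagator factor."""
    return 1.0 / (s - m**2 + 1j * m * gamma)

s = 300.0**2  # probe the amplitude at sqrt(s) = 300 GeV (illustrative)
for dm in (0.0, 20.0, 50.0):
    m_H, m_A = 400.0, 400.0 + dm
    amp = bw(s, m_H, 10.0) - bw(s, m_A, 10.0)  # relative minus sign between H and A
    print(f"|m_A - m_H| = {dm:4.0f} GeV -> |A_H - A_A|^2 = {abs(amp)**2:.2e}")
```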
Note that such a cancellation does not exist between the $cg \to tH \to ttt$ and $cg \to tA \to ttt$ processes [74] if $m_H$ and $m_A$ are above the $2m_t$ threshold. Induced by the $\rho_{tc}$ and $\rho_{tt}$ couplings, the processes $cg \to tH/tA \to ttt$ can be discovered in Run-3 of the LHC if $m_A$ and $m_H$ are in the sub-TeV range [74]. In general, it is expected [74] that $cg \to tH/tA \to ttc$ (the same-sign top signature) would emerge earlier than $cg \to tH/tA \to ttt$ (the triple-top signature). For sizable $\rho_{tc}$ and $\rho_{tt}$ one may also have the $cg \to bH^+ \to btb$ process [88], which can also be discovered at the LHC as early as Run-3. Hence, we remark that a vanishing or small same-sign top signature together with sizable triple-top and $cg \to bH^+ \to btb$ signatures at the LHC would provide a smoking-gun signature for inflation in g2HDM. We leave a detailed study of the discovery potential of these processes in the context of inflation for the future.
VI. DISCUSSION AND SUMMARY
We have investigated inflation in g2HDM in light of the constraints arising from collider experiments. We have primarily focused on two benchmark scenarios: in Scenario-I we assumed the nonminimal coupling $\xi_{11}$ to be nonvanishing, while in Scenario-II we assumed $\xi_{22}$ nonzero. In both cases the parameter space favored by inflation requires the nonminimal coupling to be $\mathcal{O}(10^3$-$10^4)$. We find that the parameter space preferred by inflation requires $m_H$, $m_A$, and $m_{H^\pm}$ to be nearly degenerate.
While finding the available parameter space we turned on only one nonminimal coupling at a time. This is primarily driven by the fact that one nonminimal coupling is sufficient to account for all the constraints from the Planck 2018 data. Throughout we set $\xi_{12} = 0$ in our analysis; a similar parameter space can be found for nonzero $\xi_{12}$. We leave a detailed analysis where all three nonminimal couplings are nonzero for the future.
There exist several direct and indirect constraints on the parameter space. The most stringent constraints on the additional Yukawa coupling $\rho_{tt}$ arise from the h boson coupling measurements by ATLAS [48] and CMS [49] as well as from heavy Higgs searches such as $bg \to \bar t H^+$ [63], $gg \to A/H \to t\bar t$ [69,70], and $gg \to t\bar t A/H \to t\bar t t\bar t$ [71]. The most stringent indirect constraints arise from $B_{d,s}$ meson mixings. We found that $\rho_{tt} \approx 0.5$ is allowed by current data for $m_H$, $m_A$, and $m_{H^\pm}$ in the 200-800 GeV range. On the other hand, the most stringent constraint on $\rho_{tc}$ arises from the control region for the $t\bar t W$ background of the CMS search for SM four-top production [71]. We find that $\rho_{tc} \sim 0.2$ is well allowed by current data.
The near degeneracy of $m_H$ and $m_A$, as preferred by inflation, would lead to a small same-sign top $cg \to tH/tA \to ttc$ signature, while the triple-top $cg \to tH/tA \to ttt$ cross sections could be large. One expects same-sign top to emerge earlier than triple-top, unless $m_H$ and $m_A$ are degenerate or nearly degenerate [74]. One may also have the $cg \to bH^+ \to btb$ signature, which could be discovered as early as Run-3 of the LHC. Together they will provide unique probes of inflation in g2HDM at the LHC if $m_H$, $m_A$, and $m_{H^\pm}$ are sub-TeV. Future lepton colliders such as the ILC and FCC-ee might also provide sensitive probes of the parameter space. E.g., if $c_\gamma$ is nonzero one may have $e^+e^- \to Z^* \to Ah$, followed by $A \to t\bar t$ (or $A \to t\bar c + \bar t c$) with $h \to b\bar b$. This will be studied elsewhere. Future updates on $B_{d,s}$ mixing or $\mathcal{B}(B \to X_s\gamma)$ from Belle-II [90] could also be relevant.
In our analysis we have assumed the $\rho_{ij}$ and $\lambda_i$'s to be real for simplicity. In general $\rho^F_{ij}$, $\mu^2_{12}$, $\lambda_5$, $\lambda_6$, and $\lambda_7$ could be complex. We briefly remark that such complex couplings receive stringent constraints from electron, neutron, and mercury electric dipole moment (EDM) measurements [91,92]. In this regard, the asymmetry of the CP asymmetries ($\Delta A_{CP}$) of charged and neutral $B \to X_s\gamma$ decays could be relevant [92,94], even though the observable has associated hadronic uncertainties. The future Belle-II measurement of $\Delta A_{CP}$ [90] could reduce the available parameter space for imaginary $\rho_{tt}$ [92][93][94]. Moreover, we set all $\lambda_i$'s and $\rho_{ij}$'s to zero except for $\lambda_t$, $\rho_{tt}$, and $\rho_{tc}$, and assumed $\rho_{ii}$ could be $\sim \lambda_i$ with suppressed off-diagonal $\rho_{ij}$'s. If such a coupling structure is realized in nature, we find that couplings other than $\lambda_t$, $\rho_{tt}$, and $\rho_{tc}$ have inconsequential effects on the inflationary dynamics.
In summary, we have analyzed the possibility of Higgs inflation in the general two Higgs doublet model. We find that the parameter space for inflation favors nearly degenerate additional scalars. The sub-TeV parameter space receives meaningful constraints from direct and indirect searches. We also find that the parameter space required for inflation could be discovered in the future runs of the LHC as well as at the planned ILC, FCC-ee, etc., while indirect evidence may emerge at flavor factories such as Belle-II. A discovery would not only confirm beyond Standard Model physics, but may also provide unique insight into the mechanism behind inflation in the early Universe. | 10,760.4 | 2020-07-16T00:00:00.000 | [
"Physics"
] |
My Alphabet Application: Alphabet Introduction Learning Media for 1st-Grade Elementary School using Augmented Reality Technology
Reading skills can be initiated by familiarizing children with letter recognition first. Students who cannot recognize and differentiate letters will have difficulty reading. Implementing interactive learning media in the learning process can enhance students' motivation, leading to a higher interest in participating in learning activities. Augmented Reality (AR)-based learning media is one approach to creating an interactive learning environment. The research methodology used in this study is the Multimedia Development Life Cycle (MDLC), which consists of six stages: concept, design, material collecting, assembly, testing, and distribution. This research aims to create a better learning atmosphere for alphabet recognition by incorporating interactivity. This is intended to engage students throughout the learning process. Evaluation is conducted using the black box method to validate the application. The evaluation results demonstrate that the application functions according to the developer's plan, making it suitable for elementary school alphabet recognition learning activities.
INTRODUCTION
The language skills students need to master include listening, speaking, reading, and writing. Reading skills must be acquired before students develop writing skills (Pardede, 2020). The early reading stage begins with letter recognition, aiming to familiarize children with letters. In this initial reading activity, students learn to differentiate letters, pronounce them correctly, and combine them to form words.
Based on observations, students often struggle to recognize and differentiate alphabet letters, resulting in mispronunciations. This can be influenced by several factors, including the lack of exposure to letter recognition at home, as not all children have been taught letter recognition by their parents (Lestari et al., 2021). Additionally, the introduction to alphabet letters in Grade 1 thematic books is primarily presented through songs (Nelwan et al., 2020). Furthermore, in the letter recognition process, students practice identifying characters' names in books by observing and pronouncing their letters together (Arifian, 2017).
According to the findings of PISA 2018, Indonesia is among the four countries where teachers are highly enthusiastic about teaching (Kurniawati, 2021). However, many teachers rely heavily on the textbooks provided at school (Magdalena et al., 2020), indicating the need for innovation in developing diverse and varied learning resources that support 21st-century learning, especially in student alphabet letter recognition.
The impact of the COVID-19 pandemic has raised concerns about early reading skills, relating to both internal factors (within the child) and external factors (outside the child) (Kusuma et al., 2019). Both factors contribute to the difficulties children face in early reading. The main challenges students encounter include problems in letter recall, differentiating similar-looking letters, and distinguishing vowels (Meo et al., 2021). Confusion between vowels and consonants can hinder the reading of certain words.
For future teachers, who are expected to innovate, utilizing digital technology such as Augmented Reality (AR) can significantly enhance the learning of alphabet letter recognition (Aprilia & Rosnelly, 2020). The interactive nature of AR can make the learning process more engaging and capture students' attention. Incorporating AR features through a physical pocketbook adds variety to the learning media and provides a more enjoyable approach to learning alphabet letters.
A study relevant to this research is "The Implementation of AR in Bacteria Classification Learning Media", conducted by Febriza et al. (2021). The study used the Multimedia Development Life Cycle (MDLC) method, which consists of six stages: concept determination, design creation, material collection, production, testing, and distribution. The testing stage involved the Black Box testing method conducted by an expert in software engineering. The research findings indicate that AR-based learning media is highly engaging for users. The AR-based Bacteria Classification learning media can be used as an alternative learning media for high school students.
The next relevant study was conducted by Saputri et al. (2018). Another study, titled "Development of Augmented Reality (AR) Learning Media in Class V MI Wahid Hasyim" and conducted by Mukti (2019), showed that Augmented Reality (AR) learning media is suitable as a student learning source. The developed learning media received a response rate of 82.57% from teachers and 90.2% from students, with positive feedback, resulting in an excellent category. AR learning media also improved student learning outcomes by 35.8%.
Another relevant study was conducted by Arrum and Fuada (2022), titled "Strengthening Online Learning at SDN Jakasampurna V in Bekasi City, West Java, Using Augmented Reality (AR) Interactive Learning Media". The research showed that using AR-based learning media can increase students' interest in learning and create a sense of joy. Through interactive AR-based learning media, students can also learn about technological advancements.
Furthermore, a relevant study titled "Implementation of Augmented Reality-Based Learning Media for Improving Students' Mathematical Understanding in Terms of Learning Styles", conducted by Larasati and Widyasari (2021), showed that AR learning media can serve all learning styles, from kinesthetic and visual to auditory. The applied learning media can increase students' engagement during learning activities. Moreover, implementing AR-based learning media can enhance students' understanding abilities with the support of explanations from teachers.
Based on the above studies, it can be concluded that AR-based learning media can enhance the learning process. AR-based learning media can make students more enthusiastic about and interested in participating in the learning process. Furthermore, AR-based learning media can improve students' understanding, thereby improving learning outcomes.
Research Stage
The methodology used in this research is the Multimedia Development Life Cycle (MDLC). The multimedia development methodology comprises six phases: Concept, Design, Material Collecting, Assembly, Testing, and Distribution.
1) Concept
This application aims to improve the process of letter recognition learning by incorporating interactivity and striving to capture students' attention through a creative and innovative learning interface. In this phase, the author establishes several criteria that form the concept of the application to be developed. The expected criteria for this application are as follows:
a. The application should be compatible with Android-based smartphones.
b. Users should be able to view 3D objects of alphabet letters, including capital letters, lowercase letters, and words starting with those letters, along with their corresponding images.
2) Design
In this stage, there are several activities, including designing the application interface using Canva (Figure 1a), creating button designs using Corel Draw (Figure 1b), creating media cards for each alphabet letter from A to Z, and developing 3D objects of alphabet letters using Sketchup. When the application user opens the AR section, they are required to point the camera towards the designated media card, and a 3D object will appear on top of the card.
3) Material Collecting
In this stage, the collection of materials needed for the development of the Multimedia Application for Alphabet Recognition using Augmented Reality takes place. The materials included in this application consist of the definition of letters, vowel materials, and consonant materials.
4) Assembly
The application is developed in this stage based on the concept, aligned with the collected materials and pre-designed interfaces. This ensures that the application functions as intended. The previously designed media cards are then imported into Vuforia to generate key points for each card. Each media card corresponds to exactly one 3D object, from the letter A to the letter Z, as sketched below.
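The one-to-one card-to-object mapping described above can be pictured as a simple registry. The following Python sketch is illustrative only: the marker names and asset paths are hypothetical, and the actual application binds Vuforia markers to Unity prefabs rather than to files.

```python
# Illustrative sketch of the one-to-one mapping between Vuforia media
# cards (markers) and 3D letter objects built in the Assembly stage.
# Marker names and asset paths are hypothetical examples.

import string

def build_marker_registry():
    """Create one marker entry per alphabet letter, A to Z."""
    registry = {}
    for letter in string.ascii_uppercase:
        marker_name = f"card_{letter}"               # key points generated in Vuforia
        asset_path = f"assets/letters/{letter}.obj"  # 3D object modeled in Sketchup
        registry[marker_name] = asset_path
    return registry

def resolve_object(registry, detected_marker):
    """Return the 3D asset for a detected marker, or None if unknown."""
    return registry.get(detected_marker)

if __name__ == "__main__":
    registry = build_marker_registry()
    print(len(registry))                        # 26 cards, one per letter
    print(resolve_object(registry, "card_A"))   # assets/letters/A.obj
```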
5) Testing
In this stage, testing is conducted to determine whether any media card encounters errors. The success indicator in this testing is that the 3D object appears above the media card corresponding to the letter highlighted by the camera (Figure 5). If the 3D object does not appear, or does not match the media card highlighted by the camera, the test is considered unsuccessful.
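The pass/fail rule just described can be expressed as a single check: a card passes only if a 3D object appears and matches the scanned letter. The Python sketch below is a hypothetical harness of our own, not the project's actual test code.

```python
# Hypothetical sketch of the black-box success indicator used in the
# Testing stage: a card passes only when a 3D object appears AND it
# matches the letter on the scanned media card.

def card_test_passes(scanned_letter, displayed_object):
    """displayed_object is None when no 3D object appeared."""
    if displayed_object is None:
        return False                               # object did not appear: fail
    return displayed_object == scanned_letter      # a mismatch also fails

if __name__ == "__main__":
    print(card_test_passes("A", "A"))   # True: object appears and matches
    print(card_test_passes("B", None))  # False: no object appeared
    print(card_test_passes("C", "G"))   # False: wrong object displayed
```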
Interactive Application
An application is a computer or mobile device program designed to execute a predefined set of functions (Dewi et al., 2021). Something interactive can be understood as accommodating responses from the user (Faroqi & Maula, 2014). Interaction has a cause-and-effect relationship because there is a two-way exchange of action and response (Bali & Naim, 2020). Therefore, an interactive application is an application that requires an action from the user and responds accordingly based on the programmed instructions.
Augmented reality
Augmented reality is a term used to describe the merging of the real and virtual worlds, creating the impression that there is no longer a boundary between them (Arifitama & Permana, 2015). Augmented reality is an interactive technology that combines the real and virtual worlds, allowing for the seamless delivery of information to users (Ramadhan et al., 2021). Based on these definitions, augmented reality can be understood as an interactive technology that merges the real world with the virtual world, creating a seamless integration between the two.
Alphabet Letter
The alphabet is the learning content that introduces children to the basic letters (Faris & Lestari, 2016). It serves as a foundation for children to develop reading and writing skills. It is divided into two forms: vowel letters, namely A, I, U, E, O, and consonant letters, namely B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, W, X, Y, and Z (Chabibbah & Kaulam, 2014). Therefore, the alphabet can be understood as the essential basis for learning reading and writing skills, consisting of these two types of letters.
RESULT AND DISCUSSION
Based on the project framework in Figure 1, this section discusses the system of the "My Alphabet" application, the hardware markers, the 3D objects, and the analysis, design, implementation, and testing of this educational media application for 1st-grade elementary school students.
Application system "My Alphabet"
To understand the "My Alphabet" application system, it is necessary to depict the process or instructions to facilitate its use by users.Therefore, the method or instructions of this application are illustrated through a flowchart and use case diagram.The following is an overview of these depictions.
Flowchart
The process or sequence of instructions in the Augmented Reality learning media is depicted in the flowchart in Figure 3.
Use case diagram
An actor is an entity that plays a role in providing information to or obtaining information from the system. Actors in the "My Alphabet" application include elementary school students, parents, and teachers. The system starts with the "start" event. Then, there are six events: opening the material feature, Augmented Reality (AR), credits, user instructions, about, and home. The interaction between the user and the learning media application system is represented using the use case diagram shown in Figure 4.
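The six events above can be read as a simple dispatch from a menu selection to a screen. The Python sketch below only mirrors the structure of the use case diagram; the event labels and descriptions are illustrative placeholders, since the real application is built in Unity.

```python
# Illustrative dispatch of the six "My Alphabet" events described in the
# use case diagram. Names and descriptions are hypothetical placeholders.

EVENTS = {
    "material": "open the learning material feature",
    "ar": "open the Augmented Reality (AR) scanner",
    "credits": "show the credits page",
    "instructions": "show the user instructions",
    "about": "show the developer profile",
    "home": "return to the home page",
}

def handle_event(event_name):
    """Resolve a menu selection to its screen, as in the use case diagram."""
    try:
        return EVENTS[event_name]
    except KeyError:
        raise ValueError(f"Unknown event: {event_name!r}")

if __name__ == "__main__":
    for name in EVENTS:
        print(name, "->", handle_event(name))
```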
Software Design
The "My Alphabet" application is designed for Android users with a 1080 x 1350 pixels portrait resolution.This application consists of a total of 15 pages, including 1 home page, 1 menu page, 6 pages for learning materials, 1 page for AR scanning, 1 page for credits, 1 page with user instructions, and 1 page for the about section, which contains the developer profile of Multireality Group 3.Then, Figure 6 is the display design of the main menu feature on the "My Alphabet" application that will be implemented; this display is loaded in Unity.Then, Figure 7 is the display design of one of the main features in the "My Alphabet" application that will be implemented, namely the feature that displays material about the meaning of letters; this display is loaded in Unity.
Figure 7. Material display 1 (meaning of letters)
Figure 8 shows the display design of the second main feature in the "My Alphabet" application, namely the material on the meaning of vowel letters; this display is loaded in Unity.
Figure 8. Material display 2 (meaning of vowels)
Figure 9 shows the display design of the third main feature in the "My Alphabet" application, namely the material on the various vowel letters; this display is loaded in Unity. Next, Figure 10 shows the display design of the fourth main feature, namely the material on the meaning of consonant letters; this display is loaded in Unity.
Figure 10. Material display 4 (meaning of consonants)
Next, Figure 11 shows the display design of the fifth main feature in the "My Alphabet" application, namely the material on the use of consonant letters in everyday life; this display is loaded in Unity.
Implementation
This application runs exclusively on Android devices; it cannot be installed or used on other platforms. For implementation, the product was tested on Android version 12 with a 6.5-inch screen and a resolution of 1080 x 2400 pixels.
The "My Alphabet" app on Android is represented by a file view, as shown in Figure 13, which includes its size description and the .apkextension.This file view provides a comprehensive overview of the app's installation package.By examining the size description, users can gauge the amount of storage space required to install the app on their Android devices.The .apk extension signifies that the file is in the Android Package format, the standard format for distributing and installing Android applications.This file view is crucial for users who want to download, install, and manage the "My Alphabet" app on their Android devices efficiently.
Evaluation
The developers conducted black box testing on the My Alphabet application using an Android device in the development environment to ensure smooth operation and minimal errors. Black box testing focuses on observing the outcomes of executing the application. The testing results, summarized below, provide insight into the application's performance and functionality, helping the developers identify any issues to be addressed before the final release.

| Test item | Expected result | Actual result | Status |
|---|---|---|---|
| Application installation | The application installs on the target device | Successfully installed on the test device with the Android 12 operating system | Valid |
| Application interface | The application appears according to the size of the installed device | Runs successfully on smartphones with a 16:9 aspect ratio screen and installs on devices with a 6-inch screen, indicating that the application is responsive | Valid |
| Main menu buttons | The buttons on the menu function properly by directing to the intended pages | Successfully navigates to the corresponding pages | Valid |
| Alphabet letter markers | The marker is detected and displays augmented reality (AR) content according to the scanned marker | AR content is successfully displayed according to the scanned marker; markers that are not scanned do not appear | Valid |
| Exit the application | The application exits on request | Successfully exited the application; the testing went well | Valid |
CONCLUSION
Based on the development of the My Alphabet application, it can be concluded that the Augmented Reality-based Android application for alphabet recognition is in accordance with its design and analysis. The My Alphabet application can assist primary school students, especially first-grade students, in learning and memorising the alphabet. The application provides a realistic virtual experience in the real world by visualising three-dimensional objects that can be interacted with, such as by sliding and rotating them so that all sides can be seen. The interactive presentation of learning materials makes students more interested and enthusiastic about learning, which can improve their understanding abilities. The developers tested the My Alphabet application using the black box method on an Android device. The testing yielded the following findings: the My Alphabet application operated according to the specifications set by the developers; the application was successfully installed on Android OS 7.0 and appeared correctly on devices with varying screen sizes; the buttons within the application functioned as intended, allowing users to navigate through the app seamlessly; the marker detection feature accurately displayed the corresponding Augmented Reality (AR) content; and the exit button performed well, allowing users to exit the application smoothly.
AUTHORS' NOTE
The author declares that there is no conflict of interest regarding the publication of this article. The author confirms that this paper is free from plagiarism.
Figure 2. 3D object appearing above the media card.
6) Distribution
In this stage, the marketing process of the My Alphabet application, targeted at 1st-grade elementary school students, takes place.
Figure 5. Initial view of the application design, created in Unity.
Figure 6. Application menu display in Unity.
Figure 11. Material display 5 (examples of the use of consonants).
Figure 13. View of the "My Alphabet" application file on Android.
Figure 14. The "My Alphabet" application installed on an Android device; the screenshot shows the app on the smartphone's home screen or app drawer, confirming that it has been installed and is ready for use.
Table 1. API clarification results table. | 3,871 | 2021-12-01T00:00:00.000 | [
"Education",
"Computer Science"
] |
'Standards' on the bench: Do standards for technological literacy render an adequate image of technology?
The technological literacy of students has recently become one of the primary goals of education in countries such as the USA, England, New Zealand, and Australia. However, the question here is whether these education systems, their long-term policy documents as well as the standards they provide in particular, support sufficient learning about the nature of technology. This important concern is discussed throughout this study by drawing on the philosophy of technology, the arena which affords a bountiful ground of reflections on the nature of technology. In the first place, the paper presents a relevant framework based upon Mitcham's (1994) four-aspect account of technology, i.e., technology as objects, knowledge, activities, and volition. It then categorizes the main relevant concepts and concerns put forward by many other philosophers of technology into this framework; this yields a concrete model (tool) to analyze any intended standard such as those mentioned above. Afterwards, to show how this model works, the well-known case of the USA, Standards for Technological Literacy (ITEA, 2007), is used as an example for inspection; the results disclose the points where the current American case needs to be modified.
INTRODUCTION
It is not so long ago that the issue of technological literacy was given a substantial place in education; various researchers all over the world have taken it into serious consideration and, consequently, numerous attempts have been initiated to design the educational contents of teaching about technology over the previous 30 years (see, e.g., International Technology Education Series, 2011-2015; De Vries, 1997, 2005a; Rossouw, Hacker & De Vries, 2010; Dakers, 2005; Head & Dakers, 2005, and also the 'Standards' or 'long-term policy documents' such as Australian Education Council, 1994; Department of Education of South Africa, 2002; ITEA, 2007; Ministry of Education of New Zealand, 2007).
Even so, do these educational contents, specifically their resulting technological literacy Standards, render a comprehensive image of the nature of technology to students, who are expected to have more sophisticated interactions with it now and in the future? The answer can hardly be positive. For one thing, scholars such as De Vries (2013) claim that the concept of 'modelling', an essential part of most engineering activities, does not receive desirable attention throughout the current Standards, as discussed later on; this can be thought of as only one instance among others. Such a fact motivates us to seek a way to analyze these Standards, or other long-term policy documents of the same type, to see the state of other relevant concepts within them as well and, beyond that, to realize to what extent these documents deliver an adequate understanding of the nature of technology. This endeavour attempts to enhance the overall approach of such documents towards the various notable aspects of technology, as the current Standards are in general praiseworthy guidelines for organizing the relevant (and lower-level practical) curricula of technology education; they are not, and should not be expected to be, detailed curricula bound to strict rules or materials of teaching about technology.
Before moving any further, it is worthwhile to make the approach of this inspection clearer by emphasizing that the concept of 'technological literacy' is a broad view embracing more than just the 'image of' or 'understanding about' the nature of technology touched upon in this paper; it also includes other aspects of technology, such as 'ways of thinking and acting' and 'capabilities' in relation to technology (National Academy of Engineering, National Research Council, Pearson & Young, 2002), which have not been addressed by this paper and can be considered separately.
That said, in order to get a wiser view on how to deal with this concern, we would firstly like to have a chronological flashback to approximately the 1980s when an international movement was initiated in the area of learning about technology: the mission of this movement was actually to underpin a new path shifting such learning, from its customary craft-oriented attitude, to a broader approach which would consider 'technological literacy' as the essential basis in this regard (De Vries, 2013).
This movement was in fact a significant next step in the field of technology-oriented reflections, which occurred less than a half century after the advent of its predecessor, i.e., philosophical attempts to deliberate on the nature and various aspects of technology (Dakers, 2005; De Vries, 2000). Stated more clearly, the philosophy of technology had at this point initiated valuable resources for providing a conceptual basis for technological literacy reflections.
The primary approach of this movement by the late 1990s was mostly towards establishing an extensive discipline for technology education, one that eventually induced very beneficial contents, subjects, and even further philosophical reflections around topics such as the following:
• The necessity for technology education
However, these attempts gradually gave rise to a more specific step as well, concerned with the literacy of students in this respect; from this point on, the mission of underpinning a sound discipline in technology education for students was taken into consideration (Jones, Buntting & De Vries, 2013).
Performing such a mission in a suitable manner is no doubt a process which can be, and obviously should be, improved through continuous evaluation, to assess, as far as it relates to our study, the appropriateness of the image of and understanding about the nature of technology that is rendered by these educational curricula and Standards. Nevertheless, such an evaluation has not yet been implemented, and there exist some critical questions in this regard put forward by different scholars. Jones et al. (2013), for instance, enquire as to the main characteristics that constitute the nature of technology and the very concepts that should be, but are still not properly, taught and learnt in this respect; the researchers indeed stress the insufficiency of appropriate academic investigation into the manner that meets the needs of educational systems from this perspective.
It seems to us that these types of concerns could be tackled by taking advantage of the philosophy of technology; the discipline which, as discussed further on in this paper, can once again provide a conceptual contribution as to the nature and various properties of 'technology' and what students are supposed to learn in this regard, from different points of view. This is the very mission undertaken by this study: comparing what is articulated by the philosophers of technology with what is proposed by an extensively documented educational standard of the USA, i.e., Standards for Technological Literacy: Content for the Study of Technology (ITEA, 2007), as an exemplar long-term policy document of technological literacy. This yields a fruitful method to evaluate, in the same way, the adequacy of the Standards designed for teaching about technology and to propose the modifications that need to be considered in this regard. This paper proceeds as indicated below, beginning with an essential explanation of 'why and how' this contribution has approached the philosophy of technology; this ends with a model categorizing most of the relevant concepts, proposed within the philosophical reflections on technology, to be used in technology education materials and standards (Section 2). Afterwards, in order to show how this developed model works, it is thoroughly applied to the above-mentioned American case; this yields an insight into the efficiency of that case, at least from our philosophy-flavoured perspective (Section 3). Finally, the last two sections draw the main points together and provide a conclusion, opening up some innovative approaches for further studies (Sections 4 and 5).
PHILOSOPHY OF TECHNOLOGY; WHY AND HOW?
Philosophy of technology, as an antecedent field of technological reflections mentioned earlier, can afford a fertile ground of perspectives, content, and analyses to enrich and strengthen the tree of technological literacy studies. This is not a new claim at all, and one can easily find supportive ideas in earlier studies, such as the following:
• Seeking an effective way of shaping concepts of technology for students, De Vries and Tamir (1997) state that 'philosophy of technology is a discipline that has much to offer for technology education. Insights into the real nature of technology and its relationship with science and society can help technology educators build a subject that helps pupils get a good concept of technology and to learn to understand and use concepts in technology' (p. 3).
• Delving into the different aspects of teaching about technology, De Vries (2005b) speaks of two important issues to be taken seriously into account: (1) what is a correct concept of technology, and (2) what educational settings need to be created in order to shift, and in point of fact improve, pupils' actual concept of technology towards a correct concept in the experts' viewpoint. 'Nonetheless, contrary to many other school subjects,' he continues, 'there is [yet] no clear academic equivalent of technology education, from which a good conceptual basis can be derived …' (p. 149); he believes that the philosophy of technology can afford such an appropriate basis.
• The philosophy of technology in the view of Jones et al. (2013) contains 'a rich source of inspiration that can be used to guide the development of technology education' (p. 194).
These are only some ideas among others which, although they speak of the significant potential of philosophical reflections to yield a more concrete conceptualization of what needs to be learned about technology, have not yet led to a well-articulated scheme in this regard; this both inspires us and rationalizes our approach to strive to develop such a practical method.
However, prior to moving any further, it is worthwhile and essential to mention that our attempt has been initiated based on a satisfying account of technological literacy, in the first place; though one has difficulty finding a well-articulated definition for this concept, this mainly has to do with being more acquainted with the intrinsic nature of technology and its interrelationship with different individual and social aspects of human life (see, e.g., ITEA, 2007;and Jones et al., 2013). Consequently, this account will deal with a broad area of concepts and concerns that need to be taken into contemplation for teaching about technology.
The first step of this study was dedicated to compiling a list of such concepts and concerns. In order to do so, we conducted a survey of the former relevant research, and the article of Rossouw et al. (2010) seemed an insightful work in this step; benefiting from the ideas of various experts with philosophical and historical (together with educational) perspectives on technology, this study had composed an innovative list of concepts and contexts necessary for education regarding the nature of technology, as a contribution to the aims of technological literacy. Yet, though a valuable contribution, there were two problematic issues in that method:
• the provided list had originated from an experimental, not a philosophical, analysis, and therefore it could not be guaranteed to be comprehensive, and consequently,
• it was difficult to ascertain any categorization or classification related to the nature of technology, as addressed by the philosophers, within it.
Thus, this list needed, in our opinion, to be completed and somewhat changed so that it would more effectively serve our goal. The next step was devoted to conducting an extensive review of certain well-known books and references on the philosophy of technology, principal among which were:
• Thinking through technology (Mitcham, 1994)
• Readings in the philosophy of technology (Kaplan, 2004)
• Philosophy of technology: An introduction (Dusek, 2006)
• A companion to the philosophy of technology (Olsen, Pedersen & Hendricks, 2009)
• New waves in philosophy of technology (Olsen, Selinger & Riis, 2009)
• Philosophy of technology and engineering sciences (Meijers, 2009)
• A philosophy of technology (Vermaas, Kroes, Van De Poel, Franssen & Houkes, 2011)
This provided us with a more extensive list of relevant concepts that have received the attention of philosophers of technology. However, we still needed an appropriate tool to efficiently categorize this lengthy list. As a complementary stage, we followed Mitcham's theory (1994), previously recommended by scholars such as De Vries (1997) and Frederik, Sonneveld and De Vries (2011) for consideration in technology education. This theory was even resorted to, though only to a small extent, in the same way earlier by Compton (2007), as a philosophy-based criterion to assess and ensure the approach of The New Zealand Curriculum to teaching about technology. That is not to say that Mitcham's theory was the best; rather, it was one adequate method, among other possibilities, which fits our need here to classify the concepts.
Mitcham has distinguished four ways of defining technology: technology as object, knowledge, activity, and volition. In a later work, he explicates the background of his theory: 'In the most general sense, technology is "the making and using of artifacts," but we should look at four deeper aspects of this phenomenon. First, this making and using can be parsed into the objects that we make and use, such as machines and tools. This is "technology as object." Second, if we focus on the knowledge and skills involved in this making and using activity, that's "technology as knowledge." Third, there is the activity in which technical knowledge produces artifacts and the related action of using them: this constitutes "technology as action or activity." Fourth, there is another often overlooked dimension of "technology as volition": the will that brings knowledge to bear on the physical world to design products, processes, and systems. This technological will, through its manifestations, influences the shape of culture and prolongs itself at the same time.' (Mitcham, 2001) Finally, the last step was dedicated to applying Mitcham's theory to the aggregated concepts, which yielded Table 1, i.e., a framework that could be employed as our desired tool to analyze the intended case(s) in a systematic way. It is worth mentioning that Mitcham's own extensive explanation of the different sides of technology, in his well-known book Thinking through Technology (1994), has been predominantly used here in developing Table 1. That said, it is also worthwhile to emphasize that this framework is not claimed to be a perfect one; rather, it can be seen as an initial version that can be improved, specifically in terms of its entailed concepts, in later works. Bearing this in mind, let us move to the next section to demonstrate the manner in which it works and how it enables us to realize the extent to which the intended 'Standards' (here, that of the USA) satisfy our approach to learning about technology's nature.
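To make the analysis procedure concrete, the framework of Table 1 can be read as a mapping from Mitcham's four aspects to lists of concepts, against which a standard is checked for coverage. The Python sketch below is our own minimal illustration with an abbreviated concept list; it is not part of the original framework, and the example reading of a standards document is hypothetical.

```python
# Minimal illustration of using the Mitcham-based framework (Table 1) as
# an analysis tool: each aspect maps to concepts, and a standard is
# scored by which concepts it addresses. Concept lists are abbreviated
# examples, not the complete framework.

FRAMEWORK = {
    "object":    ["artefact", "dual nature", "system"],
    "knowledge": ["technology & science", "know-how vs know-that", "normativity"],
    "activity":  ["designing", "evaluation", "modelling", "use plan"],
    "volition":  ["ethics & values", "social construction", "aesthetics"],
}

def coverage(addressed_concepts):
    """Per-aspect fraction of framework concepts addressed by a standard."""
    addressed = set(addressed_concepts)
    return {
        aspect: sum(c in addressed for c in concepts) / len(concepts)
        for aspect, concepts in FRAMEWORK.items()
    }

if __name__ == "__main__":
    # Hypothetical reading of a standards document:
    stl_concepts = ["artefact", "system", "designing", "evaluation",
                    "technology & science", "social construction"]
    for aspect, score in coverage(stl_concepts).items():
        print(f"{aspect:10s} {score:.0%}")
```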
CASE STUDY: THE USA'S STANDARDS FOR TECHNOLOGICAL LITERACY
Among the existing Standards of technological literacy in the education systems of certain countries, the American case of Standards for Technological Literacy: Content for the Study of Technology (ITEA, 2007) can be regarded as the most extensive and elaborated document, serving as a vision as to 'what students should know and be able to do in order to be technologically literate' (p. vii).
This document (referred to henceforth as STL) has been sensibly organized to bridge the gap between students' life- and work-styles, which are ever more dependent on technology, and their understanding in this regard. By focusing on training K-12 students, STL has identified 20 principal standards necessary for them to learn about appropriately (Table 2); each standard in itself also entails certain benchmarks that present more practical and expounded instructions (ITEA, 2007, p. 15).
Another structural characteristic of STL is its specific classification of students: they are trained according to their grade level regarding their diverse but related contingent needs, interests, and abilities, whether physical or mental. In this respect, it suggests a form of grade-based categorization that begins with K-2 and continues through 3-5, 6-8, and 9-12, each accompanied by some further sub-categorizations (for more detail, see ITEA, 2007, p. 14).
All this encouraged us to investigate such a structured long-term policy document to see to what extent it addresses our philosophical account of the concepts and concerns required to be learned about the nature of technology. Nevertheless, this was not as easy as it initially appeared, because STL is actually not a curriculum directly related to the contents of educational materials, nor is it detailed. Rather, being a very extensive attainment target, it entails a set of Standards for teachers to use in developing their desired curricula, and this raised the challenging necessity of attempting to derive a distinct interpretation of the actual intention of some of its standards or benchmarks in terms of the concepts to be taught. For one thing, our results from the first inspection of STL were, surprisingly, not entirely the same as those of the second, and this persuaded us to try again, this time bearing these inconsistencies in mind, to reach a more reliable result, as spelled out in Table 3. In Tables 2 and 3, the standards have been categorized in a specific form, comprising five chapters, say five angles of view on technology, namely: the nature of technology, technology and society, design, abilities for a technological world, and the designed world (those which should be taught about, according to the aforementioned grade-based classification of students).
Table 2 (fragment). Chapters and standards of STL; for example, students will develop an understanding of The Nature of Technology, which includes acquiring knowledge of the standards listed under that chapter.
This type of categorization, though it might seem acceptable at first sight, is the subject of dispute: while it takes some of the concepts of Table 1 into proper consideration, it disregards some others, or at least does not appropriately touch upon them, as deliberated upon later on. This may have roots in the fact that STL is the outcome of usual experience-based educational reflections, which typically, as stated by De Vries (2013), emerge from the customary craft-oriented approaches. The following subsections present a more detailed discussion in this regard.
'Technology as Object'
Beginning with this aspect, one can easily observe that the notion of the artefact, as the most immediately apparent side of technology, has been suitably taken into consideration at the very opening of STL, where Standard 1 and its included benchmarks attempt to deliver an appropriate introduction to artefacts and artefactual features, and also to enable students, who are typically accustomed to identifying only high-tech artefacts as technological (see De Vries, 2005a, pp. 107-112), to adjust their conceptual bias toward the actual essence of technical artefacts.
Speaking more philosophically, the concept of the dual nature of artefacts has also been considered to some extent in the Standards: they address both the physical and intentional nature of artefacts, though not in these terms, by taking both the 'object' and 'volition' sides into account (see, e.g., benchmarks 1-3 and 13).
Nevertheless, STL scarcely provides a satisfying explanation of the concept of 'a (specific) design' of artefacts, particularly as to how such 'a design' relates the physical structure of an artefact to its function (or intention). In other words, even though this document attempts to provide some preliminary understanding about 'a design' through standards such as the 12th, such an inspection has little to do with the 'dual nature' perspective, which considers the specific design of artefacts as an essential element through which the 'physical' and 'intentional' natures interrelate and interact with each other.
The concept of systems, finally, has been properly looked at from different directions, mainly in (1) the 2nd and 3rd standards, where students are supposed to know more about the systemic nature of technology, and (2) the 12th, where they learn to some extent how to use and maintain technological products and systems in more appropriate and accurate ways.
'Technology as Knowledge'
Let us begin this section by investigating STL's deliberation on different aspects of the interrelation of science and technology, in terms of characterizing various dimensions of technological knowledge in relation to the scientific dimension, expounding their distinctions, and delineating the interactions between them. These subjects have been fairly well discussed throughout this document; it yields a number of general descriptions of knowledge in science and technology (Standard 3), discusses some relevant historical evidence in this regard (Standard 7), and meanwhile even scrutinizes notions such as the knowledge of design (Chapter 5) and creativity (Standards 1 and 8) to elucidate the 'non-scientific' side of technological knowledge. Nevertheless, there are still some missing points in this relation that deserve to be taken up more within STL. For instance, the know-how aspect of technological knowledge, as well as the manner in which it proceeds hand-in-hand with the know-that aspect (see, e.g., Vermaas et al., 2011, pp. 63-64), deserves far more than the minor reflection seen in the current treatment.
Table 3 (excerpt). Mapping of STL standards to the concepts of Table 1:
• Standard 1, The Characteristics and Scope of Technology: artefact (as object), artefact (as volition), creativity, invention & innovation, needs & wants, social construction of technology, system
• Standard 2, The Core Concepts of Technology: designing, evaluation, management, modelling, sociotechnical systems, system
• Standard 3, Relationships Among Technologies and the Connections Between Technology and Other Fields: invention & innovation, system, technology & science
Technological knowledge has other substantial specific characteristics as well that have not been seriously taken into account in STL. This type of knowledge, for instance, may be manifested by different qualities across various artefacts directly representing the level of their designers and/or engineers' knowledge and skills, in terms of providing effective ways and tools to satisfy the intended functions (see, e.g., Vermaas et al., 2011, Chapter 4).
Normativity is the next considerable feature of technological knowledge that has not been seriously touched upon within STL; only a little implicit attention has been paid to the role of different needs, expectations, ethical views, and the like in this regard. Yet this concept has been reflected upon in many respects by philosophers of technology such as De Vries (2005a), Franssen (2009), and Frederik et al. (2011). They argue about why and how our contextual beliefs, views, goals, and actions are strictly bound up with our evaluations and judgments and lead to specific types of technological knowledge and design; reflections on this can provide significant and practical insights for students about the real character of technological knowledge.
'Technology as Activity/Process'
This perspective on technology is in a very different situation in STL, compared to those of technology as objects or as knowledge. That is to say, the problem here does not concern coverage of the related concepts; all of them, as seen later on, have been considered to varying degrees throughout this document. Rather, the concern this time is that two prominent concepts among them, namely evaluation and modelling, have not been examined in a manner that satisfies our philosophy-originated expectations. Let us present a more profound inspection of the state of all these concepts in STL.
Beginning with designing, which encompasses most other notions placed in the technological 'activities' cell (of Table 1), this broad process has expectedly drawn significant attention here: one chapter (Standards 8-10) focuses entirely on various aspects of 'designing' and its sub-notions (this can also be ascertained to some extent within Standard 2).
Turning to the concepts of (human) needs, wants, and demands, as the main drivers of designing various artefacts, they too have been discussed in the course of standards such as the 1st and the 6th. Meanwhile, the critical role of the different types of invention and innovation in the designing process has been touched upon in Chapters 3 to 6. However, regarding evaluation (or assessment), STL mostly frames it as what normally occurs in different steps by diverse 'designers'; they, for instance, perform continuous assessments of their ideas, sketches, models, and prototypes, based on various feedback, in order to meet the desired function and quality: the aspect referred to specifically in Standards 2, 8, 9, and 11. Nevertheless, 'evaluation' has another side as well that has not been extensively addressed in STL, that is, the side of the aforementioned 'feedback', which in fact has its roots in customers' assessment of artefacts. Customers carry out such assessment in order to realize the extent to which what they have paid for fits what they actually need, in terms of the (quality of the) function of the intended artefact(s), or to recognize the impact of (a specific) technology on their individual and social life.
As to the notion of the use plan, it can be seen to be discussed too, at least as much as is expected of an attainment target, through Standard 12.
Finally, modelling can be thought of as the most problematic concept of this subsection; viewed from the philosophical perspective of this article, it seems that students do not acquire a comprehensive understanding of the different dimensions of the nature of modelling in this way.
All the same, this notion may initially appear to have received suitable attention in STL, through considerations such as the following:
• General discussions regarding models as tools that can be employed in the design processes (Standard 8);
• Modelling for conducting communication, representation, and evaluation of the designed solution(s) (Standards 5, 9, and 11);
• Modelling for testing and receiving feedback in order to complete the final adjustments or improvements (Standards 9 and 11);
• Modelling for prototyping (Standards 8, 9, and 11);
• Modelling as a visual (two- or three-dimensional) tool to aid the comparison and selection of the best solution(s) (Standard 11);
• Different types of modelling: graphical, mathematical, and physical (Standard 11).
Nevertheless, these do not seem to suffice for the needs of students who must become technologically literate; they need, as stressed by De Vries (2013), to learn more explicitly and more elaborately about the essence of models and the process of modelling: what the nature of 'modelling' is, what the various functions of 'models' are, how they come into use, and so on. Indeed, these are the inquiries addressed in some way or another by the philosophers of technology, who have identified more dimensions and categories of models in engineering practices. For instance, Boon and Knuuttila (2009) offer a compact, broad, but classified description of the goal of putting models to use in the engineering sciences, that is, '… to understand, predict or optimize the behavior of devices or the properties of diverse materials, whether actual or possible' (p. 693); they also emphasize the remarkable distinction between the models developed in 'engineering sciences' and those produced in 'engineering in practice'. Another valuable dimension elaborated on in that paper is the epistemic aspect of models: they are perceived by the authors as not only 'representational' but also 'epistemic' tools, partially independent from theory and data, which assist engineers in enhancing their learning by constructing and manipulating them and, sometimes, in realizing an unexpected innovative concept or area of research. Furthermore, philosophers such as De Vries (2013) also believe that students must, in another respect, acquire proper insight into the diverse typologies that classify models from different perspectives. He suggests a compact instance of how models could be categorized, and recognized, based on their types and functions. All these are only some, among many other, philosophical considerations which have led us to realize the considerable gap between what modelling actually is, in its nature and practice, and how it has been considered in STL; the latter has taken up modelling only in a very limited manner, confined to revealing certain representational functions of models (namely evaluation, testing, prototyping, receiving feedback, and so on) accompanied by a very simple classification in this regard.
'Technology as Volition'
This aspect of STL, as seen further on in this paper, accords only partly with the philosophical considerations about technology; that is to say, while it embraces to some extent a number of the concepts addressed in Table 1, there are certain others which have not yet been suitably taken into account. In addition, a substantial conflict within STL can also be recognized when examining it in this respect.
To begin with, artefacts as human volition, which refers to the social nature of objects (Vermaas et al., 2011, pp. 18-20), has been taken into consideration primarily in Standards 1 and 13. This makes sense because, in order to be technologically more literate in this sense, students should in tandem acquire:
• valuable knowledge about the social nature of artefacts (discussed under the subject of 'The Characteristics and Scope of Technology' in Standard 1), as well as
• a proper level of ability to live in a technological world (considered through the theme of becoming able to 'Assess the Impact of Products and Systems' in Standard 13).
These two sides of reflection are, moreover, complemented by inspecting how the design of artefacts (or systems) ties in with various volitional values of human beings, touched upon under the term value-sensitive design, which has been considered in Standards 11 and 13.
There are some explicitly society-based aspects of technological volition as well, concerned with the relationship between technology and the various sides of a human being's social life and taken up, in the philosophy of technology, with notions such as social construction of technology, technology and politics, technology and economics, and technology and culture; such aspects have been specifically deliberated on within Chapter 4.
Turning to the other concepts, it can be perceived that STL has paid particular attention (Chapter 7, Standards 14 to 20) to developing students' understanding of, and helping them to be able to select and use, various contexts of technologies, including medical, agricultural and related bio-, energy and power, information and communication, transportation, manufacturing, and construction technologies. As a matter of fact, this long-term policy document seems to provide a plentiful contribution in this sense as well.
Now let us take a look at STL's approach to ethics, values, and moralities, which are undoubtedly the most prominent subjects of discussion in the contemporary philosophy of technology. These have been addressed in the 4th chapter; they make students more literate, in this sense, at the different levels of designing, making, and using technical artefacts (or systems), which is well-intentioned in its own right. However, a significant conflict exists in STL with the philosophical reflections in this regard, as the latter mostly argue against the neutrality thesis (which considers technology a neutral entity completely dependent on human decisions to be weighed). Recent philosophers typically believe that (some) technical artefacts or systems do entail certain characteristics which create specific values and impose them on human life; some notable reasons resorted to in this regard are as follows:
• The inherent side-effects, whether intentional or unintentional, of some technologies, such as harmful chemical plants or electromagnetic devices;
• The inherent value or disvalue embodied in the specific design and the main goal of using some technologies; speed bumps, for instance, entail the value of increasing people's safety;
• The undeniable structure of sociotechnical systems, such as civil aviation, which cannot be separated from the active role of their internal (human) actors as essential functioning parts, and not mere users, of those technological systems.
(See, for more detail, Vermaas et al., 2011, pp. 16-18.) This value-laden account of (some) technologies clearly contrasts with the perception upon which STL was developed, as asserted from its very beginning: students should come to see each technology as neither good nor bad in itself, but as one whose costs and benefits should be weighed to decide if it is worth developing (p. 5).
This perspective is also emphasized by Standard 4, where this benchmark appears: 'Technology, by itself, is neither good nor bad, but decisions about the use of products and systems can result in desirable or undesirable consequences' (p. 60).
This problem is not at all a slight or negligible one; it indeed concerns students' foundational account of technology. Therefore, such a perspective should be amended according to the non-neutrality insight into technology; otherwise, students will most likely encounter genuine conflicts between what they learn in this sense and what they later experience in practice.
There are also some concepts, such as aesthetics, that deserve more consideration in this document. It has indeed been argued by philosophers like De Vries (2005a) that the 'aesthetical' aspect of technology needs to be seriously considered within plans for teaching about technology, as aesthetic values play prominent roles particularly in two important engineering fields, architecture and industrial design, which have coupled technology and art.
Last but not least, it appears as though STL has approached technology from the 'now' perspective, through which students learn how to live better lives in their current customary sociotechnical world. However, it is difficult to find, for example, a significant benchmark discussing or tracing how different views on metaphysics have led (the 'past' outlook), do lead, or may lead (the 'future' outlook) to various types of interactions with technology and different lifestyles. The history of human life is full of substantial and attractive instances capable of guiding the minds of students to an improved understanding of technological evolutions and their relationship with various world views. It would then be interesting for them to know, for example:
• How specific beliefs of the ancient Egyptians led to the design and construction of the Pyramids;
• How Persians' perception of God influenced their particular architecture, mainly rooted in the Safavid era;
• Why the modern account of science and technology has underpinned a new path of technological development, such as the invention of the steam engine, particularly in the West, and how it has led to post-modern technologies extensively based on IT and virtual space.
In this sense, students are really supposed to think more about the 'future': tracking the current pathway of technology advancements, thinking of the possible future characteristics of technology and, consequently, of human life- and work-styles, as well as contemplating which contexts of technology tend to gain a more impactful role and which will gradually diminish or be replaced by other fields of technological breakthroughs. In this manner they will learn much better how to enhance their abilities and knowledge in order to undertake more effective roles in shaping their own desirable future.
CONCLUSIONS
Summarizing the above-mentioned points affords an overall picture of how this paper has taken advantage of philosophy to contribute to improving the current Standards of technology education.
Through articulating the relevant concepts in an innovative way based upon Mitcham's characterization of the various aspects of technology, this study has arrived at a reasonable method to address the proposed research question, which is concerned with delivering sufficient knowledge about the nature of technology to students. Applying the developed framework to STL, as an exemplar case, revealed that this long-term policy document, though a very useful contribution with certain strong points, could still undergo a number of modifications in order to yield a more comprehensive insight into the nature and various properties of technology; this claim can be briefly recapitulated as Table 4 and outlined as follows:
• The particular attention of this Standard to 'the nature of technology' and 'design', through the two distinct chapters 3 and 5 respectively, affords a suitable account of technology as both 'object' and 'activity'; nevertheless, it still needs to pay more profound attention to 'the specific design' of artefacts, as what interrelates their physical and intentional natures, which has scarcely been discussed in an explicit way, as well as to the essence of 'modelling' and 'evaluation', which, though touched upon more or less, have not been discussed to the depth described by philosophers of technology.
• As to the 'knowledge' aspect of technology, there are certain essential concepts of which it is hard to find any clear discussion throughout STL, and it is therefore suggested that they be incorporated into upcoming revisions; students should become more acquainted with 'the normative nature' of technological knowledge and also distinguish its 'know-how' aspect from the 'know-that'; they also need to be capable of realizing how technological phenomena indicate the diverse types and levels of knowledge and skills that support them.
• Chapter 4 addresses the societal dimension of technology, which is later accompanied by an extensive discussion of its various contexts in Chapter 7; together these provide a satisfying deliberation on technology's 'volitional' aspect for students. Yet certain subjects seem missing, namely those which relate the notions of 'aesthetics', 'metaphysics', and 'the future of human beings' to the essence of technology. Moreover, as far as the subjects of ethics, values, and moralities are concerned, STL's 'neutral' view of technology is highly recommended to be revised and replaced by the 'non-neutral' perspective.
We would like to end the paper with some suggestions for further studies; since the initiated approach has been based upon a concrete ground of philosophical reflections on technology, it can be applied to evaluate other Standards and even other types of curricula or materials of technological literacy as well. For one thing, the New Zealand Curriculum (Ministry of Education of New Zealand, 2007) and its Technology Curriculum Support (Ministry of Education of New Zealand, 2010) have claimed to pursue Mitcham's account of the different aspects of technology; this claim can be examined using this method. Alternatively, one can analyze to what extent the craft-based and design-oriented approach of England's long-term policy document, i.e., National curriculum in England: design and technology programmes of study (Department of Education of the UK, 2013), delivers a comprehensive understanding of technology in practice. Such an investigation can also be extended to analysing and modifying the relevant schoolbooks, where the above-mentioned general instructions are given more deliberation.
"Education",
"Engineering",
"Philosophy"
] |
The Five Diamond Method for Explorative Business Process Management
Explorative business process management (BPM) is attracting increasing interest in the literature and professional practice. Organizations have recognized that a focus on operational efficiency is no longer sufficient when disruptive forces can make the value proposition of entire processes obsolete. So far, however, research on how to create entirely new processes has remained largely conceptual, leaving it open how explorative BPM can be put into practice. Following the design science research paradigm and situational method engineering, we address this research gap by proposing a method called the Five Diamond Method. This method guides explorative BPM activities by supporting organizations in identifying opportunities from business and technology trends and integrating them into business processes with novel value propositions. The method is evaluated against literature-backed design objectives and competing artifacts, qualitative data gathered from BPM practitioners, as well as a pilot study and two real-world applications. This research provides two contributions. First, the Five Diamond Method broadens the scope of BPM by integrating prescriptive knowledge from innovation management. Second, the method supports capturing emerging opportunities arising from changing customer needs and digital technologies. Supplementary Information The online version contains supplementary material available at 10.1007/s12599-021-00703-1.
Appendix 1. Assessment of Five-Diamond-Method using the CAMAS Method
To define the context type of a method, i.e., the situations in which the method can be used, we use the CAMAS method (vom Brocke et al. 2020). Following the four activities of the Assessment Process of the CAMAS method, the Five-Diamond-Method is assessed as shown in the figure.
Appendix 2. Overview of Expert Interviews
To evaluate the Five-Diamond-Method from an ex-ante naturalistic perspective, we conducted expert interviews (Myers and Newman 2007) following an expert sampling approach (Bhattacherjee 2012). Thus, we recruited industry experts from our personal networks who differ in their personal and academic backgrounds and cover various departments and industries. Moreover, we only included experts with more than five years of professional experience and substantial experience in BPM, IM, or business development. We conducted semi-structured interviews structured along the method's activities (Myers and Newman 2007). Each interview took about one hour and was attended by two researchers. After eight interviews, we agreed that the experts' feedback was consistent and that conceptual saturation had been reached. Therefore, we did not conduct further interviews. As the consulting company primarily advises medium-sized product organizations, it was incorporated as a multiplier despite its small size. Table 5 within the manuscript (Section 5.2) shows all industry experts and respective organizations; Table A-1 summarizes highlights of the experts' feedback. Details for method application (e.g., workshop setting, the number of trends to be identified and selected, and participants) should be provided to ensure the method's usefulness (ID2, ID7).
Appendix 4: Recommendation added that the Five-Diamond-Method should be applied repeatedly.
Section 4.2: Hint added that the overarching diamond intends activities 1 to 4 to be executed in the proposed order. However, different starting points can also be chosen, or activities or techniques can be omitted. Moreover, the method proposes an iterative procedure model. Appendix 4: Based on both real-world applications (Section 5.3), recommendations for application are proposed.
Purpose Diamond
The purpose of an organization is crucial, but difficult to define (ID3, ID4, ID8). Besides the context framework, additional industry classification schemes might be helpful to better define the organizational context, especially the organization's own and related industries (ID7).
Section 4.3: Information added on how to define the purpose of an organization.
Section 4.2: Two industry classification schemes, i.e., GICS and NACE, are provided as potential tools to define the industry context.
Business Diamond
There is a tremendous number of industry trends that could be included. Thus, it is difficult to focus on the most relevant ones. Various evaluation criteria might also be helpful to select relevant trends (ID2, ID6).
Section 5.3 and Appendix 4: Ideas on how to focus on relevant trends (i.e., the post-corona time) and evaluation criteria (e.g., relevance, enthusiasm for customers) are proposed when applying the method. Additionally, we recommend including various stakeholders (e.g., business and market analysts) when searching for relevant business trends.
Technology Diamond
When identifying digital technologies, it is important that some organizations might only focus on emerging technologies to be a first mover, while other organizations might focus on more established ones (ID1, ID5).
Section 4.5: Hint added that the divergent thinking phase covers the collection of existing and emerging digital technologies.
Integration Diamond
Generating new process ideas need not only be based on business and technology trends; it can also be done by using just the defined organizational purpose (ID3).
Section 4.6: Hint added that new process ideas can build on the purpose, business, and technology diamonds as well as combinations of these.
Appendix 3. Overview of Application Setting with Students
To evaluate the Five-Diamond-Method from an ex-post naturalistic perspective, we first taught the method to 22 bachelor students (enrolled in a business or business law major) who chose a specialization in BPM. In four sessions which lasted three hours each, the students got to know the Five-Diamond-Method and its theoretical foundation. The students then formed groups and applied the Five-Diamond-Method to case organizations they were assigned to. The students were asked to identify their organization's purpose and to justify necessary assumptions they made. The students had three weeks to apply the method to the case organization and to submit a written report.
After the students had applied the method and submitted the report, they were asked to anonymously fill out an online questionnaire, in which 18 students participated. Through this survey, quantitative data about the perceived usefulness and ease of use could be gathered. For this, we adjusted the items proposed by Davis (1989), with four items for each construct, e.g., "I find the Five-Diamond-Method useful for identifying innovation opportunities in business processes" for perceived usefulness. The participants were able to respond on a 7-point Likert scale with the extremes labeled "strongly disagree" and "strongly agree". The comprehensive results are shown in Figure 3 (manuscript).
Perceived Usefulness:
Using the 5-Diamond method would improve my performance in identifying innovation opportunities in business processes
Using the 5-Diamond method would improve my productivity in identifying innovation opportunities in business processes
Using the 5-Diamond method would enhance my effectiveness in identifying innovation opportunities in business processes
I find the 5-Diamond method useful for identifying innovation opportunities in business processes
Perceived Ease of Use:
Learning to use the 5-Diamond method would be easy for me
I would find it easy to deploy the 5-Diamond method to do what I want it to do
It would be easy for me to become skillful in the use of the 5-Diamond method
I would find the 5-Diamond method easy to use
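A minimal sketch of how such Likert questionnaire data could be aggregated per construct, assuming a response matrix with one row per student and the eight items above in order (columns 0–3 for perceived usefulness, 4–7 for perceived ease of use); the response values below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical 7-point Likert responses (1 = strongly disagree, 7 = strongly agree);
# rows = students, columns = the eight items listed above, in order.
responses = np.array([
    [6, 5, 6, 7, 5, 6, 6, 5],
    [5, 5, 6, 6, 6, 5, 6, 6],
    [7, 6, 6, 6, 4, 5, 5, 5],
])

def construct_scores(r):
    """Mean and sample standard deviation per TAM construct (Davis-1989-style items)."""
    pu, peou = r[:, :4], r[:, 4:]
    return {
        "perceived_usefulness": (pu.mean(), pu.std(ddof=1)),
        "perceived_ease_of_use": (peou.mean(), peou.std(ddof=1)),
    }

print(construct_scores(responses))
```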
Appendix 4. Overview of Workshop Setting with Case Companies
To evaluate the Five-Diamond-Method from an ex-post naturalistic perspective, we applied the Five-Diamond-Method within two case organizations. The results for the insurance company are shown in Table A-2, for the facility management company in Table A-3.
Appendix 5. Learnings and Recommendations for Application
Based on both applications, we provide some recommendations for applying the Five-Diamond-Method in the following.
We recommend including different roles within the organization to account for different perspectives during the divergent and convergent thinking phases of the different activities. For instance, it is beneficial to include technology experts who are knowledgeable about the organization's technical infrastructure in order to get instant feedback about technical feasibility. Alternatively, as seen in real-world application 2, one can also involve one key stakeholder who is knowledgeable about business-related matters (e.g. strategy and vision) as well as the operational details (e.g. process designs). In such cases, however, it is important to moderate the discussion in a way that the participant has time and space to slip into the respective role.
Applying the Five-Diamond-Method within a half-day workshop was challenging from a time perspective. While we recommend that the different stakeholders prepare individually by identifying appropriate trends beforehand, it is still advised to go through each diamond in the course of the workshop in order to agree on its relevance and impact before generating and evaluating innovative process ideas. Thus, we recommend at least one full day for conducting the workshop. Furthermore, following the feedback of one of the participants, it can be beneficial for the organization to have follow-up reflection opportunities. This can be important as the implementation of new processes can pose organizational challenges.
Regarding the sequence of activities, we realized that there are many iterations between the last three diamonds (business, technology, and integration). We think this is beneficial, as the output of one diamond can trigger innovative process ideas in another. Similar to creative techniques like design thinking, iterating through the activities can be useful as it promotes learning (Plattner et al. 2014).
To account for the specifics of a workshop, the Five-Diamond-Method does not provide a recommendation regarding the time spent on each diamond or the number of trends/process ideas to be collected. However, we recommend specifying these parameters at the beginning of a workshop in order to clarify and align the expectations of the participants. The creative nature of the workshop also makes it possible to generate process- or purpose-unrelated ideas. We thus recommend continuously checking that the process focus is maintained, especially within the integration diamond. Following the comment of one participant, it is important to involve a moderator who guides the process. | 2,217.6 | 2021-06-08T00:00:00.000 | [
"Business",
"Computer Science"
] |
The dynamic evolution of mobile open reading frames in plastomes of Hymenophyllum Sm. and new insight on Hymenophyllum coreanum Nakai
In this study, four plastomes of Hymenophyllum, distributed on the Korean peninsula, were newly sequenced and phylogenomic analysis was conducted to reveal (1) the evolutionary history of plastomes of early-diverging fern species at the species level, (2) the importance of mobile open reading frames in the genus, and (3) plastome sequence divergence providing support for H. coreanum to be recognized as an independent species distinct from H. polyanthos. In addition, 1C-values of H. polyanthos and H. coreanum were measured to compare the genome sizes of both species and to confirm the diversification between them. The rrn16-trnV intergenic regions in the genus varied in length owing to Mobile Open Reading Frames in Fern Organelles (MORFFO). We investigated enlarged noncoding regions containing MORFFO throughout the fern plastomes and found that they were strongly associated with tRNA genes or palindromic elements. Sequence identity between the plastomes of H. polyanthos and H. coreanum is quite low, at 93.35% for the whole sequence and 98.13% even if the variation in the trnV-rrn16 intergenic spacer is ignored. In addition, different genome sizes were found for these species based on the 1C-value. Consequently, there is no reason to consider them conspecific.
Hymenophyllum s.l. comprises more than 300 species with a nearly cosmopolitan distribution 34 . Pryer et al. 28 showed that Serpyllopsis, Cardiomanes, and Microtrichmanes, which were segregated from each other or belonged to Trichomanes in earlier classifications 30,32,33,35 , were included within Hymenophyllum s.l. However, they also mentioned that their data set was insufficient in terms of taxon sampling and the number of genes used to evaluate all of the segregates within Hymenophyllum s.l. Hennequin et al. 36 reconstructed the phylogenetic tree of Hymenophyllum s.l. using increased sampling and additional genes and divided Hymenophyllum s.l. into eight subgenera. Interestingly, four segregate genera and one section belonging to Trichomanes formed a clade with Hymenophyllum species, in agreement with Pryer et al. 28 . Recently, Ebihara et al. 29 divided Hymenophyllum s.l. into 10 subgenera based on molecular phylogenetic analyses and macroscopic characters.
In Korea, three species and one unaccepted taxon of the genus Hymenophyllum have been reported: Hymenophyllum barbatum (Bosch) Baker, H. polyanthos (Sw.) Sw., H. wrightii Bosch and H. coreanum Nakai. Among them, H. coreanum was first recognized by Nakai 37 , who collected the sample from Kumgangsan in Gangwondo, Korea and described it as a plant with 2-5 mm stipes, 5-15 mm fronds and dense sori at the frond apex. In contrast to H. coreanum, H. polyanthos in Korea is 1.5-3 times longer and has involucres from the apex to the middle of the frond (Fig. 1) 38 . However, in spite of these different characteristics, H. coreanum has been considered a synonym of H. polyanthos by many researchers. The broadly described species H. polyanthos has a worldwide distribution but probably includes a number of distant lineages in the subgenus Mecodium that do not have any specialized morphological characters 39 . The recent phylogeny of Hymenophyllum s.l. using molecular markers revealed that H. polyanthos is highly polyphyletic 36,40 . Therefore, clearer scientific evidence concerning the correct taxonomic position of H. coreanum and its relationship with H. polyanthos is necessary.
In this study, we sequenced four new plastomes of Hymenophyllum species distributed on the Korean Peninsula: (1) to explore the plastome evolution of early-diverging leptosporangiate ferns at the species level, (2) to assess the importance of mobile open reading frames in the genus, and (3) to examine whether plastome sequence divergence supports recognizing H. coreanum as a species distinct from H. polyanthos.
Results
Genome structure and gene contents of plastomes in Hymenophyllum. The newly sequenced plastomes of the four Hymenophyllum species ranged from 144,112 bp to 160,865 bp in length with a GC content of 37.5-38.6% (Table 1). The large single copy (LSC) region in the subgenus Hymenophyllum was 10 kb longer than that in the subgenus Mecodium, due to inverted repeat (IR) expansion into the LSC region (Fig. 2). The length difference of the small single copy (SSC) region between the two subgenera mainly resulted from insertions at rpl32-trnP and the ndhA intron in the subgenus Mecodium. The IR-SSC boundary differed slightly among species but was positioned near the 5ʹ end of ndhF. There were 85 coding genes, 8 rRNA genes, 33 tRNA genes, and one pseudogene (trnL-CAA) in the subgenus Hymenophyllum. However, the three species in the subgenus Mecodium had one additional copy each of rps7, ndhB and trnV due to the IR expansion. Gene order remained stable among species in the genus Hymenophyllum except for the IR expansion of the subgenus Mecodium.
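As a side note on the reported GC contents, such values are straightforward to recompute from an assembled sequence; a minimal sketch (the input string is a placeholder, not one of the four plastomes):

```python
def gc_content(seq: str) -> float:
    """Percentage of G and C among unambiguous A/C/G/T bases."""
    s = seq.upper()
    counts = {b: s.count(b) for b in "ACGT"}
    return 100.0 * (counts["G"] + counts["C"]) / sum(counts.values())

print(round(gc_content("ATGCGCGTATTACGCG"), 1))  # placeholder sequence
```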
Length variation hotspots in Hymenophyllum plastomes. The intergenic spacer (IGS) between trnV and rrn16 was the most variable region in length (Fig. 2). Because coverage depths among the three plant genomes in NGS data are significantly different 41 , the stable read depths throughout the plastomes in this study confirmed that these length variations did not result from plastome-like sequences residing in the mitochondrial or nuclear genomes.
In the subgenus Hymenophyllum, trnV-rrn16 of H. holochilum was 2387 bp in length and mostly belonged to the IR region. However, that of H. barbatum was expanded by 3.4 kb relative to H. holochilum, and two-thirds of this region was part of the LSC region (Supplementary Table 1). In the subgenus Mecodium, trnV-rrn16 of H. wrightii was 2190 bp in length, and 5.7 kb and 4.4 kb expansions were confirmed in the plastomes of H. polyanthos and H. coreanum, respectively. In total, 20 ORFs of more than 100 amino acids (aa) in length were also found in this length-variable region of the five Hymenophyllum species (Supplementary Fig. 1). These ORFs varied in length (Supplementary Table 3). Among them, 22 loci (75%) were flanked by tRNA genes. The most frequent locus including MORFFO throughout the plastomes was rrn16-(trnV-GAC)-rps12, in which trnV-GAC was generally intact in eusporangiate ferns and early-diverging leptosporangiate ferns but was deleted or pseudogenized in most core leptosporangiate ferns.
In addition, all expanded regions had palindrome sequences at both ends, with a minimum stem length of 5 bp, a minimum loop length of 5 bp, and a minimum stem-loop sequence of 20 bp. Hymenophyllum s.l. 35 was recently reclassified into 10 subgenera by Ebihara et al. 29 based on morphological characters and molecular evidence 36 . Among the 10 subgenera sensu Ebihara et al. 29 within Hymenophyllum, eight subgenera have the basic chromosome number of x = 36. However, the subgenus Mecodium has a chromosome number of x = 28 and the subgenus Hymenophyllum has chromosome numbers of x = 11 to 28 29 . Because the latter two subgenera seemed to be the most recently derived taxa within the genus 36 , and the ancestor of Hymenophyllales was suggested to have the chromosome number 2n = 72 43 , it is assumed that the reduction in chromosome number is a synapomorphy of these two subgenera in the genus Hymenophyllum 44 .
According to Ebihara et al. 29 , the subgenus Mecodium consists of H. polyanthos and its local derivatives. On the other hand, it was noted that H. polyanthos shows variation in its chromosome number 45,46 (n = 28 in Japanese taxa or n = 27 in taxa from countries outside Japan), and more recently, the polyphyly of this cosmopolitan species, including several regional taxa, was discovered 36,40 . Therefore, the n = 27 population of H. polyanthos mentioned by Iwatsuki 46 is likely to be the result of a transition from n = 28 to n = 27.
In this study, the chromosome numbers of H. polyanthos and H. coreanum were not directly counted. However, the result of flow cytometry implied that the genome of H. coreanum is downsized compared with that of H. polyanthos. It is not clear, based on the data at hand, whether H. coreanum corresponds to the n = 27 population of H. polyanthos mentioned by Iwatsuki 46 . However, it is clear that H. coreanum is distinct from H. polyanthos based on morphological characteristics and genome size. Therefore, H. coreanum should be considered an independent species, not one of the synonymous taxa of H. polyanthos. The divergence of plastome sequences between the two species also strongly supports this taxonomic treatment. So far, it has been rare to report more than two plastome sequences for the same species of ferns (Fig. 5 and Supplementary Table 4). The percentage of sequence identity between two plastomes within a species ranges from 98.86% (Equisetum arvense) to 99.97% (Mankyua chejuense). Except for the case of E. arvense, in which copy number variations frequently occurred 15 , sequence identities within the same species are higher than 99.7% in ferns. In contrast, the sequence identities between the plastomes of two different species in a genus, with the exception of the genera Cyrtomium and Azolla, are less than 98.7% (Fig. 5 and Supplementary Table 4). Sequence identity between the plastomes of H. polyanthos and H. coreanum is quite low, at 93.35% for the whole sequence and 98.13% even if the variation in the trnV-rrn16 IGS is ignored. Consequently, the plastome sequences of these two species revealed that they are quite distinct from each other, and there is no reason to consider them an intraspecific variation.
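A sketch of one common way such pairwise identity percentages can be computed from a whole-plastome alignment; note that identity values depend on how gap columns are treated, so this is one convention rather than necessarily the one used for the figures above.

```python
def percent_identity(a: str, b: str) -> float:
    """Identity over aligned columns where neither sequence has a gap."""
    assert len(a) == len(b), "sequences must come from the same alignment"
    pairs = [(x, y) for x, y in zip(a.upper(), b.upper())
             if x != "-" and y != "-"]
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

print(round(percent_identity("ATG-CGTA", "ATGACGTT"), 2))  # 85.71
```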
In this study, we only investigated the Korean population of H. coreanum, and it is not known whether the H. coreanum distributed in China, Taiwan and Japan 38 also has the same genome size and plastome sequences as the Korean population. In Korea, the species mainly inhabits high mountains; however, information on this species is very scarce because it has previously been considered a synonymous taxon of H. polyanthos. Further study of this species will facilitate a better understanding of the speciation of the subgenus Mecodium, because many species within the subgenus are derivatives of H. polyanthos, and H. coreanum, at least in Korea, seems to have diverged from H. polyanthos only recently.
MORFFO of ferns prefer stem-loop structures.
In previous studies 25,47 , the physical positions of MORFFO were considered to be related to inversions or IR borders, even though it is not clear whether MORFFO are a cause or a consequence of inversions. In the present study, we made the new finding that the positions of MORFFO were between palindrome sequences, especially near tRNA genes (Supplementary Table 3). Transfer RNA genes are very important in the structural variation of fern plastomes. For instance, the inversion between trnfM and trnE is specific to monilophytes 48 , and a number of inversions occurring near tRNA genes have been found in fern plastomes 3,19,26,27 . In addition, length variations in noncoding regions of plastomes owing to tRNA gene repeats have been suggested 19,27 . Transfer RNA-related genome rearrangements are not a unique characteristic of fern plastomes. Intermolecular recombination between distinct tRNA genes has been found in cereals 49 , and extensive rearrangements due to repeats or tRNA genes were also suggested in Trachelium caeruleum 50 . Consequently, tRNA genes in the plastomes of ferns seem to be strongly associated with genome instability events, which can lead to the insertion of MORFFO via double-strand breaks 51 .
In bacterial genome evolution, most transposable elements have short terminal inverted repeats 52 . The insertion of some specific sequences related to repetitive extragenic palindromic (REP) elements has been reported 53,54 . Therefore, the mobility of MORFFO within the plastome is likely to be associated with the palindrome sequences at both ends of the insertion regions. Two MORFFO at distant positions in a single plastome, found in certain species of Polypodiales 47 and in Hymenophyllales in this study, also seem to be caused by the palindrome sequences.
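As a concrete illustration of the palindrome criteria reported above (minimum stem of 5 bp, minimum loop of 5 bp, minimum stem-loop of 20 bp), the following brute-force sketch screens a sequence for perfect inverted repeats; the thresholds follow the text, whereas the maximum stem and loop sizes are arbitrary caps added for illustration.

```python
def revcomp(s: str) -> str:
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_stem_loops(seq, min_stem=5, min_loop=5, min_total=20,
                    max_stem=30, max_loop=60):
    """Return (start, stem_len, loop_len) of perfect inverted repeats whose
    stem-loop length (2*stem + loop) reaches min_total."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq)):
        for stem in range(min_stem, max_stem + 1):
            left = seq[i:i + stem]
            if len(left) < stem:
                break
            for loop in range(min_loop, max_loop + 1):
                j = i + stem + loop
                right = seq[j:j + stem]
                if len(right) < stem:
                    break
                if right == revcomp(left) and 2 * stem + loop >= min_total:
                    hits.append((i, stem, loop))
    return hits

# toy check: 5-bp stem, 10-bp loop -> 20-bp stem-loop
print(find_stem_loops("ATGCGTTTTTTTTTTCGCAT"))  # [(0, 5, 10)]
```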
Collectively, it is likely that the mobile elements, consisting of palindrome sequences, lead to the insertion of MORFFO in the plastomes of ferns, and that genome instability caused by hairpin structures at the ends of MORFFO influences genome rearrangements in the plastome. As a result, inversions seem to occur frequently at the loci having MORFFO.
Degradation after insertion of a long DNA contig including MORFFO. So far, more than one hundred plastome sequences of ferns have been reported, and MORFFO have been found in at least one species each of Marattiales, Ophioglossales, Hymenophyllales, Gleicheniales, Schizaeales, Cyatheales, and Polypodiales (Supplementary Table 2). The MORFFO status (present or absent) varies at the family level in Polypodiales 47 or at the genus level in Ophioglossaceae, Schizaeaceae, Blechnaceae, Dryopteridaceae, Lomariopsidaceae, Polypodiaceae, Pteridaceae, and Thelypteridaceae. In addition, the inserted region of MORFFO also differs within certain genera 25 (i.e., Asplenium, Cheilanthes, Myriopteris). It is uncertain exactly when MORFFO were inserted into the fern plastomes. However, if MORFFO insertions were independent events in different lineages of ferns, it is difficult to explain (1) why expanded regions, including MORFFO, have high sequence similarities throughout the fern lineages 25 and (2) how MORFFO were frequently transferred into a highly conserved genome, in terms of gene transfer 55 . In addition, MORFFO-like sequences found in cyanobacteria or green algae imply that MORFFO have a single origin, starting from nuclear genomes after endosymbiotic or horizontal gene transfer 3,25 .
Even though MORFFO are supposed to be under selective pressure 25 , they are not assumed to play a vital role in fern plastomes, because there was no vestige of MORFFO in various fern families (Supplementary Table 2). It is not surprising for unnecessary genes to degrade in the plastomes of land plants 56,57 . In particular, the degradation of ndh genes in Orchidaceae showed how easily genes can be deleted from plastomes among closely related taxa 58 . MORFFO in the plastome of Mankyua and in the mitochondrial genome of Helminthostachys 3 also suggest MORFFO degradation through intracellular gene transfer.
In Hymenophyllaceae, the length of rrn16-trnV of H. wrightii is slightly expanded owing to MORFFO, similar to previously reported plastomes 20,59 except Vandenboschia 60 . Interestingly, rrn16-trnV is almost tripled in H. barbatum and H. coreanum and quadrupled in H. polyanthos compared with H. wrightii (Fig. 2). As mentioned earlier, MORFFO seem to have existed in the ancestor of Hymenophyllum; thus, the length variation of rrn16-trnV in Hymenophyllum can be interpreted as a consequence of degradation.
Similar to the plastomes in Ophioglossaceae and Hymenophyllum, MORFFO survived or disappeared during the evolution of fern plastomes. Partial sequences of MORFFO in most core leptosporangiate ferns except Pteridaceae 25 are consistent with the degradation hypothesis. MORFFO-lacking lineages may come from an ancestor that underwent an independent MORFFO degradation event. However, it is not clear why MORFFO remain in relatively recently diverged families in the fern phylogeny 17 , such as Pteridaceae of Polypodiales 25 , if they do not have any function. One hypothesis is that morffo1, morffo2, and morffo3, which are found in Pteridaceae, are not intact genes but partial genes of a large ORF. Robison et al. 25 described that these three ORFs cluster immediately adjacent to one another when they are present, and that morffo1 and morffo2 often form a larger ORF. ORF_B1 in H. barbatum is highly similar to morffo1 and morffo2, consistent with their observation. If the insertion of the MORFFO contig was the consequence of genomic instability caused by stem-loop structures, and rearrangements including insertion/deletion and inversion were then caused by genomic instability at the inserted region, a larger ORF can be fragmented, and shuffling of smaller fragments can create novel chimeric ORFs. Indeed, such novel chimeric ORFs have been reported in the mitochondrial genomes of land plants 61,62 .
Plastomes of eusporangiate and basal leptosporangiate ferns will shed light on the prototype of extant MORFFO, and this information will lead us to understand the evolution of MORFFO in fern plastomes.
Materials and Methods
Sequencing and assembling. Four high-quality DNAs were sequenced on an Illumina HiSeq X Ten. The raw reads were trimmed by Trimmomatic 0.36 63 with the options leading:10, trailing:10, slidingwindow:4:20, and minlen:50. The assembly method was identical to that of Kim et al. 64 , using Hymenophyllum holochilum (NC_039753) as a reference sequence.
Gene annotation. Genes in the four plastome sequences were annotated by comparison with the genes of H. holochilum using Geneious 10.2.6. Transfer RNA genes were verified using tRNAscan-SE 65 , and ORFs > 303 bp in length were also annotated using Geneious 10.2.6.
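A rough sketch of the kind of six-frame ORF scan implied by the 303 bp threshold (ATG-to-stop, both strands); the actual annotation logic of Geneious may differ, so this is illustrative only.

```python
def revcomp(s: str) -> str:
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_orfs(seq, min_len=303):
    """Yield (strand, start, end, length) of ATG..stop ORFs >= min_len nt,
    scanning three frames on each strand (coordinates on the scanned strand)."""
    stops = {"TAA", "TAG", "TGA"}
    seq = seq.upper()
    for strand, s in ((+1, seq), (-1, revcomp(seq))):
        for frame in range(3):
            start = None
            for i in range(frame, len(s) - 2, 3):
                codon = s[i:i + 3]
                if start is None and codon == "ATG":
                    start = i
                elif start is not None and codon in stops:
                    if (i + 3) - start >= min_len:
                        yield (strand, start, i + 3, (i + 3) - start)
                    start = None

# toy check with a lowered threshold: one 9-nt ORF on the forward strand
print(list(find_orfs("CCATGAAATAGCC", min_len=9)))  # [(1, 2, 11, 9)]
```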
Analyses of ORFs in Hymenophyllum and expanded noncoding regions in fern plastomes.
To investigate the ORFs located in the rrn16-trnV IGS, which shows high length variation in the genus, we extracted the region from the four plastome sequences. First, the rrn16-trnV IGS was compared with other nucleotide sequences using blastn 66 to find homologous sequences. Thereafter, the ORFs were searched using blastp to confirm their origin. To explore the physical positions of MORFFO, 126 plastome sequences were downloaded from NCBI and eight plastome sequences were received from Lehtonen and Cárdenas 47 (Supplementary Table 2).
Phylogenetic relationships among four Hymenophyllum species. A total of 85 coding genes, excluding duplicated genes, were extracted from eight Hymenophyllaceae species, two Osmundales species and one Gleicheniales species. To obtain the best partitioning scheme for each gene position in a single concatenated sequence, ModelFinder 67 and PartitionFinder 68 were used for the ML and BI analyses, respectively. MP and ML analyses were conducted using PAUP 69 and IQ-TREE 70 with 1000 bootstrap replications. BI analysis was inferred by MrBayes 71 under the models selected in PartitionFinder with ngen = 1,000,000, samplefreq = 500, and burninfrac = 0.25.
Estimation of DNA amounts. The genome sizes of H. polyanthos and H. coreanum were measured using living materials. For H. polyanthos, three populations were collected from Mt. Jiri-san on the mainland of the Korean peninsula and Mt. Halla-san on Jeju Island, located in the southernmost region of Korea. Two populations of H. coreanum were also collected from Mt. Jiri-san and Mt. Gariwang-san, in the southern and northern parts of Korea, respectively. As a calibration standard, we grew Nicotiana tabacum in a greenhouse for several weeks and used only a 1 × 1 cm 2 piece of young leaf. The reference and the two samples were separately chopped with 500 μl of nuclei extraction buffer from the CyStain UV Precise P kit (Sysmex Partec, Germany) in a Petri dish on ice. The extraction time for the samples was extended to eight minutes because with shorter times the numbers of events were too low to visualize the graph. The debris was filtered using a non-sterile CellTrics filter Green 30 μm (Sysmex Partec, Germany), and the nuclei were stained with the staining buffer of the CyStain UV Precise P kit (Sysmex Partec, Germany). Particles of each sample were measured using a CyFlow Cube 6 flow cytometer (Sysmex Partec, Germany).
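A minimal sketch of the internal-standard arithmetic behind such genome-size estimates; the peak positions and the 2C value assumed for the N. tabacum standard are placeholders, to be replaced with the instrument readings and the accepted literature value.

```python
def sample_2c_pg(sample_g1_peak: float, reference_g1_peak: float,
                 reference_2c_pg: float) -> float:
    """2C DNA amount (pg) of the sample from the ratio of G1 peak fluorescences."""
    return reference_2c_pg * sample_g1_peak / reference_g1_peak

# Placeholder numbers, not measured values:
ref_2c = 10.0  # assumed 2C value (pg) for the N. tabacum standard
two_c = sample_2c_pg(sample_g1_peak=152.0, reference_g1_peak=100.0,
                     reference_2c_pg=ref_2c)
one_c_mbp = (two_c / 2.0) * 978.0  # 1 pg of DNA corresponds to ~978 Mbp
print(f"2C = {two_c:.2f} pg, 1C = {one_c_mbp:.0f} Mbp")
```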
Data availability
The complete sequence data used in the current study are available in the NCBI GenBank repository. All data generated or analyzed during this study are included in this published article and its Supplementary Information files. | 4,310.6 | 2020-07-06T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
LFV couplings of the extra gauge boson Z' and leptonic decay and production of pseudoscalar mesons
Considering the constraints of the lepton flavor violating (LFV) processes $\mu \rightarrow 3e$ and $\tau\rightarrow3\mu$ on the LFV couplings $Z'\ell_{i}\ell_{j}$, in the contexts of the $E_{6}$ models, the left-right (LR) models, the "alternative" left-right (ALR) models and the 331 models, we investigate the contributions of the extra gauge boson $Z'$ to the decay rates of the processes $\ell_{i}\rightarrow\ell_{j}\nu_{\ell}\nu_{\ell}$, $\tau\rightarrow\mu P$ and $P\rightarrow \mu e$ with $P=\pi^{0},\eta$ and $\eta'$. Our numerical results show that the maximal values of the branching ratios for these processes do not depend on the $Z'$ mass $M_{Z'}$ at leading order. The extra gauge boson $Z'_{X}$ predicted by the $E_{6}$ models can make the maximum value of the branching ratio $Br(\tau\rightarrow\mu\nu_{\ell}\nu_{\ell})$ reach $1.1\times10^{-7}$. All $Z'$ models considered in this paper can produce significant contributions to the process $\tau\rightarrow\mu P$. However, the value of $Br(P\rightarrow\mu e)$ is far below its corresponding experimental upper bound.
Introduction
Although most experimental measurements are in good agreement with the standard model (SM) predictions, there are still some unexplained discrepancies and theoretical issues that the SM cannot solve. So the SM is generally regarded as an effective realization of an underlying theory yet to be discovered. The small but non-vanishing neutrino masses and the hierarchy and naturalness problems provide a strong motivation for contemplating new physics (NP) beyond the SM at the TeV scale, which would be in an energy range accessible at the LHC.
NP may manifest itself either directly in high energy processes occurring at the LHC or indirectly in lower energy processes via its effects on observables that have been precisely measured. Generally, NP effects may show up in rare processes where the SM contributions are forbidden or strongly suppressed. Therefore, theorists and experimentalists have a growing interest in the rare decays and production of ordinary particles. Such studies may help one to find NP signatures or constrain NP, and they provide valuable information for high energy collider experiments.
An extra gauge boson Z ′ with a heavy mass occurs in many NP models beyond the SM with extended gauge symmetry; for example, see Ref. [1] and references therein. Experimentally, the Z ′ boson is being searched for at the LHC [2,3], although it has not been conclusively discovered so far. However, stringent limits on the Z ′ mass M Z ′ have been obtained, which are still model-dependent. Among the many Z ′ models, the most general is the non-universal Z ′ model, which can be realized in grand unified theories, string-inspired models, dynamical symmetry breaking models, little Higgs models, and 331 models. One fundamental feature of this kind of Z ′ model is that, due to the family non-universal couplings or the extra fermions introduced, the extra gauge boson Z ′ has flavor-changing fermionic couplings at tree level, leading to many interesting phenomenological implications. For example, considering the relevant experimental data on some leptonic processes, Refs. [4,5] have obtained constraints on the lepton flavor violating (LFV) couplings of the boson Z ′ to ordinary leptons and further studied their implications. In this paper, we will focus our attention on the extra gauge boson Z ′ , which is predicted by several NP models and has tree-level LFV couplings to ordinary leptons, and consider its effects on the purely leptonic decays of the neutral pseudoscalar mesons, P → µe with P = π 0 , η and η ′ , and the LFV processes τ → µP , µ → eν ℓ ν ℓ and τ → µν ℓ ν ℓ with ℓ = e, µ or τ . Our strategy is to employ the model-dependent parameters, constrained by the experimental upper limits on the LFV processes ℓ i → ℓ j γ and ℓ i → ℓ j ℓ k ℓ ℓ , to estimate the decay rates under consideration.
The rest of the paper is structured as follows: in Section 2, we present the interactions of the extra gauge boson Z ′ with fermions, including the LFV couplings, and give the constraints of the LFV processes µ → 3e and τ → 3µ on the Z ′ LFV couplings to ordinary leptons in the contexts of the E 6 models, the left-right (LR) models, the "alternative" left-right (ALR) models and the 331 models. Based on the allowed LFV couplings, we calculate the contributions of the extra gauge boson Z ′ to the decay rates of the processes ℓ i → ℓ j ν ℓ ν ℓ , τ → µP and P → µe with P = π 0 , η and η ′ , in Sections 3 and 4. Our conclusions and brief discussions are given in Section 5.
Constraints on the LFV couplings
In the mass eigenstate basis, the couplings of the additional gauge boson Z ′ to the SM fermions, including the LFV couplings, can be written generally as

$\mathcal{L} = Z'_{\mu}\left[\bar{f}\gamma^{\mu}(g_{L}^{f}P_{L}+g_{R}^{f}P_{R})f + \bar{\ell}_{i}\gamma^{\mu}(g_{L}^{ij}P_{L}+g_{R}^{ij}P_{R})\ell_{j}\right]$, (1)

where f and ℓ represent the SM fermions and charged leptons, respectively, summation over i, j = 1, 2, 3 is implied, and $P_{L,R}=\frac{1}{2}(1\mp\gamma_{5})$ are the chiral projection operators. The left(right)-handed coupling parameters g L(R) should be real due to the Hermiticity of the Lagrangian L. Considering the goal of this paper, we do not include flavor-changing couplings of Z ′ to the SM quarks in Eq. (1).
Many Z ′ models can induce the LFV couplings Z ′ ℓ i ℓ j . In our analysis, we will focus on the following Z ′ models as benchmark models: (i) The E 6 models, whose symmetry breaking patterns are defined in terms of a mixing angle α. The specific values α = 0, π/2 and arctan(−√(5/3)) correspond to the popular scenarios Z ′ X , Z ′ ψ and Z ′ η , respectively. (ii) The LR models, originating from the breaking of an SU(2) L × SU(2) R × U(1) B−L symmetry, where the corresponding Z ′ couplings are represented by a real parameter α LR bounded by √(2/3) ≤ α LR ≤ √2. In our calculation, we will fix α LR = √2, which corresponds to the pure LR model.
(iii) The Z ′ ALR model based on the so-called "alternative" left-right scenario.
Detailed descriptions of above Z ′ models can be found in Ref. [1] and references therein.
The flavor-conserving left- and right-handed couplings g L and g R of the extra gauge boson Z ′ to the SM fermions are shown in Table 1 [6]. As a comparison, we also include in our analysis the case of Z ′ 331 predicted by the 331 models [7]. The couplings of Z ′ 331 to the SM fermions can be written uniformly as functions of the parameter β [8]. The relevant Z ′ 331 couplings are also given in Table 1, where we have assumed the parameter β = 1/√3 for the numerical estimation.
In general, the LFV couplings Z ′ ℓ i ℓ j are model-dependent. The precision measurement data and the upper limits on some LFV processes, such as ℓ i → ℓ j γ and ℓ i → ℓ j ℓ k ℓ ℓ , can give severe constraints on these couplings. From Ref. [4], one can see that the most stringent bounds on the LFV couplings g µe L,R , g τ e L,R and g τ µ L,R come from the processes µ → 3e, τ → 3e and τ → 3µ, respectively. So we only consider the contributions of the extra gauge boson Z ′ predicted by the NP models considered in this paper to these LFV processes and compare them with the corresponding experimental upper limits.
Figure 1: The maximally allowed values of the left- and right-handed couplings g µe L and g µe R as functions of the Z ′ mass M Z ′ for the different Z ′ models.
In the case of neglecting the mixing between Z ′ and the SM Z, the branching ratio Br(ℓ i → ℓ j ℓ j ℓ̄ j ) can be expressed generally in terms of the LFV couplings, the flavor-conserving couplings and the Z ′ mass, where τ i and m i are the lifetime and mass of the charged lepton ℓ i , and M Z ′ is the Z ′ mass.
In the above equation, we have ignored the masses of the final state leptons. In our following numerical calculation, we will take s 2 W = 0.231, τ τ = 4.414 × 10 11 GeV −1 , τ µ = 3.338 × 10 18 GeV −1 , m τ = 1.777 GeV and m µ = 0.106 GeV [9], and assume that M Z ′ is in the range of 1 TeV ∼ 3 TeV. Assuming that only one of g ij L,R is nonzero at a time, we can obtain constraints on g ij L,R from the current experimental upper limits [9]: Br exp (µ → eeē) < 1.0 × 10 −12 , Br exp (τ → eeē) < 2.7 × 10 −8 , and Br exp (τ → µµµ̄) < 2.1 × 10 −8 . Our numerical results for the different Z ′ models are summarized in Fig. 1 and Fig. 2, in which we plot the maximally allowed values of the left(right)-handed couplings g µe L,R and g τ µ L,R as functions of the Z ′ mass M Z ′ . One can see from these figures that the values of g ij L,R are slightly different for the various Z ′ models. The maximal values of g µe L,R are far smaller than those of g τ µ L,R . In the following sections we will use these results to estimate the contributions of Z ′ to the processes τ → µν ℓ ν ℓ , µ → eν ℓ ν ℓ with ℓ = e, µ or τ , P → µe, and τ → µP with P = π 0 , η and η ′ .
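As a sketch of how such bounds can be extracted numerically, the snippet below inverts an experimental limit for the LFV coupling. The width formula used is the tree-level expression for Z′-mediated ℓ_i → ℓ_j ℓ_j ℓ̄_j commonly quoted in the Z′ literature and is an assumption here, since the paper's own equation is not reproduced above; the flavor-conserving couplings are hypothetical placeholders standing in for the Table 1 values.

```python
import math

def br_li_to_3lj(g_ij_L, g_ij_R, g_jj_L, g_jj_R, m_i, tau_i, M_Zp):
    """Assumed tree-level Br(l_i -> l_j l_j lbar_j) via Z' exchange,
    final-state lepton masses neglected; natural units, GeV."""
    width = m_i**5 / (1536.0 * math.pi**3 * M_Zp**4) * (
        g_ij_L**2 * (2.0 * g_jj_L**2 + g_jj_R**2)
        + g_ij_R**2 * (2.0 * g_jj_R**2 + g_jj_L**2))
    return width * tau_i

# mu -> 3e with numbers quoted in the text; g^{ee}_{L,R} are hypothetical
m_mu, tau_mu, br_exp = 0.106, 3.338e18, 1.0e-12
g_ee_L, g_ee_R = 0.2, 0.1        # placeholders for the Table 1 couplings
for M_Zp in (1000.0, 2000.0, 3000.0):   # GeV
    # with g^{mue}_L = g and g^{mue}_R = 0, Br scales as g^2
    br_unit = br_li_to_3lj(1.0, 0.0, g_ee_L, g_ee_R, m_mu, tau_mu, M_Zp)
    print(f"M_Z' = {M_Zp:.0f} GeV: |g^mue_L|_max ~ {math.sqrt(br_exp / br_unit):.2e}")
```

Since the extracted bound scales as |g|²_max ∝ M_Z′⁴, any branching ratio built from g²/M_Z′⁴ evaluated at this bound is M_Z′-independent, which mirrors the M_Z′-independence of the maximal branching ratios noted in the text.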
3. The extra gauge boson Z ′ and the LFV process ℓ i → ℓ j ν ℓ ν ℓ
Neutrino oscillation experiments have shown convincingly that neutrinos have masses and mix with each other [9,10]. Recently, the T2K experiment confirmed neutrino oscillation in ν µ → ν e appearance events [11]. Thus neutrino physics is now entering a new precision measurement era. Although there are no relevant experimental data so far, the neutrino data allow the existence of the LFV processes ℓ i → ℓ j ν ℓ ν ℓ with ℓ = e, µ or τ , which have been studied in specific NP models [12]. From the discussions given in Section 2, we can see that the extra gauge boson Z ′ can contribute to these LFV processes at tree level. The branching ratios
can be approximately written in terms of the same couplings and the Z ′ mass. Comparing Eq. (1) with the above expressions, one can see that, if we fix the values of the LFV couplings Z ′ ℓ i ℓ j at the maximum values arising from the current experimental upper limits for the process ℓ i → ℓ j ℓ j ℓ̄ j , the branching ratios Br(ℓ i → ℓ j ν ℓ ν ℓ ) do not depend on the Z ′ mass M Z ′ , while their values differ from each other for the various Z ′ models. Our numerical results are given in Table 2. One can see from Table 2 that the maximum values of Br(τ → µν ℓ ν ℓ ) are larger than those of Br(µ → eν ℓ ν ℓ ) by at least four orders of magnitude in the context of these Z ′ models. For the Z ′ X model, the maximum value of Br(τ → µν ℓ ν ℓ ) can reach 1.1 × 10 −7 .
If the decay processes ℓ i → ℓ j ν ℓ ν ℓ were accurately measured in the future, they might be used to test the different Z ′ models.
4. The extra gauge boson Z ′ and the LFV processes τ → µP and P → µe with P = π 0 , η and η ′
The LFV processes τ → µP with P = π 0 , η or η ′ and P → µe are severely suppressed in the SM and are therefore sensitive to NP effects; for example, see Refs. [13,14,15]. Although these processes have not been observed so far, current experimental upper bounds exist [9,16]: Br(τ → µπ) < 1.1 × 10 −7 and Br(τ → µη) < 6.5 × 10 −8 . It is obvious that the LFV processes τ → µP and P → µe can be induced at tree level by the extra gauge boson Z ′ considered in this paper. In the local four-fermion approximation, the effective Hamiltonian is obtained by integrating out the heavy Z ′ . The relevant hadronic matrix elements that enter our calculations are of the form < P (p) | q̄ γ µ γ 5 q | 0 > = −i b p q f q p p µ , where b p q is the form factor and f q p is the decay constant of the corresponding meson. For the meson π 0 , q = u or d. Neglecting terms of order O(m µ /m τ ), the decay widths for the LFV decays τ → µP with P = π, η and η ′ can be written approximately in terms of these quantities, where M P is the mass of the neutral pseudoscalar meson, taken as 134.98 MeV, 547.8 MeV and 957.78 MeV for the mesons π 0 , η and η ′ , respectively [9]. For the LFV left- and right-handed couplings g τ µ L and g τ µ R , as in Section 3, we take their maximum values satisfying the current experimental upper limit for the LFV process τ → 3µ. One can then easily obtain the maximal values of the branching ratios Br(τ → µP ), which are shown in Table 3 for the different Z ′ models. Among these Z ′ models, the contribution of the Z ′ predicted by the pure LR model to the LFV decay τ → µP is the largest. However, its maximal value of the branching ratio Br(τ → µP ) is still lower than the corresponding experimental upper limit by at least one order of magnitude. So, compared with the LFV process τ → µP , the LFV process τ → 3µ gives more severe constraints on these Z ′ models. The general expression for the branching ratio Br(P → ℓ i ℓ j ) contributed by the extra gauge boson Z ′ can be written down analogously. In the contexts of the various Z ′ models considered in this paper, using this expression, we can estimate the maximal values of the branching ratios for the LFV meson decays π → µe, η → µe and η ′ → µe; our numerical results show that they are far below the corresponding experimental upper bounds. Certainly, the extra Z ′ also contributes to the flavor-conserving meson decays π → e + e − , η → e + e − and µ + µ − , and η ′ → e + e − and µ + µ − . Although these decay processes are not suppressed by the LFV couplings, the contributions of the extra gauge boson Z ′ are also very small owing to the large Z ′ mass M Z ′ . We do not show the numerical results here.
Conclusions and discussions
Many NP models beyond the SM predict the existence of an extra gauge boson Z ′ , which can induce LFV couplings to the SM leptons at tree level. This kind of new particle can produce rich LFV phenomenology in current or future high energy collider experiments, which should be carefully studied. Such studies are helpful in the search for NP models beyond the SM and, further, in testing the SM.
In this paper, we first consider the constraints of the experimental upper limits for the LFV processes ℓ i → ℓ j γ and ℓ i → ℓ j ℓ k ℓ ℓ on the LFV couplings of the extra gauge boson Z ′ to ordinary leptons in the contexts of the E 6 models, the LR models, the ALR model and the 331 models. The most stringent bounds on the LFV couplings g µe L,R and g τ µ L,R come from the processes µ → 3e and τ → 3µ, respectively. We find that the values of g ij L,R are slightly different for the various Z ′ models. The maximal values of g µe L,R are much smaller than those of g τ µ L,R . Then, considering these constraints, we calculate the contributions of Z ′ to the LFV processes τ → µν ℓ ν ℓ , µ → eν ℓ ν ℓ with ℓ = e, µ or τ , P → µe, and τ → µP with P = π 0 , η and η ′ in these Z ′ models. Our numerical results show that the maximal values of the branching ratios for these LFV decay processes do not depend on the extra gauge boson mass M Z ′ at leading order. For the process τ → µν ℓ ν ℓ , the Z ′ X model can make the maximum value of Br(τ → µν ℓ ν ℓ ) reach 1.1 × 10 −7 . All Z ′ models considered in this paper can produce significant contributions to the process τ → µP . However, the values of the branching ratio Br(τ → µP ) are still lower than the corresponding experimental upper bounds. The value of the branching ratio Br(P → µe) is far below its corresponding experimental upper bound for all of the Z ′ models.
The extra gauge boson Z ′ , which can induce LFV couplings to ordinary leptons, might produce observable LFV signatures at the LHC. In the context of the G(221) model, Ref. [18] has studied the possible signatures of the LFV couplings Z ′ ℓ i ℓ j at the LHC and shown that, under reasonable expectations and conditions, the eµ signal could be used to test this NP model in the near future. If one considers the constraints of the experimental upper bound for the LFV process µ → 3e on the LFV coupling Z ′ µe, the production cross section and the number of µe events will be significantly reduced. A final state with a τ lepton is difficult to reconstruct from its decay products. However, the number of τ µ or τ e events is larger than the number of µe events by at least three orders of magnitude.
This is helpful for testing the Z ′ models and will be carefully studied in the near future. | 4,384.8 | 2014-08-12T00:00:00.000 | [
"Physics"
] |
Assessment of Spectra of the Atmospheric Infrared Ultraspectral Sounder on GF-5 and Validation of Water Vapor Retrieval
Atmospheric Infrared Ultraspectral Sounder (AIUS) aboard the Chinese GaoFen-5 satellite was launched on 9 May 2018. It is the first hyperspectral occultation spectrometer in China. The spectral quality assessment of AIUS measurements at the full and representative spectral bands was presented by comparing the transmittance spectra of measurements with that of simulations. AIUS measurements agree well with simulations. Statistics show that more than 73% of the transmittance differences are within ±0.05 and more than 91% of the transmittance differences are within ±0.1. The spectral windows for O3, H2O, temperature, CO, CH4, and HCl were also analyzed. The comparison experiments indicate that AIUS data can provide reliable data for O3, H2O, temperature, CO, CH4, and HCl detection and dynamic monitoring. The H2O profiles were then retrieved from AIUS measurements, and the precision, resolution, and accuracy of the H2O profiles are discussed. The estimated precision is less than 1.3 ppmv (21%) below 57 km and about 0.9–2.4 ppmv (20–31%) at 60–90 km. The vertical resolution of H2O profiles is better than 5 km below 32 km and about 5–8 km at 35–85 km. Comparisons with MLS Level 2 products indicate that the mean H2O profiles of AIUS have a good agreement with those of MLS. The relative differences are mostly within ±10% at 16–75 km and about 10–15% at 16–20 km in 60∘–80∘ S. For 60∘–65 ∘ S in December, the relative differences are within ±5% between 22 km and 80 km. The H2O profiles retrieved from AIUS measurements are credible for scientific research.
Introduction
Water vapor is the main component of greenhouse gases. The change of water vapor in the stratosphere affects the radiation flux of long waves and short waves, as well as the temperature in the stratosphere and troposphere, which in turn plays a significant role in the balance of the global energy radiation budget. It has been shown that stratospheric water vapor contributes to global warming and may lead to more frequent extreme weather [1]. Water vapor serves as a dynamical tracer for monitoring the regional atmospheric circulation and the water circulation in real time [2]. It accounts for a large proportion of the hydrogen in the middle atmosphere, which is mainly composed of H 2 O, H 2 , and CH 4 . Studying the exchange of hydrogen among different gases can provide valuable information for monitoring the physical condition of the atmosphere [3][4][5]. The distribution of water vapor in the polar regions is especially worth noting, because water vapor enhancement at such high altitudes can promote Polar Mesospheric Cloud (PMC) formation in combination with the extremely low temperatures at the mesopause [3]. Therefore, a comprehensive understanding of the spatial and temporal distribution of atmospheric water vapor and its long-term variation is essential for research on atmospheric dynamics, chemistry, and radiation.
Presently, many ground-based and satellite-borne sensors have been developed for H 2 O detection. Space-borne sensors observe H 2 O by nadir viewing, occultation detection, or limb sounding. Occultation and limb observation are less affected by the underlying surface and have high vertical resolution and sensitivity. Examples include Halogen Occultation Experiment (HALOE), Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS), and Tropospheric Emission Spectrometer (TES) [6][7][8][9]. Atmospheric Infrared Ultraspectral Sounder (AIUS) aboard the Chinese GaoFen-5 satellite, launched on 9 May 2018, is the first hyperspectral occultation spectrometer in China. AIUS observes infrared occultation transmission for the retrieval of trace gases over the Antarctic, including water vapor (H 2 O). The main objective of this paper is to assess the AIUS spectral data and validate the H 2 O profiles retrieved from AIUS measurements. The overview of AIUS instrument is presented in Section 2. The Level 1 data quality assessment is presented in Section 3. The H 2 O retrieval and validation are described in Section 4. Finally, discussions and conclusions are given in Sections 5 and 6.
AIUS Instrument
GF-5 is a sun-synchronous satellite orbiting at an altitude of 705 km, with an inclination of 98.2 • . It has a mass of 2800 kg and a design life of 8 years. AIUS is a Michelson interferometer on the GaoFen-5 satellite with a spectral range of 750 to 4100 cm −1 . The spectral resolution is 0.02 cm −1 , which is similar to that of ACE-FTS in Canada. The measurements extend from the troposphere to mesosphere and span from 55 • S to 90 • S with a field of view of 1.25 mrad. AIUS can acquire information on many trace gases over the Antarctic [10]. Thus, it can provide sufficient data for weather forecasts and climate change research.
AIUS has a high spectral resolution and solar tracking accuracy [10]. The spectral resolution is mainly determined by the maximum Optical Path Difference (OPD) of the interferometer, and the spectral sampling interval is inversely related to the maximum optical path difference [11]. Thus, a large OPD is needed to achieve hyperspectral resolution. A new interferometer design, consisting of double cube corner reflectors, an end mirror, a beam splitter, and a compensator, magnifies the OPD eightfold. A dedicated sun-tracking camera and a two-dimensional pointing mechanism are used to provide fine pointing toward the radiometric center of the sun with a stability of 0.06 mrad. AIUS has two spectral bands: InSb and MCT. The MCT and InSb detectors can obtain effective data in 750-2000 cm −1 and 1850-4100 cm −1 , respectively. A more detailed description is presented in the document by Li et al. [12].
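As a worked check on these numbers, the common unapodized FTS relation δν ≈ 1/(2·MOPD) links the quoted 0.02 cm⁻¹ resolution to the required maximum optical path difference, and the eightfold OPD magnification then implies the mechanical travel; the relation is a standard rule of thumb and is assumed here to apply directly.

```python
delta_nu = 0.02                   # spectral resolution, cm^-1 (from the text)
mopd = 1.0 / (2.0 * delta_nu)     # maximum optical path difference, cm
mechanical_travel = mopd / 8.0    # eightfold OPD magnification (from the text)
print(f"MOPD = {mopd:.1f} cm, mirror travel = {mechanical_travel:.3f} cm")
# -> MOPD = 25.0 cm, mirror travel = 3.125 cm
```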
After data acquisition, both science data and auxiliary data are sent to the ground. The raw data (Level 0) are received at the ground station in binary format. Level 1 data processing is then performed to obtain transmittance.
Assessment of the AIUS Spectra
The spectral assessment of AIUS spectra was carried out by comparing the transmittance spectra of measurements with that of simulations. The Reference Forward Model (RFM) was used to simulate the transmittance. It has been widely used in ground-based, airborne, and space-borne observation [8,[13][14][15][16][17][18]. The atmospheric profiles used in simulation were taken from the integrated atmospheric background dataset and were developed by our research group [19].
In this work, AIUS measurements from 60 • -65 • S in June 2019 were applied to perform the comparison experiments. Because the data quality of the MCT band in 1850-2000 cm −1 is better than that of InSb, transmittances in 750-2000 cm −1 were taken from the MCT band, and transmittances in 2000-4100 cm −1 were taken from the InSb band. The mean differences and standard deviations of transmittance were calculated at different altitudes to quantify the differences between the AIUS data and simulated data.
The mean transmittance differences between the AIUS and simulated data are recognized as (B-O) mean and computed as

$(B-O)_{mean} = \frac{1}{k}\sum_{i=1}^{k}\left[F(X_{i}) - y_{AIUS,i}\right]$,

where y AIUS is the AIUS measurement, k is the number of AIUS measurements over the region of interest, and F(X i ) is the simulated data. In the following context, the transmittance differences are simply expressed as differences.
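A small sketch of how the (B − O) statistics and the within-threshold percentages used in the following subsections could be computed from paired spectra; the array shapes and the toy data are assumptions for illustration.

```python
import numpy as np

def transmittance_diff_stats(simulated, observed, thresholds=(0.05, 0.10)):
    """simulated, observed: arrays of shape (k, n_channels) of transmittance.
    Returns per-channel (B - O) mean and std, plus the overall fraction of
    differences falling within each +/- threshold."""
    diff = np.asarray(simulated) - np.asarray(observed)   # B - O
    out = {"mean": diff.mean(axis=0), "std": diff.std(axis=0, ddof=1)}
    for t in thresholds:
        out[f"within ±{t}"] = float(np.mean(np.abs(diff) <= t))
    return out

# toy data: 32 soundings x 5 channels
rng = np.random.default_rng(0)
sim = rng.uniform(0.2, 1.0, size=(32, 5))
obs = sim + rng.normal(0.0, 0.03, size=(32, 5))
print(transmittance_diff_stats(sim, obs))
```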
The Comparison in the Whole Spectral Band
After about seven months of in-orbit testing, AIUS has been continuously providing measurements [12]. Figure 1a shows AIUS spectra observed on 20 December 2019, where the horizontal axis is the wavenumber and the vertical axis is the transmittance at different altitudes. In the 750-4100 cm −1 range, many components have characteristic absorption lines. Figure 1b shows the absorption line distribution of some trace gases (CO 2 , CO, HCl, N 2 O, H 2 O, O 3 , NO 2 , CH 4 , NO, and HNO 3 ) in the 750-4100 cm −1 range. CO 2 , CO, and N 2 O have strong absorption lines, and the absorption intensity can reach up to 3.5 × 10 −18 cm/mol. Figure 2 shows an overview of the transmittance differences between the AIUS observed data and simulated data in the spectral region of 750-4100 cm −1 for 60 • -65 • S in June 2019. Thirty-two pairs of measurements and simulations were averaged at three altitudes (20 km, 40 km, and 60 km). In general, the differences at lower altitudes were larger than those at higher altitudes. This suggests that the differences were related to the increased pressure and pressure broadening at lower altitudes. Figure 2a,c,e shows the mean transmittance of the observed data in the red line and that of the simulated data in the blue line. It is obvious that the simulated spectra are similar to the measurements. Figure 2b,d,f shows the mean differences at different altitudes, where the color represents the point density. AIUS measurements and simulations agreed with each other. Most of the differences were within ±0.1. The lower differences were found in 800-1000 cm −1 , 1070-1250 cm −1 , and 2400-2900 cm −1 at 60 km, where the absorption was very weak. Larger differences can be seen in 980-1095 cm −1 , 2250-2400 cm −1 , 2800-3200 cm −1 , and 3500-3750 cm −1 . This might be due to the low signal-to-noise ratio (<200) in these spectral ranges and the uncertainties of the a priori profiles.
Figure 2. The comparison between mean AIUS measurements and the mean simulated data.
Figure 3 shows the frequency distribution histograms of the differences between AIUS measurements and simulations at three altitudes (60 km, 40 km, and 20 km) in five spectral bands. Figure 3a1-a3 shows the histograms for the full spectral band (750-4100 cm −1 ) of AIUS: 73% of the transmittance differences were within ±0.05 at 20 km, 92% at 40 km, and 96% at 60 km. Figure 3b1-b3,c1-c3 are the histograms for the MCT and InSb bands. The histograms of the two bands were almost the same, which indicated that their detection abilities are comparable. Figure 3d1-d3,e1-e3 are the histograms for two spectral bands (2000-2350 cm −1 and 3500-4000 cm −1 , respectively) with large transmittance differences, as shown in Figure 2. More than 85% of the transmittance differences were within ±0.15. Therefore, the AIUS measurements agreed well with the simulations. As ACE-FTS has similar instrument characteristics, a comparison between the ACE-FTS measurements and simulations was also carried out. The AIUS and ACE-FTS measurements used in the experiment were acquired on 24 May 2019 (68 • S, 37 • E) and on 10 May 2012 (66 • S, 61 • E), respectively. As the acquisition time and location of the two measurements are not identical, a direct comparison of them is not possible. The comparison is therefore performed using simulations as the radiative transfer medium.
The differences between the AIUS/ACE-FTS data and the simulated data are demonstrated in Figure 4. Figure 4a-d shows that the transmittance differences of ACE-FTS were similar to those of AIUS at 20-80 km, which indicates the transmittance of AIUS is mostly consistent with that of ACE-FTS and can provide reliable data for scientific research. Both sensors have a larger difference in 2250-2400 cm −1 , 2800-3200 cm −1 , and 3500-3750 cm −1 . The differences between AIUS/ACE-FTS measurements and simulations were probably mainly due to uncertainties of the environment and the instrument during the observations that cannot be exactly simulated. Figure 4e shows that the differences of ACE-FTS are different from those of AIUS at 15 km, which indicates there is a problem with the AIUS transmittance spectra at 15 km. This might be due to the large noise level and uncertainties of the a priori profiles at lower altitudes.
Comparison in the H 2 O Spectral Window
The comparisons were also made between AIUS measurements and simulations in six spectral windows for O 3 , H 2 O, temperature, CO, CH 4 , and HCl. Here, we only present the comparison in the spectral window of H 2 O. The other comparisons are shown in Appendix A. Figure 5 presents the mean transmittance of AIUS measurements and simulations in 1400-1600 cm −1 at 40 km, where H 2 O and NO 2 are the main absorbers. H 2 O can be detected and retrieved using this spectral window. The red dashed line represents the measurement, and the blue solid line represents the simulation. Figure 5a indicates that the AIUS measurements have a good agreement with the simulations. The transmittances of the measurements were slightly larger than those of the simulated data around the absorption peaks. Figure 5b shows the spectra in a zoom-in window centered at 1506 cm −1 , in which the absorption peaks can be seen clearly. The absorption peaks of the measurements agreed well with those of the simulations. Therefore, AIUS can provide reliable data for H 2 O detection and dynamic monitoring. Figure 6 shows the transmittance differences between the measurements and simulations in 1400-1600 cm −1 at 15 km, 20 km, 40 km, and 60 km, respectively. The differences vary with height and wavenumber. Figure 6a,b shows that the differences were within ±0.2 at 40 km and 60 km. Figure 6c illustrates that the differences were mostly in the range of −0.2 to 0.1 at 20 km. Figure 6d shows that the differences were in the range of −0.4 to 0.2 in 1400-1485 cm −1 and within ±0.13 in 1485-1600 cm −1 at 15 km. In general, the transmittance differences at higher altitudes were more discrete than at lower altitudes.
Method of H₂O Retrieval
AIUS has thousands of channels covering most of the infrared spectrum. Before performing the retrieval, an appropriate set of micro-windows has to be selected for H₂O retrieval. The micro-windows should be sensitive to H₂O and should avoid interference from other trace gases [20,21]. Sensitivity analysis and an information entropy calculation are required [12].
The calculation of the information entropy is given in Equation (3), where S_i is the covariance matrix of the ith iteration; a more detailed description can be found in Rodgers (1996) [22]:

H_i = (1/2) log₂ (|S_{i−1}| / |S_i|). (3)
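A minimal numerical sketch of this selection criterion, assuming the Rodgers form of Equation (3) written above, could look as follows (the function name is ours):

```python
import numpy as np

def information_content(S_prev, S_curr):
    """H_i = 0.5 * log2(|S_{i-1}| / |S_i|); slogdet is numerically
    safer than det for covariance matrices."""
    _, logdet_prev = np.linalg.slogdet(S_prev)
    _, logdet_curr = np.linalg.slogdet(S_curr)
    return 0.5 * (logdet_prev - logdet_curr) / np.log(2.0)
```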
Figure 7 shows the selected micro-windows for H₂O retrieval at different altitudes. Eleven spectral micro-windows were selected in the 1000-2000 cm⁻¹ range. The comparison of the micro-windows between AIUS and ACE-FTS is shown in Figure 8, where the solid line represents the transmittance spectra, the red dots represent the micro-windows selected for AIUS, and the blue dots represent the micro-windows selected for ACE-FTS. The micro-windows selected for AIUS were largely contained in those of ACE-FTS. Moreover, only 15% of the channels accounted for 75% of the information, which can improve the retrieval efficiency.

The optimal estimation method (OEM) was employed for the H₂O retrieval in this study [23,24]. The OEM retrieval framework is widely used, as it can effectively solve the nonlinear least-squares minimization of a cost function [25]. The retrieval was initialized with the a priori profile X_a as the constraint, and the optimal atmospheric state was obtained through iteration. The H₂O profiles were updated by Equation (4), where Y is the observation, X is the state of the atmosphere, F is the forward model, S_a is the covariance matrix of the a priori profiles, S_e is the covariance matrix of the observation error, and K is the matrix of "weighting functions" or "Jacobians":

X_{i+1} = X_a + (K_i^T S_e^{−1} K_i + S_a^{−1})^{−1} K_i^T S_e^{−1} [Y − F(X_i) + K_i (X_i − X_a)]. (4)
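As an illustration of Equation (4), one Gauss-Newton iteration can be written in a few lines. The variable names follow the symbols above; this generic textbook form is a sketch, not the operational AIUS retrieval code [12].

```python
import numpy as np

def oem_step(x_i, x_a, y, f_xi, K, S_a_inv, S_e_inv):
    """One iteration of the optimal estimation update, Equation (4).

    x_i: current state; x_a: a priori state; y: observation;
    f_xi: forward model F(x_i); K: Jacobian at x_i;
    S_a_inv, S_e_inv: inverse a priori and observation-error covariances.
    """
    A = K.T @ S_e_inv @ K + S_a_inv
    b = K.T @ S_e_inv @ (y - f_xi + K @ (x_i - x_a))
    return x_a + np.linalg.solve(A, b)  # solve() avoids an explicit inverse
```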
The AIUS H 2 O retrieval algorithm is presented in detail in the document by Li et al. [12].
H₂O Profiles Retrieved from AIUS
In this study, 153 H₂O profiles were retrieved from AIUS measurements obtained in 2019. The measurements were divided into four latitude zones: 60°-65°S, 65°-70°S, 70°-75°S, and 75°-80°S. Information about the AIUS measurements used in this study is listed in Table 1. The H₂O retrievals of representative months (January, June, May, and April) in the four latitude zones are presented in Figure 9. The dashed lines represent single profiles retrieved from AIUS measurements, and the red line represents the mean profile retrieved from AIUS measurements. In general, the vertical distributions of the mean H₂O profiles differed between latitude zones and months, especially in the lower stratosphere and upper mesosphere. For 70°-75°S and 75°-80°S, the single-profile distributions were more scattered than those in 60°-65°S and 65°-70°S, which indicates that the variance of the H₂O profiles can be larger there. At the same time, a comparison of the H₂O profiles retrieved from the AIUS measurements and from the simulations was carried out to reveal the impact of the transmittance differences between AIUS measurements and simulations; the blue line in Figure 9 represents the mean profile of the simulations. Figure 9a-d illustrates that the mean profile of AIUS is similar to that of the simulations at 16-90 km. However, the mean H₂O profiles of the AIUS measurements deviate from those of the simulations below 16 km. The biggest difference of the H₂O profiles, approximately 62%, was found at 12 km for 70°-75°S in May. The comparison of the AIUS spectra and the simulated spectra showed that the transmittance differences between the AIUS measurements and simulations were larger at 15 km, as shown in Figure 4e. Thus, the larger transmittance differences around 15 km can lead to greater differences in the H₂O profiles. The large uncertainties of the measured spectra and the retrieved profiles at lower altitudes might be introduced by the large noise level.
Validation of H₂O Retrievals
The validation was carried out between the monthly mean profiles of AIUS and the equivalent Microwave Limb Sounder (MLS) Level 2 products. For the MLS v4.2 H₂O profiles, the effective vertical resolution is less than 3 km in 316-1 hPa and about 3.3-10.3 km in 1-0.002 hPa [26]. The precision is better than 35% in 178-0.046 hPa. The accuracy is within ±10% in 100-0.021 hPa, 10-25% in 316-100 hPa, and about 10-34% in 0.021-0.002 hPa [27]. The matched MLS and AIUS data were selected within 3° latitude and 30° longitude.
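The coincidence criterion can be expressed as a small helper function; the longitude wrapping to [−180°, 180°] is our assumption about how the criterion is applied.

```python
def is_coincident(lat_aius, lon_aius, lat_mls, lon_mls, dlat=3.0, dlon=30.0):
    """True if an MLS profile lies within 3 deg latitude and
    30 deg longitude of an AIUS measurement."""
    dlo = (lon_mls - lon_aius + 180.0) % 360.0 - 180.0  # wrap longitude
    return abs(lat_mls - lat_aius) <= dlat and abs(dlo) <= dlon
```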
Precision and Resolution
The retrieval precision was calculated from the diagonal elements of the solution covariance matrix, as given in Equation (5) [28,29]:

Ŝ = (K^T S_e^{−1} K + S_a^{−1})^{−1}, σ_j = √(Ŝ_{jj}). (5)

The precision of the H₂O profiles for 60°-65°S in January is shown in Figure 10. The red solid line represents the precision, and the blue dashed line represents the precision in percentage. The precision was less than 1.3 ppmv (21%) below 57 km and about 0.9-2.4 ppmv (20-31%) at 60-90 km. This shows that the precision of the H₂O profiles is comparable to that of MLS. The vertical resolution of the H₂O retrieval was obtained from the full width at half maximum (FWHM) of the averaging kernels [29]. Figure 11 shows the averaging kernels and the vertical resolution of the H₂O retrieval for 60°-65°S in January. The solid lines are the averaging kernels at 12-85 km, with different colors representing different altitudes; the dashed line represents the vertical resolution. The vertical resolution of the H₂O profiles was better than 5 km below 32 km and about 5-8 km between 35 and 85 km.
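Both diagnostics can be sketched compactly, assuming the solution covariance of Equation (5) and a simple threshold-based FWHM estimate (the half-maximum crossings are located only to grid resolution here):

```python
import numpy as np

def retrieval_precision(K, S_a_inv, S_e_inv):
    """sigma_j = sqrt(diag(S_hat)), S_hat = (K^T Se^-1 K + Sa^-1)^-1."""
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
    return np.sqrt(np.diag(S_hat))

def vertical_resolution(A, z):
    """FWHM of each averaging-kernel row on the altitude grid z."""
    res = np.full(A.shape[0], np.nan)
    for j, row in enumerate(A):
        above = np.where(row >= 0.5 * row.max())[0]
        if above.size > 1:
            res[j] = z[above[-1]] - z[above[0]]
    return res
```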
Comparison with the MLS H₂O Product
For the comparison experiment, vertical resampling onto the same altitude grid was applied to both the AIUS retrievals and the MLS Level 2 products, from 12 km to 90 km, with 2 km steps below 30 km and 3 km steps above 30 km [12]. Horizontal interpolation was neglected because of a lack of high-horizontal-resolution correlative data [28].
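The common grid and the resampling step can be sketched as follows; linear interpolation is our assumption, and the operational resampling may differ in detail.

```python
import numpy as np

# 12-90 km: 2 km steps below 30 km, 3 km steps from 30 km upward
COMMON_GRID = np.concatenate([np.arange(12.0, 30.0, 2.0),
                              np.arange(30.0, 90.0 + 1e-9, 3.0)])

def resample(profile, z_native, z_target=COMMON_GRID):
    """Interpolate a profile from its native levels onto the common grid."""
    return np.interp(z_target, z_native, profile)
```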
(1) 60°-65°S

The comparison of the H₂O profiles for 60°-65°S in January and December is presented in Figure 12.

(2) 65°-70°S

The comparison of the H₂O profiles for 65°-70°S is presented in Figure 13. Figure 13b-2 shows that the relative differences were basically within ±10% between 14 km and 84 km. Figure 13c-1 shows the mean profiles of AIUS and MLS in November. Figure 13c-2 shows that the relative differences were less than 6% at 16-72 km and less than 10% at 72-84 km. The relative differences of the H₂O profiles mostly stayed within the acceptable range of MLS differences above 16 km for 65°-70°S. However, the relative differences were a little larger below 16 km. These relative differences show the same pattern as those in Figure 9, which indicates that the relative differences below 16 km were mainly caused by the large uncertainties of the measurements at lower altitudes.

(3) 70°-75°S

Figure 14a-1,b-1 illustrates that the mean profile of AIUS agreed well with that of MLS in both May and August. The AIUS profile had an obvious negative deviation from MLS below 30 km in October, as shown in Figure 14c-1. Figure 14a-2,b-2,c-2 shows that the relative differences were mostly within ±6.5% at 18-81 km in May, ±9.6% at 18-90 km in August, and ±12% at 26-81 km in October. The accuracy of the H₂O retrievals at lower altitudes for 70°-75°S is not as good as that for 65°-70°S. The reasons for this need to be explored further; in the future, more data will be used for analysis.

(4) 75°-80°S

Figure 15a-1,b-1,c-1 illustrates that the mean profile of AIUS agreed well with that of MLS in all three months, which indicates that the vertical distribution of the H₂O profile was captured well by the AIUS measurements. The standard deviations of the AIUS H₂O profiles in August were slightly larger than in the other two months. Figure 15a-2,b-2,c-2 shows that the differences were within ±4.5% at 16-54 km and −2% to 11% at 54-81 km in April, within ±10% at 14-78 km in August, and within ±6% at 22-72 km and ±15% at 14-20 km in October. In general, the differences mostly stayed within the acceptable range of MLS differences at 16-90 km for 75°-80°S.
Discussion
The assessment of the AIUS spectra was carried out by comparing the measurements with simulations. The differences between AIUS measurements and simulations at lower altitudes are larger than those at higher altitudes, suggesting that the differences are related to the increased pressure and pressure broadening at lower altitudes. Larger differences can be seen in 980-1095 cm⁻¹, 2250-2400 cm⁻¹, 2800-3200 cm⁻¹, and 3500-3750 cm⁻¹; the absorptions in these spectral bands are dominated by O₃, CO₂, CH₄, and CO₂, respectively. This might be mainly caused by the uncertainties of the measurements during observation and the uncertainties of the a priori profiles. In addition, the signal-to-noise ratio is less than 200 in these spectral bands, which may also lead to large transmittance differences.
The H₂O profiles were retrieved from AIUS measurements and then compared with MLS Level 2 products. The H₂O retrieval experiments were carried out at 12-90 km, as the quality of the measurements outside this altitude range was unacceptable. The comparisons show that the mean AIUS H₂O profiles are similar to the mean MLS profiles, while larger relative differences of the H₂O profiles can be found below 16 km. One reason for the larger relative differences may be the large uncertainties of the AIUS measurements at lower altitudes; another may be the uncertainties of the tangent altitudes below 20 km. The comparison also revealed that the differences of the H₂O profiles were larger in the mesosphere, which could be caused by the larger uncertainty of the official MLS products above 60 km.
Conclusions
In this study, the spectral quality assessment of AIUS measurements and the validation of the H₂O retrievals were presented. The transmittance of the AIUS measurements was compared with that of simulated spectra and of ACE-FTS measurements, and the means and standard deviations of the transmittance differences between the AIUS measurements and the simulated spectra were calculated. The results show that the AIUS measurements are mostly consistent with the simulations and the ACE-FTS measurements; therefore, AIUS measurements can be used for the retrieval of O₃ and other trace gases.

AIUS is the first hyperspectral occultation spectrometer in China. The noise level is still slightly higher in 980-1095 cm⁻¹ and 3500-3750 cm⁻¹. Presently, the relative differences of the H₂O retrievals are large below 16 km. In the future, further studies are required to improve the H₂O retrieval method at lower altitudes, in order to reduce the impact of measurement uncertainties on the H₂O retrievals. In addition, more measurements are required to validate the accuracy of the H₂O profiles and of other trace gas profiles. Further validation will be performed by comparing the mean profiles with more correlative data from space-borne, ground-based, and balloon platforms.
Author Contributions: Methodology, X.L. and X.C.; investigation, X.Z. and S.L.; writing-original draft preparation, X.C.; writing-review and editing, X.L., S.L., and X.Z.; visualization, X.C. and S.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.
Acknowledgments:
The ACE-FTS data were provided by the ACE-FTS team. ACE, also known as SCISAT, is a Canadian-led mission mainly supported by the Canadian Space Agency (CSA). Thanks are also given to Anu Dudhia for providing the RFM source code and help.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Comparisons were also made between AIUS measurements and simulations in the spectral windows of O₃, CO, CH₄, HCl, and temperature.
(1) 1000-1200 cm⁻¹ for O₃

Figure A1 presents the mean transmittance of AIUS measurements and simulations in 1000-1200 cm⁻¹ at 40 km, where the absorptions are dominated by O₃, H₂O, CH₄, N₂O, and HNO₃. O₃ can be detected and retrieved in this spectral window. Figure A1a illustrates that the AIUS measurements agree well with the simulations. Figure A1b shows the spectra in a zoom-in window centered at 1021 cm⁻¹. The absorption peaks of the measurements agreed well with those of the simulations. Therefore, AIUS can provide reliable data for O₃ detection and dynamic monitoring.

Figure A2 shows the transmittance differences between AIUS measurements and simulations in 1000-1200 cm⁻¹ at 15 km, 20 km, 40 km, and 60 km, respectively. Figure A2a indicates that the transmittance differences were mostly within ±0.1 in 1000-1065 cm⁻¹ and ±0.05 in 1065-1200 cm⁻¹ at 60 km. Figure A2b shows that the transmittance differences were within ±0.2 in 1000-1065 cm⁻¹ and ±0.1 in 1065-1200 cm⁻¹ at 40 km. Figure A2c,d shows that the transmittance differences were within ±0.1 in 1000-1060 cm⁻¹ and ±0.2 in 1060-1200 cm⁻¹ at 20 km and 15 km. In 1000-1060 cm⁻¹, the larger differences were found at 40 km, where the absorption was strong. In 1080-1200 cm⁻¹, the transmittance differences decreased with increasing altitude.
(2) 2000-2250 cm⁻¹ for CO

Figure A3 shows the comparison between the AIUS measurements and simulations in 2000-2250 cm⁻¹ at 40 km. CO, N₂O, and H₂O are the main absorbers in this spectral window, which can be used for CO detection. Figure A3a shows the spectra of the measurements and simulations, which were similar to each other. Figure A3b shows the spectra in a zoom-in window centered at 2100 cm⁻¹. The absorption peaks of the measurements agreed well with those of the simulations.

Figure A4 presents the variation of the transmittance differences with altitude (15 km, 20 km, 40 km, and 60 km) in 2000-2250 cm⁻¹. Figure A4a illustrates that the transmittance differences were within ±0.1 in 2000-2040 cm⁻¹ and ±0.3 in 2040-2250 cm⁻¹ at 60 km. Figure A4b illustrates that the transmittance differences were within ±0. Figure A4c illustrates that the transmittance differences were within ±0.25 at 20 km, with 90% of the differences within ±0.1. Figure A4d illustrates that the transmittance differences were in the range of −0.3 to 0.2 in 2000-2200 cm⁻¹ and −0.01 to 0.1 in 2200-2250 cm⁻¹ at 15 km. Generally, in 2000-2050 cm⁻¹ and 2135-2180 cm⁻¹, the transmittance differences at lower altitudes were larger than at higher altitudes; in 2200-2250 cm⁻¹, the transmittance differences were larger at higher altitudes.
(3) 2600-2800 cm⁻¹ for CH₄

Figure A5 shows the comparison between the AIUS measurements and simulations in 2600-2800 cm⁻¹ at 40 km. CH₄ and N₂O are the main absorbers in this spectral window, which can be used for CH₄ detection. Figure A5a shows that the measurements agree well with the simulations in 2600-2800 cm⁻¹. Figure A5b shows the spectra in a zoom-in window centered at 2772 cm⁻¹. The absorption peaks of the measurements agreed well with those of the simulations.

Figure A5. Mean transmittance of AIUS measurements and simulations in 2600-2800 cm⁻¹ at 40 km.

Figure A6 presents the variation of the transmittance differences with altitude (15 km, 20 km, 40 km, and 60 km) in 2600-2800 cm⁻¹. Figure A6a illustrates that the transmittance differences were negligible at 60 km. Figure A6b illustrates that the transmittance differences were mostly within ±0.05 in 2600-2750 cm⁻¹ and ±0.1 in 2750-2800 cm⁻¹ at 40 km. Figure A6c illustrates that the transmittance differences were within ±0.3 at 20 km, with 79% of the differences within ±0.05. Figure A6d illustrates that the transmittance differences were mostly in the range of −0.2 to 0.2 at 15 km. In general, the transmittance differences were larger at lower altitudes in 2600-2800 cm⁻¹.

(4) 2800-3000 cm⁻¹ for HCl

Figure A7 shows the comparison between the AIUS measurements and simulations in 2800-3000 cm⁻¹ at 40 km. The absorptions are dominated by HCl, N₂O, CH₄, and NO₂; this window can be used for HCl detection. Figure A7a shows that the measurements agreed well with the simulations. Figure A7b shows the spectra in a zoom-in window centered at 2989 cm⁻¹. The absorption peaks of the measurements were consistent with those of the simulations.

Figure A7. Mean transmittance of AIUS measurements and simulations in 2800-3000 cm⁻¹ at 40 km.

Figure A8 presents the variation of the transmittance differences with altitude (15 km, 20 km, 40 km, and 60 km) in 2800-3000 cm⁻¹. Figure A8a illustrates that the transmittance differences were in the range of −0.1 to 0.15 at 60 km. Figure A8b illustrates that the transmittance differences were within ±0.4 at 40 km, with 96% of the differences within ±0.05. Figure A8c shows that the transmittance differences were within ±0.4 at 20 km, with 88% of the differences within ±0.1. Figure A8d shows that the transmittance differences were within ±0.3 at 15 km, with 60% of the differences within ±0.15. In general, the transmittance differences were larger at lower altitudes in 2800-3000 cm⁻¹.

(5) 1800-2080 cm⁻¹ for temperature

Figure A9 presents the mean transmittance of AIUS measurements and simulations in 1800-2080 cm⁻¹ at 40 km, where the absorptions are dominated by CO₂, H₂O, NO, etc. This window can be used for temperature detection and retrieval, as the CO₂ concentration is relatively stable. Figure A9a shows that the measurements agreed well with the simulations in 1800-2080 cm⁻¹. Figure A9b shows the spectra in a zoom-in window centered at 2072 cm⁻¹. The absorption peaks were mostly consistent with each other, with slight deviations at 2071.7 cm⁻¹, 2072.04 cm⁻¹, 2072.64 cm⁻¹, etc.
Figure A10 presents the transmittance differences between the measurements and simulations in 1800-2080 cm⁻¹ at 15 km, 20 km, 40 km, and 60 km. Figure A10a shows that the transmittance differences were mostly within ±0.2 at 60 km. Figure A10b shows that the transmittance differences were mostly within ±0.3 at 40 km, with 94% of the differences within ±0.05. Figure A10c illustrates that the transmittance differences were within ±0.3 at 20 km, with 94% of the differences within ±0.1. Figure A10d illustrates that the transmittance differences were in the range of −0.4 to 0.2 at 15 km, with 66% of the differences within ±0.15. There was a sudden reduction of the differences at 2000 cm⁻¹ in Figure A10d; the reason is that the transmittance around 2000 cm⁻¹ was obtained by different detectors (MCT and InSb). This indicates that the performance of the two detectors differed near 2000 cm⁻¹ at 15 km. | 8,017.8 | 2021-01-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Interaction of terahertz radiation with tissue phantoms: numerical and experimental studies
O. A. Smolyanskaya, Q. Cassar, M. S. Kulya, N. V. Petrov, K. I. Zaytsev, A. I. Lepeshkin, J.-P. Guillet, P. Mounaix, V. V. Tuchin

1 ITMO University, Saint-Petersburg, Russia <EMAIL_ADDRESS>
2 Bordeaux University, IMS Laboratory, Bordeaux, France
Bauman Moscow State Technical University, Moscow, Russia
Prokhorov General Physics Institute of the RAS, Moscow, Russia
Saratov State University, Saratov, Russia
Institute of Precision Mechanics and Control of the RAS, Saratov, Russia
Introduction
New biomedical devices require test objects to check their performance, and periodic calibration to monitor the system efficiency over time. A phantom is a bio-like object mimicking intrinsic real-tissue properties that serves as such a test object. A variety of phantoms are commercially available, and phantoms are already used in spectroscopy within various spectral ranges. However, the THz frequency range currently suffers from a lack of investigations in this direction; only the first steps have been performed [1]. In the present work, we investigate the interaction of submillimeter radiation with breast-mimicking and skin-mimicking phantoms. Results for the breast-mimicking phantoms were compared with numerical models based on a double-Debye model of the dielectric permittivity for different contents of fat, fibrous, and cancerous tissue. Wavefront propagation using the angular spectrum representation was used to simulate the theoretical response of the skin-mimicking phantoms.
Model of wavefront propagation
Electromagnetic fields can be represented in various ways. The angular spectrum representation is a numerical technique describing optical fields in homogeneous media [2]: optical fields are described as a superposition of plane waves and evanescent waves. A useful approach to describing optical field diffraction is to conduct a Fourier analysis at a given plane, so that the different Fourier components of the field distribution are identified as plane waves propagating away from that plane in different directions; this takes less computation time for numerical reconstruction. The technique is preferable for the analysis of light diffraction by histological slides [3]. The angular spectrum representation consists of the following stages: (i) the representation of the field through the angular spectrum of 2D plane waves (Eq. 1); (ii) multiplication by the transfer function that contains the complex refractive index of the object (Eq. 2); (iii) the back transition from plane waves to the wave field in the calculated z-plane (Eq. 3), where n(x, y, ω) is the spatial and frequency distribution of the complex index of refraction.
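A compact numerical sketch of these three stages for a monochromatic field component is given below. The FFT-based form and the dispersion relation k_z = sqrt(k0²n² − kx² − ky²) are standard; the function name and grid parameters are illustrative only.

```python
import numpy as np

def angular_spectrum_propagate(E0, dx, wavelength, n_complex, z):
    """Propagate field E0(x, y) over a distance z through a homogeneous
    medium of complex refractive index n_complex, via the three stages:
    (i) FFT to the angular spectrum, (ii) multiplication by the transfer
    function, (iii) inverse FFT back to the spatial domain."""
    k0 = 2.0 * np.pi / wavelength
    ny, nx = E0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # kz from the dispersion relation; evanescent waves decay automatically
    kz = np.sqrt((k0 * n_complex) ** 2 - KX ** 2 - KY ** 2 + 0j)
    A = np.fft.fft2(E0)          # (i)  angular spectrum of plane waves
    H = np.exp(1j * kz * z)      # (ii) transfer function of the slab
    return np.fft.ifft2(A * H)   # (iii) field in the target z-plane
```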
Materials and methods
a. Sample preparation

We used three-component phantoms made of fat, protein, and water of different contents. Vegetable oil (10-75%), soya (13-75%), and water (0-70%) were homogenized to generate an emulsion. The phantoms were then placed into vacuum packages.
b. THz spectrometer

The commercially available TPS 4000 (TeraView Ltd, UK) spectrometer, working in reflection mode, was used. The spectral range of the system is from 0.06 to 4.50 THz. The entire system was kept under an air-dried dome to limit the interaction of water molecules with the generated pulses. Samples were mounted on a sapphire substrate during measurement; the sapphire cut was chosen so as not to exhibit a birefringence that would disrupt the measurements [4]. Each sample was measured 5 times independently, i.e., the sample was removed and placed into the system again.

c. Numerical modelling

A double-Debye model of the dielectric permittivity was used to compare the optical properties of the three-component phantoms with those of biological tissues (adipose, fibrous, and cancerous) taken from our previous paper [5].
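For reference, the double-Debye permittivity can be evaluated as in the sketch below. The (1 + iωτ) denominators correspond to one common sign convention, and the parameter names are generic double-Debye quantities rather than values fitted in this work.

```python
import numpy as np

def double_debye_eps(f_thz, eps_inf, eps_s, eps_2, tau1_ps, tau2_ps):
    """eps(w) = eps_inf + (eps_s - eps_2)/(1 + i w tau1)
                        + (eps_2 - eps_inf)/(1 + i w tau2).
    With f in THz and tau in ps, the product w*tau is dimensionless."""
    w = 2.0 * np.pi * f_thz
    return (eps_inf
            + (eps_s - eps_2) / (1.0 + 1j * w * tau1_ps)
            + (eps_2 - eps_inf) / (1.0 + 1j * w * tau2_ps))

# The complex refractive index then follows as n = sqrt(eps):
# n_complex = np.sqrt(double_debye_eps(f, eps_inf, eps_s, eps_2, tau1, tau2))
```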
Results and discussion
The propagation dynamics of the pulsed THz radiation through a skin sample was numerically simulated in the temporal and spectral domains. The index of refraction, n_re, the absorption coefficient, α, as well as the real, ε_re, and imaginary, ε_im, parts of the dielectric permittivity were calculated from the experimentally obtained data. Figure 1a depicts the time-domain spatial evolution of the THz pulse. Agreement between the measured and simulated THz waveforms is confirmed by the comparison shown in Figure 1b. Figure 2 depicts the spectral dependences of the absorption coefficient of the three-component phantoms consisting of fat, protein, and water. Within the spectral window, the absorption coefficient of the phantoms containing water exceeds that of the water-free phantoms. This is due to the fact that water molecules, which strongly absorb THz radiation, are replaced by weakly absorbing components (fat and protein). Our results were compared with simulated signals of adipose, fibrous, and cancerous tissues taken from our previous paper [5].
Conclusion
To overcome the lack of knowledge on tissue-mimicking phantoms within the submillimeter spectral range, we conducted preliminary studies on skin- and breast-tissue-mimicking samples. The simulations performed for the skin-mimicking phantoms match the experimental spectral data, demonstrating the advantages of the technique used in this study. The absorption coefficient spectral dependences extracted from the measurements of the breast-mimicking phantoms are clearly differentiated into two distinct groups: phantoms containing water and those that are water-free. Such a discrimination is also shown via the double-Debye model of the dielectric permittivity. Thus, finding a proper composition of a tissue phantom is possible by an appropriate matching of model and experimental data. Deeper investigations are, however, still required to build a set of phantoms that could be used to perform tissue measurements within the THz range. Such a set would be profitable for designing new medical THz devices without human biopsy. | 1,227.8 | 2018-10-22T00:00:00.000 | [
"Physics"
] |
Complex Langevin dynamics and zeroes of the fermion determinant
QCD at nonzero baryon chemical potential suffers from the sign problem, due to the complex quark determinant. Complex Langevin dynamics can provide a solution, provided certain conditions are met. One of these conditions, holomorphicity of the Langevin drift, is absent in QCD since zeroes of the determinant result in a meromorphic drift. We first derive how poles in the drift affect the formal justification of the approach and then explore the various possibilities in simple models. The lessons from these are subsequently applied to both heavy dense QCD and full QCD, and we find that the results obtained show a consistent picture. We conclude that with careful monitoring, the method can be justified a posteriori, even in the presence of meromorphicity.
Introduction
The determination of the QCD phase diagram in the plane of temperature and baryon chemical potential is one of the outstanding open questions in the theory of the strong interaction, as it is relevant for the early Universe, ongoing heavy-ion collision experiments at the Large Hadron Collider and the Relativistic Heavy Ion Collider, nuclear matter and compact objects such as neutron stars. Ample progress has been made along (or close to) the temperature axis, where lattice QCD can be used to solve the theory numerically, and in recent years it has been possible to simulate QCD with 2 + 1 flavours of light quarks using physical quark masses while taking the continuum limit [1,2]. This is directly relevant for ultrahigh-energy heavy-ion collisions. The remainder of the phase diagram has not yet been established from first principles. As is well-known [3,4], at nonzero baryon chemical potential, the quark determinant in the standard representation of the QCD partition function is complex, rather than real and positive, ruling out the immediate use of standard numerical methods based on importance sampling. This is generally referred to as the sign problem.
There are various proposals available to circumvent the sign problem, see e.g. the reviews [4][5][6][7][8] and lecture notes [9]. One approach which has generated substantial attention in the past years is the complex Langevin (CL) method, since it has so far proved to be quite successful in simulating systems with a complex action S, or complex weight ρ, from simple toy models to QCD [10][11][12][13][14][15][16][17][18][19][20][21]. While the method was suggested already in the 1980s [22,23], recent progress has come in several ways: the theoretical justification has been provided [24,25] (see also Refs. [26,27] for related theoretical developments); numerical instabilities can be eliminated using adaptive stepsizes [28]; explicit demonstrations that the sign problem can be solved in spin models and field theories have been given, even when it is severe [11,13,29]; and finally, for nonabelian theories, controlling the dynamics via gauge cooling [15], possibly adaptive [30] (see also Ref. [31]), has been shown to be necessary and effective, resulting in the first results for full QCD [16,18,20,21,32]. Promising steps beyond gauge cooling have also been taken [33].
There is, however, a serious conceptual problem that has to be faced. It is by now quite well established that when the weight ρ ∼ exp(−S) is free from zeroes in the whole complexified configuration space, the only worry is the possibility of slow decay in the imaginary directions [24,25], which may result in incorrect convergence. However, for theories which include fermions, such as QCD, integrating out the latter yields a determinant which will always have zeroes for some complexified configurations. These zeroes lead to a meromorphic drift; the formal justification for the CL method [24,25] requires holomorphicity, however (this will be reviewed below), and poles may cause convergence to wrong results. The relevance of this was first pointed out by Mollgaard and Splittorff [34,35] in the context of a random matrix model and has been further investigated in Refs. [36][37][38][39]. Possible consequences for the behaviour of the spectrum of the Dirac operator [40] have been studied in random matrix theory [41], as has the interplay with gauge cooling [42].
This problem has both theoretical and practical aspects. Concerning the former, it requires a re-analysis of the derivation and justification of the method, given for the holomorphic case in Refs. [24,25]. To do so is the first aim of this paper and is the topic of Sec. 2. In practice, it has been observed in a number of papers that a meromorphic drift will not necessarily cause convergence to wrong results, sometimes without this issue being explicitly flagged up (one example being when the meromorphicity is due to the Haar measure). However, this aspect is not yet properly understood; while there is a collection of results for a variety of models, an overall understanding is lacking. In Sec. 3 we address this issue using simple models, in which a detailed understanding can be obtained. Lessons from this analysis are summarised in Sec. 4. In Sec. 5 we then move to a more intricate SU(3) model and see how the lessons apply in that context. Finally, in Sec. 6 we turn to lattice QCD, namely heavy dense QCD and full QCD, and compare our findings with the understanding developed previously. A discussion of the results obtained in the various models is contained in Sec. 7. We conclude that an overall consistent picture can be extracted, applicable across all models considered, and give guidance on how to tackle this problem in future simulations. The Appendices contain some additional material, including proposals on how to handle poles in the drift in special cases. We note that partial results have already been presented in Refs. [43][44][45].
Formal justification in the presence of poles
We briefly recall the basic principles of the CL method, adapting the results for its justification [24,25] to include a meromorphic drift, i.e. a drift with a pole.
Given a holomorphic action S, we denote by ρ the (normalised) complex density on the original real field space. For simplicity we assume here a flat configuration space, i.e. Rⁿ. A complex drift K(x + iy) is defined by analytic continuation as

K(x + iy) = ∇ρ(x + iy)/ρ(x + iy) = −∇S(x + iy). (2.2)

The CL equation, a stochastic differential equation in the complexified field space, with the drift given by the real and imaginary parts of K,

ẋ = K_x + η_R, K_x ≡ Re K, ⟨η_R(t) η_R(t′)⟩ = 2N_R δ(t − t′), (2.3)
ẏ = K_y + η_I, K_y ≡ Im K, ⟨η_I(t) η_I(t′)⟩ = 2N_I δ(t − t′), (2.4)

leads to the Fokker-Planck equation describing the evolution of the (positive) probability density P(x, y; t),

Ṗ(x, y; t) = L^T P(x, y; t), (2.5)

with

L^T = ∇_x (N_R ∇_x − K_x) + ∇_y (N_I ∇_y − K_y),

where N_R − N_I = 1 and N_I ≥ 0. We used here 'complex noise' (N_I > 0) for presentation purposes; below we specialise to real noise (N_I = 0), as advocated earlier [24,25]. Averaging over the noise, the evolution of holomorphic observables O(x + iy) is governed by the equation

Ȯ(x + iy; t) = L O(x + iy; t) = L̃ O(x + iy; t), (2.8)

with

L̃ = [∇_z − (∇_z S(z))] ∇_z, (2.9)

where in the last step we used the Cauchy-Riemann equations, i.e. holomorphy of O(x + iy; t), and z = x + iy.
For holomorphic actions, the justification requires care in the imaginary directions, |y| → ∞. Slow decay, for instance power-law decay in polynomial models, does not allow partial integration to be carried out for all holomorphic observables zⁿ without picking up contributions at the boundary. On the other hand, if the distribution is strictly localised in a strip in the complexified configuration space, no boundary terms appear and the results from the CL simulation can be justified; see for instance Ref. [46] for an explicit example.
In the case of a meromorphic drift, the topic of this paper, we have to introduce two boundaries: one at large |y| and one near the location(s) of the pole(s), which we denote generically as z_p. Let us first reconsider Eq. (2.12): by definition we have

F(t, t) = ∫ P(x, y; 0) O(x + iy; t) dx dy. (2.15)

We may consider a single trajectory starting at (x, y) = (x_0, 0), which means choosing

P(x, y; 0) = δ(x − x_0) δ(y). (2.16)

We then find

F(t, t) = O(x_0; t). (2.17)

This is well defined. Furthermore, provided x_0 ≠ z_p, the time-evolved observable O(z; t) is holomorphic for z ≠ z_p. However, according to Eq. (2.8), we have to expect that O(z; t) has an essential singularity at z = z_p, since formally

O(z; t) = exp(t L̃) O(z) = ∑_n (tⁿ/n!) L̃ⁿ O(z),

and each term of the series will in general produce a pole of higher order. This is the first finding.

Now let us look at Eq. (2.14). For simplicity we assume that there is only a single pole at z = z_p and consider a one-dimensional configuration space. Integration by parts can be used at first only for the domain

G ≡ {(x, y) : |y| ≤ Y, |z − z_p| ≥ ǫ}, (2.19)

in which the dynamics is nonsingular; later we have to take the limits Y → ∞ and ǫ → 0. The first integral in Eq. (2.14) is of the form

∫_G O(x + iy; t) L^T P(x, y; t) dx dy = −∫_G O(x + iy; t) ∇·J(x, y; t) dx dy,

where J is the 'probability current'

J = (K_x P − N_R ∇_x P, K_y P − N_I ∇_y P).

Using the divergence theorem (Gauss's theorem) one finds that the first integral is equal to ∫_G ∇O · J dx dy minus the boundary term

B ≡ ∮_{∂G} O(x + iy; t) J(x, y; t) · n ds, (2.23)

where ds stands for the line element of the boundary ∂G and n denotes the outer normal. The boundary has 3 disconnected pieces: two straight lines at y = ±Y and a circle at |z − z_p| = ǫ. We assume now as usual that P has sufficient decay so that the contributions from y = ±Y disappear for Y → ∞. Then the question remains whether the circle around z_p gives a nonvanishing or even divergent contribution. Numerically it has been found that always

P(x_p, y_p) = 0, (2.24)

and furthermore that P vanishes at least linearly with the distance from z_p, with some angular dependence. But the expected essential singularity of the evolved observable O(x + iy; t) at z_p could lead to a finite or even divergent contribution as ǫ → 0. Numerically, however, we never found divergent behaviour, so presumably the boundary terms are finite. But they may be nonzero, spoiling the proof of correctness. This is the second finding: the appearance of boundary terms, similar to the ones that may appear at |y| = Y.

Let us apply integration by parts a second time to the bulk integral. Green's first identity (also a consequence of the divergence theorem) says that this is again equal to a boundary term along ∂G plus a bulk integral. The discussion of the new boundary terms is almost identical to the one above; again, what happens depends on the detailed behaviour of O(x + iy; t) near z_p. In practice we found (numerically) no indication of any divergence caused by the existence of an essential singularity of O(x + iy; t). The reason for this seems to be that both P(x, y; t) and O(x + iy; t) have nontrivial angular dependence. In Sec. 3.2 we discuss a probably typical situation in which P(x, y; t) vanishes identically in two opposite quadrants near the pole. So if O(x + iy; t) shows strong growth only in those quadrants, the product may well be integrable, i.e. the boundary terms near the pole remain bounded.
To summarise, we find that the time-evolved observable will generically have an essential singularity at the pole, which, however, is counteracted by the vanishing distribution. Concerning the justification, partial integration at the boundaries now also includes integration around the pole, which requires the distribution to vanish rapidly enough for partial integration to be possible without picking up boundary terms. In the following section, we will study this first in simple models, focussing on the essential elements.
One-pole model
The simplest model of a system with a pole is given by the density on R,

ρ(x) = (x − z_p)^{n_p} e^{−βx²}, (3.1)

where we take β real. When z_p is real, the weight is real as well, but the model has a sign problem for odd n_p, while for even n_p the zero in the distribution may potentially lead to problems with ergodicity. When z_p is complex, the weight is complex of course. The complex drift appearing in the Langevin process is given by

K(z) = ρ′(z)/ρ(z) = n_p/(z − z_p) − 2βz. (3.2)

While the original weight vanishes at z_p, the drift diverges and is hence meromorphic.
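For orientation, a CL trajectory for this model can be generated with a few lines of Python; this is a minimal sketch with a fixed step size and real noise, whereas production runs employ adaptive stepsizes [28].

```python
import numpy as np

def cl_one_pole(beta=4.8, n_p=2, z_p=1j, dt=1e-4, n_steps=200_000, seed=1):
    """Complex Langevin trajectory for drift K(z) = n_p/(z - z_p) - 2 beta z."""
    rng = np.random.default_rng(seed)
    z = 0.1 + 0.05j
    traj = np.empty(n_steps, dtype=complex)
    for i in range(n_steps):
        K = n_p / (z - z_p) - 2.0 * beta * z
        # real noise (N_I = 0): the stochastic kick acts on Re z only
        z = z + K * dt + np.sqrt(2.0 * dt) * rng.normal()
        traj[i] = z
    return traj[n_steps // 10:]  # discard thermalisation

# e.g. an estimate of <z^2>:
# print(np.mean(cl_one_pole() ** 2))
```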
We will refer to this model as the "one-pole model". Special cases (with n_p = 1) were considered long ago [47,48], while recently this model has been studied again, in particular for a large range of values of n_p [37]. Our focus is somewhat different; we are mostly interested in the interplay between the location of the pole and the distribution and, for real z_p, in the difference between n_p = 1 and n_p = 2. This model captures the presence of a meromorphic drift in QCD in a very rudimentary way, as follows. Consider the QCD partition function for n_f degenerate flavours,

Z = ∫ DU (det M)^{n_f} e^{−S_YM} = ∫ DU ∏_i λ_i(U)^{n_f} e^{−S_YM}, (3.4)

where in the last expression we have written the fermion determinant in terms of the eigenvalues of the Dirac operator, λ_i(U), which depend on the gauge field configuration, as indicated by the U dependence. The drift contributing to the update of the link U will now have a contribution from the fermion determinant of the form

n_f D ln det M = n_f ∑_i (D λ_i(U)) / λ_i(U),

where D denotes the derivative. When a λ_i goes to zero (and the determinant vanishes), the drift has a pole. In the one-pole model, the complicated dependence of λ_i on U is replaced by a simple pole located at z_p, i.e. the fermionic drift term reduces to n_p/(z − z_p), cf. Eq. (3.2).
In QCD, the links U are of course fluctuating and the dependence is considerably more complicated. The relation between the number of flavours (n_f) and the order of the zero (n_p) depends on details of the fermion determinant.
Strips in the complex plane
To continue, we allow the location z_p of the pole in the drift to be complex in general and take β real and positive. The drift has fixed points (K(z) = 0) at

z_{1,2} = ½ (z_p ± √(z_p² + 2n_p/β)),

and hence, for real or imaginary z_p, this yields:

(a) z_p = x_p real ⇒ both fixed points z_{1,2} are real and attractive;
(b) z_p = iy_p imaginary, y_p² < 2n_p/β ⇒ z_{1,2} complex: both fixed points are attractive;
(c) z_p = iy_p imaginary, y_p² > 2n_p/β ⇒ z_{1,2} imaginary: the fixed point closer to the real axis is attractive, the other one repulsive.
In order to find where the pole lies with respect to the equilibrium distribution P(x, y), and to be able to discuss the three cases above (pole outside, on the edge of, or inside the distribution), we note the following. The drift in the imaginary direction is given by

K_y(x, y) = Im K(x + iy) = −2βy − n_p (y − y_p)/|z − z_p|².

Without loss of generality we take y_p ≥ 0. Hence it immediately follows that the drift points downwards when y > y_p and upwards when y < 0. In the case of real noise (which we use from now on), this implies that the equilibrium distribution will be nonzero only in the strip 0 < y < y_p [24,46]. Hence, generically, in this model the pole will be on the edge of the distribution. Moreover, since the distribution is strictly zero outside the strip, partial integration at y → ±∞ is not a problem, and therefore this aspect of the justification is under complete control. Following the analysis of Refs. [24,46], we can in fact derive a stronger result. It follows from the FPE that the equilibrium distribution has to satisfy the condition

∫_{−∞}^{∞} dx K_y(x, y) P(x, y) = 0. (3.10)

Since P(x, y) ≥ 0, it follows that if K_y(x, y) has a definite sign as a function of x for given y, P(x, y) has to vanish for that y value.

Figure 1. One-pole model: strips where the equilibrium distribution P(x, y) is nonzero. The pole is located at z_p = x_p + iy_p, with y_p > 0 (red square). Left: y_p² < 2n_p/β: P(x, y) > 0 when 0 < y < y_p and the pole is on the edge. Right: y_p² > 2n_p/β: P(x, y) > 0 when 0 < y < y_− and the pole is outside the distribution. The strip y_+ < y < y_p can be visited during the Langevin process, provided that the process is initialised at y > y_+, but will eventually be abandoned (transient).

Following exactly the same steps
as in Sec. 4.2 of Ref. [46], we find the following. As a function of x, K_y(x, y) has an extremum at x = x_p, and the value at the extremum is given by

F(y) ≡ K_y(x_p, y) = −2βy − n_p/(y − y_p).

The zeroes of F(y), at

y_± = ½ (y_p ± √(y_p² − 2n_p/β)),

determine the presence of additional boundaries, provided they are real [46]. We find that

a) y_p² < 2n_p/β: no additional boundaries;
b) y_p² > 2n_p/β: additional boundaries at y_±, with P(x, y) = 0 when y_− < y < y_+.
This situation is sketched in Fig. 1.
In the latter case, no conclusion from this argument can be drawn regarding the strips 0 < y < y_− and y_+ < y < y_p. However, an additional analysis of the classical flow pattern shows that the strip 0 < y < y_− is an attractor, while the strip y_+ < y < y_p can only be visited when the process starts at y > y_+. The drift inside this strip points mostly towards y = y_−, and hence this region will eventually be abandoned. It will therefore at most be present as a transient.
We conclude that in this model the pole is either on the edge of (case a) or outside (case b) the distribution. In the following we address each of these possibilities.
Ergodicity and bottlenecks for real poles
We first discuss the real case, with z_p = x_p, since this allows us to introduce the concept of a 'bottleneck', which will turn out to be relevant also for the complex case. In this case, the distribution ρ(x) is real, but with a sign problem for odd n_p. As follows from the analysis above, both fixed points are attractive and the equilibrium distribution lies on the real axis. This is illustrated in Fig. 2 for β = z_p = 1 and n_p = 1, 2. We note that close to the pole, the drift is repulsive along the real direction and attractive along the imaginary direction; it is easy to see that this is true in general.
In the limit of continuous Langevin time, trajectories of a real Langevin process will not cross the poles [49]. This leads to a 'separation phenomenon', a point made some time ago [50]. In an actual simulation, crossings of the poles may happen because of the finite step size [48]. It is instructive to look at the corresponding stationary Fokker-Planck equation (on the real axis),

∂_x [∂_x − K(x)] P(x) = 0.

Clearly P(x) ∼ ρ(x) is a solution, but wherever there is a sign problem it cannot be the stationary probability distribution, since P(x) should be nonnegative. Instead we find two linearly independent, nonnegative solutions,

P_±(x) = |ρ(x)| θ(±(x − x_p)),

and any linear combination of P_+ and P_− with nonnegative coefficients is likewise a possible long-time average. Hence the Fokker-Planck Hamiltonian has two ground states.
If the simulation manages to slip through the barrier sufficiently easily, we expect to obtain P_q(x) = P_+(x) + P_−(x) = |ρ(x)|, i.e. the phase-quenched model, as already found in Ref. [48]. We have verified this numerically for n_p = 1. One way to cross the bottleneck and facilitate tunneling through the pole is to add a small amount of imaginary noise. However, the drift (3.2) is insensitive to sign changes in ρ and the phase-quenched result is recovered. We conclude that the Langevin process cannot give correct results for odd n_p. For even n_p > 0, there is no sign problem, but the lack of ergodicity persists. In this case, because of the stronger repulsion away from the pole, our simulations typically do not cross the pole, and hence produce incorrect results when started on one side of the pole. Here, adding a small imaginary noise term does facilitate the crossing and leads to correct results.

In conclusion, we find that zeroes in the distribution lead to a bottleneck and hence to ergodicity problems. Whether the zero is crossed depends on the order of the zero: the higher the order, the more difficult the crossing is. We will see that the same is true in the complex case, even though it is then easier to go around the pole in the complex plane.
Poles outside the distribution
We now consider the complex case and take n_p = 2, z_p = i (y_p = 1) and three β values: β = 1.6, 3.2, 4.8. The relevant parameter determining the distribution is 2n_p/(β y_p²), which takes the values 5/2, 5/4 and 5/6, respectively. Hence for β = 4.8 the distribution is confined to the strip 0 < y < y_− ≈ 0.296 and the pole lies outside the strip, while for the other β values the distribution touches the pole and 0 < y < y_p = 1. The classical flow diagrams are shown in Fig. 3 for β = 1.6 and 4.8. It is easy to see from the flow patterns that the general conclusions apply. For completeness, the corresponding thimbles are shown in Fig. 4. Here the full (blue) lines are the stable, contributing thimbles and the dashed lines are the unstable, noncontributing thimbles. We note that at β = 4.8 the unstable thimble of the lower fixed point is the stable thimble of the upper fixed point. At the lower β value the thimbles meet at the pole, while at the higher value the stable thimble avoids the pole, consistent with the Langevin analysis.
We first consider the case β = 4.8. The histogram for P(x, y) is shown in Fig. 5 and is confined to 0 < y < y_− ≃ 0.296, as it should be. The results for the observables ⟨zⁿ⟩ (n = 1, 2, 3, 4) from a complex Langevin simulation are shown in Table 1 and agree with the exact results, in line with the formal derivation. We hence state the following Proposition: If the drift is such that the equilibrium distribution is confined to a simply connected region not containing any poles of the drift, the complex Langevin process converges to the exact results.
Poles on the edge of the distribution
We now turn to β = 1.6 and 3.2, with the pole at the edge of the distribution. The corresponding histograms for P (x, y) are shown in Fig. 6 and the results for z n are listed in Table 1. Here we note that the Langevin results for β = 1.6 are wrong, while the results for β = 3.2 appear to be correct (within the error). To understand this better we employ two methods.
First we note that the histograms look quite different. At β = 1.6 the distribution is nonzero very close to the pole, which one expects to yield boundary terms in the formal justification, invalidating the outcome. On the other hand, at β = 3.2 the distribution is peaked predominantly away from y = 1 and the pole appears to be avoided. We make this more precise by computing the partially integrated distribution

P_y(y) = ∫ dx P(x, y).

The results are shown in Fig. 7 on a linear scale (left) and on a logarithmic scale (right). On the linear scale it is easy to see that at β = 1.6, P_y(y) is nonzero up to y = 1 and goes to zero linearly (at the pole the distribution is of course zero). Based on the formal justification, we conclude that this slow decay invalidates the applicability of the approach. On the other hand, at β = 3.2 the distribution appears to drop exponentially in an extended interval 0.5 < y ≲ 1, possibly with two exponentials. Hence expectation values of polynomials zⁿ can be computed safely, as illustrated in Table 1.
For β = 1.6, it can be seen that there is a nonvanishing boundary term around the pole. Instead of a small circle surrounding the pole at z = i we may consider a horizontal line y = 1 − ǫ approaching the pole as ǫ → 0. Then the boundary term in Eq. (2.23) becomes (for N_I = 0)

lim_{ǫ→0} ∫ dx O(x + i(1 − ǫ)) K_y(x, 1 − ǫ) P(x, 1 − ǫ). (3.17)

The smooth terms can be replaced by their values at ǫ = 0, and a boundary term arises because lim_{ǫ→0} ∫ dx K_y(x, 1 − ǫ) P(x, 1 − ǫ) ≠ 0. To analyse this, consider the functions

Q_k(x) ≡ ∫ dy e^{−ky} P(x, y).

These functions are related to the expectation values of exp(ikz) by

⟨exp(ikz)⟩ = ∫ dx e^{ikx} Q_k(x).

In Fig. 8 we show the functions log Q_k(x) for integer values k = 0, −1, . . . , −12. In all three cases the shape of the functions seems to stabilise with growing |k|, whereas there is an approximately constant shift upwards with |k|. This suggests the asymptotic behaviour Q_k(x) ∼ e^{c|k|} f(x), with some constant c > 0, so

⟨exp(ik(x + iy))⟩ ∼ e^{c|k|} ∫ dx f(x) e^{ikx} = e^{c|k|} f̃(k). (3.23)

How can this remain bounded for |k| → ∞? The only possibility is that the Fourier transform f̃(k) decays exponentially; this will be the case if f(x + iy) is analytic in a strip |y| < const. In particular, f(x) has to be smooth. Looking at Fig. 8 one can see clearly that for β = 1.6, f(x) develops a kink, whereas in the other two cases it at least appears to be smooth and the effect of the pole appears to be negligible. Hence we may conclude that the incorrect convergence is due to the failure of the bound (3.19).
To summarise the findings in the one-pole model: if close to the pole the distribution drops to zero fast enough, e.g. exponentially in the case considered here, the meromorphicity of the Langevin drift is not necessarily an obstacle and correct results can still be obtained. When, on the other hand, the distribution does not fall off rapidly at the pole, incorrect convergence is observed.
U(1) one-link model
In order to analyse what happens when poles are inside the distribution, we switch to the following U(1) integral with a complex weight,

ρ(x) = e^{β cos x} [D(x; µ)]^{n_p}, D(x; µ) = 1 + κ cos(x − iµ).

This model was introduced in Ref. [10] (for n_p = 1) as a toy model for QCD, with a complex 'fermion determinant' satisfying [D(x; µ)]* = D(x; −µ*). Complex Langevin dynamics was studied extensively in Ref. [10] for κ < 1, while problems for κ > 1 were first reported in Ref. [34]. Subsequently, thimbles were analysed in Ref. [52]. When κ < 1 the weight is positive at µ = 0, while for κ > 1 there is already a sign problem at µ = 0. Concerning Langevin dynamics, we note that good results are obtained when κ < 1 (and k not too large and negative), while problems emerge for κ > 1 and β not too large [34,52]. It should be noted that, in view of the later sections, even values of n_p ≥ 2 can be physical, as the QCD determinant has double zeroes when the Wilson fermion formulation is used.
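Assuming the weight in the form written above, the exact reference values of ⟨e^{ikx}⟩ can be obtained by direct numerical integration over the compact domain; the following sketch (requiring SciPy) integrates the real and imaginary parts separately.

```python
import numpy as np
from scipy.integrate import quad

def exact_eikx(k, beta=0.3, kappa=2.0, mu=1.0, n_p=2):
    """<e^{ikx}> with weight rho(x) = exp(beta cos x) * D(x; mu)^n_p."""
    def rho(x):
        return np.exp(beta * np.cos(x)) * (1.0 + kappa * np.cos(x - 1j * mu)) ** n_p
    def cintegrate(f):
        re = quad(lambda x: f(x).real, -np.pi, np.pi)[0]
        im = quad(lambda x: f(x).imag, -np.pi, np.pi)[0]
        return re + 1j * im
    Z = cintegrate(rho)
    return cintegrate(lambda x: np.exp(1j * k * x) * rho(x)) / Z
```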
Results of CL dynamics for the observables ⟨e^{ikz}⟩ (k = ±1, . . . , ±5) are shown in Fig. 9. For set (1), we observe good results, except when k is large and negative, k = −4, −5. For those values, fluctuations are large and increasing the simulation time does not improve this, a sign of no or poor convergence. For set (2), excellent agreement with exact results is obtained. For set (3), we observe agreement for large and positive k, but increasingly worse behaviour as k is reduced. The results for k = −4, −5 have larger errors, but the values of the averages are robust as the Langevin time is increased; hence here we find incorrect convergence. Since for our choice of parameters the poles are located at y_p > 0, we note that exponentials with k > 0 (k < 0) will be less (more) sensitive to the presence of the poles, as a suppression (enhancement) with e^{−ky} (e^{ky}) arises naturally. This is indeed supported by the data.
In the following we focus on the case where κ > 1 and β ≲ 1, since this is where complex Langevin dynamics converges, but possibly to an incorrect result. Moreover, we will compare n_p = 1, 2 and 4.
Poles inside the distribution
We consider parameter set (3), with β = 0.3, κ = 2, µ = 1 and n_p = 1, 2 and 4. Classical flow diagrams are given in Fig. 10 for n_p = 1, 2 (note the periodicity in x). Besides the attractive 'perturbative' fixed point at x = 0, there is an additional attractive fixed point at x = ±π. The other two fixed points are repulsive. It is easy to see from the flow diagrams, and can be confirmed following a similar analysis as above, that the equilibrium distribution is contained in a horizontal strip between the two attractive fixed points. Finally, the pole is attractive in the imaginary direction and repulsive in the real direction (as always), making the pole an approximate bottleneck, just as in the real case considered in Sec. 3.1.2. Hence, as the attractive fixed points move closer together in the imaginary direction, the distribution gets narrower and narrower.
In Fig. 11 we show logarithmic contour plots of the equilibrium distribution sampled during the CL process in the complex plane, for n_p = 1 (left) and 2 (right). Note that the darker colours correspond to the most frequently visited regions. The position of the pole can clearly be identified as the place where the distribution is pinched, resulting in a bottleneck; this effect gets stronger with increasing n_p. The distribution is strictly zero outside the strip set by the attractive fixed points.
To better understand this structure, we note that the approximately disconnected regions (i.e. the 'head' and the 'ears' in Fig. 11) are characterised by the sign of the real part of the determinant,

Re D(x + iy; µ) ≷ 0, (3.27)

Figure 11. Logarithmic contour plots of the distribution in the xy plane, for β = 0.3, κ = 2, µ = 1, n_p = 1 (left) and n_p = 2 (right).
and hence we will refer to them as G_±, with G_+ the 'head' and G_− the 'ears'. For n_p = 1, we observed frequent crossings between the two regions. For n_p = 2, the crossings are rarer, but still frequent enough that both regions are visited during long runs. This might, however, be due to the finite time step: in the continuous-time limit it is possible that the two regions are not connected by the process, i.e. the process might not be ergodic. Of course, rare crossings make it hard to collect good statistics. For n_p = 4 (not shown) no crossings were observed and the distribution only has support in G_+. To translate these findings into an observable easily accessible also in more complicated models and lattice theories, we consider the complex determinant. Logarithmic contour plots of D are shown in Fig. 12. We observe a similar structure, with the zero of D acting as the bottleneck. We will use this diagnostic in the more complicated models discussed below.
In view of the formal justification, see Sec. 2, it is important to know the rate at which the distribution goes to zero at the pole. This is shown in Fig. 13 for the partially integrated distribution P_x(x) (left) and the real part of D (right). For n_p = 1 we observe a linear decrease at the pole (recall that x_p = ±2π/3), while for n_p = 2 the decay is faster. For n_p = 4, the pole is not crossed and the entire dynamics takes place in G_+. Since the pole does not negatively influence the dynamics in this case, we expect good agreement with the exact results, although there may be problems with ergodicity, similar to the real case. This is demonstrated in Fig. 14, where Re⟨e^{ikz}⟩ is shown on a logarithmic scale for 10 values of k. For n_p = 2 we find approximate agreement, especially for k close to 0. For n_p = 4, good agreement is seen for all k values considered. This is consistent with the formal derivation: for n_p = 4 the pole is avoided and only the region sufficiently far from D = 0 is relevant. Finally, we stress once more that further support for the validity of the formal arguments comes from the observed interplay between the observables and the pole: it is possible that for some observables good agreement is found, while for others it is not. This crucially depends on the region in configuration space most relevant for the observable under consideration, as exemplified in this model by the observables e^{ikz} with k ≷ 0.
What does the CL simulation actually compute?
In order to further understand the relevance of the contributions from the nearly disconnected regions G_±, we have analysed the results from Langevin simulations for ⟨e^{ikz}⟩ by separating the trajectories based on the sign of the real part of D. The results are summarised in Table 2 in the columns labeled CL[G_±]. We note that the results obtained when restricted to G_+ are close to the exact results, listed in the first column, but not quite equal. We can understand this as follows: first we shift the contour of integration of the original integrals to go through the zeroes of ρ(z); for set (3) this means Im z = µ. Next we split the integration into two contributions coming from the two inequivalent paths C_± connecting the zeroes, one living in G_+ and the other in G_−, and define the restricted averages

⟨e^{ikz}⟩_± = (1/Z_±) ∫_{C_±} dz ρ(z) e^{ikz}, Z_± = ∫_{C_±} dz ρ(z).

The exact results, restricted to G_±, are shown in Table 2 in the columns labeled exact[G_±]. The agreement between the restricted Langevin and exact results is convincing. This should not be surprising, since the formal proof of correctness provided earlier is directly applicable to the model restricted to G_+ or G_−. Since the exact values for the full model can be obtained as a weighted combination of the restricted results,

Table 2. Re⟨e^{ikz}⟩ for several values of k, when restricted to G_± (Re D ≷ 0), for β = 0.3, κ = 2, µ = 1, with n_p = 2, comparing complex Langevin (CL) and exact results.
a way to obtain the correct results would be to combine the restricted simulation results with the weights

w_± = Z_± / (Z_+ + Z_−).

Note that since Z_−/Z_+ ≃ 0.0281 ≪ 1, the deviation between the full results and those restricted to G_+ is on the order of a few percent as well, as illustrated in Table 2.
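Schematically, the recombination reads as follows (a sketch assuming the weights, i.e. the ratio Z_−/Z_+, are known):

```python
def combine_restricted(avg_plus, avg_minus, ratio):
    """Full average from restricted ones, with w_+ = 1/(1 + r) and
    w_- = r/(1 + r), where r = Z_-/Z_+."""
    w_plus = 1.0 / (1.0 + ratio)
    w_minus = ratio / (1.0 + ratio)
    return w_plus * avg_plus + w_minus * avg_minus

# e.g. with r = Z_-/Z_+ ~ 0.0281 the G- contribution shifts the
# G+-restricted result by only a few percent.
```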
The problem with this prescription is of course that in realistic models the weights are not known. However, below we will see that typically w − is tiny and can be approximated by zero; here it is nonnegligible because we chose the rather extreme value κ = 2.
In the case of n p = 4, the process never crosses into G − , which indicates a lack of ergodicity, similar to what was found in Sec. 3.1.2. The process simulates a version of the original integral restricted to a path running between the zeroes, which is not quite equal to the full integral. This causes a tiny systematic error which is, however, not visible in the data since it is highly suppressed; for n p = 4, Z − /Z + ≃ 0.00302 ≪ 1.
Lessons from simple models
The following lessons can be learned from the simple one-variable models.
Lesson 1: It has been suggested [34,35] that the winding of the Langevin paths around the pole is the source of the problem, because the pole corresponds to a logarithmic branch point in the action. However, in the one-pole model of Sec. 3.1 we have demonstrated explicitly that no such winding occurs, since the pole lies either on the edge or outside the distribution. Nevertheless wrong results can be encountered. Further indication that it is not the winding which matters has been given in Ref. [37], see also Sec. 6 for the case of full QCD.
Lesson 2: It has been said (see for instance Ref. [37]) that it is sufficient for correctness if the distribution P is 'practically zero' at the pole. Using again evidence from the one-pole model, we note that this is not correct in general: for small β (β = 1.6) wrong results are obtained even though P(x, y) vanishes at the pole. On the other hand, we have shown (and demonstrated numerically for β = 4.8) that it is sufficient for P to be nonzero only in a simply connected region whose closure does not contain the pole(s). In the intermediate case β = 3.2 the distribution P appears to vanish to very high (maybe infinite) order at the pole, also leading to good results. All this can be understood in the light of the fact discussed in Sec. 2: the observables evolving according to Eq. (2.8) typically develop an essential singularity at the location of the pole of the drift.
Lesson 3: A strong attractive fixed point sufficiently far from any poles of the drift leads to correct results. This almost obvious fact has been observed already earlier, e.g. in QCD with static quarks [15].
Lesson 4: The existence of a 'bottleneck' between two regions G + and G − , such as in the U(1) one-link model of Sec. 3.2, is a signal for potential trouble. The best variable to analyse this is the determinant D (not raised to any power), because it can also be used in more complicated lattice models, as we will see below.
Lesson 5: It is possible that the relative weight of one of the two regions is suppressed, i.e. w_− ≪ w_+. Then a modification of the process which includes only trajectories with Re D > 0, i.e. those contained in G_+, or the use of long runs in which the weight of excursions into G_− is naturally suppressed, seems to produce reasonably good results. On closer inspection, however, it only gives approximate results, since only one part of the original complex integral is represented, namely the part contained in G_+. However, if indeed w_− ≪ w_+, this may give a numerically accurate approximation to the complete problem.
Lesson 6: The effect of increasing the strength of the pole by increasing n p is twofold: On the one hand the 'pull' in the imaginary directions towards the pole is increased, which is bad; on the other hand the 'push' in the real directions away from the pole is strengthened, which is good.
In the one-pole model, with the pole on the imaginary axis, the first effect dominates: hence increasing n p makes the situation worse. We have checked that for n p = 2, to obtain correct results, a larger value of β is needed than for n p = 1.
For parameter set (3) in the U(1) model the second effect dominates: increasing n_p makes the bottleneck between the two regions G_+ and G_− narrower and inhibits transitions between the two regions; furthermore it reduces the relative weight of G_−. For n_p = 1 this bottleneck does not prevent the process from moving between the two regions; for n_p = 2, transitions are already rarer and it seems that each of the regions around the two attractive fixed points supports an invariant measure by itself; for n_p = 4 no transitions are observed even for extremely long runs. It should be noted that in lattice QCD with n_f flavours of Wilson fermions the power n_p is at least 2n_f.

Lesson 7: The interplay between an observable and the distribution determines how close the expectation value of the former is to the correct one: if the observable is naturally suppressed/enhanced near the pole, it is possible to obtain, within the numerical error, correct/manifestly incorrect results. This explains why one can encounter both apparently correctly and manifestly incorrectly determined expectation values in a single analysis.
We will now take these lessons and see how they apply to more realistic models.
Effective SU(3) one-link model
In the following section we investigate the role of the zeroes and the ensuing lessons in a system with more degrees of freedom, which is however still exactly solvable, namely an effective SU(3) one-link model. Versions of this model have been considered before, see e.g. Refs. [10,14]. Here, the form of the model and the choice of parameters is motivated by QCD with heavy quarks (HDQCD), to be discussed in Sec. 6. The starting point is QCD with N_f flavours of Wilson fermions. At leading order in the hopping expansion, the fermion determinant can be expressed as a product of factors involving Polyakov loops at each spatial site, see Sec. 6 below, where the remaining determinant is in colour space only. The parameters C, C̃ arise from the hopping expansion and read

C = (2κ e^µ)^{N_τ}, C̃ = (2κ e^{−µ})^{N_τ},

with N_τ the number of time slices in the temporal direction. Employing the temporal gauge we can see that the product of local factors is equivalent to having only one temporal link in each factor. Using standard relations, again valid for both SU(N_c) and SL(N_c, C), the remaining determinants can be expressed in terms of the traced Polyakov loops,

P = tr U/N_c, P′ = tr U^{−1}/N_c.

Explicitly, for N_c = 2 this gives

det(1 + CU) = 1 + 2CP + C²,

and for N_c = 3,

det(1 + CU) = 1 + 3CP + 3C²P′ + C³.

For larger N_c the relations become more complicated, but the determinant always includes a C^{N_c} term which dominates at large µ (making the sign problem increasingly harmless toward the saturation regime). Notice that for SU(N_c), |P_x|, |P′_x| ≤ 1. In the following we concentrate on the N_c = 3 case.
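The N_c = 3 relation is a direct consequence of det U = 1 and is easy to verify numerically; the snippet below checks det(1 + CU) = 1 + 3CP + 3C²P′ + C³ for a (pseudo-)random SU(3) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, R = np.linalg.qr(A)
Q = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))  # remove residual phases
U = Q / np.linalg.det(Q) ** (1.0 / 3.0)           # enforce det U = 1, so U is in SU(3)

C = 0.7
P = np.trace(U) / 3
Pp = np.trace(np.linalg.inv(U)) / 3
lhs = np.linalg.det(np.eye(3) + C * U)
rhs = 1 + 3 * C * P + 3 * C**2 * Pp + C**3
print(abs(lhs - rhs))  # ~1e-15: the identity holds
```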
Effective one-link model for HDQCD
To define an effective model for HDQCD in four dimensions we consider the resulting fermion determinant on a single spatial lattice site, such that P = tr U/3 and P′ = tr U^{−1}/3 are the only degrees of freedom. Here U is the remaining temporal link in the temporal gauge. To approximate the Yang-Mills integration of the lattice model we consider the temporal link U surrounded by its neighbours, see Fig. 15, and replace the contributions from the staples connected to U by a single matrix A. There are various ways to proceed [14]. Here we diagonalise U, with eigenvalues e^{iw_k} (Σ_k w_k = 0, k = 1, 2, 3). The group integral then includes the reduced Haar measure,

H(w) = ∏_{k<l} sin²((w_k − w_l)/2).

The complete one-link action combines the gauge term built from A, the reduced Haar measure and the fermion determinant, where the diagonal elements of A are represented by the α_k's (where we took out a factor of 6). The Langevin drift is determined by K = −∇S, and complex Langevin dynamics can be implemented for all three w_k's or after eliminating the constraint Σ_k w_k = 0 [14]. Zeroes in the Haar measure also lead to poles in the drift, but these generally do not lead to problems and, in fact, stabilise the dynamics. This has been discussed in Ref. [14]. More details concerning the distribution of zeroes of det M are given in App. C.
As observables we consider O_{±n} = tr U^{±n} (n = 1, 2, 3). Exact results are obtained by numerically integrating over the angles w_k. When the action is real, ⟨O_{+n}⟩ = ⟨O_{−n}⟩. In order to determine reasonable parameter values, relevant for HDQCD, we write the fermion determinant (dropping the overall factor 1 + C³) as

D = 1 + aP + bP′, with a = 3C/(1 + C³), b = 3C²/(1 + C³), (5.13)

and note that

µ⁰_c = − ln(2κ) (5.16)

is the critical chemical potential for onset at zero temperature, i.e. the chemical potential at which the density changes from zero to nonzero [19]. This is illustrated in Fig. 16 (left), where the µ dependence of a and b is shown for given N_τ and κ. With increasing κ, µ⁰_c decreases and the peaks shift to the left, while with increasing N_τ the peaks become narrower. We also note that the (anti-quark) contribution D̃ becomes increasingly irrelevant as N_τ increases, as C̃ becomes exponentially small. Hence we will usually neglect D̃. In the following we use N_τ = 8, unless stated otherwise, and N_f = 1. Since in the one-link model there is no transition as β is varied at µ = 0, see Fig. 16 (right), we choose to work at β = 0.25, but we have also studied larger β values. We considered two κ values,

κ = 0.120, µ⁰_c = 1.427, and κ = 0.145, µ⁰_c = 1.238, (5.17)

where µ⁰_c is the corresponding critical µ value (5.16), corresponding to C = 1. The sign problem is (nearly) absent exactly at onset, where C = 1, a = b = 3/2, and D in Eq. (5.13) is real (D̃ is exponentially close to 1). This will explain some of the results below and has been noted before [53,54]. The behaviour for the two κ values is rather similar; therefore we shall only show the results for κ = 0.120.
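The µ dependence of a and b can be sketched directly from these definitions. Note that the normalisation of a and b is a reconstruction (see the text above), so the sketch should be read with that caveat:

```python
import numpy as np

kappa, N_tau = 0.120, 8
mu = np.linspace(1.0, 1.9, 901)
C = (2 * kappa * np.exp(mu)) ** N_tau        # C = (2*kappa*e^mu)^{N_tau}
a = 3 * C / (1 + C**3)                       # peaks just below onset (C = 2^{-1/3})
b = 3 * C**2 / (1 + C**3)                    # peaks just above onset (C = 2^{1/3})
mu0c = -np.log(2 * kappa)
print("mu0_c    =", mu0c)                    # 1.427...
print("a(mu0_c) =", np.interp(mu0c, mu, a))  # -> 3/2 at C = 1
```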
Ordered lattice
We first consider the ordered lattice (α_k = 0). Fig. 17 contains results for the observables O_{±n} (n = 1, 2, 3), averaged over 100 trajectories, using random starting points. The runs are relatively short: the total Langevin time is around 130, with 20% thermalisation. Note that O_{+n} and O_{−n} are typically rather close together. We see very good agreement, except around µ ≃ µ⁰_c = 1.427. The same behaviour is found at the larger κ = 0.145. We hence focus on three µ values: µ = 1.375 (below onset, CL fine), 1.425 (close to onset, CL problematic) and 1.475 (above onset, CL fine). In Fig. 18 we show results for each of those µ values, using 50 independent, relatively short, trajectories. The figures on the left show the observables against trajectory index. When CL is fine, all trajectories fluctuate around the exact result. However, when CL is problematic (middle figure), the trajectories appear to split into two groups, indicated by the red and blue symbols. We identify those using the minimal absolute value of the determinant on the trajectory,

d_min = min_t |det M(t)|,

compared against a threshold d_c. To investigate these two types of trajectories further, we show in Fig. 18 (right) the corresponding scatter plots for the determinant. When CL works well, the points from all trajectories appear similarly distributed, even when d_min gets very small. At the middle µ value of µ = 1.425 a different picture appears: the trajectories of the first group (d_min > d_c) give a similar picture as at the lower and higher µ values (the "red fish"), while the second group (d_min < d_c) yields a very peculiar structure (the "blue whiskers"). The appearance of two essentially disjoint contributions in the determinant is very similar to what was observed in the U(1) one-link model. A red/blue code for identifying the disjoint ("regular"/"deviant") contributions is used in Figs. 18, 19, 22, and explained in the captions.
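In code, this classification is a one-liner over stored determinant histories; the helper below is hypothetical (array layout and the default threshold are illustrative):

```python
import numpy as np

def split_by_dmin(det_histories, d_c=1e-6):
    """Classify trajectories by d_min = min_t |det M(t)|.

    det_histories: list of arrays, one per trajectory, of det M along the run.
    Returns a boolean mask: True for the 'red fish' group (d_min > d_c),
    False for the 'blue whiskers' (d_min < d_c).
    """
    d_min = np.array([np.abs(h).min() for h in det_histories])
    return d_min > d_c
```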
In the scatter plot we showed results for the determinant det M = (DD̃)², which enters in the determination of the drift. More information, however, is provided by the unsquared factor DD̃ ≃ D. In Fig. 19 (bottom left) we show the corresponding scatter plot at µ = 1.425. The "blue whiskers" have Re D < 0 and come from trajectories which approach the pole (zero of the determinant) with d_min < d_c. As in the simple models, a bottleneck separates them from the region with Re D > 0. Moreover, these contributions have a very small weight. Depending on the starting configuration, trajectories may run for a while in the region with negative Re D, before switching to the positive side. The contributions from the "whiskers" therefore practically fade out after sufficient thermalisation: already after t ≃ 1000 all configurations appear in the region Re D > 0, see Fig. 19 (top). Some of the results are summarised in Table 3.
Similarly to Sec. 3.2.2, we define partition functions and weights restricted to subsectors,

Z_± = ∫_{Re D ≷ 0} DU ρ(U), w_± = Z_±/(Z_+ + Z_−),

where ρ(U) is the original complex distribution. Table 3 also contains an estimate of how much time is spent in the region with Re D < 0, denoted by p_−. It should be noted that p_− depends on the details of transient behaviour and crossings, and is hence not immediately related to w_± = Z_±/Z. The exact relative weight of the region Re D < 0 is easily found and is O(10^{−4}). We find that the process, once arrived in the positive region, only rarely visits the negative region, and typically only very briefly. Hence random starts with Re D < 0 give that region an artificially large weight, and a judicious choice of the start configuration will reduce the necessity of long thermalisation times. These findings suggest it might be useful to discard trajectories with d_min < d_c and thus sample the Re D > 0 region only, to ensure nearly correct convergence. We find that the value of d_c does not need tuning: in the above case any value between 10^{−8} and 10^{−5} is acceptable, see Fig. 20 (left). Alternatively, one can keep all trajectories but drop contributions from configurations with Re D < 0, so as not to lose statistics. This is demonstrated in Fig. 20 (right). Keeping only trajectories with d_min > d_c leads to similar results. As already suggested by Fig. 18, the signal for deviant contributions also shows up in the observables, as can be seen in Fig. 19 (right). In this long and rather representative run, only three trajectories out of fifty contain configurations with Re D < 0, which lead to outlying contributions in the observables. These few contributions falsify the averages by nearly 1%, while the average over the other trajectories reproduces the exact result within the error (about 0.1%). The irregular signal is clear in the scatter plot of the observables, which also suggests that the outliers are related to certain initial configurations, such that their weight will diminish in long runs.

Table 3. Simulation results at β = 0.25, κ = 0.12, N_τ = 8 and µ = 1.375, 1.425, in the ordered case, from all trajectories (CL), from trajectories with d_min > d_c (CLD), and from all trajectories after dropping the points with Re D < 0 (CL+), using 100 or 50 trajectories with varying length of Langevin time t = t_therm. + t_meas.. Errors are not indicated but are at the permille level. Imaginary parts are zero within the error. Also indicated are exact results: full, restricted to Re D ≷ 0, and phase quenched (pq). w_− is the relative weight of the Re D < 0 region in the partition function; for the simulation p_− is given instead, estimated via the proportion of Re D < 0 points.
The above effects become evident by plotting the correlation between the determinant and various other quantities. In Fig. 21 we show the histograms of the probability distributions of Re D and of one selected observable, O_2 = tr U² (top). The scatter plots (middle, bottom) show a clear correlation between Re D and the unitarity norm tr(U†U − 1), the drift, and O_2.
Disordered lattice
Next we consider a disordered lattice, i.e. we take into account the nontrivial effect of the neighbours when the lattice theory is reduced to a one-link model, see Eq. (5.10). We take α_k ≠ 0 and choose the values given at the end of Sec. 5.1. Fig. 22 demonstrates the behaviour of the unsquared determinant and of O_1, see also Table 4; it is very similar to the ordered case. We find that trajectories starting with Re D < 0 ("blue whiskers") need much more time to switch to the region with Re D > 0 ("red fish"). Nevertheless the weight of the former is only about 0.001 − 0.05, and discarding the contribution with Re D < 0 decisively improves the results. The results, however, deteriorate with increasing lattice disorder, which may indicate the effect of large excursions in the noncompact directions, for which the adaptive stepsize and gauge cooling become essential. Since this was studied in previous papers, we do not analyse this problem any further here.

Table 4. As in the previous table, for the disordered lattice.
Expansion
Finally we study the possibility to ameliorate the dynamics using an expansion of the determinant, which is also discussed in App. D for the simple models. We restrict ourselves here to the ordered case. The fermionic part of the drift is proportional to the logarithmic derivative of the determinant. Neglecting the factor 1 + C³, which cancels in the drift, we write

det(1 + CU) = (1 + C³)(1 + X), X = aP + bP′,

and similarly for D̃. The pole is at X = −1. We then write a Taylor expansion centred at the shifted point λ,

1/(1 + X) = Σ_{n≥0} (λ − X)^n/(1 + λ)^{n+1},

and again similarly for D̃, with parameter λ̃. Notice that the λ used here differs by one unit from the D_0 used in App. D. Since the pole is at X = −1, the expansion around X = λ with conveniently chosen λ has an increased radius of convergence compared to the expansion centred at X = 0, see Fig. 23.

Figure 23. Complex X plane, with the pole at X = −1. The expansion around X = λ has a larger radius of convergence than around X = 0.
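The improved radius of convergence is easy to check numerically. The snippet below compares the truncated series around λ = 0 and around a shifted λ at a test point X; the values are illustrative:

```python
import numpy as np

def shifted_series(X, lam, N):
    """Truncated expansion 1/(1+X) ~= sum_{n<=N} (lam - X)^n / (1+lam)^{n+1};
    it converges iff |X - lam| < |1 + lam| (distance of lam to the pole at X = -1)."""
    n = np.arange(N + 1)
    return np.sum((lam - X) ** n / (1 + lam) ** (n + 1))

X = 1.2 + 0.4j                    # |X - 0| > |1 + 0|: the series around lam = 0 diverges here
for lam in (0.0, 1.5):
    err = abs(shifted_series(X, lam, 16) - 1 / (1 + X))
    print(lam, err)               # large for lam = 0, tiny for lam = 1.5
```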
The "regularisation parameters" λ,λ can be chosen conveniently, and can also be adapted during the simulation (dynamical analytic continuation, see App. D). We note here that this procedure can also be used for inverting matrices 1 1 + X, where a simple choice for the regularisation term is λ1 1. In lattice QCD, this can e.g. be applied to the fermion matrix. Notice that this shift is an exact procedure and does not represent an approximation for which a subsequent extrapolation is needed. In practice the expansions are of course truncated and their effectiveness depends on the radius of convergence, which is however improved by the λ-shift. In Fig. 24 results for this procedure are shown. We have tested λ real, imaginary and 0 (no regularisation). For the latter, the expansion shows runaways and does not converge. With imaginary λ = i(a + b), the convergence was found to be not very good. On the other hand, for real λ = a+ b the convergence and results are excellent. In Fig. 24 (left) convergence of the expansion is shown at µ = 1.425, i.e. the µ value where the "whiskers" affect the result. The expansion is truncated, n ≤ N, and the dependence on N is shown. Excellent convergence is observed. In Fig. 24 (right), observables are shown as a function of µ, using a fixed N = 16 , leading to good agreement with the exact results (cf. Fig. 17).
We find therefore that meaningfully regularised expansions converge to the correct results, even at µ values for which the standard procedure does not work. This can be understood in the following way: Choosing the shift such that the expansion point lies in the "fish" region of the scatter plots, e.g. by choosing λ = a + b, the trajectories explore this region and never enter the "blue whiskers" region, due to the bottleneck. Hence it has the same effect as avoiding the Re D < 0 region altogether. A dynamical expansion which adapts λ when approaching the edge of the domain of convergence is easy to implement as well and will cover the full analyticity domain. In this case, however, it will also collect data from the "blue whiskers", leading to the same wrong results as when all trajectories are used.
Discussion and tentative conclusions
As long as the parameters a, b, ã, b̃ are below 1, the Langevin process is found always to converge to the correct results, within the error. Significant discrepancies appear for C ≃ 1, where a, b ≃ 1.5. Although the measure involves det M = (DD̃)^{2N_f} ≃ D^{2N_f}, the relevant factor for the analysis is DD̃ ≃ D. The sign of Re D appears to identify two separate regions with two different contributions to expectation values. This conspicuous situation appears close to onset. The determinant also becomes squeezed in the imaginary direction. These regions show up in the scatter plots as the "red fish" (Re D > 0) and the "blue whiskers" (Re D < 0), the former producing good expectation values for the observables but the latter leading to strongly deviant contributions. This outlying behaviour is also visible in the scatter plots of the observables directly sensitive to the pole or the fermionic degrees of freedom, which may provide a practical test in realistic lattice simulations at no cost, as the gauge-invariant observables are calculated anyway. The clear correlation between the Re D < 0 region and the outlying contributions to various quantities is shown in Fig. 21, which also illustrates the small weight of these regions.
At least in the examples analysed here, the two regions appear separated by a bottleneck which can only be crossed by trajectories approaching |D| = 0 below a certain threshold. The approach to the pole can therefore signal the possible sampling of "deviant" contributions from the Re D < 0 region. The latter has a significantly smaller weight, and may only appear as a quasi-transient whose contribution for very long Langevin trajectories is extremely small. This suggests that discarding the contributions from the region with Re D < 0 will automatically lead to good results. Long enough thermalisation times, or adequate starting points in the Re D > 0 region, can help by inhibiting the development of these regions. The expansion, on the other hand, seems to do just that if we set the expansion centre in the region with Re D > 0, leading to good results via this approach. We note that the appearance of the bottleneck, separating the complex configuration space into two regions with w_− ≪ w_+, is consistent with the findings in the U(1) model. Larger β and/or larger N_τ show a picture consistent with the above one. We conclude that the development of Re D < 0 regions signals failure in the simulation: although the weight of these regions is small, their contributions deviate strongly from the ones with Re D > 0 and hence they may affect the results by many standard deviations. Dropping, one way or the other, those contributions leads to results agreeing with the exact ones, at the level of the statistical errors (at the permille level in these runs). The fact that the process fails to account correctly for the region with Re D < 0 at certain parameter values suggests, however, that we should attribute a possible systematic error proportional to the weight of this region, which is O(10^{−2} − 10^{−4}) in the examples studied here.
We may try to quantify the relevance of the region Re D < 0 by calculating the logarithm of its relative weight, ∆F eff = − ln(w − /w + ), using the exact integral expressions. As can be seen in Fig. 26 (left), ∆F eff appears bounded from below, and even increasing with N τ for some µ values far from µ 0 c . The bound is about 10% lower on the disordered lattice but shows similar behaviour. With increasing number of fermion species, the difference in free energy scales approximately with the number of flavours, ∆F eff (N f ) ≃ N f ∆F eff (1), see Fig. 26 (right). Since in HDQCD the lattice determinant is a product of factors of the form D 2 over the spatial lattice, we may ask how the spatial volume would manifest itself in ∆F . Fully correlated Polyakov loops would then behave as if in the presence of many flavours, but the general case is not trivial and will be discussed in the next section.
Extrapolating the lessons from this discussion to realistic QCD lattice calculations, we conclude that one has to monitor the appearance of disconnected regions with a bottleneck at |D| ≃ 0. This can be done by monitoring various quantities, such as selected observables, the drift or the lowest determinant modes, avoiding time-consuming procedures. Dropping by hand the occasional contributions of regions of the "blue whiskers" type (Re D < 0) should already produce good results, while an estimate of the relative impact of such contributions would suggest a measure for possible systematic errors.
Lattice QCD
In the following section we aim to apply the lessons found above to the case of QCD at nonzero quark density, first in the case of heavy quarks (HDQCD) and then for full QCD, using the staggered fermion formulation.
In QCD the partition function is, after integrating out the quark fields, written as

Z = ∫ DU e^{−S_YM} det M(U; µ),

where S_YM is the Yang-Mills action, U are the gauge links, and M is the fermion matrix. The Langevin update for the gauge links U reads [55], in a first-order discretised scheme,

U_{x,ν}(t + ǫ) = exp[iλ_a(ǫ K_{a x,ν} + √ǫ η_{a x,ν})] U_{x,ν}(t),

with Langevin time t = nǫ, where λ_a are the Gell-Mann matrices, normalised as Tr λ_a λ_b = 2δ_ab, and the sum over a = 1, . . . , 8 is not written explicitly. More details can be found in Refs. [10,15,16]. The drift is generated by the action S = S_YM − ln det M and reads

K_{a x,ν} = −D_{a x,ν} S,

with D_{a x,ν} the gauge-group derivative.
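A single link update can be sketched as follows. The drift K_a is left as a placeholder, and the noise normalisation ⟨η_a η_b⟩ = 2δ_ab is the convention assumed here; in a real simulation K_a would be computed from the action and the update would typically use an adaptive stepsize:

```python
import numpy as np
from scipy.linalg import expm

s3 = np.sqrt(3.0)
gm = np.array([                                   # Gell-Mann matrices, Tr(l_a l_b) = 2 d_ab
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[1 / s3, 0, 0], [0, 1 / s3, 0], [0, 0, -2 / s3]],
], dtype=complex)

def cl_update(U, K, eps, rng):
    """One first-order CL step: U -> exp(i l_a (eps K_a + sqrt(eps) eta_a)) U.
    For a complexified drift K the link leaves SU(3) but stays in SL(3, C),
    since the generators are traceless."""
    eta = np.sqrt(2.0) * rng.normal(size=8)       # <eta_a eta_b> = 2 d_ab
    X = np.tensordot(eps * K + np.sqrt(eps) * eta, gm, axes=1)
    return expm(1j * X) @ U

rng = np.random.default_rng(0)
U = np.eye(3, dtype=complex)
K = 0.1 * (rng.normal(size=8) + 1j * rng.normal(size=8))  # placeholder (hypothetical) drift
U = cl_update(U, K, 1e-3, rng)
print(np.linalg.det(U))                            # ~1: det is preserved exactly
```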
Heavy dense QCD
To assess the importance of these poles, we need to specify the fermion matrix. We consider first heavy dense QCD. This approximation to full QCD can be obtained by a systematic hopping-parameter expansion of the fermion determinant, preserving terms that cannot be ignored for large chemical potential, as well as the terms related by symmetry under µ → −µ. For Wilson quarks, this amounts to an expansion in terms of κ, keeping κe µ fixed and preserving terms that go as κe −µ . A detailed discussion can be found in Refs. [18,56,57], see also Refs. [58,59] for combined hopping and strong-coupling expansions.
Here we consider the resulting theory at leading order, using N_f degenerate quark flavours, for which the fermion determinant reads [10] (see also Sec. 5)

det M = ∏_x det(1 + h e^{µ/T} P_x)^{2N_f} det(1 + h e^{−µ/T} P_x^{−1})^{2N_f}. (6.3)

The remaining determinant is in colour space only. The parameter h = (2κ)^{N_τ} arises from the hopping expansion (N_τ is the number of time slices in the temporal direction) and P_x^{(−1)} are the (inverse) Polyakov loops, see Eq. (5.1). Note that the gluon dynamics is included in Eq. (6.3) via the usual Wilson Yang-Mills lattice action, with gauge coupling β.
In order to study the zeroes of the determinant, we identify the basic building block of the determinant, defined such that the full determinant in the path integral weight is written as

det M = ∏_x (det M_x)^{2N_f}.

Here the local determinants are

det M_x = det(1 + z P_x) det(1 + z̃ P_x^{−1}), (6.6)

where z = h e^{µ/T}, z̃ = h e^{−µ/T} and, explicitly for SU(3)/SL(3, C),

det(1 + z P_x) = 1 + z tr P_x + z² tr P_x^{−1} + z³.

We study the zeroes of the local determinants rather than the full determinant, since this is closest to the analysis carried out above and will allow us to focus on individual factors getting small. HDQCD has been used extensively to justify the results obtained with CL, e.g. via reweighting [15,19,30]. Since reweighting and CL have very different systematic uncertainties, the agreement of the results obtained by both methods is a strong argument for the correctness of either approach. In particular, since reweighting does not suffer from potential problems caused by zeroes of the determinant, agreement indicates that the latter do not cause problems for CL.
We note here that HDQCD has also been used to test and compare variations of the hopping parameter expansion to higher order [18], in particular with regard to the standard hopping expansion, obtained by writing det M = exp Tr ln M and expanding the logarithm in powers of κ [60], for which zeroes of the determinant do not appear. An expansion in spatial hopping terms only, for which HDQCD is the leading-order term, has been discussed and assessed in Ref. [18]. We now discuss some results in HDQCD, focussing on potential zeroes of the determinant. We mostly use the following choice of parameters,

β = 6, κ = 0.12. (6.9)

Note that the lattice spacing (or gauge coupling β) and the spatial volume (N_s³) are fixed, but we consider two temperatures (N_τ = 8, 16). In this theory, the quark number at zero temperature changes from 0 below onset to saturation above onset, with n_sat = N_spin × N_colour × N_f = 6N_f, and the critical chemical potential is given by

µ⁰_c = − ln(2κ) = 1.427. (6.10)

At nonzero temperature, this transition is smoothed and the critical chemical potential µ_c(T) < µ⁰_c, eventually connecting to the thermal confinement-deconfinement transition line at higher temperature and lower chemical potential. A study of the phase diagram at fixed lattice spacing can be found in Ref. [19]. In Fig. 27 we show the density, in units of the saturation density, for the parameters in Eq. (6.9) on the 8⁴ lattice. The rapid rise around µ = µ⁰_c is indeed observed. Writing the determinant as a product of its absolute value and phase,

det M = |det M| e^{iϕ}, (6.11)

and using the symmetry

[det M(µ)]* = det M(−µ*), (6.12)

we can extract the average phase factor in the full (i.e. not in the phase-quenched) theory via a computation of [10]

⟨e^{2iϕ}⟩ = ⟨det M(µ)/det M(−µ)⟩, (6.13)

which is accessible using CL dynamics. The result is shown in Fig. 27 as well. The sign problem is severe in the onset region. At the critical chemical potential, the fermions are at 'half-filling', i.e. half of the available fermionic states are filled, and the theory becomes particle-hole symmetric [54]. Exactly at µ_c, the sign problem becomes very mild, as the first dominating factor in Eq. (6.6) becomes real (z = 1). The sign problem due to the second factor is very small, since z̃ = (2κ)^{2N_τ} ≪ 1.
To investigate the zeroes of the measure in HDQCD, we have analysed det M_x, see Eq. (6.6). Note that the corresponding factor in the effective SU(3) model was discussed in Sec. 5. We find that the simulations largely avoid the zeroes of the determinant, except in the vicinity of the critical chemical potential. This is illustrated in Fig. 28 (top), where a histogram of the absolute value of the local determinant det M_x is shown on a double-logarithmic scale for two lattice sizes. For small chemical potential, the absolute value of the determinant is close to 1, as z, z̃ ≪ 1. As µ is increased, the distribution widens and its maximum shifts towards larger values. However, we also observe that the distribution is nonzero for smaller values, with an apparent power decay towards zero. Based on these simulations, we also consider the distribution of the phase Φ of the local determinant. Note that −π < Φ < π and that the distributions are symmetric around zero. Away from the critical chemical potential, the distribution drops to zero rapidly; around µ_c we observe a decay ∼ 1/Φ³. The relation between this phase and the phase of the full determinant ϕ is not immediate. However, we note that in general the latter is expected to vary rapidly, as the full determinant is a product of 2N_f N_s³ local determinants.
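The histogram diagnostic itself is cheap to assemble. As a crude stand-in for actual CL configurations, the sketch below uses Haar-random SU(3) Polyakov loops (an assumption — the histograms of Fig. 28 are built from the CL ensemble, where the loops are correlated and complexified):

```python
import numpy as np

def haar_su3(rng):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Q, R = np.linalg.qr(A)
    Q = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))
    return Q / np.linalg.det(Q) ** (1.0 / 3.0)

kappa, N_tau, mu = 0.12, 8, 1.3
z = (2 * kappa * np.exp(mu)) ** N_tau          # z = h e^{mu/T} with h = (2 kappa)^{N_tau}
rng = np.random.default_rng(2)
d = np.array([np.linalg.det(np.eye(3) + z * haar_su3(rng)) for _ in range(20000)])
hist, edges = np.histogram(np.log10(np.abs(d)), bins=50)
print("peak of log10|D_x| near", edges[np.argmax(hist)])
print("fraction with Re D_x < 0:", np.mean(d.real < 0))
```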
To compare with the results presented in the previous sections, we show in Fig. 29 the probability density of the local determinants on a logarithmic scale, for two values of the chemical potential close to µ⁰_c = 1.427, namely µ = 1.3 (left) and 1.425 (right). Note the very different horizontal and vertical scales. These distributions look remarkably similar to those encountered in the simpler cases, although with only a very thin presence of the 'whiskers', if at all. We find that at the critical point the distribution is highly elongated in the positive real direction, and that it shrinks again for µ > µ_c. Exactly at µ_c, the distribution shrinks in the imaginary direction, leading to a much smaller typical phase of the determinant and thus a milder sign problem, as observed via ⟨e^{2iϕ}⟩ in Fig. 27. There are some configurations where Re det M < 0, but these appear very infrequently and do not carry substantial weight.
However, at lower temperatures a clear sign of the whiskers appears, which indicates the possibility of contamination from configurations with Re det M < 0. This is illustrated in Fig. 30, where we show results at a fixed lattice spacing, on a 8 4 and 16 4 lattice, such that the temperature is twice as low on the 16 4 lattice. Close to µ c , the weight of the region with Re det M < 0 is approx. 0.005% on the 8 4 lattice and 0.08% on a 16 4 lattice, indicating a growing importance as the temperature is lowered. At the lower temperature, the "whiskers" are remarkably similar to those encountered in the SU(3) one-link model. In Fig. 31 we aim to reduce the lattice spacing, while keeping the physical volume and the temperature constant. Here we see that the power decay towards zero remains approximately the same and hence the role of configurations with a small absolute value of the local determinant does not change when going (somewhat) closer to the continuum limit. We also investigated whether changing N f influences the appearance of the whiskers. While using a larger N f is beneficial, configurations with determinants in the whiskers do appear at low temperatures. Finally we have studied the volume dependence of the weights w ± , signifying the importance of the regions G ± , as suggested by the analysis of the one-link models, but did not find a clear suppression of the region with Re D < 0.
We hence conclude that the zeroes of the determinant for HDQCD appear to have no effect except close to the critical chemical potential. Since for those chemical potentials the sign problem is quite mild, this observation is not related to the severity of the sign problem but to the details of the CL process. Although the zeroes might influence the results in this region, the relative weight of configurations with Re det M < 0 is quite small and their influence is therefore suppressed.
Full QCD
Finally, we consider full QCD, with dynamical fermions. We note that full QCD is numerically much more costly than HDQCD, as the inversion of the fermion matrix has to be carried out numerically for every update. Similarly, an assessment of the zeroes of the determinant is harder, since it requires the computation of the full determinant. It should be noted that the determinant itself is not required for the Langevin update, only M −1 evaluated on a fixed vector [16].
Here we show results obtained using staggered fermions, with the (unimproved) staggered fermion matrix

M_{xy} = m δ_{xy} + (1/2) Σ_ν η_{νx} [e^{µ δ_{ν,4}} U_{ν}(x) δ_{x+ν̂,y} − e^{−µ δ_{ν,4}} U_{ν}†(x−ν̂) δ_{x−ν̂,y}],

where η_{νx} are the staggered sign functions, η_{1x} = 1, η_{2x} = (−1)^{x_1}, η_{3x} = (−1)^{x_1+x_2}, η_{4x} = (−1)^{x_1+x_2+x_3}. Note that this formulation describes four tastes, due to the fermion doubling. There are several indications for full QCD that, at least at high temperatures and for the quark masses considered, the CL simulations are unaffected by poles in the drift: these include comparisons to systematic hopping-parameter expansions, which have holomorphic actions [18], comparisons to reweighting, which does not depend on the action being holomorphic [61], and observations of the spectral properties of the fermion matrix [7] (see also below). At low temperature, it is at present not known whether poles affect the CL results, partly because simulations are more expensive due to the larger values of N_τ required, or are hindered by the ineffectiveness of gauge cooling on coarse lattices. In order to study the phase of the determinant in full QCD we start from an initial configuration on the SU(3) submanifold. After thermalisation we then follow the evolution of the phase. The results are shown in Fig. 32, for a 8³×4 and a 12³×4 lattice. On the smaller volume and for the smaller chemical potentials µ/T = 0.4 and 1.2, one observes very mild time dependence with no winding of the phase around the origin. For the larger chemical potentials µ/T = 2.0 and 3.2, however, we see frequent crossings of the negative real axis. This signals that there is a sign problem in the theory which is hard to counter with reweighting, as the average phase factor gets close to zero. As expected, this behaviour gets worse as the volume is increased.
The corresponding histograms are shown in Fig. 33, for three different spatial volumes, namely 8³, 12³, 16³, with fixed N_τ = 4. As expected, the distribution is localised on the smallest volume, but becomes increasingly wide as the volume and/or the chemical potential is increased. To relate this behaviour to possible zeroes of the determinant, we have computed the eigenvalue spectrum of the Dirac operator for typical configurations in the ensemble, using the same setup as in Fig. 32. Fig. 35 contains the spectra, while Fig. 34 contains histograms of the absolute values of the eigenvalues, obtained by averaging over 100 configurations. We note that in spite of the frequent circlings of the origin by the phase of the fermion determinant there are typically no eigenvalues close to zero, suggesting that the probability density of configurations around the singularities of the drift is very small, as in the simpler models discussed above. The change of the total phase is given by the sum of the changes over all the eigenvalues; hence, in contrast to the toy models, the frequent crossing of the negative real axis does not imply that the poles are affecting the CL dynamics. We note that the increase of the volume, from 8³ to 12³, leaves the shape of the spectrum very similar, but increases the density of eigenvalues.
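The spectral statement can be illustrated in the free-field limit (U = 1), where the staggered operator defined above can be assembled explicitly on a small lattice with periodic boundary conditions. The exponential µ prescription on the temporal hops is a standard construction assumed here, and serves only to show how eigenvalues move off the line Re λ = m:

```python
import numpy as np
from itertools import product

L, m = 4, 0.1
sites = list(product(range(L), repeat=4))
idx = {x: i for i, x in enumerate(sites)}
eta = lambda nu, x: (-1) ** sum(x[:nu])            # eta_1 = 1, eta_2 = (-1)^{x_1}, ...

def staggered(mu):
    M = m * np.eye(len(sites), dtype=complex)
    for x in sites:
        for nu in range(4):
            xp = tuple((x[d] + (d == nu)) % L for d in range(4))
            xm = tuple((x[d] - (d == nu)) % L for d in range(4))
            fwd = np.exp(mu) if nu == 3 else 1.0   # temporal hops carry e^{+-mu}
            bwd = np.exp(-mu) if nu == 3 else 1.0
            M[idx[x], idx[xp]] += 0.5 * eta(nu, x) * fwd
            M[idx[x], idx[xm]] -= 0.5 * eta(nu, x) * bwd
    return M

for mu in (0.0, 0.5):
    ev = np.linalg.eigvals(staggered(mu))
    print(mu, np.abs(ev).min())   # at mu = 0 the eigenvalues are m + i*lambda, so min|ev| >= m
```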
To summarise our findings: in full QCD at high temperatures the singularities of the drift appear to be outside the support of the probability density of configurations. We conclude therefore that in this situation complex Langevin dynamics provides correct results, in line with the formal arguments and the lessons from the simple models. What happens at lower temperatures remains an open question.
Discussion
In this section we summarise the key findings and discuss them in the context of the various models. The main objective was to understand the role of zeroes in the path integral measure after analytic continuation in the context of complex Langevin dynamics, in which the zeroes show up as poles in the Langevin drift. Since the derivation of the justification of the complex Langevin approach for complex measures relies on holomorphicity of the drift, the presence of poles makes a re-analysis of this derivation necessary.
We have shown that a crucial role is again played by the behaviour of the observables considered and the (real and nonnegative) distribution, which is a solution of the associated Fokker-Planck equation and effectively sampled during the Langevin process. While for holomorphic drifts it is the behaviour at large imaginary directions in the complex configuration space that matters, for meromorphic drifts we have shown that an additional constraint arises from the behaviour close to the poles: to justify the method it is necessary to be able to perform partial integration, without picking up finite boundary terms, both around the poles and for large imaginary directions. This condition gives a requirement of fast decay of the distribution in those regions. In simple cases it is possible to verify this requirement analytically, for instance when it can be shown that the distribution is strictly zero in those regions, but for most models this has to be verified a posteriori by diagnosing the output from numerical simulations.
Besides this, we also found that time-evolved observables typically have an essential singularity at a pole. However, this is counteracted by the distribution going to zero at this pole, with nontrivial angular behaviour. This ensures that the contribution to expectation values from this region is finite, but not that boundary terms are necessarily absent.
In order to further understand and support these analytical considerations, we have subsequently analysed a number of models and theories with increasing complexity, from the one-pole model with one degree of freedom to full QCD at nonzero baryon density, with the aim of extracting common features. Logically a pole can be outside, on the edge of, or inside the distribution, and we have given examples of each of these. As expected, when the pole is outside, it does not interfere with the Langevin process. The possibility of a pole on the edge is important, since it indicates that it is not the winding around the pole that matters, but the decay of the distribution towards the pole. Indeed, we have encountered both correct and incorrect convergence in this case, and this can be traced back to the fast decay of the distribution, or lack thereof.
As a side remark, we note that further support for our analytical understanding comes from the observed interplay between observables, drift and the distribution: if the observable is naturally suppressed (enhanced) near the pole, it is possible to obtain correct (incorrect) results. This explains why in one analysis one can encounter both correctly and incorrectly determined or nonconverging expectation values.
When the pole is inside the distribution, we have found that it typically leads to a bottleneck, i.e. a region in configuration space which is difficult to pass and effectively divides the configuration space into two nearly disjoint regions. For QCD and QCD-like models, it is zeroes of the determinant that correspond to poles and determine the location of the bottleneck. Hence the complex-valued determinant, preferably not raised to any powers (such as N_f), or the real part of the determinant, provide useful diagnostic observables to analyse the dynamics. We have studied a number of models in this way, namely U(1) and SU(3) one-link models and heavy dense QCD (HDQCD) in four dimensions. In all of these, we indeed observed similar behaviour: the emergence of two regions, which were denoted by G_± and are identified by the sign of the real part of the determinant (before raising it to a power). We found that it is typically the region with positive real part that dominates the dynamics, but that excursions to G_− can upset expectation values, even when their relative weight is suppressed. In some cases the bottleneck is particularly difficult to pass, which may occur when the order of the zero is increased. It is possible to analyse each region G_± separately. The error made by restricting the simulation to G_+ can then be estimated and was typically seen to be small, depending on the parameters used. In the SU(3) one-link model and HDQCD, the process is affected by zeroes close to the half-filling point, where the sign problem is milder. In this case the region G_− took the form of characteristic "whiskers": even though the relative weight of these regions was small, the contribution to expectation values could be large. Hence an exclusion of this region improves the results, but with a systematic uncertainty. In HDQCD we found indications that the role of zeroes is unchanged when the lattice spacing is decreased, while keeping the physical volume approximately constant. Finally, we considered QCD with dynamical staggered quarks at high temperature and analysed the eigenvalues of the Dirac operator. Here we noted that there are typically no eigenvalues close to zero, which suggests that complex Langevin dynamics is applicable in this part of the phase diagram.
Summary and Outlook
We have given a detailed analysis of the role of poles in the Langevin drift, in the case of complex Langevin dynamics for theories with a sign problem. Since the standard derivation of the formal justification relies on holomorphicity of the drift, we have revisited the derivation and shown that, besides the requirement of a fast decay of the probability distribution at large imaginary directions, an additional requirement of fast decay near the pole(s) is present. The probability distribution is typically not known a priori, but its decay can be analysed a posteriori.
We then studied a number of models, from simple integrals to QCD at nonzero baryon chemical potential, and found support for the analytical considerations. In the cases when the simulation is affected by the pole(s), we found that typically the configuration space is divided into two regions, connected via a bottleneck. For theories with a complex fermion determinant, such as QCD, the bottleneck is determined by the zeroes of the determinant. In the simple models, and even for QCD in the presence of heavy (static) quarks, this understanding is sufficient to analyse the reliability of the Langevin simulation.
In full QCD, with dynamical quarks, this ideally requires knowledge of the (small) eigenvalues of the fermion matrix throughout the simulation, which is nontrivial. At high temperature, it was shown that the eigenvalues are typically not close to zero. Hence the most important outstanding question for QCD concerns the low-temperature region in the phase diagram. Here a number of hurdles remain to be overcome. When eigenvalues become very small, the conjugate gradient algorithm used in the fermion matrix inversion becomes ineffective. This is common to many problems at nonzero density, even in the absence of a sign problem, and requires e.g. a regulator. A successful approach in this case has not yet been developed. Besides this, on coarse lattices gauge cooling, which stabilises the Langevin process, is ineffective. This situation is improved on finer lattices, which are however more expensive due to the larger values of N_τ required. A definite statement on the applicability of complex Langevin dynamics throughout the QCD phase diagram can only be made once further analysis of these issues has been brought to a conclusion.
www.lrz.de), as well as for providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS share of the supercomputer JU-RECA and JUQUEEN [62] at Jülich Supercomputing Centre (JSC). Some parts of the numerical calculation were done on the GPU cluster at Eötvös and Wuppertal Universities.
the boundary conditions at x = 0. The drift is strongly repulsive away from the origin along the real axis, the probability density ρ vanishes quadratically there, and the Langevin process does not cross the origin.
This means that mathematically we have to consider H with 0-Dirichlet boundary conditions at the origin. To avoid confusion, we call the corresponding Hamiltonian H D . All functions in the domain of definition of H D have to have at least a square integrable second derivative; this means that they are continuous and vanish at the origin. The ground state wave function of H (ignoring the b.c. at 0), which superficially seems to belong to a negative eigenvalue of H D , is not in the domain of definition (neither are all even eigenfunctions of H). Actually, because there is no communication between the two half-lines, we may as well consider the problem only on one of the half-lines R ± . From now on we choose R + .
The ground state of L has eigenvalue zero, as it has to be. The first excited state of L is φ_2 = N_3(x² − 3/ω); it belongs to the eigenvalue −2ω. We thus find an expression which converges to the correct expectation value for t → ∞. Similarly, convergence to the correct value is found for all even functions. How about the superficially unstable mode 1/x, see Eq. (A.3)? It corresponds to the ground state of H on L²(R), which is not in the domain of definition of H_D. The odd eigenfunctions of H are complete in the subspace of odd functions; hence the odd eigenfunctions restricted to R_+ are complete in L²(R_+), and so the apparent eigenvector ψ_0 of H_D with a negative eigenvalue should be considered as an L²(R_+)-convergent series. Instead of considering −H_D as a self-adjoint operator on L²(R_+), we may consider L itself as a self-adjoint operator on the Hilbert space H obtained as (the completion of) the set of functions φ on R_+ with a suitable scalar product. The eigenfunctions φ_n, n = 0, 1, . . ., are orthogonal with respect to this scalar product, so we can write the series equivalently, with the convergence now understood in the sense of H.

Since the models are real, one might attempt to treat them by the real Langevin method. But a simple consideration shows that for a real model with a sign problem this cannot produce correct results. The reason is that the real Langevin equation will have a positive equilibrium measure on the real axis and thus cannot reproduce all the averages which would be obtained with a signed measure. When we modify the process to allow it to move out into the complex plane, the story changes: it seems then a priori not impossible that a positive measure on C correctly reproduces the averages of holomorphic observables with a signed measure on R, and there are examples that bear this out. In such an example it is easy to verify that the correct expectation values are reproduced by a positive density in C. This simple solution is, however, unrelated to the CL method.
B.2 One pole, n_p even

For n_p > 0 and even, there is no sign problem, but the lack of ergodicity persists.
In this case, because of the stronger repulsion away from the pole, our simulations typically do not cross the pole, and so produce incorrect results when started on one side of the pole. (If there is a symmetry x → −x, this defect can easily be remedied by starting the process with equal probability on either side of the pole.) Another way to facilitate the crossing and achieve correct results is by adding a small imaginary noise term and correcting for it by reweighting, where the symbol ⟨·⟩ stands for the ordinary real or complex Langevin long-time average. But this cure, like any reweighting method, works for one-variable models but is not very useful for lattice models. So we will not pursue it any further.
B.3 A cure for compact real models
The final cure we consider is for a real but nonpositive weight ρ. Let c be a constant such that

ρ + c > 0 (B.7)

and define σ ≡ ρ + c. A correct procedure is then to run real Langevin dynamics with the drift derived from the positive density σ and correct the normalisation as shown above. We take the U(1) model with n_p = 1, such that σ(x) = e^{β cos x}(1 + κ cos x) + c, which increases as c is increased (c = 0 is the original process). The observables are Re/Im e^{ikx} with k = 1, 2.
As expected, the numerical results start agreeing with the exact results as soon as c is large enough to make σ nonnegative. Therefore a plot of the results versus c can also serve to determine the minimal c for which correct results are obtained, and a priori knowledge about the zeroes of the density ρ is not required.
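A minimal sketch of the whole procedure at µ = 0 follows, assuming ρ(x) = e^{β cos x}(1 + κ cos x) and correcting by reweighting with the factor ρ/σ (the specific correction formula used in the original study is not reproduced here, so this is one consistent way to implement "correct the normalisation"):

```python
import numpy as np

beta, kappa, c = 0.3, 2.0, 2.0          # c large enough that sigma = rho + c > 0
rho = lambda x: np.exp(beta * np.cos(x)) * (1 + kappa * np.cos(x))
# d(rho)/dx, needed for the drift sigma'/sigma = rho'/(rho + c):
drho = lambda x: np.exp(beta * np.cos(x)) * (
    -beta * np.sin(x) * (1 + kappa * np.cos(x)) - kappa * np.sin(x))

eps, rng, x = 1e-3, np.random.default_rng(3), 0.0
num = den = 0.0
for n in range(10**6):
    x += eps * drho(x) / (rho(x) + c) + np.sqrt(2.0 * eps) * rng.normal()
    x = (x + np.pi) % (2 * np.pi) - np.pi
    w = rho(x) / (rho(x) + c)           # reweighting factor rho/sigma
    num += w * np.cos(x)
    den += w
print(num / den)                         # estimate of <cos x>_rho = Re <e^{ix}>
```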
The advantage of this cure is that it can be easily generalised to the complex case µ ≠ 0; it turns out that it does work reasonably well, but not perfectly, provided µ is not very large. But again, since the cure involves some reweighting, it is not very useful for lattice systems.
C Zeroes of the HDQCD determinant
In this Appendix, we further consider the HDQCD determinant for gauge group SU(3) or SL(3,C), reduced to a single link U. We will demonstrate that zeroes of the determinant do not come as isolated points.
As discussed in Sec. 5, the zeroes of the determinant define manifolds in terms of the eigenvalues of U. But there is a different way to think about this: define

u ≡ tr U = z_1 + z_2 + 1/(z_1 z_2), v ≡ tr U^{−1} = 1/z_1 + 1/z_2 + z_1 z_2. (C.4)

u and v are algebraically independent and can be used instead of z_1 and z_2 to parametrise conjugacy classes of SL(3, C). The map from z_1, z_2 to u, v is not one-to-one, since interchanging z_1 and z_2 leaves u, v unchanged. The inverse map from u, v to z_1, z_2 will have branch points where any two eigenvalues coincide. The fact that u, v ignore permutations of the eigenvalues is actually an advantage, because D too remains unchanged. The zeroes of D are then determined by

1 + Cu + C²v + C³ = 0. (C.5)

The three manifolds (1,2,3) in Eq. (C.3) are thus mapped into a single manifold given by Eq. (C.5). This manifold is an affine complex plane in the space C² (affine means that it does not go through the origin).
tr U^{−1} does not have to be computed by taking the inverse of U; one can instead use the identity, valid for U ∈ SL(3, C),

tr U^{−1} = (1/2)[(tr U)² − tr U²].

So we get, using u_2 = tr U²,

D = 1 + Cu + (C²/2)(u² − u_2) + C³,

which vanishes on a complex parabola. The determinant factor D̃ is quite similar; proceeding as before we find

D̃ = 1 + C̃v + C̃²u + C̃³,

so its zeroes are again described either by a complex affine plane in u, v or a complex parabola in u, u_2. The main point is that they are real manifolds of codimension two, so there are no isolated zeroes.
D Expansion methods
One possibility to deal with the pole is to use power series expansions in order to approximate the meromorphic drift by polynomials. In QCD the pole in the drift is always due to a zero of the determinant. In the one-pole model the role of the determinant is played by the factor D(x) ≡ x − z p .
D.1 One-pole model
We explain the approach in the one-pole model. We study two basic procedures:

(1) Fixed expansion: Let D^{n_p} be the 'determinant' causing problems due to its zeroes. Consider the drift caused by D,

K_D = n_p D′/D.

In order to obtain a holomorphic approximation to K_D, we choose a point (x_0, y_0) not too far from the peak of the distribution but far enough from the pole(s). We then expand 1/D around this point to order N as follows: let D_0 ≡ D(x_0, y_0) and replace 1/D by

1/D_N ≡ (1/D_0) Σ_{n=0}^{N} (1 − D/D_0)^n, (D.2)

which converges to 1/D for N → ∞ if and only if |1 − D/D_0| < 1. There could be problems if the process goes outside the region of convergence, but experience shows that typically the drift will tend to keep the process inside it; in any case, by varying N one can check whether this is so. An illustrative example is shown in Fig. 37. Because the expansion point (x_0, y_0) is chosen once and for all, we call this the fixed expansion. Numerical studies using this fixed expansion are presented below for the U(1) one-link model and for the SU(3) one-link model in Sec. 5; a minimal code sketch is also given after this discussion.
(2) Dynamic expansion: one may choose different expansion points (x_i, y_i), with D_i ≡ D(x_i, y_i), in such a way that the domain of analyticity is covered, while always staying well inside the domain of convergence. The quality of the expansion can be fixed by changing (x_i, y_i) to the actual configuration point whenever |1 − D/D_i|^{N+1} > ǫ, with some pre-chosen ǫ. If ǫ is chosen small enough we should not find any appreciable difference between the results using D and D_N. By studying various situations we find that, as expected, the dynamically expanded drift generally performs just like the unexpanded one: it works where the latter works and it fails where the latter fails.

The drift of the U(1) one-link model, with D(z) = 1 + κ cos(z − iµ), has two poles for κ > 1. As described for the one-pole model we obtain a holomorphic approximation to K_D by choosing a point z_0 = x_0 + iy_0 somewhere near the centre of the equilibrium distribution; here the natural choice is z_0 = iµ, i.e. D(z_0) = 1 + κ. The Taylor expansion of 1/D around this point to order N then looks as in Eq. (D.2). Again the drift from the expansion tends to keep the process inside the region of convergence, as shown in Fig. 38.
We have tested this approach numerically, using expansions to order 10 and 20, making sure that the centre of the expansion was chosen reasonably far away from the poles and near the maximum of the distribution P(x, y) in the region with positive Re D, i.e. G_+. We found the results to be more or less comparable to the ones obtained by restricting the process to G_+, and no great difference between orders 10 and 20.
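The promised code sketch of the fixed expansion, item (1) above, in the one-pole setting follows. The Gaussian weight ρ(z) ∝ (z − z_p)^{n_p} e^{−βz²} and the parameter values are assumptions used for illustration; D(z) = z − z_p as defined in this appendix:

```python
import numpy as np

beta, n_p, z_p = 4.8, 1, 1j             # pole on the imaginary axis (illustrative)
z0 = 1.0 + 0.0j                         # fixed expansion point, away from the pole
D0 = z0 - z_p

def drift(z, N=None):
    D = z - z_p
    if N is None:
        inv = 1.0 / D                                    # exact meromorphic drift
    else:
        r = 1.0 - D / D0
        inv = sum(r**n for n in range(N + 1)) / D0       # holomorphic approximation
    return -2.0 * beta * z + n_p * inv                   # assumes rho ~ D^{n_p} e^{-beta z^2}

rng, eps = np.random.default_rng(4), 1e-4
for N in (None, 4, 12):
    z, acc = 1.0 + 0.0j, 0.0 + 0.0j
    for n in range(5 * 10**5):
        z = z + eps * drift(z, N) + np.sqrt(2.0 * eps) * rng.normal()
        acc += z * z
    print(N, acc / (5 * 10**5))          # compare <z^2> for exact vs expanded drift
```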
Hence we conclude that the fixed expansion is a potential cure of the ills of meromorphic drift in the cases where restricting the process to G_+ works as well. It should be noted though that potentially significant, though much reduced, deviations from the exact results remain.
"Physics"
] |
Nervous System of Periplaneta americana Cockroach as a Model in Toxinological Studies: A Short Historical and Actual View
The nervous system of the cockroach Periplaneta americana is used in a wide range of pharmacological studies, including electrophysiological techniques. This paper presents its role as a preparation in the development of toxinological studies with the following electrophysiological methods: the double-oil-gap technique on the isolated giant axon, patch-clamp on DUM (dorsal unpaired median) neurons, the microelectrode technique under in situ conditions on the axon in the connective and on DUM neurons in the ganglion, and the single-fiber oil-gap technique on the last abdominal ganglion synapse. Finally, the application of the cockroach synaptosomal preparation is mentioned.
Introduction
The cockroach, especially the species Periplaneta americana, is recognized as a very useful model in neurobiological studies [1]. The field of toxinology owes much to the use of various nervous preparations obtained from this insect. Periplaneta americana represents an excellent model applied in different pharmacological methods, especially in electrophysiology, which plays a vital part in most research activity in toxinology. Such work may be based on natural as well as "artificial" preparations, for example transfected Xenopus oocytes. A wide range of nervous functions have been described on the basis of studies on various parts of the cockroach nervous system (Figure 1(a)), and the experiments can be performed on natural models. Biophysical principles of nervous system function in insects are much the same as in mammals. In both groups of animals similar neurotransmitters can be found, although their distribution varies. Thus observations made on the cockroach can, to a large extent, be applied to vertebrates. On the other hand, some arthropod neurotoxins show high selectivity for the insect nervous system, and they are considered potent bioinsecticides. The cockroach model has been largely used for the description of their mode of action. In the following paragraphs, various electrophysiological methods using cockroach preparations will be presented, along with their contribution to the development of toxinology.
Giant Axon and Single-Fiber Double-Oil-Gap Method
The cell body and the dendritic tree of the P. americana giant interneurons, located in the last abdominal ganglion, have long been well identified. They possess unmyelinated, very long (1.5-2 cm), large-diameter (up to 50 μm) axons. The axons, surrounded by glial Schwann cells, can be isolated manually under the microscope (Figure 1(b)), and their activity can be observed using the double-oil-gap method, a refined electrophysiological technique (Figure 2(a)) [2][3][4][5]. Such an axonal preparation exhibits simple bioelectrical properties: a single type of voltage-dependent sodium channel and of delayed-rectifier potassium (Kdr) channel can be found, responsible for generating short (0.5 ms) action potentials. In current-clamp, the double-oil-gap technique permits large (up to 100 mV) axonal action potentials to be evoked (Figure 2(b)).

[Figure 1 caption, displaced in extraction: (a) (5); the giant axon is isolated from one connective between the 4th and the 5th ganglion (2); DUM neurons and cercal nerve-giant interneuron synapses are located in the last abdominal ganglion (3). (b) Isolated giant axon dissected from one connective, accompanied by the second connective, which protects the axon when the preparation is transferred to the experimental chamber; axon diameter ∼50 μm. (c) Isolated DUM neuron.]

Passive properties of the axonal membrane (resistance and capacitance) can be analyzed using longer (e.g., 5 ms) hyperpolarizing pulses (Figure 2(b)(B)); the application of long depolarizing pulses allows delayed rectification to be observed (Figure 2(b)(C)). In the voltage-clamp configuration, depolarizing voltage pulses induce a short, inwardly directed, tetrodotoxin-sensitive sodium current and a delayed, non-inactivating outward potassium current (Figure 2(d)), most sensitive to 3,4-diaminopyridine [6]; hyperpolarizing pulses are used to record the leak current. At a holding potential of −60 mV, about 50% of the sodium current is inactivated; this inactivation is completely removed when the holding potential reaches −80 mV and, in these conditions, a larger current can be observed, similar to what has been reported by Pichon [2] (Figure 2(e)). The double-oil-gap method has several advantages: (1) the recordings are stable and long-lasting (up to 1 hour), (2) various experimental protocols can be applied in current-clamp as well as in voltage-clamp, (3) the quantity of the tested molecules can be very small (e.g., 0.2 mL of a 10^−7 M solution in the case of a highly active substance), and (4) it is inexpensive. However, the preparation of an isolated axon is a sophisticated procedure and requires much experience.
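The voltage dependence of sodium-current availability described above can be illustrated with a toy calculation. This is a sketch only: it assumes a standard Boltzmann steady-state inactivation curve, with a half-inactivation voltage and slope chosen hypothetically to reproduce the ~50% availability at −60 mV and the near-complete removal of inactivation at −80 mV reported for the giant axon.

```python
import math

def h_inf(v_mv, v_half=-60.0, k=5.0):
    """Boltzmann steady-state inactivation: fraction of sodium channels
    available at holding potential v_mv. v_half and k are hypothetical
    values chosen to mimic the axonal behaviour described in the text."""
    return 1.0 / (1.0 + math.exp((v_mv - v_half) / k))

for hp in (-60.0, -80.0):
    print(f"HP = {hp:5.0f} mV -> available fraction ~ {h_inf(hp):.2f}")
# HP =   -60 mV -> available fraction ~ 0.50
# HP =   -80 mV -> available fraction ~ 0.98
```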
The activity of the giant axon can also be recorded under in situ conditions in the whole nerve cord [7]. In this case, the axon is left in its place in the nerve cord, stays in contact with other axons, and its microenvironment is much less altered than that of the isolated axon. After one connective is desheathed, the giant axons become accessible to microelectrodes. Introducing them allows the exact axonal resting potential to be recorded in its semi-physiological microenvironment. Extracellular stimulation, even performed from a distance of a few millimetres, evokes action potentials similar in size and amplitude to the action potential recorded from the isolated axon (Figure 2(c)); long-pulse stimulation, however, can only accelerate the generation of action potentials. The accessibility of the axon in situ to the tested molecules is much more limited; however, their effect can be considered more physiological under such conditions. The double-oil-gap method has been used in toxinological studies for many years. The long history of anti-insect scorpion toxins started with this technique, when, in the 1970s-80s, the toxins purified from scorpions (Androctonus australis Hector, AaH IT, and Buthotus judaicus, Bj IT) by Professor Eliahu Zlotkin and collaborators from the Hebrew University of Jerusalem, Israel, and the Faculté de Médecine, Marseille, France [8], were tested for the first time on the cockroach isolated giant axon by the electrophysiologist Professor Marcel Pelhate, at Angers University, France [9,10]. Toxicity tests and biochemical studies, especially by means of binding assays, established the specificity of these toxins, which was confirmed by electrophysiological experiments. In subsequent years, the story developed with the application of ever more modern methods, and presently many details of arthropod toxins' selectivity for insects are known [11][12][13]. Their applicability as bioinsecticides is now under consideration [12,14,15].
Scorpion toxins active on sodium channels are divided into two large groups: alpha and beta toxins, modifying mainly the channels' inactivation and activation, respectively [12,13,16,17]. Among alpha toxins, an alpha-like subgroup has been established [18,19]. The insect-selective toxins, depressant and excitatory, are beta toxins. Several toxins related to these groups were tested and their mode of action was determined using the single-fiber oil-gap method, which helped to discriminate between the excitatory and depressant groups of toxins. The most distinctive features of the modification of axonal bioelectrical activity induced by excitatory toxins (AaH IT1 from Androctonus australis Hector and Bj IT1 from Buthotus judaicus [9,10,20]; Bm 32-VI and Bm 33-I from Buthus martensi [21]) are a slight depolarization, a decrease in the threshold for action potential generation (Figure 3(a)), and a repetitive response to current stimulation (Figure 3(a)(C)).

[Figure 2 caption, displaced in extraction [4]: α and β are the lateral (recording and stimulating) electrodes immersed in 180 mM KCl; they contact the cut ends of the axon in the connective and serve as intracellular electrodes. The γ electrode is immersed in physiological saline (with the tested substances) and contacts the extracellular side of the isolated axon. Amplifier 1 is a high-input-impedance, negative-capacitance amplifier; 2, a high-gain differential amplifier; 3, a current-to-voltage converter; 4, an analogue compensator for leakage and for fast and slow capacitive currents. In current-clamp, the γ electrode is grounded.]

The ability of the axon to generate such bursting discharges manifested itself especially when the axon was artificially repolarized or hyperpolarized (Figure 3(a)(D)). The exalted axonal excitability resulted from an increase in sodium current amplitude at negative membrane potentials and from its prolongation under voltage pulses (Figure 3(b)). The analysis of the sodium current revealed a shift in its voltage dependence towards more negative membrane potentials [20]. In toxicity tests, these toxins caused an immediate contraction paralysis of fly larvae and a quick excitatory "knock-down" effect on locusts [8,10], consistent with the observations made in electrophysiological experiments. The name "depressant toxins" comes from the flaccid paralysis these toxins cause in fly larvae. The toxins within this group (Bj IT2 [10]; a toxin from Scorpio maurus palmatus venom [22]; Bot IT3 and Bot IT4 from Buthus occitanus tunetanus [23,24]; BmK ITa and BmK ITb from Buthus martensi Karsch [23]; Lqh IT2 [25]) evidently increase sodium permeability at resting potential, inducing relatively fast depolarization and a distinct decrease in action potential amplitude (Figure 3(c)(A) and (B)), eventually blocking conduction (Figure 3(c)(C)). In voltage-clamp experiments, a lowering of the peak sodium current was observed, along with the development of a constant current at a holding potential equal to the axonal resting potential, that is, −60 mV.
Toxins have also been found whose effects on axonal electrophysiological activity appear intermediate between those of the excitatory and depressant groups, for instance BcTx1 from the East African scorpion Babycurus centrurimorphus [26] or Bot IT2, which, moreover, induced a new, unusual current with slow activation/deactivation kinetics [23,24,27]. Bot IT2 is insect-selective; its binding site is similar to that of the excitatory toxin AaH IT1, but its amino acid sequence resembles those found in depressant toxins [23,24]. A modification of axonal bioelectrical activity similar to that produced by Bot IT2 has been evidenced using a toxin from the venom of the ant Paraponera clavata, also tested on the cockroach isolated axon [28].
The comparative analysis of the electrophysiological effects of several excitatory and depressant toxins suggests that the discrimination between these two groups of toxins is not always easy and is certainly not possible solely on the basis of modifications of bioelectrical activity. Parallel biochemical and structural studies are absolutely required. As mentioned above, these two groups of toxins belong to the very large group of beta toxins that bind to receptor site 4 on the sodium channel [13,14,17].
Toxin VII from the South American scorpion Tityus serrulatus is a typical beta toxin with no selectivity for any group of organisms [29,30]. Its effect on the cockroach axon can also be described as intermediate between those of excitatory and depressant toxins. In the presence of the toxin, a resting depolarization was observed jointly with a tendency to repetitive discharges. The sodium current was activated at more negative potentials than normal; it was prolonged, but the development of the holding current was slower than in the case of depressant toxins [31].
Scorpion alpha toxins bind to receptor site 3 on the sodium channel and inhibit the channel's inactivation [12]. In voltage-clamp experiments, the short sodium current is prolonged during a depolarizing voltage pulse (Figure 4(b)); the potassium current is not modified. In current-clamp, short action potentials are extended and transformed into plateau action potentials (Figure 4(a)). Such effects were observed when the first anti-insect alpha toxin, LqhαIT from Leiurus quinquestriatus hebraeus scorpion venom, was tested on the cockroach isolated axon [32,33]. Later, several new anti-insect toxins with characteristics similar to those of LqhαIT were described, for example Bot IT1 [23,24].
The alpha toxin LqhαIT was also tested on the cockroach giant axon under in situ conditions. The first effect observed after the toxin's application was the repetitive generation of slightly prolonged action potentials in response to short stimulation; such an effect has never been observed in experiments on the isolated giant axon (Figure 4(c)). Later on, a plateau action potential sometimes appeared (Figure 4(d)), but it was never as long in duration as in the case of the isolated axon, even after prolonged application of a high toxin concentration. The influence of the axonal microenvironment created by glial cells and other axons is a very likely factor in this matter. It is also necessary to consider that the experimental conditions may differ from physiological ones to a very variable degree, which in turn may significantly affect the effects of the tested molecules.
Electrophysiological experiments performed on the cockroach axon helped to define a new group of scorpion neurotoxins: the alpha-like toxins. The receptor site of these molecules overlaps with receptor site 3 on the sodium channel, where alpha toxins bind [18,19]. Alpha-like toxins do not show selectivity towards insect or mammalian sodium channels. The pharmacological characteristics of alpha-like toxins are similar to, but not identical with, those of alpha neurotoxins. They induce plateau potentials but, in addition, progressively depolarize the axonal membrane. They inhibit the inactivation of the sodium current, though to a lesser degree than LqhαIT. Moreover, a tail current appears, which increases when depolarizing pulses are applied repeatedly (every 5 s) while, at the same time, the peak sodium current decreases [18,19].
As the science progressed, modern techniques of molecular biology were applied and a "new era" of scorpion anti-insect toxin research was launched. One of the most noteworthy approaches to this issue so far has been presented by Professor Michael Gurevitz and Dr. Dalia Gordon from the Department of Molecular Biology and Ecology of Plants, Tel Aviv University, Israel. They and their collaborators isolated the genetic material responsible for the synthesis of toxins from scorpion venom, determined the cDNA sequences, and developed artificial expression systems [33][34][35][36]. New, recombinant toxins, LqhαITr [36], Lqh IT2 [37], and Bj-xtrIT [38], tested on the cockroach isolated giant axon, showed much resemblance in their mode of action to their native counterparts. Such studies provided an extremely valuable conclusion: the activity of properly prepared recombinant toxins is the same as that of the native ones. This was a huge step in the field of toxinology. Nowadays, recombinant toxins are accessible in greater numbers than the native molecules, and the introduction of further mutations is feasible. It is now pivotal to study the molecular basis of anti-insect selectivity, as well as anti-mammalian specificity.
Toxicity tests, binding studies, and electrophysiological recordings, along with molecular modeling, gene cloning, and site-directed mutagenesis, create an opportunity to investigate the molecular basis of toxin activity. Experiments performed on alpha-like toxins from Buthus martensi Karsch using the cockroach axon showed that the mutation of a single amino acid can completely change a toxin's mode of action [39]. The phrase "a story of one amino acid" may well summarize the long history of multi-approach studies on depressant toxins from Leiurus quinquestriatus hebraeus.
In experiments on the isolated axon of the cockroach, it has been shown that the replacement of the amino acid in position 58 from Asn to Asp can completely change the toxin's mode of action. At the same time, its affinity for its sodium channel target and its toxicity decreased. The conclusion from these studies was as follows: Asn in position 58 plays a mandatory role in the activity of depressant toxins [13,40].
The isolated axon of the cockroach was also used in tests on toxins obtained from spider venom. In most cases they prolonged the duration of the action potential; however, a plateau action potential could only be generated in the presence of a potassium channel blocker. Sodium current inactivation was inhibited in the presence of the toxin, but never to such a degree as in the case of scorpion alpha toxins [41][42][43]. Post-application of LqhαIT increased the late sodium current recorded during a depolarizing pulse (Stankiewicz, personal observations).
DUM Neurons in Patch-Clamp and Microelectrode Techniques
In 1989 and 1990, two articles were published describing the application of the patch-clamp technique to the study of the activity of neurosecretory dorsal unpaired median (DUM, Figure 1(c)) neurons from the terminal ganglion of the Periplaneta americana nerve cord [44,45]. The technique of single DUM cell isolation and of patch-clamp recording on it was developed in the Laboratory of Neurophysiology at Angers University, France, by Professor Bruno Lapied. DUM neurons possess an endogenous pacemaker activity which depends on a wide range of ionic membrane conductances [46]. Various types of receptors provide very precise regulation of the neurons' spontaneous activity and neurosecretory function. DUM neurons represent an outstanding model for studies on intracellular processes [47][48][49] as well as for pharmacological tests [50,51]. They have also been used in toxinological experiments. Several experiments on DUM cells were performed simultaneously with observations on the isolated axon. Although DUM neurons represent a much more complex model of bioelectrical activity than the axon, the observations were similar and the conclusions compatible [25,52]. Background sodium channels (bNa) in DUM neurons were examined using the cell-attached patch-clamp technique [53]. They appeared to be a new target for the LqhαIT toxin, even more sensitive than the classical voltage-dependent Na channels in this preparation. The activity of bNa channels is limited at a membrane potential of −50 mV (the DUM neuron resting potential) and can be "liberated" under LqhαIT action or at very negative (−90 mV) membrane potentials. In the presence of the toxin (10^−8 M), the unclustered, brief single-channel openings seen in control conditions (at −50 mV) were transformed into large, multistep-amplitude bursting activity separated by periods of silence. The open probability of the channels increased about 20-fold. Such channel activity corresponded well to the transformation observed in DUM neurons: from regular, beating spontaneous activity to rhythmic bursting [53,54].
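The ~20-fold change in open probability quoted above is, in essence, a dwell-time statistic. A toy estimator is sketched below; the threshold-crossing idealization and the synthetic trace are placeholders, not the analysis pipeline of the cited studies.

```python
import numpy as np

def open_probability(current_pA, threshold_pA):
    """Crude single-channel Po: fraction of samples in which the unitary
    current exceeds an idealization threshold."""
    trace = np.asarray(current_pA)
    return float((trace > threshold_pA).mean())

# Synthetic example: mostly closed (0 pA) with brief ~2 pA openings
rng = np.random.default_rng(0)
trace = np.where(rng.random(10_000) < 0.03, 2.0, 0.0) + rng.normal(0, 0.1, 10_000)
print(f"Po ~ {open_probability(trace, threshold_pA=1.0):.3f}")  # ~0.03
```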
Background sodium channels in DUM neurons are also a target for the beta toxin (toxin VII) from the venom of the Brazilian scorpion Tityus serrulatus [55]. In the toxin's presence, single-amplitude openings of the channels were replaced by events with several distinct subconductance current levels. The channel open probability rose about 50-fold, as did the open-time duration; additionally, events of very long duration emerged. Classical voltage-dependent sodium channels were also modified in a manner typical for beta toxins. Experiments performed with calcium imaging demonstrated a rise in the intracellular calcium concentration in the presence of toxin VII. A very complex study of this phenomenon evidenced the participation of high-voltage-activated N-type calcium channels and the activation of non-capacitative calcium entry (NCCE). An important conclusion drawn from these studies is that the reduction of NCCE activity may prove a useful strategy in the development of drugs for anti-envenoming therapy [55].
The activity of DUM neurons can also be observed under in situ conditions using the microelectrode technique. The terminal abdominal ganglion is separated from the nerve cord, desheathed, and fixed. The neurons remain in their place in the ganglion, surrounded by glial cells and other neurons. DUM neurons are recognized by their spontaneous action potentials when the microelectrode enters the cell. The pattern of neuronal activity is similar to the recordings obtained on isolated cells; however, the effect of toxins is not exactly the same. The application of the alpha toxin LqhαIT (10^−5 and 10^−4 M) never induced plateau action potentials in situ; only a transformation from regular firing into bursting activity was observed, and the bursts were generated from a level of slight depolarization (Figure 5(c)). In the ganglion, DUM cells are mainly under the influence of glial cells, as well as of other neurons; thus, the ionic microenvironment remains partially undisturbed. In order to investigate a toxin's mechanism of action, isolated neurons should by all means be used; however, in situ experiments may in some cases reveal more about the physiological effect of toxic molecules. The interactions between two neurotoxins observed on isolated and in situ DUM cells are also different (Stankiewicz et al., in preparation).
Cercal Nerve-Giant Interneuron Synapse and Single-Fiber Oil-Gap Technique
Information about mechanical stimulation arising in the cockroach cercal sensory neurons is transmitted to the giant interneurons. Cercal nerves X and XI are connected with the giant interneuron dendritic tree by inhibitory and excitatory synapses, respectively, located in the last abdominal ganglion. The activity of these synapses can be observed using the single-fiber oil-gap technique (Figure 6(a)), developed by Professor Jean-Jacques Callec at Rennes University [56,57] and later improved and applied for several years by Professor Bernard Hue from the Laboratory of Neurophysiology at Angers University, France [58][59][60]. This method allows the extracellular recording of spontaneous and evoked excitatory postsynaptic potentials (EPSPs) as well as of inhibitory postsynaptic potentials (IPSPs) [57,60,61]. Nerve XI is classified as excitatory, and its fibers activate nicotinic receptors on the postsynaptic membrane. Unitary excitatory postsynaptic potentials (uEPSPs) result from the stimulation of single hair mechanoreceptors covering the cercus; composite postsynaptic potentials (cEPSPs) are the effect of electrical stimulation of cercal nerve XI. On the presynaptic face of such synapses, muscarinic receptors are present, whose role is to regulate acetylcholine release by negative feedback [62]. Muscarinic receptors have also been found on the postsynaptic membrane [63]. Stimulation of nerve X evokes inhibitory postsynaptic potentials (IPSPs) via activation of GABA receptors [61]. The single-fiber oil-gap method allows long-term experiments (even up to 12 hours), and well-adjusted perfusion of the ganglion makes it possible to obtain reliable dose-response curves. The iontophoretic application of acetylcholine or carbachol to the vicinity of the postsynaptic membrane permits discrimination between pre- and postsynaptic actions of the tested drugs. The synaptic preparation from the cockroach represents a remarkably useful model for pharmacological studies; however, obtaining a high-quality, stable preparation requires much experience.
The cockroach synaptic model has made it possible to test toxins that are active on cholinergic receptors. Snake venom is a well-known source of such toxins. α-Bungarotoxin and κ-bungarotoxin completely blocked uEPSPs and cEPSPs at a concentration of 10^−7 M. All the experimental protocols performed indicated that these toxins block neuronal postsynaptic nicotinic receptors, α-bungarotoxin being the more effective [64,65]. The scorpion alpha toxin LqhαIT induced a substantial increase in postsynaptic spontaneous activity, that is, in the frequency and amplitude of uEPSPs (Figures 6(b) and 6(c)), as well as in the amplitude and duration of cEPSPs (Figure 6(d)). Higher presynaptic activity results in increased release of acetylcholine; this, however, in turn activates the negative feedback via presynaptic muscarinic receptors, and after several minutes of the toxin's action a decrease in postsynaptic events can be observed (Stankiewicz, personal observations).
Experiments performed using the single-fiber oil-gap method with the venom of the scolopendra, Scolopendra sp., identified components which induced depolarization of the cockroach postsynaptic membrane and a decrease in EPSP amplitude. Such effects were limited after pretreatment with atropine. Together with studies performed on Drosophila muscarinic receptors (Dm1) expressed in Xenopus oocytes, this evidenced that scolopendra venom comprises a component acting as an agonist on insect muscarinic receptors [66].
Lastly, experiments performed on the cockroach synaptic preparation contributed to studies which explained the mechanism of the synergistic interaction between chemical neurotoxins: pyrethroids (permethrin) and carbamates (propoxur) [67]. The conclusions from these studies are important for practical implementation in the field of crop protection.
Synaptosomal Preparation from Cockroach Nerve Cord
The nerve cords of the cockroach (Periplaneta americana) have also been used to prepare synaptosomes, which are functional vesicles containing the nerve terminals. The synaptosomal preparation is easy to obtain and has been applied in several pharmacological tests. As an illustration, many radiolabeled toxins have been successfully characterized in binding assays with synaptosomes [24,42,43,68]. In addition, photoaffinity labeling studies using 125I-TsVII as a ligand in synaptosomes of the cockroach nerve cord indicated for the first time the molecular weight of the scorpion toxin receptor in the insect nervous system, which was suggested to be associated with voltage-sensitive Na+ channels [68]. More recently, this preparation has also been successfully applied in binding studies involving radiolabelled spider toxins acting on sodium channels (De Lima, in preparation). Results obtained from electrophysiological experiments are often complemented using synaptosomal preparations derived from the cockroach nervous system.
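The binding assays mentioned above are typically quantified with a one-site saturation model. The fragment below is a generic sketch of that analysis, not the protocol of the cited studies; the Kd and Bmax values are invented for illustration.

```python
def one_site_binding(ligand_nM, bmax=120.0, kd=2.5):
    """One-site specific binding: B = Bmax * [L] / (Kd + [L]).
    bmax (fmol/mg protein) and kd (nM) are hypothetical values."""
    return bmax * ligand_nM / (kd + ligand_nM)

for conc in (0.5, 2.5, 10.0):
    print(f"[L] = {conc:4.1f} nM -> bound ~ {one_site_binding(conc):5.1f} fmol/mg")
# At [L] = Kd = 2.5 nM, bound is half of Bmax, as the model requires.
```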
Summary
The nervous system of the cockroach (Periplaneta americana) can be recognized as a remarkably useful model preparation in multiple electrophysiological techniques, allowing pharmacological tests to be performed at diverse levels of nervous system organization. Confronting the results obtained reveals that a toxin may affect the activity of the same nervous structure with diverse effects, depending on the experimental conditions; this conclusion should be taken into account prior to any definite statement concerning the mode of action of any toxin.
"Biology",
"Medicine"
] |
Synchrotron Radiation Refraction-Contrast Computed Tomography Based on X-ray Dark-Field Imaging Optics of Pulmonary Malignancy: Comparison with Pathologic Examination
Simple Summary
The imaging method called refraction-contrast computed tomography using synchrotron radiation can achieve detailed images comparable to those from a microscope. This study examined its ability to recognize lung cancer. We took pictures of lung cancer samples in Japan using this imaging method. These pictures showed the insides of the samples precisely, making it easy to distinguish healthy tissue from cancerous spots. They even allowed us to tell the difference between a main lung cancer and one that started somewhere else, like the colon. Fundamentally, this method could be a way to check for lung cancer without cutting into tissue. However, before doctors can use it, the machines involved need some major changes.

Abstract
Refraction-contrast computed tomography based on X-ray dark-field imaging (XDFI) using synchrotron radiation (SR) has shown superior resolution compared to conventional absorption-based methods and is often comparable to pathologic examination under light microscopy. This study aimed to investigate the potential of the XDFI technique for clinical application in lung cancer diagnosis. Two types of lung specimens, primary and secondary malignancies, were investigated using an XDFI optic system at beamline BL14B of the High-Energy Accelerator Research Organization Photon Factory, Tsukuba, Japan. Three-dimensional reconstruction and segmentation were performed on each specimen. Refraction-contrast computed tomographic images were compared with those obtained from pathological examinations. Pulmonary microstructures including arterioles, venules, bronchioles, alveolar sacs, and interalveolar septa were identified in the SR images. Malignant lesions could be distinguished from the borders of normal structures. The lepidic pattern could be distinguished from the invasive component of the same primary lung adenocarcinoma. The SR images of secondary lung adenocarcinomas of colorectal origin were distinct from those of primary lung adenocarcinomas. Refraction-contrast images based on XDFI optics of lung tissues correlated well with those of pathological examinations under light microscopy. This imaging method may have potential for use in lung cancer diagnosis without tissue damage. Considerable equipment modifications are crucial before this technique can move from the laboratory to the hospital in the near future.
Introduction
The pulmonary parenchyma has a unique composition and includes pulmonary arterioles, venules, bronchioles, alveolar sacs, and interalveolar septa [1]. In living organisms, most pulmonary spaces are filled with air, which makes the viscera and vasculature appear light compared to the dark background in traditional X-ray roentgenography [2].
At the onset of certain pulmonary diseases, the interstitium and alveolar sac suffer from distortion as the cells in the alveolar septum or interstitial spaces proliferate inappropriately or produce secretions, often leading to consolidative lesions. Early malignancies arising from fine alveolar structures appear on imaging as ground-glass opacity (GGO) [3]. GGO resembles a small hazy fog that does not involve other microstructures, including bronchioles and terminal venules.
These minute infiltrative changes in the interstitial area of normal lung tissue can be benign [4], inflammatory, and temporary; however, such lesions are often considered to represent early-stage lung cancer, especially when they persist for >3 months [5]. Conventional chest tomography can detect and distinguish an abnormal from a normal parenchyma; however, pathological examination with sequential histology is required for diagnostic confirmation [6].
Although the histopathological examination of lung tissue under light microscopy has been the traditional method for definitive diagnosis, resected specimens must be divided into serial sections after fixation with formalin or alcohol because of the limited penetration of optical microscopy, creating irreversible damage to the tissue sample. The volume shrinkage after fixation is estimated to be 3-6 vol% in formalin and up to 20 vol% in alcohol. Only two-dimensional pathological images can be observed after histological processing [7], and the actual dimensions and shapes of the lesions are unknown.
Remarkable advances have been made in the structural analysis of alveolar sacs by using synchrotron radiation tomography of human lung tissue [8][9][10]. Precise analysis was performed, including three-dimensional (3D) analysis of a normal alveolar sac. The specimens used for the experiments were prepared very carefully, maintaining the in vivo morphology during harvest and successfully reconstructing the 3D microstructure; however, the use of a fixative and a slight volume change were unavoidable.
Refraction-contrast computed tomography based on X-ray dark-field imaging (XDFI) using synchrotron radiation can be beneficial for understanding tissue organization, with 3D reconstruction images supported by higher contrast and spatial resolution than conventional X-ray imaging, comparable to pathological examination [11][12][13]. The monochromatic and highly parallel beam characteristics of synchrotron X-rays produce images that can distinguish soft tissues with similar attenuation coefficients. This imaging method is efficient in soft-tissue discrimination, including articular cartilage, human breast, eyeballs, and coronary arteries [14][15][16][17].
We have accumulated tomographic images of various pulmonary lesions using this method since 2019 at the Photon Factory BL14B of the High-Energy Accelerator Research Organization in Tsukuba, Japan [18]. In this study, synchrotron radiation tomographic images of two types of human lung cancer (two primary and two secondary adenocarcinoma samples) were acquired, reconstructed in 3D, and compared with images from pathological examinations, thereby identifying potential implications for clinical applications of the XDFI method.
The work of our experiments can be described as follows: (1) the acquisition of SR images using XDFI optics from human lung adenocarcinomas, either primary or secondary; (2) the specification of secondary adenocarcinoma: the acquisition of SR tomographic images of secondary adenocarcinoma originating from rectal cancer; (3) comparison with pathology: we compared these images with pathological examinations; and (4) the exploration of potential: we explored the potential of refraction-contrast SR CT images in discriminating between different types of pulmonary malignancies.
Recently, we proposed new imaging methods for improving spatial resolution by placing a scintillator in close contact with a Laue angle analyzer (LAA), thus eliminating the distance required for X-ray interference and wavefront separation. Further experiments with lung cancer specimens will be conducted using this method [19].
Tissue Preparation
Lung tissues for imaging acquisition were donated by patients who had undergone surgery for the treatment of malignancy at Korea University Anam Hospital; written informed consent was obtained from all patients. Specimens were harvested under the supervision of a pathologist specializing in pulmonology (J.H. Lee) after all necessary diagnostic procedures were completed. The lung specimens used in this study are listed in Table 1. This study was approved by the Institutional Review Board of the Korea University Anam Hospital (IRB number: 2019AN0242). The usual pathological procedure included airway inflation, fixation with 10% neutral-buffered formalin solution, gross examination, mapping of the cancer lesions, serial sectioning in 3 mm increments, paraffin block embedding, hematoxylin and eosin staining, and immunohistochemistry. Specimens for SR imaging were harvested, placed in a 1% agarose gel solution for transportation, and brought to the BL14B Photon Factory of the High-Energy Accelerator Research Organization (Tsukuba, Japan).
X-ray Source and Experimental Setup
The refraction-contrast XDFI-based computed tomography (CT) imaging system was constructed on beamline BL14B at the High-Energy Accelerator Research Organization Photon Factory in Tsukuba, Ibaraki, Japan. In this system, synchrotron radiation generated from a 5-Tesla superconducting vertical wiggler of BL14 was used as the incident X-ray source. The X-ray beam was monochromatized using a double-crystal monochromator located outside the beamline and injected into the imaging system inside the beamline.
The imaging system consisted of an asymmetrically cut monochromator collimator (AMC) made from an asymmetric Si single crystal, denoted as (A) in Figure 1; an acrylic cylindrical filter (B); a sample rotation stage (C); a Laue-type angle analyzer (LAA) made from thin Si single-crystal plates (D); an X-ray scintillator (E); lens optics (F); and an X-ray camera (G). Because the X-ray beam was generated by a vertical wiggler, the polarization direction of the beam was perpendicular to the ground, allowing the crystal optics in the imaging system to be placed horizontal to the ground. The experimental setup is illustrated in Figure 1.
The use of an AMC leads to an increase in the width of the X-ray beam and a reduction in the divergence angle of the diffracted beam compared with that of the incident beam. These effects can be controlled by the b factor, b = sin(θB − α)/sin(θB + α), of the AMC, where θB and α are the Bragg angle and the asymmetry angle, respectively. The X-ray beam propagating through the sample was split into two directions: forward diffraction and diffraction at the LAA. By placing an X-ray camera in the forward diffraction direction, a refraction-contrast image expressing the refraction angle of the X-ray beam generated in the sample could be obtained. The acrylic cylindrical filter was an acrylic block with a hole of the same diameter as that of the specimen container, which flattens the absorption contrast produced by the specimen container, thereby significantly compensating for the large refractive effect at the cylinder limb. The specimen was fixed in the cylindrical container using agarose gel with an absorption coefficient close to that of the specimen, and the absorption contrast produced by the specimen was negligible.
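The role of the b factor can be made concrete with a small numeric sketch using the formula quoted above. The angle values below are purely illustrative placeholders, not the actual BL14B crystal parameters.

```python
import math

def b_factor(theta_b_deg, alpha_deg):
    """b factor of an asymmetrically cut crystal:
    b = sin(theta_B - alpha) / sin(theta_B + alpha)."""
    tb, a = math.radians(theta_b_deg), math.radians(alpha_deg)
    return math.sin(tb - a) / math.sin(tb + a)

# Hypothetical Bragg and asymmetry angles, for illustration only:
theta_b, alpha = 9.5, 8.0  # degrees
b = b_factor(theta_b, alpha)
print(f"b ~ {b:.3f}")
print(f"beam width grows by roughly 1/b ~ {1/b:.1f}x; divergence shrinks accordingly")
```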
Acquisition and Reconstruction of Images
The specimen was placed on a sample stage, controlled by a step motor, in the experimental hutch. Lung specimens were placed in a 2 cm diameter cylinder with 1% agarose gel to prevent movement during image acquisition (Figure 2). Table 2 lists the imaging conditions used in this study. Among the typical parameters, the X-ray energy was 19.8 keV, the fields of view of the X-rays after diffraction by the AMC were approximately 23 mm (horizontal) and 21 mm (vertical), and the horizontal divergence of the beam was approximately 0.5 arcsec. The thickness of the LAA was 166 µm. The LAA was propped against a mirror-polished Si plate to prevent distortion. The pixel size of the X-ray camera was 5.5 µm. The spatial resolution of the system was approximately 10 µm.
Comparison with Pathologic Examinations
After image acquisition, the specimens were transported to the hospital and embedded in paraffin blocks for pathological examination. Pathological slides from the paraffin blocks were stained with hematoxylin and eosin and observed under a light microscope by a specialized pathologist. Images obtained under the light microscope were compared with those obtained using refraction-contrast CT. Table 3 compares the imaging conditions among examinations conducted using light microscopy, µ-CT, and synchrotron radiation based on XDFI optics. Our imaging method can provide high-resolution volumetric images of lung specimens. We obtained 3D tissue images using the 3D image processing software Amira-Avizo (version 2020.3), developed by Thermo Fisher Scientific Inc. (Visualization Sciences Group, Burlington, MA, USA), and 3D-rendered tomographic images were produced using a graphics processing unit.
Images from Primary Lung Adenocarcinoma
Two pulmonary adenocarcinoma specimens were used for imaging. One was a primary lung cancer without a lepidic pattern (specimen #1, Figure 3), and the other had a lepidic pattern (specimen #2, Figure 4).
Lung Adenocarcinoma without Lepidic Pattern
In the images of specimen #1, a similar consolidative lesion can be observed in both the pathological examination under light microscopy (×10 magnification, Figure 3a) and SR CT (Figure 3b). The spiculate margin between the malignant lesion and the normal area is clearly demarcated. The 3D volume rendering and segmentation images (Figure 3c,d) enabled structural observation of the cancer (Supplementary Videos S1 and S2).
Lung Adenocarcinoma with a Lepidic Pattern
Part of specimen #2 contained a lepidic pattern, which is a specific pathological finding of early-stage adenocarcinoma. In this area, malignant changes in the nuclei (irregular shape, multiple nucleoli, and enlarged nucleoli) were observed; however, the framework of the alveolar structure could still be identified. Pathological examination revealed a thickened alveolar wall (Figure 4a, inner circle with blue dots); a similar shape was observed in the refraction-contrast CT image (Figure 4b). In the 3D volume rendering and segmentation images, the area of the lepidic pattern is identifiable as fog-like haziness containing the alveolar microstructure (Figure 4c,d). An irregular spiculate margin around the cancerous area can also be observed, similar to that observed in specimen #1. The 3D structure of this tumor is well expressed in the volume-rendered images (Supplementary Videos S3 and S4).
Images from Secondary Lung Adenocarcinoma
Two specimens were harvested from patients with secondary lung adenocarcinoma (of colorectal origin). The margin between the malignant lesion and the normal area was round compared to that of the primary lesions. Irregular spikes were rarely observed in the images of the two lung tissues (specimen #3, Figure 5; specimen #4, Figure 6). Secretions from cancer cells (the pinkish lesion in the pathological examination, Figure 5a) appeared as multiple whitish small dots inside the tumor in the refraction-contrast CT image (Figure 5b), which was also observed in both the 3D rendering and segmentation images (Figure 5c,d).
In the images of specimen #4, multiple pore areas containing small malignant lesions can be observed (red arrows in Figure 6a). Similar findings can be observed in the 3D images (Figure 6c,d, blue and green arrows); however, the malignant area was not clearly demarcated because the section was thought to lie inside a pore that melted away during the preparation process. The lepidic pattern observed in one of the primary lung adenocarcinomas was not observed in any of the secondary lung adenocarcinomas.
Three-Dimensional Volume Estimation
We performed 3D volume reconstruction using the Amira-Avizo software (version 2020.3, Visualization Sciences Group, Burlington, MA, USA). Figures 3c, 4c, 5c and 6c show one cross-section in the 3D volume-rendering images, and Figures 3d, 4d, 5d and 6d show segmentation using this software. Two oblique views of the 3D images were compared for each specimen to estimate the tumor volume (Figure 7). The exact depth and area occupied by the specimens were identified, which was not possible through pathological examination under a light microscope.
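As a rough sketch of how such a volume estimate can be derived from a reconstructed stack, the fragment below counts segmented voxels and multiplies by the voxel volume. It is illustrative only: the mask and the assumption of an isotropic ~5.5 µm voxel pitch (the camera pixel size quoted above) are placeholders, not the actual Amira-Avizo pipeline.

```python
import numpy as np

def tumor_volume_mm3(mask, voxel_pitch_um=5.5):
    """Estimate lesion volume from a boolean 3D segmentation mask,
    assuming isotropic voxels of voxel_pitch_um micrometres."""
    voxel_mm3 = (voxel_pitch_um * 1e-3) ** 3  # one voxel in mm^3
    return int(mask.sum()) * voxel_mm3

# Toy example: a 200x200x200 stack with a segmented 50x50x50 block
mask = np.zeros((200, 200, 200), dtype=bool)
mask[75:125, 75:125, 75:125] = True
print(f"estimated volume: {tumor_volume_mm3(mask):.4f} mm^3")
```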
Discussion
In our study, we acquired refraction-contrast SR CT images of two different types of lung adenocarcinoma: primary and secondary (metastatic cancer originating from rectal cancer). Through comparison with pathological examinations, we explored the potential of refraction-contrast SR CT images for discriminating between different types of pulmonary malignancies.
In the images of primary lung cancer (specimens #1 and #2, Figures 3 and 4), the spiculate margin between the malignant lesion and the normal area was identified; this is one of the specific features used to distinguish malignant pulmonary lesions in conventional CT scans. This can be observed more clearly in the volume-rendering videos (Supplementary Videos S1-S4).
Fog-like haziness, called GGO in conventional CT, was identified in the refraction-contrast SR CT images of specimen #2. This area was demarcated from the normal lung tissue; however, it contained identifiable alveolar structures (Figure 4, circles with yellow dots). The GGO was thought to be due to the lepidic pattern seen on pathological examination. The term lepidic refers to the tumor condition of neoplastic cells proliferating along the surface of intact alveolar walls without stromal or vascular invasion [20].
A solitary lepidic pattern without a definite invasive component (usually shown as consolidation on X-ray imaging) is defined as adenocarcinoma in situ, which is a very early-stage lung cancer. It is less invasive; therefore, the treatment strategy tends to be less extensive [5,21]. A mixture of lepidic and other invasive subtypes is often observed in lung adenocarcinoma [6,22]. The proportions of the lepidic and other invasive components can vary between cases. For example, specimen #1 showed a cancer with only invasive components.
When GGO is found on chest CT, calculation of its proportion is important because this numerical value can change the precise course of treatment. More than 50% involvement of GGO implies a better prognosis [23][24][25]; therefore, less extensive surgical excision can be considered compared to cases with less GGO or only invasive components [5,26,27].
The calculation was performed using two-dimensional findings on conventional chest CT, dividing the largest diameter of the consolidative lesion by the whole tumor size, including GGO, in the cross-section where the largest whole-tumor diameter was observed [28]. Because this information was obtained from planar data, it does not reflect the actual percentage of GGO inside the entire tumor. Hence, 3D volume-rendering images from synchrotron XDFI CT can provide a solution for acquiring substantial measurements.
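To make the contrast between the planar and volumetric measures explicit, the sketch below computes both from a hypothetical segmentation: the conventional two-dimensional consolidation-to-tumor ratio on the slice with the largest tumor extent, and the true volumetric solid fraction. The synthetic spherical masks are placeholders; real measurements would of course come from the reconstructed SR CT data.

```python
import numpy as np

def widest_slice(tumor):
    """Index of the axial slice with the largest axis-aligned tumor extent
    (a simple proxy for the 'largest whole-tumor diameter' slice)."""
    extents = [tumor[z].any(axis=0).sum() for z in range(tumor.shape[0])]
    return int(np.argmax(extents))

def ctr_2d(tumor, solid):
    """Planar consolidation-to-tumor ratio measured on the widest slice."""
    z = widest_slice(tumor)
    return solid[z].any(axis=0).sum() / tumor[z].any(axis=0).sum()

def solid_fraction_3d(tumor, solid):
    """Volumetric solid fraction: solid voxels over all tumor voxels."""
    return solid.sum() / tumor.sum()

# Synthetic demo: radius-30 spherical tumor with a radius-15 solid core
zz, yy, xx = np.ogrid[:100, :100, :100]
r2 = (zz - 50) ** 2 + (yy - 50) ** 2 + (xx - 50) ** 2
tumor = r2 <= 30 ** 2
solid = r2 <= 15 ** 2
print(f"2D CTR ~ {ctr_2d(tumor, solid):.2f}")                        # ~0.51
print(f"3D solid fraction ~ {solid_fraction_3d(tumor, solid):.2f}")  # ~0.13, ideal (15/30)^3 = 0.125
```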
XDFI images from specimen #2 showed good contrast between the solid component, the GGO in the tumor area, and the normal lung parenchyma, indicating good image quality. However, in the case of specimen #1, the refraction-contrast SR images did not display the same level of contrast. In these images, the boundary of the solid part was well demarcated; however, the normal parenchyma seemed crowded compared with that of specimen #2. We suspected that inflation had not been properly performed before fixation. Because the lungs collapse before resection during the surgical procedure, airway instillation is essential to comprehend the intact status of the pulmonary lesions. Our institute follows the inflation fixation technique described by Hausmann [29], which is effective and simple. However, errors can occur during the detailed processing of individual samples.
An irregular spiculate margin around the tumor border, eccentric calcification, air bronchograms or bubble-like lucencies inside the tumor, and pleural involvement of pulmonary nodules are typical findings suggesting primary malignancy on conventional chest CT [30,31]. The SR images of the previous two specimens showed all these features and enabled volumetric assessment.
Specimens #3 and #4 displayed another type of lung malignancy, namely secondary or metastatic lung cancer. We chose adenocarcinoma of colorectal origin because it is one of the most common secondary adenocarcinomas of the lung, and its differential diagnosis is often important for clinical decision making. Characteristic pathological findings of lung adenocarcinoma of colorectal origin include relatively smooth, lobulated margins, without air bronchograms or bubble-like lucencies inside the tumor area, and occasional cavitation [31,32].
Reconstructed images from specimen #3 revealed a well-demarcated, round, non-spiculate margin between the malignant and normal lesions (Figure 5b-d). The overall 3D morphology and the depth in the lateral view were observed more distinctly in the volume-rendered images (Supplementary Videos S5 and S6). The bubble-like lucencies identified in specimen #2 were not observed in these images. Sporadic calcification was expressed as multiple small whitish irregular dots, which were thought to be traces of secretions from glandular cells in the metastatic tumor and were observed as pinkish lesions on pathological examination (Figure 5a).
Images from specimen #4 contained large pore lesions with a small proportion of malignancy. We suspected that this cavitation had been filled with secretions, probably produced by malignant glandular cells in the tumor, which were washed away during the preparation process. Approximately one-fifth of metastatic lung cancers of colorectal origin show cavitary lesions on both imaging and pathological examination.
Refraction-contrast SR has proven to be an effective method for soft-tissue tomographic imaging. Our study demonstrated the possibility of differential diagnosis of pulmonary malignancies using this imaging modality. It can successfully yield the imaging characteristics of each pulmonary malignancy at a higher resolution. The reconstructed images enable the estimation of volumetric information, including the stereotactic GGO proportion. Clinically significant lesions can be highlighted, magnified, and reconstructed into 3D images, which is helpful for treatment decision making.
For the diagnosis of early-stage lung cancer, the identification of GGO lesions using X-ray imaging is invaluable. Malignant lesions containing GGO are frequently expected to exhibit lepidic patterns: cancerous, but non-invasive in character [33,34]. The presence of such patterns indicates less invasive behavior, making limited resection options (wedge resection or segmentectomy) feasible [34,35]. A higher proportion of GGO is generally associated with better outcomes. The current calculation method for the GGO proportion comprises only two-dimensional information from one section, which could lead to misleading clinical decisions.
To assess the possibility of GGO detection using refraction-contrast SR CT, we performed 3D reconstruction imaging using the Amira-Avizo software (version 2020.3, Visualization Sciences Group, Burlington, MA, USA) on the images of specimen #2. Images from this software showed a more clearly demarcated border between the malignant and normal lung parenchyma, as well as between the GGO and the consolidative lesion inside the tumor (Figure S1).
When a pulmonary nodule found on chest CT is small (usually less than 3 cm) and the patient has a history of extra-thoracic cancer, discriminating between primary and secondary malignancies is important for further clinical decisions [36][37][38]. Surgical excision is considered when the nodule is highly suspicious for secondary lung cancer in these patients, and the diagnosis is made after surgical resection [39,40].
Histological examination is performed either by frozen-section analysis during surgery or on the completely prepared specimen after surgery. Performing the analysis after surgery ensures accurate conclusions; however, the patient would have to undergo another operation if the nodule were determined to be a primary malignancy [41]. Frozen analysis can alleviate the patient's burden by reducing the chance of reoperation; however, there is a possibility of inconsistency between the frozen and permanent diagnoses. The reported concordance between frozen and permanent results ranges from 70 to 95% [42,43]. Further refinement and application of refraction-contrast SR images based on XDFI optics to 3D virtual histology could provide solutions to these problems and assist in histological decision making in the future.
However, this study had several limitations. First, the current equipment required for synchrotron imaging is too large to be installed in hospital buildings. Constructing a hospital next to a synchrotron facility, or vice versa, would be a possible alternative; however, there would be problems related to specimen delivery. Transportation between the two facilities would be significantly time-consuming, probably exceeding the interval necessary for frozen analysis. Considerable costs for development and adaptation are required for the clinical use of this equipment, and these expenses should be taken into account before its practical implementation.
Moreover, there are pauses in imaging acquisition, reconstruction, and analysis.The processes of 3D volume rendering and segmentation are separate events.Occasionally, these processes take several days to complete, complicating their clinical application.Our method requires approximately 3 h to image each specimen.This should be reduced to less than the time interval required for frozen section analysis to have a comparable clinical value.
Obtaining an acceptable specimen size is a problem that must be addressed.The average size of our samples was 2 cm horizontally and 4 cm vertically, with a depth of 3 mm.Theoretically, our method can acquire images from a specimen approximately 4 cm horizontally and 6 cm vertically.For small-sized lung cancers less than 2 cm (clinically early-stage lung cancer), it may be sufficient to obtain images from the resected lung tissue; however, a more extensive field of view should be developed.
Additionally, the scope of our study was somewhat limited by the narrow range of experimental specimens, which potentially impacts its clinical relevance.Our research focused on only four types of lung adenocarcinomas, comprising two primary and two secondary variants.While the findings from this experiment offer valuable insights that could inform the trajectory of future research, drawing definitive conclusions about their broader clinical applicability is currently challenging.Furthermore, the potential for subjective bias in interpreting the imaging data from this study warrants consideration.To mitigate these limitations, enhancing the diversity of our sample pool and integrating an automated imaging analysis system, particularly one utilizing deep learning algorithms based on artificial intelligence, could prove beneficial.
Recent studies at the ESRF have shown remarkably high-resolution hierarchical tomographic imaging of donated human organs [44]. The whole of each human organ can be explored from the macro- to the subcellular level in a single stage of image acquisition. Although our XDFI CT modality does not cover the entire human lung, we believe it possesses unique advantages. These include the proven comparability of the synchrotron imaging method with pathologic examination and the potential for 3D virtual histology [11, 13, 17, 45].
Consequently, the clinical application of synchrotron radiation (SR) tomographic imaging techniques continues to face substantial challenges, and overcoming these constraints expeditiously remains a daunting endeavor. This research contributes to the ongoing efforts to ascertain the diagnostic efficacy of SR imaging in contrast to traditional X-ray imaging methods, delineating avenues for future scholarly inquiry. Notably, esteemed research institutions, including the European Synchrotron Radiation Facility (ESRF), are currently pioneering studies to adapt these technologies for clinical purposes. It is therefore anticipated that the near future will witness the emergence of more sophisticated and innovative approaches in this field.
Conclusions
Through dedicated and sustained research over several years, we have explored the potential of the synchrotron XDFI-based refraction-contrast tomographic technique as a diagnostic instrument. This experimental endeavor has illuminated the technique's capacity to differentiate between healthy and pathological lung parenchymal structures, offering a discerning approach to the diagnosis of various lung tumor types. A distinctive advantage of this method lies in its ability to render three-dimensional volumetric data of pathological lesions via a reconstruction process, an achievement not paralleled by conventional X-ray imaging or isolated pathological assessments.
Despite the numerous challenges yet to be surmounted, the XDFI methodology exhibits potential for augmenting the realm of medical X-ray imaging applications. In a recent initiative, we introduced novel imaging techniques aimed at enhancing spatial resolution. This was accomplished by positioning a scintillator in immediate proximity to a Laue angle analyzer (LAA), effectively obviating the need for X-ray interference and wavefront separation distances. Pursuant to this development, we plan to conduct further experiments with lung cancer specimens utilizing this advanced method.
Patents
Specimens used in this study were harvested from patients who underwent surgical resection for their disease. Donations were obtained with written informed consent from patients before enrollment. The patient list is described in Table 1.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Korea University Anam Hospital (IRB number: 2019AN0242). All patients provided written informed consent prior to participation.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study prior to participation.
Figure 1. Beamline setup (a) and schematic view (b) of BL 14B at the Photon Factory.
Figure 2. Preparation of specimens from primary lung adenocarcinoma and secondary lung adenocarcinoma (colorectal origin). (a) Measurement of sample #1 (left) and the sample contained in a 2 cm diameter cylinder filled with 1% agarose gel solution (right); (b)-(d) the corresponding measurements and agarose-mounted samples for samples #2, #3, and #4, respectively.
Figure 3. Images of specimen #1. (a) Pathological examination under a light microscope (scanning view). A malignant lesion consisting of a solid component (red arrows). (b) One-slice image of the reconstructed tomographic volume. The area of cancer (orange arrows) is similar to that of the pathological specimen. Eccentric calcifications and bubble-like lucencies in the lesions. (c) 3D image from the reconstructed tomographic volume. Cancer lesions containing eccentric calcifications and bubble-like lucencies (blue arrows). (d) 3D image produced by volume rendering of the reconstructed tomographic volume. The border of the tumor, with spiculated spikes (green arrows), is well demarcated from the normal area. The three-dimensional aspects of the tumor lesions can be observed in the whole video images (Supplementary Videos S1 and S2).
Figure 4. Images of specimen #2. (a) Pathological examination under a light microscope (scanning view). A malignant lesion consisting of a solid component (red arrows). Specimen exhibiting a lepidic pattern (blue dotted circle), which was thought to be the source of the ground-glass opacity (GGO) on tomography. (b) One-slice image of the reconstructed tomographic volume. The area of cancer (orange arrows) is similar to that observed on pathological examination. Eccentric calcifications and bubble-like lucencies are observed in the lesions. Ground-glass opacity in the lower parts of the cancerous lesion (yellow dotted circle). (c) 3D image from the reconstructed tomographic volume. A cancerous lesion (blue arrow) containing the GGO portion (yellow arrow). (d) 3D image produced by volume rendering of the reconstructed tomographic volume. The border of the tumor with spiculated spikes (green arrows) and the ground-glass opacity (GGO) portion are well demarcated from the normal lung area. The three-dimensional aspects of the tumor lesions can be observed in the whole video images (Supplementary Videos S3 and S4).
Figure 5. Images of specimen #3. (a) Pathological examination under a light microscope (scanning view) of secondary lung cancer (of colorectal origin). Smooth lobulated margins can be observed. Multiple pinkish spots (red arrows) inside the tumor area are thought to be secretions from malignant glandular cells. (b) One-slice image of the reconstructed tomographic volume. A marginal appearance similar to the pathological examination is observed. Dark areas with irregular margins (yellow arrows) were suspected of reflecting secretions inside the tumor. (c) 3D image from the reconstructed tomographic volume. A relatively smooth margin is observed, with no tentacle-like structures. Multiple whitish spots, which could be calcified lesions inside the secretions, were observed (blue arrows). (d) 3D image produced by volume rendering of the reconstructed tomographic volume. The border of the tumor is well demarcated from the normal area. Dark reddish areas (green arrows) were suspected to correspond to the secretions in (a). The whole three-dimensional aspects of the tumor lesions can be observed in the video images (Supplementary Videos S5 and S6).
Figure 6. Images of specimen #4. (a) Pathological examination under a light microscope (scanning view) of a secondary lung cancer (of colorectal origin). A large cavitary area (red arrows) attached to a small malignant lesion can be observed. Cavitation is observed in one-fifth of metastatic cancers of colorectal origin and is thought to be caused by secretions from malignant cells, which were then washed out during fixation. (b) One-slice image of the reconstructed tomographic volume. Cavity lesions (yellow arrows) can be observed more precisely; the contrast with the normal parenchyma is better than in the pathological examination. (c) 3D image from the reconstructed tomographic volume. The demarcation of the malignant lesion is relatively unclear; however, shadows of the cavitary lesion are observed (blue arrows). (d) 3D image produced by volume rendering of the reconstructed tomographic volume. The cavitary lesion is also identified (green arrows). The three-dimensional aspects of the tumor lesions can be observed in the video images (Supplementary Videos S7 and S8).
Figure 7. Volume rendering (L) and segmentation (R) images for each specimen. The oblique views are compared. In each image, the red dotted lines indicate the edge of the specimen, and the blue dotted lines represent the depth of the cancerous area (L). The yellow dotted lines indicate the edge of the specimen, and the green dotted lines indicate the depth of the cancerous lesions (R). (a) Oblique slices of specimen #1 (lung adenocarcinoma without lepidic pattern). (b) Oblique slices of specimen #2 (lung adenocarcinoma with lepidic pattern). (c) Oblique slices of specimen #3 and (d) #4 (secondary lung adenocarcinomas of colorectal origin).
Table 1. Pathologic information of harvested specimens.
Table 2. Experimental conditions of BL 14B at the Photon Factory.
Table 3. Comparison of imaging conditions between microscopy, μ-CT 1 based on absorption contrast imaging, and XDFI phase-contrast imaging.
"Medicine",
"Physics"
] |
Type-II micro-comb generation in a filter-driven four wave mixing laser [Invited]
HUALONG BAO, ANDREW COOPER, SAI T. CHU, DAVE J. MOSS, ROBERTO MORANDOTTI, BRENT E. LITTLE, MARCO PECCIANTI, AND ALESSIA PASQUAZI*
Emergent Photonics (Epic) Lab, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
City University of Hong Kong, Tat Chee Avenue, Hong Kong, China
Centre for Microphotonics, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
INRS-EMT, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X 1S2, Canada
Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
National Research University of Information Technologies, Mechanics and Optics, St. Petersburg, Russia
Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
*Corresponding author: <EMAIL_ADDRESS>
Most research to date has focused on an experimental setup in which an external continuous wave (CW) laser is used to pump a micro-resonator. Since the first demonstration of optical parametric oscillation in such devices [11], impressive progress has been made in realizing high-quality, wide-spectrum optical frequency combs from micro-cavities [7,...]. These sources have been successfully generated in a variety of high-Q resonator technologies, ranging from bulk crystalline resonators [32-35] to integrated platforms such as silicon nitride (Si3N4) [30] and silicon oxynitride (Hydex) [29], as well as silicon [36], aluminum nitride (AlN) [37], and aluminum gallium arsenide (AlGaAs) [38].
Theoretically, the formation of Kerr combs has been effectively modeled with the Lugiato-Lefever mean-field equation [39-41], which describes many of the dynamic regimes observed in micro-combs, including solitons [13,31], chaos [42], and Turing patterns [24]. The latter, in particular, result from the modulation instability (MI) of the pump [24] and, in several practical cases, the nonlinear interaction generates coherent lines spaced by more than one free spectral range (FSR) of the micro-cavity. This kind of optical spectrum, generally referred to as a Type-I comb or a primary comb [1,8], has been demonstrated to be very resistant to perturbations [4], and it has been effectively used for wavelength division multiplexing (WDM) communications with bit transfer rates as high as 144 Gbit/s [3]. Multiple sub-lines spaced by a single FSR around the primary comb lines, termed a Type-II comb [1,8], can be generated at higher excitation powers. Such spectra are particularly appealing for WDM because they effectively provide a large number of densely spaced channels. In many practical scenarios, however, these combs are incoherent [12], i.e., the lines do not exhibit a fixed relative phase, leading to detrimental effects in applications such as communications. For this reason, effective approaches for the generation of coherent lines are highly desirable. Very recently, communications with over 179 channels have been demonstrated with soliton-based combs [5]. Coherent Type-II combs, generated by using dual pumps with a micro-resonator [43], have also been shown to be a viable route for future WDM systems.
In this work, we demonstrate the generation of highly coherent Type-II micro-combs in a passively mode-locked laser configuration in which a nonlinear high-Q micro-resonator is nested into a fiber laser cavity with a much more finely spaced FSR. Such a configuration is known as filter-driven four-wave mixing (FD-FWM) [44-48].
Micro-combs generated with this approach are inherently unaffected by the detuning of an external pump laser due, for instance, to detrimental thermal effects [49]. Systems based on this scheme are robust to external perturbations, can be self-starting, and are stable over long time periods without the need for an additional feedback mechanism. Stable oscillation at a single FSR [44] and at multiple-FSR (harmonic) spacings [48] has been demonstrated with linewidths below 130 kHz [44].
We show that Type-II combs can be effectively generated in a ∼50 GHz high-index-doped silica glass ring resonator by properly controlling the gain and length of the fiber cavity. These multiple sub-line spectra are stable and coherent over long time periods. By implementing a frequency-comb-assisted laser scanning spectroscopic method [50,51], we identified the spectral detuning of the generated lines with respect to the ring cavity resonances, confirming the high coherence of the generated lines. Our measurements, in excellent agreement with numerical simulations, show that the position of the comb lines within the micro-cavity resonance can be controlled by adjusting the fiber cavity length, providing fine control over the position of the lines.
EXPERIMENTS
Figure 1(a) shows the experimental setup of the FD-FWM laser, which was based on a nonlinear high-Q micro-ring resonator. The ring resonator was integrated in a CMOS-compatible platform.
The waveguide core is low-loss, high-index-doped (n > 1.6) silica glass, buried within a silica cladding and fiber-pigtailed to standard single-mode fiber (SMF) with a typical coupling loss of 1.5 dB/facet. The characteristic resonance linewidth is below 100 MHz, and the FSR is ∼49.1 GHz at a wavelength of 1550 nm.
The micro-cavity was nested in a fiber-laser-cavity loop, which consisted of an erbium-ytterbium-doped fiber amplifier (EYDFA), a free-space delay line, a tunable passband filter, a Faraday isolator, and a polarization controller. The EYDFA was designed to provide relatively large gain and saturation power with a short fiber length (∼1.1 m). The free-space delay line controlled the frequency position of the main-cavity modes with respect to the ring resonances, and the tunable passband filter (6 nm 3 dB bandwidth) shaped the gain profile and controlled the central oscillating wavelength. The total length of the fiber laser was ∼2.7 m, corresponding to a main laser cavity mode spacing of ∼74 MHz.
To accurately extract the position of the oscillating comb lines, frequency-comb-assisted spectroscopy [50,51] was performed both with no amplification (cold micro-resonator) and during comb generation (hot micro-resonator). A scanning CW laser was split into two signals, S1 and S2. S1 was used to generate a "running" beat note with a reference frequency comb (Menlo Systems GmbH) with a repetition rate of 250 MHz and a carrier-envelope offset frequency of 20 MHz, both fully stabilized to a GPS-disciplined frequency reference. The beat note was detected by a photodiode (PD) and sent through a radio frequency (RF) bandpass filter (RFBF) to an oscilloscope, generating calibration markers to precisely extract the scanning laser frequencies. S2 was launched into the through port of the micro-resonator via an optical circulator, to simultaneously perform the spectroscopy of the micro-ring resonance. S2 propagated in the opposite direction to the oscillating micro-comb signal in the resonator and so interfered with the low-energy, backscattered micro-comb signal, without affecting the system regime. An example of such a measurement can be seen in Fig. 1(c), showing a bell-shaped resonance profile (red) combined with the beat note between the probe laser and an oscillating comb line (blue). The micro-comb line location can be determined by assessing the position where the beating experiences an abrupt phase change [see Fig. 1(d)]. It can be clearly seen from Fig. 1(c) that there exists only a single interference region within the micro-resonator resonance, indicating that only one main-cavity mode oscillates within the resonance. This shows that the system was unaffected by the well-known super-mode instability [52,53].
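The phase-inversion criterion lends itself to a simple numerical treatment. The sketch below is not the authors' code; all signal names and numbers are illustrative placeholders. It extracts the instantaneous phase of a recorded beat trace via a Hilbert transform and returns the scan frequency at which the phase jumps abruptly, i.e., where the oscillating comb line sits within the resonance.

import numpy as np
from scipy.signal import hilbert

def locate_phase_inversion(scan_freq_hz, beat_trace):
    # Instantaneous phase of the beat via the analytic signal; the comb line
    # sits where the unwrapped phase jumps abruptly during the scan.
    phase = np.unwrap(np.angle(hilbert(beat_trace)))
    return scan_freq_hz[np.argmax(np.abs(np.diff(phase)))]

# Synthetic example: a beat note whose phase flips by pi at +30 MHz detuning.
f = np.linspace(-50e6, 50e6, 4000)                 # detuning axis [Hz]
beat = np.sin(2*np.pi*5e-6*f + np.pi*(f > 30e6))   # hypothetical interference trace
print(f"phase inversion near {locate_phase_inversion(f, beat)/1e6:.1f} MHz")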
Coherent phase locking could be achieved in the laser system at a multiple of the fundamental repetition rate by careful adjustment of the system parameters, including the use of an appropriate pump power, an accurate positioning of the ring modes with respect to the center of the gain bandwidth, and proper phase detuning of the cavity modes relative to the ring resonator modes using the free-space delay line. The coarse dependence of the repetition rate on the fine adjustment of the delay line has recently been discussed in Ref. [48].
Figure 2 shows some typical output spectral shapes of the system, with both single [Fig. 2(a)] and multiple-FSR frequency spacings [Fig. 2(b)]. Their corresponding autocorrelation (AC) traces are presented in Figs. 2(d) and 2(e). Finally, Fig. 2(c) shows a typical Type-II comb. In the AC [Fig. 2(f)], it can be clearly seen that the pulses of the primary comb (∼6 FSR) have been modulated by pulses from the sub-combs (∼1 FSR). The calculated traces, shown as green dotted lines, were computed under the assumption that all the oscillating modes are fully coherent and transform-limited, a hypothesis in good agreement with the measurements. Remarkably, these oscillations could be stable over long time periods without any additional feedback mechanism.
To better understand the formation of Type-II combs, we closely studied the transition between a Type-I and Type-II comb and determined the positions of the oscillating lines by performing frequency-comb-assisted spectroscopy in a hot micro-resonator.
Here, we first generated a mode-locked state at ∼6 times the fundamental repetition rate of the micro-resonator (see the red dotted line in Fig. 3). Then, a Type-II sub-comb arose after increasing the gain of the amplifier and, consequently, the intra-cavity power, shown as solid lines (various colors). In the transition from Type-I to Type-II comb, the intra-cavity power increased by 3.5%.
We then extracted the positions of the oscillating comb lines and micro-ring resonances by performing frequency-comb-assisted spectroscopy in a hot micro-ring resonator. The resonance profiles and oscillating comb lines are shown in Figs. 4(a)-4(h), numbered as in Fig. 3. The black dashed line represents the center of the micro-ring resonance.
The oscillating micro-comb line, indicated with a red arrow, corresponds to the region of the beating exhibiting interference and a phase inversion point [51]. Such lines are easily visible for the cases corresponding to the higher-intensity modes in Fig. 3, presented in Figs. 4(a)-4(d), where we detected only a single interference point within the micro-cavity resonance. It is interesting to observe that the high-energy modes [Figs. 4(a)-4(d)] were found at the same positions in both the Type-I and Type-II combs of Fig. 3. These modes in general did not fall at the center of the micro-cavity resonance; their position could be finely tuned by adjusting the free-space delay line within the main fiber cavity.
The spectroscopy of the low-energy lines in Figs. 4(e)-4(h) shows a larger number of features, with additional dips/peaks in the scanned resonance. However, only a single line showing interference and a phase inversion point could be identified, indicated with the red arrow. The additional dips, indicated with black arrows, could also be detected when the laser lines in Figs. 4(e)-4(h) were below threshold.
This suggests that only a single line was oscillating within the resonance. We attribute the presence of the additional peaks to weak backscattering of the scanning laser seeding non-oscillating main-cavity modes. Remarkably, this measurement also allowed us to detect the spectral position of these non-oscillating lines. It clearly demonstrates that some of the weak lines were oscillating outside the micro-cavity resonance, even though main-cavity modes fell within the micro-ring linewidth.
Better insight into the position of the micro-comb lines can be obtained by plotting the frequency positions of the lines as a function of the comb mode order, as seen in Fig. 5(a). The data can be fit with a straight line with coefficient 48.952 GHz per mode, which represents the average mode spacing of the sub-comb sets and corresponds to approximately 660 times the FSR (∼74 MHz) of the main cavity. The residual plot in Fig. 5(b) shows that different sets of sub-combs share almost the same mode spacing, but with a mutual offset with respect to each other. Unlike the generation of Type-II combs from micro-resonators via CW laser pumping, the comb lines here were forced by the main-cavity resonances, and, thus, the offset between neighboring sets of sub-combs was equal to an integer multiple n of the main-cavity FSR.
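A minimal sketch of this fitting procedure follows, with made-up numbers standing in for the measured lines (the raw data are not reproduced here): a single straight-line fit over all sub-comb sets yields the average mode spacing, and the residuals expose the mutual offsets between the sets.

import numpy as np

mode_order = np.array([0, 1, 2, 6, 7, 8, 12, 13, 14])   # hypothetical mode orders
line_freq_ghz = 193000.0 + 48.952 * mode_order           # hypothetical line positions
line_freq_ghz[3:6] += 0.293                              # one set offset by ~4 main-cavity FSRs

slope, intercept = np.polyfit(mode_order, line_freq_ghz, 1)
residual_mhz = (line_freq_ghz - (slope*mode_order + intercept)) * 1e3
print(f"average mode spacing: {slope:.3f} GHz")
print("residuals (MHz):", np.round(residual_mhz, 1))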
Here, the offset between neighboring sets of sub-combs was 4 times the FSR (∼293 MHz) of the main cavity, much larger than the micro-resonator linewidth.
In this case, the system did not allow multiple RF beat notes due to the relatively large offset and narrow resonator linewidth, permitting long-term stability.
NUMERICAL SIMULATION OF TYPE-II MICRO-COMB GENERATION
The system model is based on the standard coupled equations describing the field evolution A(t, τ) and B(t, τ) within the micro-ring and fiber cavity, respectively, for the slow and fast cavity times t and τ [44]. Here, L_{f,r} and T_{f,r} are the lengths and periods of the amplifying cavity and micro-cavity; g, Ω_g, β_r, and β_f are the gain, the gain bandwidth, and the second-order dispersion of the ring and of the fiber, respectively. Also, γ is the Kerr nonlinear coefficient of the ring. The saturable gain is expressed in terms of g_0 and P_sat, which denote the fiber low-signal gain and the gain saturation energy. The equations are solved in time t with a pseudo-spectral method. At the ports of the resonator, the fields are coupled via relations of the form B(t, 0) = κ A(t, T_r/2) e^{-iϕ_r}, where T_r is the ring round trip and κ is the transmission coefficient [54], related to the linewidth and FSR of the ring, ΔF_r and FSR_r, by the standard relation κ = π ΔF_r / FSR_r. The variables ϕ_r and ϕ_f are the phase shifts acquired by the field in a half-trip of the ring and a full trip of the fiber, respectively. Equations (4)-(6) are used as boundary conditions for the transverse variable τ. Here, ϕ_f regulates the position of the main-cavity modes with respect to the central ring mode, while ϕ_r dictates the position of the ring modes with respect to the center of the gain bandwidth. Numerical modeling was performed for a nonlinear micro-ring with FSR = 50 GHz and linewidth of 120 MHz, within the experimental range, resulting in a coupling constant κ = 0.0078. To run the simulations in a reasonable time, we chose an FSR of the fiber cavity approximately 5 times the experimental value (390 MHz). We used dispersion values within our experimental range, −20 and −60 ps²·km⁻¹, for the ring and fiber, respectively. Low-power optical noise was used to seed the warm-up process of the system. To match the small detuning value observed in the experiment [see Fig. 3(b)], we set ϕ_f = 0, while, for simplicity, we fixed the first oscillating ring mode at the center of the gain bandwidth, resulting in ϕ_r = 0. As in the experiments, the repetition rate was controlled by changing the fiber laser cavity round trip. In this framework, it is convenient to express the period of the fiber cavity as T_f = (N − δ) T_r, where N is an integer, fixed in the simulations to 128, and |δ| < 1/2. The parameter δ hence represents the shortening of the fiber round trip in units of the ring period, indicating an effective cavity-period detuning. In the simulations, we focus on describing our experimental observations and studying the effect of the parameter δ, together with the intra-cavity power, on the solution.
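As a quick sanity check on these quoted parameters (a sketch transcribing the relations stated above, not an independent derivation), the coupling constant and the fiber-cavity parametrization can be evaluated directly:

import math

FSR_r = 50e9           # micro-ring free spectral range [Hz]
dF_r = 120e6           # micro-ring resonance linewidth [Hz]
kappa = math.pi * dF_r / FSR_r          # stated relation kappa = pi * dF_r / FSR_r
print(f"kappa = {kappa:.4f}")           # ~0.0075, consistent with the quoted ~0.0078

T_r = 1.0 / FSR_r                        # ring round-trip time [s]
N, delta = 128, 0.125                    # values used in the simulations of Figs. 6-7
T_f = (N - delta) * T_r                  # fiber period; delta = round-trip shortening
print(f"fiber-cavity FSR = {1/T_f/1e6:.1f} MHz")   # ~391 MHz, ~5x the experimental 74 MHz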
Figures 6(d)-6(f) report the AC traces corresponding to Figs. 6(a)-6(c). In these cases, a low amplifier saturation energy was chosen so that only primary comb lines appeared in the spectra, and super-modes could be suppressed.
With higher amplifier saturation energy, sub-comb lines were formed with a spacing of 1 FSR of the micro-ring. One typical case, for δ = 0.125, is shown in Figs. 7(a)-7(h). The simulation results, including the output spectrum and intensity as well as the AC, are plotted for the system as a function of increasing saturation energy. Here, Fig. 7(a) corresponds to the case of Fig. 6(c).
Saturation energies in Figs. 7(b)-7(d) are 2, 2.5, and 3.5 times the corresponding value in Fig. 7(a), respectively. Initially, primary comb lines in the spectrum gradually formed with increasing amplifier saturation energy, with the central wavelengths transferring their energy to side wavelengths through the FWM effect. With higher amplifier saturation energy, sub-comb lines develop and fill in between the primary comb lines, spaced by 1 micro-ring FSR. For this kind of generation, the AC traces do not always show high contrast, although coherent pulses are generated.
CONCLUSION
We have demonstrated the generation of Type-II micro-combs in a filter-driven four-wave mixing laser. The system produces a robust comb that can be formed, starting from a primary comb, by adjusting the gain of the laser amplifier. This was confirmed by numerical simulations. Our experimental study shows that the spacing between the comb lines can be controlled by adjusting the fiber cavity length. Our study constitutes a significant contribution to understanding the dynamics of the FD-FWM laser and a step toward the generation of stable and controllable optical combs from an integrated chip-based resonator.
Fig. 1. (a) Experimental setup of the FD-FWM laser. In the setup, a nonlinear high-Q micro-ring resonator is nested in a fiber ring cavity, which contains an optical amplifier (EYDFA), a free-space delay line (DL), a tunable passband filter (TBF), a Faraday isolator (IS), and a polarization controller (PC). (b) Frequency-comb-assisted spectroscopy scheme, which consists of a tunable external cavity diode laser (ECDL) and a reference comb. A scanning CW laser is split into two signals, labeled S1 and S2, to perform frequency calibration and the spectroscopy of the micro-ring resonance, respectively. The beating of the tunable laser and the reference comb is extracted with a photodiode (PD) and a radio frequency bandpass filter (RFBF). (c) A typical spectroscopy measurement of the micro-cavity resonances and (d) zoomed-in view. A Lorentzian-shaped resonance (red) combined with the beat note between the probe laser and the oscillating comb line (blue and green). The micro-comb line can be determined by assessing the position where an abrupt phase change takes place, indicated in (d) by the pink arrow.
Fig. 3. Experimental optical spectra of the Type-II micro-comb laser output. A mode-locking state at ∼300 GHz repetition rate (6 times the fundamental repetition rate of the micro-resonator) is overlaid as a red dotted line. The subgroups of Type-II combs around the primary comb lines 1, 7, 13, and 19 are highlighted in black, blue, green, and magenta, respectively.
Fig. 4. (a)-(d) Resonance profiles of the primary combs, as numbered in Fig. 3, and (e)-(h) one set of sub-combs around the primary comb line 13.
Fig. 5. (a) Frequency line position versus mode number. Dots are measurements related to the subsets around the primary lines 1, 7, 13, and 19 (as highlighted in Fig. 3 and plotted in black, blue, green, and magenta, respectively). Error bars are within the marker size. The results are fitted with a straight line of angular coefficient 48.952 GHz ± 3 MHz, which takes into account the error of the measured frequencies due to the accuracy of our frequency-comb-assisted spectroscopy system. (b) Residual plot. Error bars are within the marker size. It shows clearly that all the sub-combs have, within our accuracy, the same FSR but are detuned by 4 times the main-cavity FSR (∼293 MHz).
"Physics"
] |
Islanded microgrid congestion control by load prioritization and shedding using ABC algorithm
Received Feb 15, 2019. Revised Mar 23, 2020. Accepted Apr 3, 2020. The continued growth in load demand and the gradual change of generation sources to smaller distributed plants utilizing renewable energy sources (RESs), which supply power intermittently, is likely to strain existing power systems and cause congestion. Congestion management remains a challenging issue in open-access transmission and distribution systems. Conventionally, it is achieved by load shedding and generator rescheduling. In this study, the control of system congestion in an islanded microgrid (MG) supplied by RESs is analyzed using the artificial bee colony (ABC) algorithm. Different buses are assigned priority indices, which form the basis for determining which loads, and how much load, to shed at any particular time during islanded operation. This ensures that as little load as possible is shed during a contingency that leads to loss of mains and guarantees congestion-free microgrid operation. The approach is tested and verified on a modified IEEE 30-bus distribution system on the MATLAB platform, and the results are compared with other algorithms to prove its applicability.
INTRODUCTION
The deregulation of the electrical power market and the gradual evolution of grids toward the smart grid (SG) are currently being witnessed in an effort to supply power to the increasing population and flourishing industries. The RESs in SGs are expected to make the grid more resilient, with the ability to survive system difficulties during contingencies [1, 2]. However, this change from a conventional monopolistic system to a deregulated system may cause the system to operate beyond its voltage and thermal limits, and hence cause transmission and distribution system congestion [3]. Congestion can be defined as the difference between the scheduled and the actual power flow in a given line without violating the set limits [4]. It is a condition where more power is scheduled to flow across transmission and distribution lines and transformers than those lines can carry [5]. Operation in islanded mode is recommended in order to prevent total blackouts in the power system [6]. This ensures power is supplied to customers reliably and safely and maintains system security.
The methods used for congestion management (CM) can be classified into two groups: cost-free and non-cost-free methods [7]. The cost-free methods include the use of FACTS devices, outaging congested lines, and the operation of transformer taps. The non-cost-free methods include shedding of some loads and generator rescheduling, among others. The increasing competitive pressure in the electrical power industry necessitates removal of overloads through load shedding, bearing in mind the economics and the cost of shedding load [8]. In addition to the above conventional methods, evolutionary algorithms such as ABC, particle swarm optimization (PSO), ant colony optimization (ACO), the cuckoo search algorithm (CS), the harmony search algorithm (HS), the shuffled frog leaping algorithm (SFLA), and simulated annealing (SA), among others, are also increasingly being used in CM [9-11].
A number of studies on CM have been done so far. For instance, reference [12] proposes two measures that can be used to control congestion: generator rescheduling and reduction of real power losses. In [13], the author proposes the use of FACTS devices in the management of congestion at the transmission level. In [14], rescheduling of sensitive generators and load shedding are proposed for CM using the PSO algorithm. In [15], the author used the sensitivities of overloaded lines together with the costs of power generation and load shedding in CM; the load-shedding schedule and new power generation for the affected buses were calculated using line sensitivities and the cost.
It is challenging to control the power produced by RESs in an islanded MG because these sources produce power intermittently. Load shedding is the main method of maintaining system stability in this case and avoiding any danger within the MG [16]. Optimal shedding of connected loads helps reduce the difference between the power that DGs can supply and the connected load [17]. In [18], the ABC algorithm is used for overload control by rescheduling generators; the author uses generator sensitivity factors to select the generators that participate in real power rescheduling. In [19], the ABC and PSO algorithms were used in generator rescheduling based on bus and generator sensitivity factors for CM in a system with wind energy resources only. In [20], the authors used PSO for generation rescheduling and load shedding in CM. In [21], congestion alleviation in a deregulated power system is proposed using the cuckoo search algorithm, tested on the IEEE 30-bus system. A modification of the ABC algorithm is applied to determine the DG power output and location in reference [22], tested on a 33-bus system.
Through load shedding, the amount of power demand that can be curtailed to mitigate the CM problem is determined. Extensive research on CM in MGs has been reported in the literature; however, CM using load-shedding algorithms remains an open issue and needs further research. In this paper, the CM problem in an islanded MG supplied by RESs is analyzed and formulated as an optimization problem. The ABC algorithm is used to determine the optimal amount of load to be shed, and from which buses, based on their priority indices. The rest of this paper is organized as follows: section 2 discusses the proposed congestion management method, its mathematical formulation, and how the ABC algorithm is applied; section 3 analyzes the simulation results; and section 4 summarizes and concludes the paper.
PROPOSED CM SCHEME IN AN ISLAND WITH RESs
This simulation assumes that the MG is suddenly switched to islanded operation due to a contingency that leads to loss of mains. After a utility grid disturbance that leads to islanding, the power in the MG is allocated to the loads as per their priority index. Figure 1 is a sketch of the MG with RESs used for the CM analysis in this study. The loads and buses within the islanded MG were given priority indices, and the ABC algorithm was then adapted and applied to shed loads so as to ensure the system is not stretched beyond its limits. Figure 2 shows the proposed load-shedding flow for this study.
Mathematical problem formulation
The main objective of this paper is to alleviate system congestion in an islanded MG by minimizing overloads through optimal load shedding using the ABC algorithm. This is expressed by expressions (1)-(3), whose terms are the system real power loss, the maximum capacity of line i, the number of overloaded lines, the amount of load to be shed at bus k, the power generated by generator i, the power flow on line i, the number of participating generators, the minimum power output of generator i, and the cost coefficients of each generator. The problem is subject to the equality and inequality constraints (4)-(10): the power-balance equality constraints, written in terms of the bus voltage magnitudes; the RES distributed-generation constraints, which bound the real power output and the VAR generated by each RES i; and the voltage constraints, which bound the voltage magnitudes at each generator and load bus i. The remaining terms are the real power load at bus k and the sets of generator and load buses, respectively.
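As an illustration only, a candidate load-shedding vector can be scored with a penalized objective of this general shape; this is a sketch consistent with the description above, not the authors' exact formulation, and every name and weight in it is a hypothetical placeholder.

import numpy as np

def shedding_objective(shed_mw, line_flow_mw, line_limit_mw, p_loss_mw,
                       w_overload=1e3, w_loss=1.0, w_shed=10.0):
    # Smaller is better: heavily penalize line overloads, then real power
    # losses, then the total amount of load shed. Weights are illustrative.
    overload_mw = np.maximum(np.abs(line_flow_mw) - line_limit_mw, 0.0)
    return (w_overload * overload_mw.sum()
            + w_loss * p_loss_mw
            + w_shed * np.sum(shed_mw))

# Example with made-up flows: one line at 147.5 MW against a 130 MW limit.
flows = np.array([147.5, 112.0, 88.3])
limits = np.array([130.0, 130.0, 65.0])
print(shedding_objective(shed_mw=np.array([5.0, 3.2]), line_flow_mw=flows,
                         line_limit_mw=limits, p_loss_mw=16.1))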
ABC algorithm
Swarm intelligence algorithms are currently the preferred approaches for solving dynamic optimization and NP-hard problems in engineering [23]. The ABC algorithm is a meta-heuristic optimization algorithm proposed by Karaboga of Erciyes University, Turkey [24]. The algorithm mimics the foraging behavior of a swarm of bees searching for nectar around the hive, as shown in Figure 3 [22]. In this algorithm, the location of a food source represents a possible solution to the optimization problem being solved, and the quantity and quality of the nectar indicate the fitness of that particular solution.
The algorithm has superior performance in solving engineering problems compared with other algorithms; for instance, it can handle both continuous and discrete variables, and it is often applied to non-smooth optimization problems in power systems. Mirroring the behavior of a swarm of bees, the colony is divided into three groups, each performing a particular function [25]. The employed bees, equal in number to the possible solutions, continuously update the rest of the hive about the quantity, quality, and direction of the food sources through a waggle dance, whose duration depends on the quality and quantity of the food source. The onlooker bees pick a food source to exploit based on the information provided by the employed bees' waggle dances; more onlookers move to food sources with high fitness and fewer to food sources with lower fitness values. The scout bees search for new food sources to exploit, choosing food sources around the hive at random.
ABC algorithm steps
In the initial phase, a random population is generated; the population size is taken to be equal to the number of employed bees. Each solution is a vector of dimension D, which corresponds to the number of parameters being optimized. The fitness (quality) of a food source is expressed by (11); in the standard ABC formulation this reads fit_i = 1/(1 + f_i) for f_i ≥ 0 (and 1 + |f_i| otherwise), where f_i is the objective function value of solution i, the problem-formulation target. The generated initial population of possible solutions is then cycled sequentially through the three categories of bees until the specified maximum cycle number (MCN) is reached. The employed bees keep modifying the remembered food locations (solutions) based on visual observation and nectar quality, and share the food source information with the rest of the hive through the waggle dance.
An onlooker bee chooses a preferred food source based on the amount of nectar available, with a probability given by (12) [26]; in the standard formulation, p_i = fit_i / Σ_{n=1}^{SN} fit_n, where SN is the number of employed bees (equal to the number of food sources) and fit_i is the fitness value of the i-th solution.
The onlooker bee then compares the food sources using the information given by the employed bees and moves to a neighboring food source if it is better than the current one, following (13); in the standard formulation, v_{i,j} = x_{i,j} + φ_{i,j}(x_{i,j} − x_{k,j}), where φ_{i,j} is a random number between −1 and 1, k indexes a randomly selected neighboring food source, and x and v are the old and new food sources, respectively. If a food source does not improve after a set number of trials, it is abandoned and the associated bee becomes a scout. These steps are repeated until the maximum cycle count or another stopping criterion is met; the food source with the highest fitness value is then selected and reported.
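For concreteness, the following is a compact sketch of the canonical ABC loop described by steps (11)-(13) above (the standard Karaboga formulation; population size, bounds, and the test objective are illustrative, not the paper's exact settings).

import numpy as np

def abc_minimize(objective, lb, ub, n_food=20, mcn=200, limit=30, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    foods = rng.uniform(lb, ub, (n_food, dim))   # initial food sources (solutions)
    cost = np.array([objective(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def fitness(c):                               # eq. (11)
        return 1.0/(1.0 + c) if c >= 0 else 1.0 + abs(c)

    def try_neighbour(i):                         # eq. (13) with a greedy choice
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.integers(dim)
        v = foods[i].copy()
        v[j] = np.clip(v[j] + rng.uniform(-1, 1)*(v[j] - foods[k, j]), lb[j], ub[j])
        c = objective(v)
        if c < cost[i]:
            foods[i], cost[i], trials[i] = v, c, 0
        else:
            trials[i] += 1

    for _ in range(mcn):
        for i in range(n_food):                   # employed-bee phase
            try_neighbour(i)
        fit = np.array([fitness(c) for c in cost])
        prob = fit / fit.sum()                    # eq. (12)
        for i in rng.choice(n_food, size=n_food, p=prob):   # onlooker phase
            try_neighbour(i)
        worn = int(np.argmax(trials))             # scout phase: abandon worn-out source
        if trials[worn] > limit:
            foods[worn] = rng.uniform(lb, ub)
            cost[worn] = objective(foods[worn])
            trials[worn] = 0
    best = int(np.argmin(cost))
    return foods[best], cost[best]

# Example: minimize the sphere function on [-5, 5]^4.
x_best, c_best = abc_minimize(lambda x: float(np.sum(x**2)),
                              lb=np.array([-5.0]*4), ub=np.array([5.0]*4))
print(np.round(x_best, 3), round(c_best, 6))

In the CM setting, objective would wrap the power flow and return the penalized cost of a candidate shedding vector, with lb and ub bounding the load shed at each candidate bus.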
RESULTS AND DISCUSSIONS
The IEEE 30-bus system is used to test the applicability of this approach to congestion management. The system has six generators, 41 lines, 24 load buses, and 21 loads, with a total of 283.400 MW active and 126.200 MVAR reactive load connected, as shown in Figure 4. The simulation was carried out on the MATLAB/SIMULINK platform on an AMD 4C+6G 2.10 GHz processor with 4 GB RAM. The control parameters for the ABC algorithm were set as shown in Table 1. Two test cases were simulated, and a Newton-Raphson power flow was run to test the congestion status in the island; congestion is simulated by creating outages on lines 1-2 and 1-7 and overloading the lines. The test cases, listed in Table 2, are: case 1, outage of line 1-2; and case 2, outage of line 1-7 with an increase in load at all buses by 50%.
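Although the study itself was carried out in MATLAB/SIMULINK, the congestion scenario is easy to reproduce for inspection with open-source tools; the sketch below uses the pandapower package's built-in IEEE 30-bus case (the outaged line index and the 100% loading threshold are illustrative).

import pandapower as pp
import pandapower.networks as pn

net = pn.case30()                        # built-in IEEE 30-bus test case
pp.runpp(net)                            # Newton-Raphson power flow (base case)
base_loading = net.res_line.loading_percent.copy()

net.line.at[0, "in_service"] = False     # simulate an outage on one line
pp.runpp(net)
overloaded = net.res_line[net.res_line.loading_percent > 100.0]
print("overloaded lines after the outage:")
print(overloaded[["loading_percent"]])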
Case 1
The outage of line 1-2 causes congestion on lines 1-7 and 7-8 of the modified system. The total system loss increases from 7.376 MW to 16.069 MW. This is compared with CM using the firefly algorithm in Table 3. From the load flow, the power flowing on the two congested lines becomes 147.509 MW and 136.348 MW, respectively, against the set limit of 130 MW. This condition is alleviated by shedding 20.552 MW of load from the buses with the lowest priority indices using the ABC algorithm. After load shedding, the flows on the two congested lines come down to 122.537 MW and 114.093 MW, well within the limit, as shown in Table 4. Figure 5 shows the voltage profiles at various buses for normal operation, during the fault, and after load shedding using the ABC algorithm; there is a marked improvement in the bus voltage profiles after load shedding.
Case 2
In this section, the system loads were increased by 50% and line 1-7 was then outaged. This caused congestion on lines 1-2, 2-8, and 2-9 of 313.914 MW, 98.411 MW, and 104.769 MW, respectively, against the set limits of 130 MW for line 1-2 and 65 MW for lines 2-8 and 2-9. In this study, the total power flow violation on the lines due to congestion is 257.088 MW, compared with 251.794 MW in the study of reference [4], as shown in Table 5.
The system losses were also monitored during and after load shedding using the ABC algorithm. Applying the ABC algorithm, 41.268 MW of load was shed to mitigate the congestion. The total losses, recorded in Table 6, decreased from 38.164 MW during the contingency to 17.485 MW after load shedding. This is compared with the congestion management approach using the firefly algorithm in reference [4]; as can be observed from Figure 6, the total power losses for the two approaches are almost the same. The CM by load shedding using the ABC algorithm was also compared with CM by generator rescheduling using FFA, PSO, RSM, and SA as reported in [4]; the power flow comparison is shown in Table 7. The voltage profiles before and during the contingency and after CM by load shedding using the ABC algorithm were also monitored; as shown in Figure 7, load shedding using the ABC algorithm improves the voltage profiles within the islanded MG. The convergence characteristics for this case were monitored as well: the algorithm converged after 14 iterations before the contingency, after 35 iterations during the contingency, and after 29 iterations after load shedding, as shown in Figure 8.
CONCLUSION
This paper has presented an approach for congestion management in an islanded MG with RESs. Load shedding using the ABC algorithm has been successfully employed for congestion management in the islanded microgrid. Loads and buses were chosen for shedding based on their priority indices to help mitigate the congestion problem. The approach was tested and validated on a modified IEEE 30-bus system, with contingencies and a sudden load increase introduced in the microgrid. The results were compared with those of other algorithms, such as the firefly algorithm and PSO, as reported in the literature, and show the superiority of this approach. As can be observed, the proposed method can greatly improve the stability of the islanded MG by minimizing the load shed while maintaining the voltage profile within the required limits, owing to the superior convergence characteristics of the ABC algorithm. This approach can therefore be recommended for the solution of optimization problems in engineering and other fields as well.
"Engineering",
"Environmental Science",
"Computer Science"
] |