Crystallographically Oriented Nanorods and Nanowires of RF-Magnetron-Sputtered Zinc Oxide
The formation of nanoscaled one-dimensional structures constituting RF-magnetron-sputtered ZnO thin films is demonstrated. A detailed analysis of these films has been carried out using ellipsometry, scanning and transmission electron microscopy (SEM, TEM), high-resolution TEM, and scanning tunneling microscopy (STM). The role of the substrate material is elucidated: the nanomorphology evolves as rods on amorphous quartz and as wires on silicon. These fascinating nano-objects (rods, wires) are shown to grow along the c-axis of the hexagonal ZnO lattice. Nucleation and growth mechanisms are discussed to interpret the present results.
Introduction
Semiconducting nanorods and nanowires are indispensable components for the realization of nanoelectronics and exhibit highly tunable optical properties that make them attractive for several applications [1][2][3][4][5]. Among these, ZnO (band gap ∼3.34 eV) is an important material for one-dimensional (1D) nanostructures, and it could become the next most important material after carbon nanotubes. However, unlike carbon nanotubes, which can be semiconducting as well as highly metallic, ZnO has the advantage of always being a semiconductor in all its applications. ZnO has the wurtzite hexagonal structure (space group P6₃mc), with alternating planes of tetrahedrally coordinated O²⁻ and Zn²⁺ stacked along the c-axis [6][7][8][9]. Owing to the unique combination of being piezoelectric, pyroelectric, and a wide-band-gap semiconductor, ZnO is one of the most important MEMS materials for integration in microsystems such as electromechanically coupled sensors and transducers [10][11][12][13][14]. Moreover, the material has potential applicability in monitoring and controlling environmental conditions, such as the presence of ozone or carbon monoxide in the atmosphere [15,16]. In view of this gamut of applications, it is important to fabricate 1D ZnO nanostructures with large surface-to-volume ratios. In the present work, we have grown ZnO films constituted of nanorods/particles on amorphous quartz and nanowires on silicon (Si) using RF magnetron sputtering.
Experimental
Film deposition was carried out on Si (diameter: 2 inch, thickness: 280 μm) and amorphous quartz (diameter: 1 inch, thickness: 0.5 mm) substrates by RF magnetron sputtering in the "sputter-up" configuration using a stoichiometric ZnO target (diameter: 3 inch, thickness: ∼5 mm) of 99.99% purity. The Si substrates were prime grade, the same as those used in semiconductor integrated-circuit fabrication, and were chemical-mechanically polished (CMP) by the manufacturer. Their average surface roughness (Ra), measured using AFM, is in the range of 2-3 Å. Before loading into the vacuum chamber for sputter deposition, the substrates were cleaned using standard steps: ultrasonic rinsing in IPA, DI water rinse, H2SO4-H2O2, and a dip in dilute HF. The amorphous-quartz substrates had the same surface roughness; however, for quartz the dip in dilute HF was omitted to ensure that the surface did not become rough due to etching. The sputtering pressure was maintained at 1 × 10−2 Torr for all depositions. No external substrate heating was applied during deposition; however, the substrate temperature rises due to plasma heating, normally up to approximately 110 °C. The details of the growth conditions of these films are published elsewhere [17]. During deposition, the sputtering gas (oxygen and argon in a 1:1 ratio), an RF power of 100 W at 13.56 MHz, a substrate-to-target distance of 45 mm, and a deposition rate of 140 Å/min were maintained. Optical absorption data were measured using a Shimadzu UV3101 PC spectrophotometer. Ellipsometric parameters were recorded with a Rudolph Research manual null-type ellipsometer (wavelength: 546.1 nm). X-ray diffractograms were recorded at a grazing incidence angle (1.
Results and Discussion
The ellipsometric parameters for the two types of films (Table 1) show that, in the case of the silicon substrate, the parameters measured at two different surface locations were nearly uniform. The film on the quartz substrate has a higher refractive index and a lower thickness for the same deposition duration. These observations are important for the optical performance of these films.
XRD patterns revealed that the films were crystalline and c-axis oriented, i.e., along the (0002) planes (interplanar spacing d = 0.26 nm) of the hexagonal crystal structure (lattice constants: a = 0.32 nm, c = 0.52 nm; JCPDS file # 21-1486), on both substrates (Figure 1). The crystallite size estimated by the Scherrer formula was 42 nm for films on amorphous quartz and 31 nm for films on Si.
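For readers who wish to reproduce the crystallite-size estimate, a minimal sketch of the Scherrer calculation is given below. The wavelength, peak position and peak width used here are illustrative assumptions (a Cu Kα source and a (0002) reflection near 2θ ≈ 34.4°), not values reported in this work.

```python
import numpy as np

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)),
    where beta is the peak FWHM converted to radians."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative values only: Cu K-alpha wavelength, a (0002) peak near 34.4 degrees 2-theta
# and an assumed FWHM of 0.2 degrees give a size of roughly 40 nm.
print(scherrer_size(0.15406, fwhm_deg=0.2, two_theta_deg=34.4))
```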
In general, SEM topographs showed a very smooth, homogeneous film texture with globular grains consisting of a subgrain microstructure. As an illustrative micrograph, Figure 2(a) delineates dense grains of 10-30 nm size constituting the entire film on the Si substrate. STM measurements with suitable z-contrast (Figure 2(b)) delineated the contours of quantum-well-type energy pits with an average separation of ∼400 nm between pits.
TEM studies carried out on the ZnO grown on the quartz substrate showed that the film is constituted of fine nanoparticles (10-30 nm) coexisting with nanorods about 20 nm in diameter (Figures 3(a), 3(b), 3(c)). The overall appearance of this microstructure indicates that these films have a large surface area, because of the nano-objects present on it and the nano- and sub-nanoscale porosity dispersed throughout the film. Selected area electron diffraction patterns (SADPs) showed that, although the nanoparticles are randomly arranged and polycrystalline in nature, the nanorods are single crystalline. A set of single-crystalline SADPs recorded from the nanorods is displayed in Figures 3(d), 3(e). The SADP recorded along the [0001] zone axis (Figure 3(d)) shows that the nanorod grows along the c-axis of the hexagonal unit cell. The three important planes (11̄00), (011̄0), and (101̄0) are marked as 1, 2, and 3, respectively, on the SADP (Figure 3(d)). This growth direction (the c-axis of the hexagonal unit cell) is observed frequently on the nanorods. Adequate tilting of a nanorod also produced a single-crystal SADP along the [21̄1̄0] zone axis (Figure 3(e)). The three important planes (0001), (011̄1), and (011̄0) are marked as 4, 5, and 6, respectively, on this SADP (Figure 3(e)).
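The indexing above can be checked against the hexagonal interplanar-spacing relation using the lattice constants quoted earlier (a = 0.32 nm, c = 0.52 nm). The short sketch below is a simple arithmetic check of the quoted d(0002) = 0.26 nm spacing; the indices passed to the function are the three-index (hkl) equivalents of the four-index labels used in the text.

```python
import numpy as np

def d_hex(h, k, l, a=0.32, c=0.52):
    """Interplanar spacing (nm) of a hexagonal lattice:
    1/d^2 = (4/3) * (h^2 + h*k + k^2) / a^2 + l^2 / c^2."""
    inv_d2 = 4.0 / 3.0 * (h**2 + h * k + k**2) / a**2 + l**2 / c**2
    return 1.0 / np.sqrt(inv_d2)

print(d_hex(0, 0, 2))  # ~0.26 nm, the (0002) spacing quoted in the text
print(d_hex(1, 0, 0))  # prism-plane spacing of the {10-10} family seen in the [0001] SADP
```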
In contrast to the quartz substrate, the microstructural features evolved on Si lead to interconnected faceted nanoparticles of about 20-40 nm size (Figures 4(a), 4(c)). The individual edges/facets of the particles are ∼10-20 nm long, with common boundaries of ∼20-30 nm length. The boundaries between the particles are also clean. The nanoparticles are normally aligned lengthwise, resulting in a short, interconnected coir-type morphology. Such faceted nanoparticle morphology may be understood on the basis of the growth kinetics during the limited period of sputtering; a detailed study in this direction is underway. It is possible that these nanoparticles are originally aligned in the film, and that their alignment is disturbed while removing the film from the substrate to prepare an electron-transparent thin specimen for TEM examination. The nanowired morphology of these microstructures was observed in cross-section under the SEM (Figure 4(b)). The diameter of these nanowires measured from the SEM micrograph is ∼30 nm, closely similar to the dimensions of the individual nanoparticles resolved under TEM (Figure 4(c)). In high-resolution TEM mode, it was clearly seen that the growth direction of these wires is along the c-axis, due to the stacking of (0002) planes (d = 0.26 nm) along this direction (Figure 4(d)).
It is worth noting that two different types of ZnO microstructure evolved on the different substrates (amorphous quartz and Si) under similar process conditions. The nanoparticles with a narrow size distribution delineated on the Si substrate are useful for obtaining a uniform film quality, which would lead to consistent performance of any device fabricated from these films. Such a uniform particle size distribution with nanoscale faceted morphology originates only when a large number of nuclei grow under physical constraint. The growth is constrained and the particles are faceted, which reflects that preferred planes develop on individual particles with limited growth. The fascinating morphology of the nanoparticles evolved on the Si substrate may be understood in the following way. Since Si is cubic and ZnO is hexagonal, the lattice constants of the two materials are very different, and epitaxial growth of the nanoparticles on Si is not possible. Under such conditions, a large number of heterogeneous nuclei can evolve with no directionality imposed by the lattice-incoherent substrate, and the growth is limited and constrained. However, since the nanoparticles are faceted, it is possible to minimize the surface energies of the individual facets and make the system thermodynamically stable; planes with different surface energies may align to form an aggregate with nanowired morphology, as observed in the present investigation. On the other hand, the amorphous quartz substrate lacks any ordering at the atomic scale, and therefore a complete mismatch with the freshly deposited hexagonal ZnO is obvious. This leads to an exceptionally high fraction of ZnO nuclei with no directionality. Such random growth of a large number of nuclei produces very fine particles embedded in a fine-grained ZnO thin film. It is surprising that these nanoparticles coexist with nanorods. Although the evolution of these nanorods along with the nanoparticles is not yet understood, such nanorods are possible only when some of the ZnO nuclei are well oriented from the very beginning. These oriented nuclei grow in the thin film, preferably along the c-axis of the hexagonal unit cell, leading to a single-crystalline rod-like morphology.
Surface changes evolved on these rods may be associated with the accumulation of microscale disturbances in the lattice during growth. Since the two kinds of substrates (Si and amorphous quartz) resulted in different types of nanostructured ZnO, the desired electro-optical and sensing properties of these films may also be tailored accordingly.
Conclusions
RF-sputtered growth of ZnO films reveals two important pieces of information, namely, (i) growth of the nanograins along preferred directions to form nanorods on amorphous quartz, and (ii) faceting of individual grains at the nanoscale leading to a nanowired morphology on the Si substrate. A definite correlation between these two nanoscopic features at the lattice scale is the preferred growth along the c-axis of hexagonal ZnO in both cases. It is possible that growth in the preferred direction is accompanied by texturing to establish nanodimensional stability in the microstructure during film growth.
Figure 1 :
Figure 1: XRD patterns of ZnO thin films deposited on (a) amorphous quartz and (b) silicon.
Figure 4 :
Figure 4: TEM micrographs of ZnO film deposited on silicon; ((a), (c)) nanoparticles aligned in wired-morphology, (b) cross-sectional SEM image showing growth of nanowires and (d) a high-resolution TEM image revealing the growth direction of a wire to be c-axis of hexagonal crystal.
Table 1 :
Ellipsometric data of n, k, and d for the thin films of ZnO.
The Model and Quadratic Stability Problem of Buck Converter in DCM
Quadratic stability is an important property of control systems. First, the model of the Buck converter in DCM is built based on the theories of hybrid systems and switched linear systems. Then, quadratic stability of SLS and a hybrid feedback switching rule are introduced, and the quadratic stability problem of the Buck converter is investigated. Finally, simulation analysis and verification are provided. Both the experimental verification and the theoretical analysis indicate that the output of the Buck converter in DCM shows excellent performance under quadratic stability control and the switching rules.
Introduction
A hybrid system (HS) is defined as a unified dynamic system in which discrete and continuous parts interact. DC-DC converters are typical HS because the operation of each mode can be regarded as a continuous dynamic subsystem and the turn-on or turn-off of the power switch as the discrete dynamics.
Switched linear systems (SLS), an important class of HS, have attracted considerable attention in modeling, analysis and design. Quadratic stability is one of the important problems of SLS. This problem is more complex since it depends on the switching rules as well as on the stability of all the subsystems.
Looking at the existing literature [1]-[5], Lyapunov theory is the dominant approach used in the study of stability. For example, the stability of DC-DC converters in CCM (Continuous Current Mode) was analyzed in [6]. The aim of this paper is to study the quadratic stability of DC-DC converters using a Lyapunov function based on the model in DCM (Discontinuous Current Mode).
Buck converter in DCM
The topology of the Buck converter is shown in Fig. 1. All components are assumed to be ideal. Its three operating modes in DCM are shown in Fig. 2.
SLS Model of Buck Converter in DCM
Supposing all of the components are ideal, with the state vector x(t) = [i_L, u_C]^T and the output vector y(t) = u_C, the state equations of the Buck converter in SLS form are [7]

ẋ(t) = A_i x(t) + B_i u(t), y(t) = C_i x(t), i = 1, 2, 3. (1)

The parameter matrices of the Buck converter in DCM are expressed as follows, according to Fig. 2.
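Since the parameter matrices themselves are not reproduced here, the following sketch shows how the three DCM mode matrices of an ideal Buck converter are commonly written for the state vector x = [i_L, u_C]^T; the component values are illustrative assumptions, not taken from this paper.

```python
import numpy as np

# Illustrative component values (not taken from the paper).
L_ind, C_cap, R_load, V_in = 1e-3, 100e-6, 10.0, 24.0

# State x = [i_L, u_C]^T, input u = V_in, output y = u_C.
# Mode 1: switch on, diode off.
A1 = np.array([[0.0,         -1.0 / L_ind],
               [1.0 / C_cap, -1.0 / (R_load * C_cap)]])
B1 = np.array([[1.0 / L_ind], [0.0]])

# Mode 2: switch off, diode on (free-wheeling interval).
A2 = A1.copy()
B2 = np.zeros((2, 1))

# Mode 3: switch and diode both off (i_L = 0, capacitor discharges into R).
A3 = np.array([[0.0, 0.0],
               [0.0, -1.0 / (R_load * C_cap)]])
B3 = np.zeros((2, 1))

C_out = np.array([[0.0, 1.0]])  # y = u_C
```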
3 Quadratic stability of SLS
Propaedeutics
Given a stable equilibrium point x̄. Since any other equilibrium point can be shifted to the origin via the change of variable x̃ = x − x̄, we can assume without loss of generality that the switched equilibrium is the origin x = 0.
Definition: the switched equilibrium x = 0 is said to be quadratically stable if and only if there exist a matrix P = P^T > 0 and a constant ε > 0 such that the quadratic function V(x) = x^T P x satisfies V̇(x) ≤ −ε x^T x along all system trajectories [8]; when subsystem i is active, V̇(x) = x^T (A_i^T P + P A_i) x. A convex combination of the subsystems is defined as A_eq = Σ_i α_i A_i, with α_i ≥ 0 and Σ_i α_i = 1.
Quadratic stability
Theorem 1: For system (1), the point x = 0 is a quadratically stable switched equilibrium if there exist coefficients α_i ≥ 0 with Σ_i α_i = 1 such that the convex combination A_eq is stable. Since the convex combination A_eq in Eq. 4 is stable, there exist two positive definite symmetric matrices P and Q such that A_eq^T P + P A_eq = −Q (Eq. 6). According to Eq. 4, Eq. 6 can be rewritten as Σ_i α_i (A_i^T P + P A_i) = −Q. From Eq. 5, we can also obtain a null term; a new equation then follows, where 0 < ε ≤ λ_min and λ_min is the smallest positive real eigenvalue of Q. Then Eq. 9 is equivalent to Eq. 10.
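A minimal numerical sketch of the certificate used in Theorem 1 is given below: it forms the convex combination A_eq, checks that it is Hurwitz, and solves the Lyapunov equation A_eq^T P + P A_eq = −Q with SciPy. The weighting coefficients and the default choice Q = I are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def quadratic_stability_certificate(A_list, alphas, Q=None):
    """Return (A_eq, P) with P = P^T > 0 solving A_eq^T P + P A_eq = -Q
    for the convex combination A_eq = sum_i alpha_i * A_i, if A_eq is Hurwitz."""
    A_eq = sum(a * A for a, A in zip(alphas, A_list))
    if np.any(np.real(np.linalg.eigvals(A_eq)) >= 0):
        raise ValueError("The convex combination is not Hurwitz.")
    n = A_eq.shape[0]
    Q = np.eye(n) if Q is None else Q
    P = solve_continuous_lyapunov(A_eq.T, -Q)   # solves A_eq^T P + P A_eq = -Q
    return A_eq, P

# Example with the three DCM mode matrices sketched earlier and an
# illustrative weighting alpha = (0.4, 0.3, 0.3):
# A_eq, P = quadratic_stability_certificate([A1, A2, A3], [0.4, 0.3, 0.3])
```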
Hybrid feedback switching rule
In this rule, a lower bound ε > 0 on the decay rate of V(x(t)) is imposed first; the active subsystem is then switched off only when it no longer satisfies the required constraint.
The procedure of the hybrid feedback switching rule is listed as follows (a sketch of a single switching step is given after the list).
(initialization) at time t = 0 activate the subsystem
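The printed procedure is truncated here; the sketch below illustrates one possible single step of such a hybrid feedback rule, consistent with the description above: the active subsystem is kept while it satisfies the decay constraint and, once the constraint is violated, the subsystem with the most negative decay rate is selected. The function and variable names are illustrative.

```python
import numpy as np

def switching_rule_step(x, active, A_list, P, eps):
    """One step of a hybrid feedback switching rule: keep the active subsystem
    while x^T (A_i^T P + P A_i) x <= -eps * x^T x, otherwise switch to the
    subsystem with the most negative decay rate."""
    decay = [x @ (A.T @ P + P @ A) @ x for A in A_list]
    if decay[active] <= -eps * (x @ x):
        return active                    # constraint still satisfied: no switch
    return int(np.argmin(decay))         # switch to the best subsystem
```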
Switching control of buck converter
The Buck converter in DCM includes three subsystems; consequently, there are three coefficients α_1, α_2 and α_3 in the convex combination of Eq. 3. According to Eq. 3, we have α_3 = 1 − (α_1 + α_2). The load resistor R undergoes two step changes during the simulation: from 10 Ω to 20 Ω at t = 0.2 s, and from 20 Ω to 10 Ω at t = 0.4 s. According to Fig. 4, the output voltage reaches its steady state with some overshoot and takes more time to stabilize than the converter in Fig. 3 when the load resistor changes. Hence, the hybrid feedback switching rule provides good transient and steady-state dynamic responses.
Conclusion
In this paper, the model of the Buck converter in DCM, based on the concepts and theory of SLS, is built first. Then its quadratic stability and switching rule are studied, and simulation results are presented. This approach can also be extended to other converters and to port-controlled Hamiltonian switched linear systems. The research results are beneficial to the development of nonlinear control strategies and to practical applications of power electronic systems.
Mode 2: w = 1, v = 0 (switch tube is off and diode is on); Mode 3: w = 0, v = 0 (switch tube and diode are both off). Then the convex combination can be written out accordingly. Q can be chosen as a semi-definite matrix because V(x) is not identically vanishing along all system trajectories for DC-DC converters. Let λ_min = 2/R be the smallest positive real eigenvalue of Q, with 0 < ε ≤ λ_min. The regions of the subsystems can then be divided by the hybrid feedback rule.
Figure 3 .
Figure 3. Simulation results of buck converter on hybrid feedback switching control rule.
Figure 4 .
Figure 4. Simulation results of buck converter in openloop control.
5 Quadratic stability of buck converter
Coordinate Transformation of Buck Converter
The state equations after coordinate transformation are given by Eq. 11 and Table 1.
Table 1 .
Parameter matrices of the buck converter after coordinate transformation.
Drivers behind energy consumption by rural households in Shanxi
Biomass is widely used by households for cooking and heating in rural China. Along with rapid economic growth over the last three decades, a growing number of rural households tend to use less biomass and more commercial energy such as coal and electricity. In this paper, we analyze the key drivers behind energy consumption and fuel switching by rural households, based on survey data on energy consumption by rural households in ten villages of Shanxi province in China. Our econometric results show that income growth can induce less use of biomass and more use of coal and modern fuels. However, there is no evidence that even wealthy households have abandoned biomass use in Shanxi, mainly due to the "free" access to land and agricultural resources in these villages. Previous wealth of a household, represented by house value, can lead to more time spent on biomass collection. Access to land resources has positive effects on biomass use and collection. Other key variables include education, household size, the number of elderly members, and coal price. We also find large differences between villages, indicating the importance of access to agricultural resources and markets.
Introduction
Although per capita consumption of energy in China is still relatively low, China overtook the U.S. in 2010 and is now the world's largest energy consumer [1]. In China, 32% of the total population use some form of traditional biomass energy and most of them live in rural areas [2]. China has a rural population of about 750 million, of which 377 million (50%) use traditional biomass. The widely used solid fuels such as coal and biomass can have environmental and health consequences. Will rural households reduce biomass use and increase commercial fuel use such as coal and electricity in the near future? What factors play a key role in driving the switching process? In the present paper, we provide new evidence to answer these questions by analyzing the key drivers behind energy consumption of rural households in Shanxi.
One of the major concerns pertaining to traditional biomass burning has been the increased deforestation resulting from fuelwood collection. However, fuelwood use in Shanxi is negligible, so this should not be an issue. On the other hand, biomass in the form of crop residues and straw, as well as coal, are important energy sources for cooking and heating. Although deforestation is not an issue for these types of fuels, there are still reasons to be concerned with indoor burning of such fuels, as it has several additional negative effects. The use of biomass and coal for cooking and heating can have severe effects on the environment, climate, agricultural production, productivity and health, to mention a few. By contrast, crop residues, such as corn stover and straw, help minimize soil erosion when left on the land, and the residues contain important elements that increase soil fertility. Hence, removing crop residues from the land can have detrimental effects on soil quality, such as reduced soil organic matter content, increased soil erosion, and increased land desertification [3].
Solid fuel burning has important health consequences due to the emission of health-damaging pollutants [4]. Almost 2 million people die prematurely every year from illness attributable to indoor air pollution from household solid fuel use [4]. Women and young children are particularly exposed as they spend more time indoors.
It is also worth mentioning that shifting to modern energy sources can improve the productivity of poor people. For instance, labor, biomass and land resources can be used for income-generating activities, and time will be saved if less time is spent on collecting biomass, cooking and cleaning.
For these reasons, as well as the fact that China is the world's largest energy user and CO2 emitter, it is of utmost importance to understand the factors driving the growth of Chinese energy use. The energy transition of rural households in this coal-rich province is typical. Our study contributes to this understanding by exploring the determinants of fuel use and fuel switching in rural areas of Shanxi, a coal-rich Chinese province that possesses 260 billion metric tons of known coal deposits, about one third of China's total. Rural households in the province tend to consume more coal and may transition slowly to modern energy. We collected survey data on energy used by rural households from 10 villages in Shanxi province and estimated regression models for the use of biomass, coal, and modern energy, as well as for labor input into biomass collection.
Theories of household energy use and energy transition
The "poverty-environment hypothesis" (PEH) was first proposed by the 1987 Brundtland Commission and the Asian Development Bank.According to this hypothesis, poor households rely more on environmental resources than non-poor do.Poor households have no option but to use local environmental resources such as biomass, and this hypothesis predicts that biomass use will be reduced when income grows [5].This hypothesis is supported by Dé murger and Fournier [5] who find that household economic wealth is a significant and negative determinant of firewood consumption in rural households in 10 villages in the Labagoumen township in northern China.
A hypothesis related to the PEH is the "energy ladder" hypothesis, which states that households switch from lower- to higher-quality fuels as their income increases [6][7][8][9]. According to Heltberg [8], the model conceptualizes fuel switching in three distinct phases, and the transition takes place as a response to higher income, urbanization and scarcity of biomass. The first phase is characterized by full reliance on biomass. In the second phase, households switch to "transition fuels" such as kerosene, coal and charcoal, while in the third and final phase, they switch to energy types such as LPG, natural gas or electricity. There may be several reasons why households would want to switch to fuel types higher up on the energy ladder, for instance higher efficiency and reduced indoor air pollution, but at the same time these fuel types are often more expensive than the lower-quality fuels. According to Masera, Saatkamp and Kammen [10], the energy ladder model implicitly assumes that fuel types higher up on the ladder carry a certain status, and that households desire to move up the ladder to demonstrate an increase in socioeconomic status.
The energy ladder hypothesis predicts that low-quality fuel types are replaced by fuel types higher up on the energy ladder as income increases, i.e. linear or unidirectional fuel switching, while in reality we often see that multiple fuel types are used [8,10]. There may be several reasons why households choose to use multiple fuel types, for instance risk minimization (fuel security), or that the different fuels are not perfect substitutes, such that using multiple fuels is advantageous compared with using only one type.
Masera, Saatkamp and Kammen [10] introduce an alternative fuel model, called the "energy stacking" model. Their model tries to account for the empirical observation that many households adopt new fuels without abandoning the old ones, and hence it is an alternative to the "energy ladder" model. Their model also recognizes that several factors other than income influence fuel use.
Lastly, the Environmental Kuznets Curve (EKC) hypothesis postulates that there is an inverted U-shaped relationship between income and environmental degradation. This implies that initially, rising income or living standards increase pollution, while later on pollution decreases [11,12]. The EKC hypothesis has been proposed, and tested, for different pollutants, for instance CO2, SO2, CO, NOx and SPM 1 . Evidence of the existence of the EKC is mixed. Only some air quality indicators (especially local ones) show evidence of an EKC, and the turning points, i.e. when the emissions start decreasing, have been found to vary between the different pollutants [13].
Obviously, these theories of fuel use and fuel transition lead to different predictions of how energy use responds to an increase in income. According to the PEH and the "energy ladder" hypothesis, biomass use will decrease; while the energy ladder hypothesis has clear predictions for which fuels biomass will be substituted with, the PEH does not. The "energy stacking" model suggested by Masera, Saatkamp and Kammen [10], on the other hand, predicts that households adopt new fuels without necessarily abandoning the old ones, but its predictions for what will happen to biomass use are not clear. The EKC hypothesis predicts that emissions will first increase and later decrease in response to higher income, but it does not predict at which income level the turning point for emissions will occur. In addition, since its prediction is pollutant-specific, the implications for biomass and coal burning are not obvious, as combustion of these fuels produces several pollutants, and emissions may differ between coal and biomass. A theory of wave transition in household energy use has been proposed to capture the features described by the above models [14]. The wave transition theory assumes a bell-shaped curve for each energy type used by households, with the main energy source changing over time and with income.
After a review of previous studies on household energy use, Kowsari and Zerriffi [15] proposed a conceptual framework for studying household energy use. In their framework, all economic, cultural, and behavioral factors affecting household energy use are organized and associated with three dimensions: energy services, devices and carriers. While this framework is useful for understanding the causal relations between key factors affecting household energy use, it demands considerable data and multiscale approaches.
Empirical studies
The empirical literature on factors influencing fuel use is quite extensive. Still, evidence is mixed regarding which model best describes actual fuel use and fuel switching. We will concentrate on the part of the literature focusing on rural households, as rural households' energy use can differ considerably from urban households'. Rural households often face additional constraints on their use of commercial energy, as markets for energy and energy appliances can be limited or non-existent. Their fuel use is often determined to a larger extent by local availability as well as the transaction and opportunity costs of collecting the fuel, rather than by budget constraints, prices and costs [16].
Income (wealth or expenditures in some studies) has been found to be an important determinant of total energy demand in several studies [5,8,[17][18][19][20], but as Jiang and O'Neill [18] point out, income may have to rise substantially for absolute biomass use to fall, and other determinants are also important. In fact, most studies recognize that several factors influence fuel use in addition to income; however, there is no consensus regarding which factors determine biomass and/or coal use and fuel switching. Examples of other factors that have been found to have an influence are geography/topography and climate, access to different fuel types, access to markets, infrastructure, fuel prices, household size, education level, size of land area, and distance to forest (in the case of fuelwood) [17,[21][22][23]. According to Jiang and O'Neill [18], the consensus is limited to income and household size, as almost all studies find that these are important determinants of fuel switching. For the other suggested factors, evidence is mixed. In the present paper, we provide new empirical evidence on key determinants of energy used by rural households in China, taking Shanxi as an example.
Data
We use data from two surveys conducted in October 2010 in 10 villages of Shanxi: the 2010 Rural Household Survey and the Questionnaire on rural energy consumption in China. The Rural Household Survey is a yearly survey carried out in all provinces of China by the Ministry of Agriculture 2 . In addition to demographic data, it collects data on employment and income, land situation, assets, agricultural production and sales, as well as household expenditures. The Questionnaire on rural energy consumption was carried out for the first time in 2010, and only in Shanxi province. Both surveys interviewed 954 rural households from Shanxi and were conducted simultaneously by the local office of the Ministry of Agriculture. The selected ten villages are evenly located across Shanxi to represent the overall rural households in the province.
Unfortunately, the data on rural energy consumption were not complete in all villages and we had to exclude three villages completely. In addition, we omitted households with no income information or zero household members. We ended up using a dataset of 571 households.
Shanxi province and the villages
Shanxi Province is one of the five provinces that constitute the North China Region, with an area of 156,300 km². The province is bordered by the Yellow River (Huang He) in the west, Beijing and Hebei Province to the east, and Inner Mongolia to the north. The province contains 119 counties, with its capital at Taiyuan City. The total population of Shanxi was 35.7 million in 2010, of which 54% lived in rural areas [24]. Generally, the central axis of the province has the highest population density, the most populated counties being Taiyuan and Datong.
Shanxi is a major coal provider. The province possesses 260 billion metric tons of known coal deposits, about one third of China's total deposits. As a result, coal production in Shanxi represents about a quarter of total coal production in China, two fifths of coke production and a seventeenth of total power generation [25]. Even so, Shanxi, together with other central and western provinces, has lagged behind in economic growth since the reforms in 1978. In 2010, income per capita in Shanxi was 81% of the Chinese average [24]. The province is largely mountainous and has a continental monsoon climate, which means that most of the precipitation falls in summer. Most of the province lies within the Loess Plateau. The fine texture of the loess soils makes the area highly susceptible to wind and water erosion [26], and high rates of erosion have caused problems for agricultural production for at least 3000 years [27]. The loess soil is still highly fertile and suitable for agricultural production [28]. Winter wheat and maize used to be the main agricultural crops in the region [29], but according to our data wheat is not commonly grown anymore. Today, the most important crops are maize, fruits and vegetables.
Income sources in rural Shanxi
Even in rural areas of Shanxi, agriculture is still a major income source, but its importance is declining and production systems vary from village to village. In villages close to urban centers, more valuable but perishable products such as fruits and vegetables are becoming economically more important than staple grains. In our sample (Table 1), two villages (7 and 8) produce fruits as their economically most important agricultural product. Both villages are situated closer than 5 kilometers to urban centers. In these relatively urban areas, household incomes are also above the average for all villages. The poorest villages (2, 6 and 9) are located in remote mountainous areas where transport is not convenient and agricultural conditions are poor. Source: Household survey conducted by the authors. The last-year data were collected in October 2010. Standard deviations are reported in parentheses for household income and land area.
Energy use
Our data show that, on average, a rural household in Shanxi consumes 1863 kg of standard coal equivalent (kgce) of energy. Since the average household size is 3.72, one person consumes about 500 kgce. This is almost 10% higher than the national average of 464 kgce per capita in rural China in 2008 [30]. Yao et al. (2012) found that biomass use decreased from 80% of residential energy use in rural China in 2001 to 70% in 2008. The picture is rather different in rural Shanxi, where coal is the most important energy source and accounts for 60% of total energy use, while biomass accounts for only 21% according to our survey. Of the households using biomass, 55% use only straw, mainly from their own land, and 36% use branches, while the others use a combination of branches and straw. Only 3% of households use solely biomass collected from public land. Households using biomass spend on average 18 days per year collecting biomass. Elderly household members collect close to 30% of the biomass, while other adult members of the household collect the rest.
Biomass is mainly used for cooking and only a small proportion is used for heating. In addition to biomass, people also use coal, electricity and gas for cooking. Almost 80% of the households use biomass for cooking, 63% use coal, 54% use electricity and 26% use gas. Electricity is commonly used for cooking rice: 48% of households have an electric rice cooker, while 13% own an electric stove. Almost 80% of the households use a combination of at least two of these energy sources for cooking and 40% use three or more energy sources for food preparation (Table 2). Source: Household survey conducted by the authors. The last-year data were collected in October 2010.
Table 2. Energy sources used for cooking.
Our data indicate that domestic energy use per capita is stable at lower income levels, but increases at high income levels (see Table 3). Coal use per capita seems to increase in both absolute and proportional terms as income increases. The use of modern, more efficient energy sources such as electricity and gas is very low in all income categories, but it also seems to increase in both absolute and proportional terms as income increases. In the lowest income quartile, electricity and gas account for 4% of all energy use, increasing to 6% in the highest income quartile. Whereas the use of coal and modern energy sources increases with income, the use of biomass decreases in both absolute and relative terms. In the lowest income quartile, 32% of all domestic energy comes from biomass, compared with only 16% in the highest income quartile.
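The quartile figures reported in Table 3 can be tabulated directly from the household records. The following sketch shows one way to do this with pandas; the column names are placeholders, not the survey's actual field names.

```python
import pandas as pd

def energy_by_income_quartile(df):
    """Per-capita fuel use (kgce) and fuel shares by household income quartile.

    `df` is assumed to hold one row per household with columns
    'income', 'members', 'biomass_kgce', 'coal_kgce' and 'modern_kgce'
    (placeholder names, not the survey's actual field names).
    """
    quartile = pd.qcut(df["income"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
    per_capita = df[["biomass_kgce", "coal_kgce", "modern_kgce"]].div(df["members"], axis=0)
    by_quartile = per_capita.groupby(quartile).mean()
    shares = by_quartile.div(by_quartile.sum(axis=1), axis=0)  # fuel shares within each quartile
    return by_quartile, shares
```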
Methodology: Theoretical and empirical specification
In the previous section, we saw that people use biomass mainly for cooking. Coal, electricity and gas are alternative sources of cooking energy. We therefore focus on biomass, coal and modern energy sources (electricity and gas). While coal and modern energy are traded in the market, there is no buying or selling of biomass in the villages. Households collect biomass only for their own consumption, and markets for biomass are missing. Thus, the joint production and consumption of non-commercial fuels suggests the use of a non-separable household model for analyzing household energy choices [31]. Chen, Heerink and van den Berg [17] show that optimal labor allocation leads to a specific amount of biomass collected and a certain monetary income from agriculture and wage employment. The biomass is used for consumption, while the money earned is spent on coal, modern energy and other goods.
We aim to investigate the factors determining the use of biomass, coal and modern fuels, and a possible switch from biomass to other energy sources, in rural Shanxi. Therefore, we focus on four dependent variables: the quantity of biomass used (Qbm), the time spent on biomass collection (Tbm), the quantity of coal used (Qc) and the quantity of "modern fuels" (electricity and gas) used (Qmf). Each of these variables (represented by Q) can be expressed as a reduced-form function of explanatory variables, Q = f(xc, xf, Pco, …).
In this function, household characteristics related to consumption (xc) are represented by household size, the share of labor-age members, the number of elderly members, the mean education of adult household members and household wealth, which includes two indicators: income 3 and the value of the house owned by the household. Farming endowment (xf) is represented by the amount of land owned by the household. The coal price (Pco) is included in our model because households in a village can face different coal prices after selective subsidies from village collectives are deducted from the market coal price. Prices of other goods are not included since they are assumed to be the same for all households living in a village and will be captured by village dummy variables. Table 4 shows the statistical description of these variables.
The specific functional form of the reduced-form equations cannot be derived analytically as long as the model is non-separable. Hence, we assume that the functions are linear and test for second-order effects of all the explanatory variables.
Table 5 shows the expected signs of all the explanatory variables used in the regression analysis. Household characteristics such as education, household size and wealth variables will have a direct effect on consumption preferences. They might have either positive or negative effects on consumption goods that require energy inputs.
Households with higher education levels are expected to lead a more modern lifestyle and therefore use less biomass and more commercial energy sources. The same argument holds for wealthier households. We believe they will have a higher alternative value of their time and spend less time on collecting and consuming biomass, while they also have more money to spend on energy purchased from the market, including coal, gas and electricity. We assume that larger households consume more food and therefore need more energy for preparing food. Since biomass is mainly used for cooking, we assume that bigger households will tend to use more biomass. Bigger households will also demand more use of appliances using electricity, and therefore we expect a positive effect on modern fuel consumption. Coal is used for both cooking and heating, and we expect a positive correlation between coal consumption and household size.
Forests are more or less absent in our study area and people mainly use biomass from their own land. We therefore expect to find a positive correlation between access to land and both the consumption of biomass and the time used to collect biomass. On the other hand, we do not expect any correlation between access to land and the use of fuels other than biomass.
The elderly members of a household may prefer a more traditional lifestyle, meaning that a household with one additional elderly member may use more traditional biomass, spend more time on biomass collection, and use less commercial energy. Hence, we expect positive effects of the number of elderly members on biomass use and its collection time, but negative effects on coal use and modern fuel use.
Household labor endowment should have a positive effect on the use of labor (and leisure), including labor for biomass collection. We therefore expect a positive effect of the share of labor-age members in a household on biomass consumption.
In all villages, households could choose between several energy sources. Biomass is the most commonly used fuel for cooking, but 25% of the households do not use biomass at all. Coal is used (for both cooking and heating) by almost 90% of the households, and gas is also frequently used. Electricity is the only energy source used by all households; hence, the modern fuel model is estimated by OLS regression. Possible heteroskedasticity is dealt with using the robust option of the regress command in Stata 4 . Because many households do not use biomass or coal, there are zero values for coal and biomass consumption and for biomass collection time. Thus, Tobit (censored) regression methods are applied to the three models other than the modern fuel model.
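A minimal estimation sketch is given below. The OLS-with-robust-errors part corresponds to standard statsmodels usage; Tobit is not built into statsmodels, so a left-censored-at-zero log-likelihood maximized with SciPy is shown instead, as one way to reproduce that part of the analysis. Variable names are placeholders.

```python
import numpy as np
from scipy import stats

def tobit_negloglik(params, y, X):
    """Negative log-likelihood of a Tobit model left-censored at zero."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    ll = np.where(
        y > 0,
        stats.norm.logpdf(y, loc=xb, scale=sigma),   # uncensored observations
        stats.norm.logcdf(-xb / sigma),              # observations censored at zero
    )
    return -ll.sum()

# Illustrative use (X: regressor matrix incl. constant, y_bm: biomass use):
# start = np.r_[np.zeros(X.shape[1]), 0.0]
# fit = scipy.optimize.minimize(tobit_negloglik, start, args=(y_bm, X), method="BFGS")
#
# The modern-fuel equation can be estimated by OLS with robust errors, e.g. with
# statsmodels: sm.OLS(y_mf, X).fit(cov_type="HC1").
```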
We considered squared terms of all the explanatory variables listed in Table 5. We found that only squared income and squared land area were statistically significant in the regressions for biomass and total labor. All other squared terms did not improve the regression results and were removed.
Results and Discussion
Table 6 reports the regression results for biomass consumption, time used to collect biomass in the survey households, coal consumption and modern fuel consumption.
Household income has significant effects in all of the regressions. As expected, higher household income leads to reduced use of biomass and increased use of both coal and modern fuels. The same income growth induces higher rates of increase in coal and modern fuel use than the rate of decrease in biomass use for the rural households. This shows that wealthier households increase total energy use and switch from biomass to coal and modern fuels, which is in line with the "energy ladder" theory of fuel switching away from biomass as income increases. The reduction in biomass use can be explained by the increasing labor cost associated with income growth. This is confirmed by the negative coefficient of income in the regression for time used for biomass collection, indicating that higher income reduces the time a household spends on biomass collection. Furthermore, the reductions in biomass use and collection time diminish with income growth, as shown by the positive coefficients of squared income. This may indicate that biomass will not be abandoned for a long period, until household income reaches a considerably higher level.
This phenomenon is consistent with the energy stacking theory and was called a "floor effect" by Démurger and Fournier [5]. Based on a larger dataset on energy use in the three provinces of Shanxi, Zhejiang and Guizhou, Zhang, Wei, Glomsrød, et al. [20] found similar evidence of the "floor effect." House value is the other wealth variable we use in our regressions. Surprisingly, this wealth variable was only positively and significantly correlated with the time for biomass collection. Moreover, the house value seems positively correlated with the use of all three energy types, even though the correlations are not statistically significant. This makes us doubt whether the house value is a suitable indicator of the wealth owned by the households. A regression shows that the house value can explain only 10% of the variation in income in the same dataset. If income is taken as an indicator of the current and potential ability to increase the wealth of a household, then the house value can be taken as an indicator of the previous wealth owned by the household. When a household owns a high-value house, it may have spent a large share of its savings to buy the house and become short of current savings, implying a stricter cash constraint for current consumption. Hence, the household may use more time for biomass collection, and use more biomass as well, to reduce current cash spending to some extent. This can explain the unexpected positive effects of the house value on the time for biomass collection and biomass use. Source: Household survey conducted by the authors. The last-year data were collected in October 2010. Standard errors in parentheses: * p < 0.10, ** p < 0.05, *** p < 0.01.
Access to land is a highly significant factor when it comes to biomass consumption and collection time. Households with more land also use more time for biomass collection and tend to use more biomass, as most of the biomass is either straw or branches collected from their own fields. Forests and other public fuelwood sources are almost nonexistent in this area. Moreover, the positive effect of land on biomass use and collection time seems to diminish with larger land area owned by the households, since the coefficients of squared land area are negative in both regression models. Zhang, Wei, Glomsrød et al. [20] also found a positive effect of land area on biomass use, but did not find evidence of a diminishing effect with larger land area. On the other hand, access to land does not significantly affect the consumption of coal and modern fuels.
As expected, the estimated coefficient of the coal price is negative in the coal use model and positive in the other three models, even though it is not statistically significant in the models for coal use and time for biomass collection. When the coal price becomes higher, households use more alternative energy sources, including modern fuel and biomass, rather than coal, in order to reduce energy costs. This substitution of energy sources may indicate that the price of an energy source can have considerable effects on the energy transition of rural households. Given the low costs of biomass collection, we expect that rural households will continue to use biomass for a long period in the future.
All the estimated coefficients of education have the expected signs except in the coal use model. Households with a higher mean education level spend less time collecting biomass and consume less biomass and coal, but more modern fuel. Even though the coefficients on education are significant only in the modern fuel and biomass collection time models, our results indicate that educated households are switching to modern clean fuel from "dirty" coal and traditional biomass in recent years of serious air pollution problems in North China. Interestingly, Chen, Heerink and van den Berg [17] found the same result for biomass collection time, but the opposite result for the use of biomass and coal in two villages of Jiangxi province in South China, where it was possible to switch between fuelwood and coal consumption. In their study, educated households seemed to be more efficient in biomass collection and still used more coal. This was around 2000, when air pollution was not yet a big issue in China, particularly not in South China.
Household size has positive effects on the use of all three energy sources and on biomass collection time, statistically significant in the cases of coal and modern fuel use. Bigger households seem to use more energy, particularly coal and modern fuels, than smaller households do. Furthermore, households with more labor relative to the household size use significantly more coal and spend more time on biomass collection than households with more dependents do. This could indicate that available time, or the cost of labor, could be a limiting factor for collecting and using biomass in 2010, after three decades of the family planning policy implemented in China. This result was not found in Chen et al. (2006)'s study based on biomass data from around 2000 in three villages of Jiangxi.
Our results show that the composition of the household, particularly the number of elderly members, may also affect the choice of energy use. More elderly members in a household have positive effects on coal and biomass use and on the time for biomass collection. It seems that the elderly in the households prefer to use traditional biomass and coal rather than modern fuel. In the villages, the elderly may spend significantly more time on biomass collection to continue their traditional lifestyle of energy use and to help the households reduce their spending on commercial energy sources. We also checked whether the number of children had effects on our dependent variables, but the estimated coefficients were not statistically significant even at the 10% level.
Finally, we find big, systematic and significant differences between the villages when it comes to the use of different energy sources. Villages 1, 3 and 6 use significantly less biomass and more coal than Village 2, while Villages 7 and 8 use more biomass and less coal, after controlling for other key determinants such as access to land resources and income. These differences could be due to differences in relative prices between the villages. Another important difference might arise from the fact that the households in Villages 7 and 8 are fruit producers and have a higher share of income from agriculture than households in all other villages. Having access to branches from trees, compared to straw from maize as in the other villages, probably gives more secure and plentiful access to biomass. Households in these two villages also spend more time collecting biomass. We also found that in Village 9, households spend significantly more time collecting biomass than in Village 2, even though they do not consume more, to some extent because Village 9 is a very poor, remote village in the mountains where the land is dry and barren. In this village, it requires more labor to collect the same amount of biomass as in productive areas.
Conclusion
In this paper, we have reported the results of a survey on household energy use in ten villages of Shanxi conducted in the fall of 2010 and analyzed the key determinants of their energy use and transition. Our findings support the view that the energy use of these households switches from traditional biomass, via coal, to modern fuels including gas and electricity along with income growth. During the energy transition, biomass is not abandoned as coal and other fuels are adopted, supporting the "energy stacking" model. Hence, people do not necessarily abandon biomass use when their income increases, or when they gain access to modern fuel types.
Households with a high house value may spend more time on biomass collection, as indicated by our results. This may imply that the house value, as an indicator of previous wealth, plays a different role from a current wealth indicator such as income. The coal price is also an important positive factor for biomass use and modern fuel use: a higher coal price can promote more use of biomass and modern fuels.
Another key determinant of biomass use is access to land. Our results show that labor cost can be a constraint on collecting biomass. This has probably changed over the last 10-15 years as an increasing number of working-age household members have migrated to the cities. However, households may still consider biomass a "free" by-product of agriculture, and even wealthy households continue to use biomass in the villages.
We also found significant effects on energy use of several household characteristics such as education, household size, and the number of elderly members. Educated households tend to use more modern fuels and less traditional biomass and coal, while more elderly members in a household have the opposite effects. Larger households tend to use more coal and modern fuels.
Finally, we noticed considerable differences between villages. This may indicate the importance of access to agricultural resources and markets. Hence, place- and context-specific knowledge is probably required to design optimal policies for reducing indoor burning of biomass and coal.
Table 3 . Per capita use of yearly energy sources in income quartiles.
Household survey conducted by the authors. The last-year data were collected in October 2010. Standard deviations are reported in parentheses.
Table 4 . Summary statistics of the regression variables.
Source: Household survey conducted by the authors. The last-year data were collected in October 2010.
Cone Beam CT Imaging of the Paranasal Region with a Multipurpose X-ray System—Image Quality and Radiation Exposure
Besides X-ray and fluoroscopy, a previously introduced X-ray scanner offers a 3D cone beam option (Multitom Rax, Siemens Healthcare). The aim of this study was to evaluate various scan parameters and post-processing steps to optimize image quality and radiation exposure for imaging of the paranasal region. Four human cadaver heads were examined with different tube voltages (90-121 kV), dose levels (DLs) (278-2180 nGy) and pre-filtration methods (none, Cu 0.2 mm, Cu 0.3 mm and Sn 0.4 mm). All images were reconstructed at 2 mm slice thickness, with and without a metal artifact reduction algorithm, in three different kernels. In total, 80 different scan protocols and 480 datasets were evaluated. Image quality was rated on a 5-point Likert scale. Radiation exposure (mean volume computed tomography dose index (CTDIvol) and effective dose) was calculated for each scan. The most dose-effective combination for the diagnosis of sinusitis was 121 kV / DL 278 / 0.3 mm copper (CTDIvol 1.70 mGy, effective dose 77 µSv). Scan protocols with 121 kV / DL 1090 / 0.3 mm copper were rated sufficient for preoperative sinus surgery planning (CTDIvol 4.66 mGy, effective dose 212 µSv). Therefore, imaging for sinusitis and for preoperative sinus surgery planning can be performed at diagnostic image quality and low radiation dose levels with a multipurpose X-ray system.
Introduction
Sinusitis is a frequent disorder and one of the most common conditions treated by primary care physicians [1]. Each year in the United States, sinusitis affects one in seven adults and is diagnosed in 31 million patients [2]. The direct costs of sinusitis, including medications, outpatient and emergency department visits, ancillary tests and procedures, are estimated at $3 billion per year in the United States. Sinusitis is the fifth most common diagnosis for which antibiotics are prescribed [2,3] and can occur as an acute or chronic infection. Multi-slice computed tomography (MSCT) is frequently used for imaging of the paranasal region [4,5]. It provides essential information for planning the surgical approach, image-guided navigation, and robotic surgery [6,7]. According to the current guidelines, it is mandatory before functional endoscopic sinus surgery (FESS) to visualize individual anatomic variants and the extension of the disease [8]. CT scans of the paranasal sinuses have superseded conventional standard radiography, as they offer the surgeon more precise anatomic information on the complex anatomy of the sinus cavities and their drainage pathways, in particular the ostiomeatal complex [9,10]. To adhere to the ALARA principle ("as low as reasonably achievable"), different approaches to lower radiation exposure in paranasal sinus CT have been proposed, such as low-kV scanning and spectral shaping [11,12]. Alternatively, cone beam CT (CBCT) has been shown to be an efficient alternative to conventional CT scans to identify sinus disease and serves as a guide for surgical intervention [13]. With its flat-panel detector technology, it is characterized by high spatial resolution, providing excellent detail of the bony anatomy, with the drawback of limited soft tissue information [14]. Because of the relatively lower costs compared with modern helical CT scanners, CBCT systems with small detectors, also referred to as digital volume tomography (DVT), are increasingly used for imaging of the paranasal sinuses. Originally developed for dentistry and orthodontic diagnostics, DVT systems with increasing field of view (FoV) sizes, enabled by bigger flat detectors and further advances in technology, are becoming increasingly popular among ENT specialists [15,16]. They are even able to perform tomography independently using DVT for diagnostic and preoperative imaging [17], provided they have the appropriate technical qualifications in radiation protection. For this reason, an X-ray machine with a 3D CBCT option would be interesting for X-ray practices, in order to offer CBCT in addition to classic X-rays with a small fleet of equipment.
A new type of X-ray machine, developed primarily for digital radiography and fluoroscopic examinations, is additionally equipped with flat-panel detector technology for CBCT [18]. With this multipurpose device, 3D imaging of the paranasal sinuses can be performed. Different pre-filtration options such as copper and tin, metal artefact reduction algorithms (as metal implants can affect image quality in CBCT just as in MSCT) and different kV levels are available. As imaging of the sinuses is among the most commonly requested examinations in otorhinolaryngology, this additional option would be a considerable advantage in terms of cost efficiency, as fluoroscopic examinations are becoming increasingly rare.
The aim of this study was to define and evaluate dose-optimized protocols for the diagnosis of sinusitis and preoperative sinus surgery planning using different kV levels, pre-filtration options and metal artefact reduction algorithms.
Image Acquisition
In total, four cadaver heads were scanned with a multipurpose X-ray system (Multitom Rax, Siemens Healthcare). The 3D CBCT has an amorphous silicon flat detector with a cesium iodide scintillator of 43 × 43 cm area and an image matrix of 1420 × 1436 pixels with a pixel size of 296 µm (both after 2 × 2 binning). The reconstructed voxel size was 0.5 mm (isotropic). The maximum field-of-view diameter for reconstruction was 23 cm. Reconstruction is based on a filtered back projection algorithm (Feldkamp, Davis, Kress) with additional correction methods to compensate for noncircular movement [19]. The X-ray tube (OPTITOP; Siemens Healthcare) has a 0.6 mm focal spot. Various tube voltages (90, 100 and 121 kV) and dose levels (DLs) can be selected as setting parameters, resulting in different scan protocols. Preset and selectable DLs are 278, 548, 1090 and 2180 nGy. The X-ray tube target material is tungsten. The DL serves as the target dose for the automatic exposure control; the aim is to keep the dose at the image receptor constant for all projections. The tube current is modulated by the system and ranged between 44 and 374 mAs. The detector and tube are attached to two telescopic arms with a hinge. While moving along a predetermined trajectory, fluoroscopic images are acquired. The trajectory is defined by its position and scanning angle range. All scans were performed after geometric calibration of the trajectory. For the upright trajectory, a set of 512 images was acquired at a frame rate of 8 fr·s−1, resulting in a scan time of 20 s over a 187° arc travel (system version VE30B, product status 2017).
First, one cadaver head was examined with all available scan parameters to obtain an impression of image quality and radiation exposure. Three different tube-voltage settings (90, 100 and 121 kV) and four different DLs (278, 548, 1090 and 2180) were evaluated, each DL with and without pre-filtration of the X-ray beam with a 0.2 mm copper filter ("Cu") to narrow the energy spectrum. Images were reconstructed in 2 mm axial and coronal slice thickness and post-processed with three different image kernels ranging from soft to hard ("smooth", "medium", "bone"). Additional post-processing was performed with and without the metal artifact reduction (MAR) algorithm [20].
The subsequent scans of the three remaining heads were performed to further optimize image quality for selected protocols (121 kV), based on the results of the first cadaver head. Four different DLs (278, 548, 1090 and 2180) and pre-filtration with 0.2 mm copper, 0.3 mm copper and 0.4 mm tin (Sn) were evaluated.
To find protocols with the best compromise between image quality and radiation exposure for preoperative surgery planning, a volume computed tomography dose index (CTDIvol) of 8 mGy was considered the upper limit [21].
Image Quality Assessment
All datasets were stored in DICOM format and protocol-related information was removed. A 3D post-processing platform (syngo.via, Siemens Healthcare) was used to display all datasets. Two board-certified radiologists (9 and 10 years of experience in head and neck radiology) analysed the images. The window setting could be changed by the raters at their own discretion; the default setting was W 2000 / C 0. For the first cadaver head, the overall impression of the images was assessed according to the subjectively perceived image impression and rated as sufficient vs. insufficient/noisy image quality.
Image quality for the remaining three cadaver heads was measured by using an ordinal performance scale with 5 levels (5-point Likert scale). The demarcation of 10 anatomic regions (lamina papyracea, lamina cribrosa, nasal septum, ethmoid air cells, sinus walls, orbital floor, lacrimal duct, carotid canal, tympanic cavity, mastoid cells) and incidental "pathologic" conditions (paranasal and mastoid fluid collections) was rated as follows: 5 excellent, 4 good, 3 moderate, 2 sufficient, 1 insufficient image quality.
The overall rating for each protocol was defined by the worst rating among these 10 anatomic structures and two "pathologic" conditions. An image quality of 3 was considered sufficient for preoperative evaluation for sinus surgery, and an image quality of 2 sufficient to detect or rule out sinusitis [11]. Noise measurements were not performed since noise values do not adequately reflect the diagnostic value in high-contrast objects [11]. For the evaluation of MAR, a 3-point Likert scale was used: 3 = no beam hardening despite metal implants; 2 = moderate beam hardening artifacts, cortical structure of maxilla/mandible definable but not reliably assessable everywhere; 1 = strong beam hardening artifacts, impairment of anatomical structures. Cohen's kappa was calculated for inter-rater reliability using SPSS Statistics 21.0 (IBM Corp).
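For illustration only, a minimal Python sketch of the agreement statistic used here (Cohen's kappa) is shown below; the rating arrays are hypothetical placeholders, not the study data, and the study itself used SPSS.

```python
# Illustrative only: inter-rater agreement (Cohen's kappa) for two readers
# rating the same datasets on a 5-point Likert scale. The scores below are
# hypothetical placeholders, not the study data (which were analysed in SPSS).
from sklearn.metrics import cohen_kappa_score

reader_1 = [5, 4, 3, 3, 2, 4, 5, 3, 2, 1]  # hypothetical Likert scores
reader_2 = [5, 4, 3, 2, 2, 4, 4, 3, 2, 1]

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values around 0.6-0.8 indicate substantial agreement
```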
Estimation of Radiation Exposure
CTDIvol was calculated after a modified scan (to account for geometric inequality) of a 16 cm standard head phantom (data not shown, see [22]). CTDIvol was calculated following the IEC (International Electrotechnical Commission) 60601-2-44 A1 standard method [23]. The dose area product (DAP) was taken as logged by the device. The effective dose was derived from the dose length product (DLP) using a conversion factor in accordance with International Commission on Radiological Protection Publication 103 (ICRP 103).
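The ICRP 103-based formula is not reproduced above; a common implementation of the DLP-based approach, E = k · DLP with a region-specific conversion coefficient k, is sketched below as an assumption. The head-region coefficient and the scan length in the example are assumed values, not taken from the study.

```python
# Minimal sketch of a DLP-based effective dose estimate, E = k * DLP.
# The conversion coefficient k for the head region is an assumed literature
# value (~0.0021 mSv / (mGy*cm)); the study may have used a different one.

def effective_dose_msv(ctdi_vol_mgy: float, scan_length_cm: float,
                       k_msv_per_mgy_cm: float = 0.0021) -> float:
    """Effective dose in mSv from CTDIvol (mGy) and scan length (cm)."""
    dlp = ctdi_vol_mgy * scan_length_cm  # dose length product in mGy*cm
    return k_msv_per_mgy_cm * dlp

# Example with the reported low-dose sinusitis protocol (CTDIvol 1.70 mGy);
# the 20 cm scan length is a hypothetical value chosen only for illustration.
print(f"{effective_dose_msv(1.70, 20.0) * 1000:.0f} uSv")
```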
Results
Cohen's kappa for inter-rater reliability, calculated from the raw data, was 0.74.
Variation of Tube Voltage
Three protocols with a DL of 278 at 90 kV, 100 kV and 121 kV and two protocols with a DL of 548 at 100 kV and 121 kV had a CTDIvol < 8 mGy. Additional copper pre-filtration further reduced the radiation dose without deteriorating image quality (Figure 1A,B). Pre-filtration of the beam with 0.2 mm copper resulted in an average reduction in radiation exposure of 32%. With copper pre-filtration, a DL of 548 at 90 kV and a DL of 1090 at 121 kV additionally achieved a CTDIvol < 8 mGy. In total, 12 protocols with a CTDIvol < 8 mGy were compared, and the 121 kV setting delivered the best compromise between radiation exposure and subjectively perceived image quality.
Metal Artifact Reduction Algorithm
All four cadaver heads had dental implants. The metal artifact reduction algorithm was beneficial in all examined datasets (Figure 2) (mean Likert scale value 2 with MAR vs. 1 without MAR). Fewer beam-hardening artifacts were visible with MAR, and the cortex of the mandible and maxilla as well as the bony structures were better delineated (Figure 2A,B). MAR did not miscalculate or disturb the datasets (Figure 2C). Beam-hardening artifacts of metal implants were visible even in the vertebral bodies in the images without MAR, but not in images with MAR. Despite MAR, the bony fixation of the implants was not visible in some places, depending on the angle of rotation and the direction of the beam-hardening artifacts (Figure 2A).
Kernels
Hard kernels substantially increased image noise (Figure 3A), whereas soft kernels (smooth) led to a blurred visualisation of the bony edges, yet with significantly less image noise (Figure 3C). In terms of image impression, the medium kernel (Figure 3B) delivered the best compromise between image noise and sharp bony edges.
Pre-Filtration with 0.2 vs. 0.3 mm Copper Filter
Based on the results of cadaver head 1, the 121 kV protocols were further evaluated with additional copper pre-filtration. Image quality was comparable between datasets scanned without a filter, with a 0.2 mm copper filter and with a 0.3 mm copper filter. The radiation dose was reduced by approximately 40% relative to scans without pre-filtration when a 0.2 mm copper filter was used, and by 42–60% when a 0.3 mm copper filter was used. See Table 1 for detailed results and Figure 4 for representative images with copper pre-filtration.
Table 1. Minimum achieved scores of the four skulls depending on the scan protocol (mean value of both readers; stars * mark a difference in the minimum scores of the two readers). Additional information: average radiation exposure values for each protocol (DAP = dose area product, CTDIvol = volume computed tomography dose index).
Pre-Filtration with 0.4 mm Tin Filter
Pre-filtration with a tin filter led to higher image noise, which decreased image quality. The bony walls of the mastoid cells were virtually invisible at DLs of 278 and 548; a DL of at least 2180 was required for sufficient visualisation of these anatomic structures, which considerably increased the radiation dose (by a factor of 5, see Table 1). Datasets with tin pre-filtration were not adequate for preoperative imaging (see Figure 5 and Table 1).
Figure 5. Image quality at 121 kV with pre-filtration of the X-ray beam by a 0.4 mm tin filter, with dose levels rising from (A–D). Datasets with tin pre-filtration were rated as insufficient for preoperative imaging.
Statistical Evaluation: Preoperative Imaging vs. Imaging of Sinusitis
As shown in Table 1, the scan protocol with 121 kV/DL 278 was sufficient to reliably detect sinusitis (minimum score of 2 was achieved, see reference [11]). Pre-filtering with a 0.3 mm copper filter achieved an average CTDIvol of 1.70 mGy.
A sufficient image quality for preoperative imaging was achieved in all four skulls by both readers at 121 kV/DL 1090. Important structures for FESS, such as the bony walls of the carotid canal or of the orbit [7], were only rated with a "3", the defined minimum requirement for preoperative planning of sinus surgery, from a DL of 1090 upwards (Figure 6). With pre-filtration of the beam by a 0.3 mm copper filter, the resulting average radiation dose was 4.7 mGy (CTDIvol). A further increase of the DL to 2180 did not change image quality (the overall minimum score on the Likert scale remained "3", see Table 1), but the average CTDIvol increased to 5.6 mGy with 0.3 mm copper pre-filtration. Thus, a scan protocol with 121 kV/DL 1090 and 0.3 mm copper pre-filtration is the protocol of choice for preoperative imaging.
Discussion
For imaging the parasinus region with a previously introduced multipurpose X-ray device with 3D cone beam option [18], various scan parameters and post-processing steps were examined to optimize image quality and radiation exposure. The best compromise between radiation exposure and image quality for the diagnosis of sinusitis was found at 121 kV/ DL 278 and for preoperative imaging at 121 kV/DL 1090. In our study, the effective dose for ruling out sinusitis was 0.077 mSv and for preoperative imaging 0.212 mSv.
In modern medicine, examination protocols tailored to the specific patient's anatomy and clinical indication are mandatory. The scan range of the parasinus region includes radiosensitive organs such as the eye lens. Due to the typically young age of the patients and repetitive scans, radiation exposure is a relevant topic for this cohort. Today, numerous manufacturers offer a wide range of CBCT and DVT units that differ both in their technical characteristics and in terms of radiation exposure [25]. In a dental and maxillofacial study with a large cohort of 500 patients examined with CBCT, the mean CTDIvol was 9.11 mGy [26]. In other CBCT studies, radiation exposure is reported as the effective dose. In a meta-analysis of 23 dental CBCT units, the average effective dose was 0.212 mSv using large FoVs [27]. In a recently published review of radiation dose in non-dental CBCT applications, the overall mean effective dose for imaging the paranasal sinuses was 119.0 µSv [28], with comparability limited by the smaller FoVs of the devices used in those studies (i.e., 15 × 12 cm, 13 × 10 cm). This effective dose lies between our results for the low-dose scanning protocol (77 µSv) and the protocol recommended for preoperative planning (212 µSv). The slightly higher dose required for preoperative imaging than for the diagnosis of sinusitis is mainly due to the carotid canal, which is surrounded by soft tissue and requires higher dose levels for preoperative assessment.
One drawback of scanning patients with CBCT is its inferior soft tissue visualisation compared to CT [29] and magnetic resonance imaging (MRI) [30]. Even modern CBCTs lack sufficient soft tissue display, so using the multipurpose device for soft tissue evaluation cannot be recommended. If soft tissue lesions are unclear or suspected, MRI is the method of choice [30]. MRI is also capable of ruling out sinusitis, but fine bony structures cannot be visualized. Further disadvantages of MRI are the long examination time and the significantly higher costs.
In summary, the multi-purpose X-ray machine equipped with a 3D CBCT option offers the opportunity to examine patients with dedicated protocols based on their clinical indication for the diagnosis of sinusitis and for preoperative imaging. The radiation dose for both protocols is within the range of DVT and other CBCTs.
Some limitations of our study need to be addressed. First, four cadaver skulls were examined to evaluate image quality and to determine optimal scan protocols as well as post-processing steps. These dedicated protocols need to be verified in a larger cohort.
Second, all four cadaver heads were examined in an upright position, which should correspond to the posture of a seated patient. Tilting the jaw to improve image quality or to lower radiation exposure in CBCT by reducing the impact of metal implant artifacts as proposed in [31] has not been attempted.
Third, as cadaver heads were examined no motion artifacts occurred. The examination time for CBCT is approx. 20 s and movement artifacts could significantly influence image quality [32]. Further studies on human subjects are recommended to validate our results and suitable head fixation aids must be developed to minimize motion artifacts.
Conclusions
The multifunctional X-ray unit used in this study is suitable for the diagnosis of sinusitis and can also be used for preoperative imaging. The 3D option of this machine, which was developed primarily for conventional radiography, offers a significant advantage over conventional X-rays of the paranasal sinuses. The resulting radiation exposure is in the range of published DVT and CBCT examinations of the paranasal sinuses.
"Medicine",
"Physics"
] |
Feedback stabilization of nonlinear discrete-time systems via a digital communication channel
We deal with the stabilization problem for a class of nonlinear discrete-time systems via a digital communication channel. We consider the case when the control input is to be transmitted via communication channels with a bit-rate constraint. Under an appropriate growth condition on the nonlinear perturbation, we establish sufficient conditions for the global and local stabilizability of semilinear and nonlinear discrete-time systems, respectively. A constructive method to design a feedback stabilizing controller is proposed.
Introduction
In recent years there has been significant interest in the stabilization problem of dynamical systems (see, e.g., [1,3,4,6,11,12,14,15,16] and the references therein). In the classical stabilization theory of dynamical systems, the standard assumption is that all data transmission required by the algorithm can be performed with infinite precision. However, in some new models, it is common to encounter situations where observation and control signals are sent via a communication channel with limited capacity. This problem may arise when a large number of mobile units need to be controlled remotely by a single decision maker. Since the radio spectrum is limited, communication constraints are a real concern. All these new engineering applications motivated the development of a new chapter of control theory in which control and communication issues are combined, and all the limitations of the communication channels are taken into account. Communication requirements, especially regarding bandwidth limits, are often challenging obstacles in control systems design (see, e.g., [8,17,18]). Furthermore, the focus has been on memoryless coding, in which the plant output is quantized without reference to its past. The paper [2] deals with a stabilization problem for a linear system with quantized state feedback and shows that if a time-invariant system is passed through a fixed, memoryless quantizer, then the controllability property of the system cannot be preserved. In [7], the asymptotic stabilizability problem is investigated for a linear discrete-time system with a real-valued output when the controller, which may be nonlinear, receives observation data at a fixed known data rate. Petersen and Savkin [9,10] present a feedback scheme for linear stabilization over a digital communication channel (DCC) using a nonlinear dynamic state feedback controller, which can be applied to an arbitrary linear discrete-time system subject to standard assumptions of controllability and observability. However, the stabilization method used in these papers requires a significant amount of on-line computation and is hardly implementable in real time, especially in many control models described by a system of nonlinear equations.
In this paper, we investigate the stabilizability problem for a special class of nonlinear discrete-time control systems via DCC. The system considered in the paper consists of a linear discrete-time system and a linearly bounded nonlinear perturbation. Based on the state space quantization method used in [2,7,8,9], we establish sufficient conditions for the global stabilizability of semilinear discrete-time systems under an appropriate growth condition on the nonlinear perturbation. The feedback stabilizing coder-controller is designed based on the measure of the controllability matrix of the system. The approach enables us to derive a sufficient condition for the local stabilizability of a more general class of nonlinear discrete-time systems, where the right-hand side function is assumed to be smooth.
Problem statement
Consider a nonlinear control discrete-time system of the form x(k + 1) = F(x(k), u(k)), k ∈ Z+, (2.1), where F : R^n × R^m → R^n is a given nonlinear function; Z+ is the set of all nonnegative integers; R^n is the n-dimensional Euclidean space. Throughout the paper, R denotes the set of all real numbers; R^{n×m} denotes the space of all real (n × m)-matrices; ‖x‖∞ and ‖A‖∞ denote the infinity norms of the vector x ∈ R^n and the matrix A = [a_ij] ∈ R^{n×m}, defined as ‖x‖∞ = max_{1≤i≤n} |x_i| and ‖A‖∞ = max_{1≤i≤n} Σ_{j=1}^{m} |a_ij|; B(a) denotes the closed hypercube in R^n with radius a > 0, defined as B(a) = {x ∈ R^n : ‖x‖∞ ≤ a}.
In control system (2.1), information is communicated from the measured state x(k) to the control input u(k) via a DCC. The feedback stabilization procedure for system (2.1) involves two components. The first component, called the coder, takes the measured state signal x(k) and produces a corresponding codeword h(k), which is transmitted on the channel. The second component, called the controller, takes the received codeword h(k) and produces the control input u(k). The codeword h(k) is restricted to a finite number of admissible codewords. The number of admissible codewords is determined by the data rate of the channel (see Figure 2.1). We use a multirate coder-controller in which the control input u(k) is applied at every time step but a codeword is transmitted on the channel only once every p time steps; that is, we assume that the coder and controller are defined by equations (2.3)-(2.4), with (i) the coder, in which a(pk) ∈ R is the quantization scaling that is updated every p time steps. We assume that the coder and controller have available the initial value of the quantization scaling a(0) = a_0 and that they have access to the quantization scalings a(pk) and the controls u(pk).
Definition 2.1. The system (2.1) is said to be globally stabilizable via a DCC if there exists a coder-controller of the form (2.3)-(2.4) with a quantization scaling a_0 such that, for every initial condition x(0) = x_0, the corresponding solution x(k) of the closed-loop system satisfies x(k) → 0 as k → ∞. In this case it is also said that the coder-controller (2.3)-(2.4) globally stabilizes system (2.1) via a DCC. If the above assertion holds for all x_0 belonging to some neighborhood of the origin in R^n, it is said that the coder-controller (2.3)-(2.4) locally stabilizes system (2.1) via DCC.
The objective of this paper is to construct the coder-controller which globally (locally) stabilizes uncertain system (2.1) via DCC.
Semilinear discrete-time systems and state quantization
In this section, we start by considering a semilinear discrete control system of the form x(k + 1) = Ax(k) + Bu(k) + f(x(k), u(k)), k ∈ Z+, (3.1), where A, B are constant matrices and f(·) is a nonlinear function. In order to describe and analyze our multirate control system, it is convenient to consider an equivalent single-rate system described as follows. For this, we use a multirate coder-controller in which the control input u(k) is applied at every time step but a codeword is transmitted on the channel only once every p time steps. Let p ≥ 1 be the smallest integer such that rank[B, AB, ..., A^{p−1}B] = n. From the discrete-time system (3.1) we obtain the discrete p-delay time system (3.3). We denote y(k) = x(pk); then the system (3.3) is rewritten in the single-rate form (3.5). We quantize the state space of system (3.5) as follows. For consistency we denote ā(k) = a(pk) and h̄(k) = h(pk). Given the sequence ā(k), we quantize the state space of system (3.5) by dividing the set B(ā(k)) into q^n hypercubes: for each i ∈ {1, 2, ..., n} we divide the corresponding state coordinate y_i ∈ R into q intervals of the form (3.7).
Thus, for any vector y ∈ B(ā(k)) there exist unique integers i_1, i_2, ..., i_n ∈ {1, 2, ..., q} such that y ∈ I_{i,j}(ā(k)). According to the integers i_1, i_2, ..., i_n, we define the vector (3.8), which is the center of the corresponding hypercube I_{i,j}(ā(k)) containing the original point y, and (3.9) holds. Then, in our proposed coder-controller, for y ∈ B(ā(k)) the transmitted codeword corresponds to the integers {i_1, i_2, ..., i_n}; for y ∈ R^n \ B(ā(k)) we assign the codeword {0}. Moreover, note that our method of quantization of the state space of system (3.5) depends on the scaling parameter ā(k), which is available to both the controller and the coder at any time step k, together with the update law for ā(k). Note that the regions I_{i,j} together with the region R^n \ B(ā(k)) partition the state space into q^n + 1 regions (see Figure 3.1). Remark 3.1. Note that the number p > 0, as mentioned before, is defined as a finite time at which the linear discrete-time system [A, B] is globally controllable. Following the well-known controllability criterion for discrete-time systems (see, e.g., [5,13]), this number is the smallest integer such that rank[A|B] = rank[B, AB, ..., A^{p−1}B] = n. Since the matrix [A|B] has dimension n × nm, n ≥ m, we have p = 1 if the matrix B has full rank m = n. Otherwise, this number can be defined via the smallest number of linearly independent columns of the matrix [A|B].
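A minimal sketch of the hypercube quantizer described above is given below, assuming B(ā) is split into q equal-length intervals per coordinate; the function names and data layout are illustrative, not the paper's notation.

```python
# Illustrative sketch of the coder's state quantization: the hypercube
# B(a_bar) = {y : ||y||_inf <= a_bar} is divided into q intervals per
# coordinate (q^n cells in total); the codeword is the tuple of cell indices
# and the controller recovers the cell centre eta. Names are illustrative.
import numpy as np

def encode(y: np.ndarray, a_bar: float, q: int):
    """Return cell indices in {1..q}^n, or None (codeword {0}) if y is outside B(a_bar)."""
    if np.max(np.abs(y)) > a_bar:
        return None
    cell = 2.0 * a_bar / q                      # side length of one interval
    idx = np.floor((y + a_bar) / cell).astype(int) + 1
    return np.clip(idx, 1, q)                   # boundary points go to the last cell

def decode(idx, a_bar: float, q: int) -> np.ndarray:
    """Centre eta of the hypercube identified by the transmitted indices."""
    cell = 2.0 * a_bar / q
    return -a_bar + (np.asarray(idx) - 0.5) * cell

y = np.array([0.3, -0.7])
codeword = encode(y, a_bar=1.0, q=4)
eta = decode(codeword, a_bar=1.0, q=4)          # quantization error is at most a_bar/q per coordinate
print(codeword, eta)
```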
Main results
Consider the discrete-time system (3.1). Assume the following.
Assumption 4.2. The function f(x, u) satisfies the growth condition (4.1). Introduce the notations used below. We first prove the stabilizability of the single-rate system (3.5). For this, we construct a coder-controller of the form (2.3)-(2.4) for system (3.5) as follows. Consider the sequence ā(k) and, for a given integer q > 1, quantize the state space of system (3.5) by the quantization method described in the previous section, dividing the set B(ā(k)) into q^n hypercubes of the form (3.7). Note that for any k ∈ Z+, if y(k) ∈ B(ā(k)), then there is a vector η(i_1, i_2, ..., i_n), where the positive integer r ∈ Z+ is chosen so that condition (4.7) holds. Then the coder-controller (4.5)-(4.6) globally stabilizes the single-rate system (3.5) via DCC.
Proof. The proof consists of two parts. (A) We first prove that if there is an integer k_0 ∈ Z+ such that y(k_0) ∈ B(ā(k_0)), then y(k) ∈ B(ā(k)) for all k ≥ k_0. Indeed, if y(k_0) ∈ B(ā(k_0)), then by the state space quantization of system (3.5) there is a vector η(i_1, i_2, ..., i_n) ∈ R^n such that the corresponding inclusion holds. Substituting the controller (4.6) into this inclusion and using condition (4.1), we obtain the required estimate. Since ‖u(pk_0 + i)‖ ≤ ‖v(k_0)‖, and noting that η(·) is the center of the hypercube I_{i,j}(·) ⊂ B(ā(k_0)), a further bound follows. Moreover, from the system of discrete-time equations (3.3), we can verify the corresponding estimate by induction for all i = 1, 2, ..., p − 1, p > 1. Therefore, we obtain that y(k_0 + 1) ∈ B(ā(k_0 + 1)). (B) We show that there is at least one k_0 ∈ Z+ such that y(k_0) ∈ B(ā(k_0)). Assume to the contrary that y(k) ∉ B(ā(k)) for all k ∈ Z+. In this case, taking v(k) = 0, a corresponding relation follows from (3.5). On the other hand, from the system (3.1) with v(k) = 0, we can verify the analogous estimate by induction. By the definition of the sequence ā(k), for the case y(k) ∉ B(ā(k)), taking any ā(0) = a_0 > 0 we have ā(k) = r^k a_0, k ∈ Z+. Using condition (4.7), it follows that there is a number k_0 such that y(k_0) ∈ B(ā(k_0)), which contradicts the contrary assumption. Finally, combining (A) and (B) shows that the solution x(·) sampled at the rate p goes to 0 as k goes to ∞. For any time instant pk + j, j = 1, 2, ..., p − 1, the solution x(pk + j), using the estimation (4.13), satisfies an estimate in terms of x(pk) with some positive constant M > 0, and hence the solution also goes to 0, because x(pk) → 0. The proof of the theorem is completed.
We now consider the nonlinear discrete-time system (2.1), where the nonlinear function F(x, u) is continuously differentiable in (x, u) and F(0, 0) = 0. In this case, the nonlinear function F(x, u) can be linearized about the origin, and the nonlinear perturbation function f(x, u) satisfies a bounded growth condition for all ‖x‖ + ‖u‖ < δ. Using the proof of Theorem 4.3 with c = d = ε, if ‖Φ‖/q < 1, we can choose a number ε > 0 such that the required condition holds. The simulation result of the above coder-controller applied to system (4.34) with x(0) = 2 is shown in Figure 4.1.
Therefore, as a consequence of Theorem 4.4, we obtain the following result, which gives a sufficient condition for the local stabilizability of nonlinear system (2.1).
"Mathematics"
] |
A Multifunctional Coating on Sulfur-Containing Carbon-Based Anode for High-Performance Sodium-Ion Batteries
A sulfur doping strategy has frequently been used to improve the sodium storage specific capacity and rate capability of hard carbon. However, some hard carbon materials have difficulty preventing the shuttling of the electrochemical products of the sulfur molecules stored in their porous structure, resulting in poor cycling stability of the electrode materials. Here, a multifunctional coating is introduced to comprehensively improve the sodium storage performance of a sulfur-containing carbon-based anode. The physical barrier effect and the chemical anchoring effect contributed by the abundant C-S/C-N polarized covalent bonds of the N, S-codoped coating (NSC) combine to protect SGCS@NSC from the shuttling effect of soluble polysulfide intermediates. Additionally, the NSC layer encapsulates the highly dispersed carbon spheres inside a cross-linked three-dimensional conductive network, improving the electrochemical kinetics of the SGCS@NSC electrode. Benefiting from the multifunctional coating, SGCS@NSC exhibits a high capacity of 609 mAh g−1 at 0.1 A g−1 and 249 mAh g−1 at 6.4 A g−1. Furthermore, the capacity retention of SGCS@NSC is 17.6% higher than that of the uncoated material after 200 cycles at 0.5 A g−1.
Introduction
Over the past decade, the energy crisis, environmental degradation, and political orientation across the world have combined to fuel the explosive growth of the new energy vehicle market [1,2]. However, limited by scarce lithium resources, lithium-ion batteries (LIBs) cannot support the rapid development of electric vehicles and large-scale energy storage alone [3]. The high crustal abundance of sodium, its low standard reduction potential (−2.71 V vs. SHE), and its inertness toward aluminum make sodium-ion batteries (SIBs) a cheaper and more widely available alternative to LIBs [3][4][5]. More importantly, the operating principles and battery components of SIBs are similar to those of LIBs, which means the advanced manufacturing lines and techniques of LIBs can be applied to the production of SIBs without resistance [2,3,5].
Currently, the biggest obstacle to the commercial production of SIBs is the lack of suitable electrode materials. Although various anode materials have been intensively studied, such as conversion materials (oxide [6], sulfide [7]), alloy materials (Sn [8], Sb [9], and Ge [10], etc.), organic compounds (Schiff base polymer [11] and polyamide [12]), and intercalation materials (carbon-based material [13] and titanium-based material [14], etc.), a carbon-based material is still the most promising one for its low cost, chemical inertness, adjustable structure, and abundant sources (Table S1). For example, Kang et al. reported that graphite as a host for Na + -solvent complexes could provide a sodium storage specific
Results and Discussion
Glucose-derived carbon spheres (GCS) of about 150 nm were first prepared according to previous reports (Figure 1a). After p-GCS were fully dispersed in the polyacrylonitrile (PAN) solution, the homogeneous suspension was titrated into the high-speed rotating mixed alcohol solution to encapsulate p-GCS into a cross-linked 3D PAN network. SGCS@NSC, NSC, and SGCS samples were finally obtained by annealing a mixture of the corresponding precursor and sublimed sulfur. As shown in Figures 1b,c and S1a1-a3, both SGCS@NSC and NSC maintain the cross-linked network structure that facilitates fast electron transport. The difference is that there are more sphere-like particles in SGCS@NSC due to the encapsulation of SGCS into the network. The TEM image in Figure 1d further reveals the detailed structure of SGCS@NSC and validates the conclusion drawn from the SEM images. The HRTEM image (Figure 1e) shows that the SGCS particles are tightly encapsulated by NSC of uneven thickness. The NSC layer acts as a physical barrier that, to a certain extent, prevents the diffusion of soluble polysulfides out of the SGCS particles. Energy dispersive X-ray spectroscopy (EDS) demonstrates the coexistence and uniform distribution of C, N, and S elements in SGCS@NSC. X-ray diffraction (XRD) patterns, nitrogen adsorption/desorption curves, Raman spectra, and XPS spectra were obtained to understand the structural characteristics and chemical properties of the obtained samples. The XRD patterns of the prepared samples all show broad (002) peaks around 25° corresponding to the characteristics of amorphous carbon (Figure S2a) [32,33]. The interlayer distances (d002) of the SGCS@NSC, NSC, SGCS, and GCS samples are 0.361 nm, 0.352 nm, 0.367 nm, and 0.389 nm, respectively. The interlayer distance of SGCS is lower than that of GCS due to the low thermal treatment temperature and the promoting effect of molten sulfur on the uniformity of the thermal field. This also implicitly suggests that the sulfur species in SGCS are not primarily present in the form of chemical bonds, a view that will be discussed in detail in conjunction with the subsequent XPS characterization.
On the other hand, it is entirely reasonable that the interlayer distance of the SGCS@NSC composite lies between those of SGCS and NSC. Nitrogen adsorption/desorption tests indicate that GCS has the highest Brunauer-Emmett-Teller (BET) surface area (56.1 m2 g−1, Table S4). After the introduction of sulfur, the BET surface area of SGCS decreases to 18.7 m2 g−1, corresponding to a severe loss of pores between 10 and 50 nm (Figure 2a, obtained from the adsorption curve). Considering that the BET surface area of NSC is 31.2 m2 g−1, it is reasonable that the BET surface area and pore volume follow the sequence GCS > SGCS@NSC > SGCS. The Raman spectra of SGCS@NSC and NSC show peaks at 175, 312, 375, and 804 cm−1 in the 100-1000 cm−1 wavenumber region, indicating the stretching, bending, and deformation of the C-S bond (Figures 2b and S2b) [33,34]. The stretching of the S-S bond contributes to the two peaks at 476 and 941 cm−1 [35]. In addition, the two peaks near 1480 cm−1 also correspond to the stretching of C-S/S-S bonds [33,36]. The characteristic peaks of C-S/S-S bonds in SGCS are hardly observed: sulfur in SGCS is mainly stored in the pores as independent molecules, making C-S/S-S bonds difficult to detect on the surface, and the few sites in GCS where sulfur can form covalent bonds explain the absence of C-S/S-S bonds. In the high wavenumber region, all the prepared samples show two main peaks: one centered at 1326 cm−1 corresponds to the D band caused by defects or disordered structures, and the other around 1580 cm−1 represents the G band generated by the in-plane stretching vibration of sp2 hybridized carbon [37,38]. The exact wavenumbers of the D bands of SGCS and SGCS@NSC are close to each other, while the G band of SGCS (1542 cm−1) displays a red shift compared to SGCS@NSC (1556 cm−1), which may be due to more edge defects induced by the larger interlayer distance of SGCS. The Raman characteristic peaks of SGCS@NSC are consistent with those of NSC, suggesting that the SGCS particles are well wrapped by the NSC layer. The D-band to G-band intensity ratio of SGCS@NSC (1.203) is much higher than those of SGCS (0.937) and GCS (0.168), indicating that N/S co-doping can introduce more structural defects, which is conducive to improving the sodium storage performance [39].
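As a consistency check on the reported d002 values, the interlayer spacing follows from the (002) peak position through Bragg's law (nλ = 2d sinθ); the snippet below uses the Cu Kα wavelength given in the Methods and an assumed peak position of 2θ ≈ 25°, as stated above for amorphous carbon.

```python
# Interlayer spacing d002 from the (002) XRD peak via Bragg's law
# (n * lambda = 2 * d * sin(theta)); 2-theta = 25 deg is an assumed,
# representative peak position for amorphous carbon, as stated in the text.
import math

wavelength_nm = 0.154          # Cu K-alpha, as given in the Methods
two_theta_deg = 25.0           # approximate (002) peak position

theta = math.radians(two_theta_deg / 2.0)
d002 = wavelength_nm / (2.0 * math.sin(theta))
print(f"d002 = {d002:.3f} nm")  # ~0.356 nm, in line with the 0.35-0.39 nm range reported
```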
The high content of C-S bonds in SGCS@NSC is also reflected in the XPS spectra. Figure S3a and Table S5 confirm the coexistence of C, O, and S elements in SGCS, NSC, and SGCS@NSC, and the latter two also contain significant amounts of the N element. Figure 2c shows the fitted S 2p high-resolution XPS spectrum of SGCS@NSC. The two main peaks at 162.7 and 164.0 eV are contributed by C-S and S-S bonds, respectively [27,36], and another peak at 160.8 eV can be assigned to HSxC- by-products produced by PAN during the vulcanization process [40]. According to the quantitative analysis, the ratio of C-S bonds to S-S bonds in SGCS@NSC is 6.2, much higher than the 1.9 in SGCS (Table S2). As for the C 1s spectra, the area percentage of C-N/C-S bonds in SGCS@NSC (18.0%) far exceeds the percentage of C-S in SGCS (12.5%, Table S3). These two comparisons demonstrate that SGCS@NSC possesses many more C-S bonds, which can serve as anchoring sites for polysulfides. In addition, the abundant pyridinic nitrogen (398.9 eV), pyrrolic nitrogen (400.7 eV), and oxidized nitrogen (402.7 eV) in SGCS@NSC exhibit strong dipole-dipole electrostatic interactions with polysulfides, preventing polysulfides from shuttling (Figure S3c) [41,42]. Therefore, the chemical adsorption of the NSC layer helps enhance the cyclic stability of SGCS@NSC.
The effectiveness of the NSC coating layer is verified by the electrochemical performance in sodium-ion half-cells. Figure 3a shows the first five cyclic voltammetry (CV) cycles of SGCS@NSC in the range of 0.01-3.0 V (vs. Na/Na+) at a sweep speed of 0.1 mV s−1 [43]. After the first cycle, two pairs of peaks at 2.239/1.855 and 1.903/1.137 V in the subsequent curves correspond to the step-by-step redox reaction of sulfur [44,45], and the curves overlap well, indicating the excellent cycling stability of the electrode. In contrast, the cathodic peak currents of SGCS decrease rapidly from the second cycle, and the redox peak intensities are weaker than those of SGCS@NSC despite the higher sulfur content of SGCS (Figure S4c).
As shown in Figure 3b, the initial discharge/charge specific capacity of SGCS@NSC, SGCS, and GCS at 0.1 A g −1 is 942.9/798.2, 928.1/661.3, and 417.1/226 mAh g −1 , respectively. SGCS@NSC has the highest initial coulombic efficiency (ICE, 84.7%) due to the moderate BET surface area and the suppressed polysulfides dissolution. Although GCS has the largest interlayer distance, the high BET surface area and the lack of highly reversible active sites introduced by N, S co-doping resulted in an ICE as low as 54.2% for GCS. The 5th, 10th, and 30th galvanostatic charge and discharge curves of SGCS@NSC overlap each other (Figure 3c), exhibiting the same electrochemical phenomena as the CV curves. After 60 cycles at 0.1 A g −1 , the discharge specific capacity of SGCS@NSC is 609.8 mAh g −1 , far exceeding that of SGCS (164.9 mAh g −1 ) and GCS (215.4 mAh g −1 ). The significant improvement in sodium storage performance can be attributed to N, S co-doped active sites, which can not only act as sodium storage sites themselves, but also enhance the reversibility of polysulfide intermediates in SGCS. After 200 cycles at 0.5 A g −1 (Figure 3d), the discharge specific capacity of SGCS@NSC, SGCS, and GCS are 294.1, 105.1, and 148.3 mAh g −1 , corresponding to the capacity retentions of 46.1%, 28.5%, and 74.1%, respectively (all the electrodes were tested at 100 mA g −1 for 5 cycles to fully activate the electrode materials, and the capacity retention was calculated based on the 6th discharge specific capacity). GCS has a better cycling stability but lacks sufficient active sites for sodium storage, so the specific capacity is lower than SGCS@NSC by up to 146 mAh g −1 .
The inability of SGCS to maintain the reversibility of the sulfur-containing active sites resulted in a capacity retention and specific capacity 17.6% and 189 mAh g−1 lower than those of SGCS@NSC, respectively. Benefiting from the synergistic effect of the physical barrier and the chemical adsorption of the NSC layer, SGCS@NSC displays a high sodium storage specific capacity and enhanced cycling stability.
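As a quick arithmetic check of the reported values, the initial coulombic efficiency is simply the first-cycle charge-to-discharge capacity ratio; the sketch below recomputes it from the capacities quoted above.

```python
# Quick arithmetic check of the reported initial coulombic efficiencies (ICE):
# ICE = first-cycle charge capacity / first-cycle discharge capacity.
first_cycle = {                      # (discharge, charge) in mAh/g, from the text
    "SGCS@NSC": (942.9, 798.2),
    "SGCS":     (928.1, 661.3),
    "GCS":      (417.1, 226.0),
}
for name, (discharge, charge) in first_cycle.items():
    print(f"{name}: ICE = {charge / discharge * 100:.1f} %")
# SGCS@NSC ~84.7 % and GCS ~54.2 %, matching the values quoted in the text.
```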
Meanwhile, Figure 3e presents the rate performance of SGCS@NSC, NSC, and SGCS measured at various current densities. At current densities of 0.1, 0.2, 0.4, 0.8, 1.6, and 3.2 A g−1, the discharge specific capacities of SGCS@NSC are 818.6, 710.2, 674.8, 623.6, 536.4, and 407.9 mAh g−1, respectively. At 3.2 A g−1, the discharge profiles of SGCS@NSC show an obvious step-by-step sodiation phenomenon, implying excellent electrochemical kinetics (Figure 3f). Even when the current density is increased to 6.4 A g−1, a high specific capacity of 248.9 mAh g−1 can still be provided, much higher than the 39.1 mAh g−1 of SGCS and 72.9 mAh g−1 of GCS. In addition, when the current density is reset to 0.1 A g−1, the SGCS@NSC electrode still provides a reversible specific capacity of 609.8 mAh g−1, exhibiting excellent high-rate cycling performance. These results indicate that the successful incorporation of S and N elements into the carbon framework can effectively improve the reactivity and electronic conductivity of GCS by producing external defects, and the resulting defects can enhance the transport rate of sodium ions. More contact sites are provided between the active material and the electrolyte, which is beneficial to the high-rate performance.
To explore the reaction kinetics of the prepared electrodes, CV curves at different sweep speeds of 0.1-1.0 mV s−1 were obtained (Figures 4a and S4a-c). The area of the closed CV curves represents the total charge storage of Faradaic and non-Faradaic processes [46], which is usually divided into capacitive-controlled and diffusion-controlled charge storage [33]. The contribution of the two charge storage modes can be calculated according to the relation i = a·v^b, or equivalently log(i) = b·log(v) + log(a) (1), where a and b are variable parameters, v is the sweep speed, and i is the response current [47]. When the b value is close to 0.5 or 1, the diffusion-controlled process or the surface capacitance-controlled process, respectively, dominates the electrochemical reaction [48]. The value of b is determined by the slope of the log(i) vs. log(v) plot of the redox peaks. As can be seen from Figure 4b, the b value of peak A of SGCS@NSC lies between those of NSC and SGCS, all below 0.7, indicating a diffusion-controlled process. However, the b value of peak C of SGCS@NSC is around 0.8, higher than that of NSC (0.61) and SGCS (0.57), manifesting a surface capacitance-controlled discharge process. To quantify the contributions of the diffusion-controlled and surface capacitance-controlled processes at various sweep speeds, Equation (1) can be rewritten as i(V) = k1·v + k2·v^(1/2) (2), where k1 and k2 are constants [49]. Figure 4c shows the resulting quantification. The galvanostatic intermittent titration technique (GITT) was performed to investigate the Na+ diffusion coefficient in the prepared electrodes during the discharge and charge processes. The calculation based on Equation (S1) shows that SGCS@NSC inherits the electrochemical kinetics of NSC, with a similar Na+ diffusion coefficient and variation trend. The average Na+ diffusion coefficient of SGCS@NSC and NSC is higher than that of SGCS throughout the whole process, and the difference during discharge reaches an order of magnitude (Figure 4d,e). As shown in Figure 4f, the Nyquist plots of all samples are composed of two parts: a depressed semicircle, itself composed of two semicircles in the high- and middle-frequency regions, and an oblique line in the low-frequency region. Furthermore, the corresponding equivalent circuit model in Figure 4f is used to simulate the experimental data, where RS stands for the resistance of the electrolyte solution obtained from the high-frequency data and Rct represents the charge transfer resistance fitted from the intermediate-frequency data [50].
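A minimal sketch of the kinetics analysis described above is given below, assuming the standard power-law (b-value) fit and the k1/k2 decomposition; the peak-current values are hypothetical placeholders, not the measured data.

```python
# Sketch of the kinetics analysis described above:
#   i = a * v**b          -> b is the slope of log(i) vs log(v)
#   i = k1*v + k2*v**0.5  -> k1*v is the capacitive part, k2*sqrt(v) the diffusive part
# The peak currents below are hypothetical placeholders, not measured data.
import numpy as np

v = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])              # sweep speeds in mV/s
i_peak = np.array([0.21, 0.33, 0.52, 0.68, 0.82, 0.95])   # hypothetical peak currents (mA)

# b-value from a straight-line fit in log-log coordinates
b, log_a = np.polyfit(np.log10(v), np.log10(i_peak), 1)
print(f"b = {b:.2f}")   # ~0.5 -> diffusion-controlled, ~1.0 -> capacitance-controlled

# Capacitive vs diffusive split at a fixed potential: least-squares fit of
# i = k1*v + k2*sqrt(v), then the capacitive fraction is k1*v / i.
A = np.column_stack([v, np.sqrt(v)])
(k1, k2), *_ = np.linalg.lstsq(A, i_peak, rcond=None)
capacitive_fraction = k1 * v / (k1 * v + k2 * np.sqrt(v))
print(np.round(capacitive_fraction, 2))
```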
The electrochemical impedance spectroscopy (EIS) results show decreased semicircle diameters in the high-frequency region and increased slopes of the inclined lines in the low-frequency region for SGCS@NSC. As can be seen from the fitted EIS parameters in Table S6, the Rs (4.9 Ω) and Rct (637.2 Ω) values of SGCS@NSC are significantly reduced compared to the other samples. Benefiting from the cross-linked conductive network constructed by the NSC coating, charge transfer between the SGCS@NSC electrode and the electrolyte is easier. Thus, it is demonstrated that both the electron and sodium-ion transfer rates in SGCS@NSC are improved [50]. The fast Na+ diffusion and electron transport in the 3D cross-linked NSC coating layer account for the high sodium storage specific capacity and excellent rate capability of the SGCS@NSC electrode.
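The exact equivalent circuit of Figure 4f is not reproduced here; purely as a generic illustration of how Rs and Rct shape a Nyquist plot, the sketch below evaluates a simple Randles-type model using the fitted Rs and Rct values quoted above together with assumed (hypothetical) capacitance and Warburg parameters.

```python
# Generic illustration (not the circuit of Figure 4f): impedance of a simple
# Randles-type model Z = Rs + (Rct || C) + Warburg, showing how Rs offsets the
# Nyquist plot and Rct sets the semicircle diameter. C and sigma are hypothetical.
import numpy as np

def randles_impedance(freq_hz, Rs=4.9, Rct=637.2, C=1e-6, sigma=50.0):
    w = 2 * np.pi * freq_hz
    z_rc = Rct / (1 + 1j * w * Rct * C)          # charge-transfer semicircle
    z_warburg = sigma * (1 - 1j) / np.sqrt(w)    # diffusion tail at low frequency
    return Rs + z_rc + z_warburg

freq = np.logspace(5, -2, 50)                    # 100 kHz down to 10 mHz
Z = randles_impedance(freq)
# Nyquist convention: plot Z.real on x and -Z.imag on y
print(Z.real[:3], -Z.imag[:3])
```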
Materials and Methods
Synthesis of glucose-derived carbon spheres (GCS): typically, 8.0 g glucose (D-(+)-glucose, 99.5%, Aladdin) was dissolved in 80 mL deionized water. Then, the obtained transparent solution was placed in a sealed 100 mL Teflon-lined stainless-steel autoclave and heated at 160 °C for 8 h. After cooling naturally to room temperature, the precipitate was separated from the supernatant by centrifugation at 10,000 r/min for 10 min. The resulting solid was purified three times with deionized water and, finally, once with absolute ethanol. The obtained product was vacuum dried at 80 °C overnight. Finally, the dry powder was calcined at 200 °C for 12 h in a muffle furnace to obtain the glucose-derived carbon sphere precursor (p-GCS), which was then carbonized at 500 °C in an argon atmosphere for 2 h to obtain glucose-derived carbon spheres (GCS).
Synthesis of p-GCS@H-PAN and H-PAN: generally, 0.1 g of polyacrylonitrile (PAN, M w = 150,000) was dissolved in 3 mL of N,N-dimethylformamide (DMF), followed by the addition of 0.05 g of p-GCS. The suspension was ultrasonicated for 2 h to fully disperse the p-GCS. A total of 3 mL glycerin and 30 mL isopropyl alcohol were stirred at 500 rpm for 1 h in a 50 mL polytetrafluoroethylene (PTFE) beaker. Next, the speed was increased to 700 rpm and the as-prepared DMF suspension was added dropwise into the mixed alcohol system. After stirring for another 10 min, the reactor was transferred into a 180 °C electric oven for 6 h. The resulting brown precipitate was thoroughly washed with deionized water and dried by freeze-drying. The hydrothermally treated PAN-coated p-GCS was abbreviated as p-GCS@H-PAN. H-PAN was prepared by the same procedure as p-GCS@H-PAN but without the addition of p-GCS.
Synthesis of SGCS@NSC, NSC, and SGCS: the above-prepared p-GCS@H-PAN, H-PAN, and p-GCS were each mixed with sublimed sulfur in a ratio of 1:5 and heated to 500 °C for 2 h at 3 °C min −1 in an argon atmosphere. The obtained black products were labeled SGCS@NSC, NSC, and SGCS, respectively.
Materials Characterization: An S4800 field emission scanning electron microscope (SEM, Hitachi, Tokyo, Japan) and a JEM-2100 transmission electron microscope (TEM) were used to characterize the microstructure of the samples. X-ray powder diffraction (XRD) patterns of the samples were recorded on a D8 ADVANCE DAVINCI X-ray diffractometer (Cu Kα radiation, λ = 0.154 nm; Bruker AXS, Bremen, Germany). An Axis Ultra DLD X-ray photoelectron spectrometer (XPS, Kratos, UK) was used to analyze the chemical composition of the samples. The structure of the carbon component of the prepared samples was characterized by an inVia Reflex confocal Raman microscope (Renishaw, Kingswood, UK). An ASAP 2020 surface area and porosity analyzer was used to analyze the pore structure of the samples.
Electrochemical measurements: The synthesized products were mixed with conductive carbon (Super P) and sodium carboxymethyl cellulose (CMC) binder at a mass ratio of 8:1:1 in deionized water. The electrodes were fabricated by spreading the slurry on Cu foil and drying at 80 °C in vacuum overnight. Whatman glass fiber was used as the separator, sodium metal foil as the counter electrode, and 1 M NaPF 6 dissolved in ethylene carbonate (EC) and dimethyl carbonate (DMC) (1:1, vol/vol) as the electrolyte. Cyclic voltammetry (CV) curves were obtained on an electrochemical workstation (CHI 660E) at different sweep speeds in the voltage range of 0.01–3.0 V (vs. Na + /Na). Constant-current charge and discharge experiments at different current densities were performed on a LAND instrument over the same voltage range. Electrochemical impedance spectroscopy (EIS) tests were conducted on an electrochemical station (Model 1470E multi-channel electrochemical workstation, Solartron Metrology) in the frequency range of 100 kHz to 10 mHz at room temperature.
Conclusions
In summary, glucose-derived carbon spheres were encapsulated into a N, S co-doped 3D cross-linked network to construct a high-performance sulfur-containing anode for sodium-ion batteries. The tight NSC coating on SGCS acts as a physical barrier, while the abundant C–S/C–N bonds serve as chemical adsorption sites; together, they protect SGCS@NSC from the shuttle effect of polysulfides. In addition, the doped sites provide additional sodium storage sites and improve the sodium diffusion coefficient of the composite, ensuring the comprehensive electrochemical performance of SGCS@NSC. Thus, SGCS@NSC still delivers a sodium storage specific capacity of 293.7 mAh g −1 after 200 cycles at 0.5 A g −1 , higher than the 107.4 mAh g −1 of SGCS, demonstrating improved cycling stability. With the assistance of the 3D high-speed electron and ion transport network of the NSC layer, SGCS@NSC exhibits enhanced pseudocapacitive charge storage and rate performance. SGCS@NSC provides a high specific capacity of 248.9 mAh g −1 at 6.4 A g −1 , with a pseudocapacitive contribution as high as 90% at 1.0 mV s −1 . This multifunctional coating greatly improves the electrochemical performance of SGCS and is also applicable to other sulfur-containing electrodes.
Conflicts of Interest:
The authors declare no conflict of interest.
Tilted Photovoltaic Energy Outputs in Outdoor Environments
The direction and environment of photovoltaics (PVs) may influence their energy output. The practical PV performance under various conditions should be estimated, particularly during initial design stages when PV model types are unknown. Previous studies have focused on a limited number of PV projects, which required the details of many PV models; furthermore, the models can be case sensitive. Based on 18 projects conducted in 7 locations (latitude 29.5–51.25° N) around the world, we developed polynomials for the crystalline silicon PV energy output for different accessible input variables. A regression tree effectively evaluated the correlations of the outcomes with the input variables; those of high importance were identified. The coefficient of determination, indicating the percentage of datasets being predictable from the input, was higher than 0.65 for 14 of the 18 projects when the polynomial was developed using accessible variables such as the global horizontal solar radiation. However, individual equations should be derived for horizontal cases, indicating that a universal polynomial for crystalline silicon PVs with tilt angles in the range 0°–66° can be difficult to develop. The proposed model will contribute to evaluating the performance of PVs with low and medium tilt angles in places of similar climates.
Introduction
There is an increasing concern regarding energy resources, energy use, and their probable effects on the environment. Urban areas require a large amount of energy to operate, and buildings consume a significant portion of this energy. Hong Kong, for example, relies on imported nuclear power and fossil fuels [1], and most of the energy is used by residential and commercial buildings [2]. The combustion of fossil fuels is a leading cause of air pollution, respiratory illnesses, and greenhouse gases [3]. With increasing energy demands and a growing consensus on environmental protection, renewable energy resources can be a clean and safe alternative to conventional energy resources. Solar energy is abundant in many high-altitude and subtropical regions [4], and can be used as a clean, renewable resource in the city environment via building-integrated photovoltaic (PV) panels [5] at various tilt angles and azimuthal directions. PV panels can be installed on vertical or inclined building facades and overhangs. A previous study correlated the PV output with the climatic variables using feature selection and mutual information methods [24]; however, data of only one place was used, and the results gave the correlation factors only. The regression tree (RTree) approach [25] can be used to identify the input variables of high importance. The RTree is a classical and effective approach used to correlate the target output with other readily accessible inputs. The contribution of each input in explaining the output can be interpreted from the structure of the RTree model [26]. This work correlated the real-time PV energy output with the simultaneously recorded meteorological data of as many as 17 silicon crystalline PV projects over 7 worldwide regions. Specifically, the importance of the input variables to the PV performance estimation was evaluated, and those of low significance were removed using the RTree approach. This saved the cost of measurement, model development, and curve fitting. The performances of first- and second-order polynomials using the identified input variables were evaluated, and their advantages and limitations are discussed.
Data Collection of PV and Solar Radiation
This study used meteorological data and PV energy output field measurements. The PV performance data included 15 American projects and 2 German projects from the PVOutput website [27]. All projects used silicon crystalline PV cells, which share similar responses to climatic conditions [14]. The meteorological data obtained from five different locations in the USA were recorded by the Measurement and Instrumentation Data Centre (MIDC) of the National Renewable Energy Laboratory (NREL), USA. Weather data of the two German (DE) locations were acquired from the server of the Deutscher Wetterdienst Climate Data Centre (CDC) [28]. Data from the Centre for Sustainable Energy Technologies (CSET) of the University of Nottingham, Ningbo, China (UNNC), consisted of an independent database for model testing that included the PV energy output, solar radiation, and air temperature. Table 1 lists the weather station details, including the pyranometer and pyrheliometer accuracies. The stations covered a wide range of climate zones, from humid to arid. Most of the stations measured the solar irradiance using high-accuracy thermopile meters of the secondary standard or first class. The scanning pyrheliometer and pyranometer (SCAPP) is a low-cost silicon meter that measures the diffuse and direct solar irradiance with moderate accuracy [29]. The dry-bulb air temperature and wind speed, as contributors to the PV cell temperature variation, were acquired as well. Two of the MIDC measurement stations (CO and AZ) in the western USA were characterised by a desert or continental climate. The weather station in Oregon (OR) was in a marine climate, and the station in Tennessee (TN) in a subtropical climate. The weather data measurements by the two German stations (NW and HH) were in the temperate maritime climate zone. The USA weather data were recorded every minute, whereas the weather data from Germany and UNNC were recorded at 10 min intervals. Table 2 lists the system size, panel brand, tilt angle, azimuth angle, and weather data of the PV projects in different places. The majority of the PV energy measurements were performed at 5 min intervals, whereas the German project data were averaged over 10 min for consistency with the weather data. The PV energy output of UNNC was recorded at 2 min intervals and averaged over 10 min. The tilt angles of the PV projects ranged from 0° (horizontal) to more than 60°, covering most PV installation routines. The tilt angles of many PV panels under study differed from the site latitude, and their azimuth directions were not in line with the equator direction. This was due to site restrictions, especially when the panels were installed on buildings. The tilt angles of 14 projects were less than 40°, and the azimuth angles of most of the PV panels ranged from 140° to 225° for harvesting solar energy in the northern hemisphere. The PV panels considered in this study thus represented projects in various worldwide climate zones and at various tilt angles. The energy outputs of each project were normalized by its capacity at STC. There may be inaccurate recordings in the raw data measurements, resulting from the pyranometer cosine response, improper shadow band or shadow ball positioning, or even a nesting bird. Thus, the data quality was evaluated by referring to a guide of the International Commission on Illumination (CIE) [34].
The global, direct, and diffuse solar irradiance were essential variables for calculating the solar energy on the PV panels, which contributed to the power production. The missing irradiance component among direct, diffuse, or global, if any, was calculated using the other two components. The testing criteria comprised the levels listed below. Level 0 provided the amount of data recorded during the daytime, when the solar altitude was higher than 0. For German sites that had only the diffuse and direct measurements (yet recorded as diffuse and global) by SCAPP, quality control Level 3 was skipped. Levels 4 and 5 removed the power output rates and PV panel efficiencies that were unrealistically high. A relatively flexible criterion was set for Level 5 because the efficiency depends on the PV panel size and the solar energy on the panels, which may be affected by erroneous panel information. Table 3 specifies the data quantity and the results of quality control for each site. From the PVOutput website, the PV panel performance data from the end of 2017 to June 2018 were acquired. The data available covered half a year from winter to summer. There were roughly 11,800–18,900 PV performance data samples for most of the United States PV projects, and 2600–3150 data samples at 10 min intervals from the PV projects in Germany. The significant data quantity reduction from Levels 3 to 5 for Projects 4 and 11 was because their PV outputs were measured at an interval of 15 min. In total, 250,788 datasets of PV projects in different climate zones were used for the analysis.
Level 0: The solar altitude angle α S should be greater than 0°. Level 1: α S should be greater than 4°; the horizontal global solar irradiance (E HG ) should be greater than 20 W/m 2 . Level 2: E HG should be greater than 0 and less than the extraterrestrial horizontal solar irradiance (E HE ); the horizontal diffuse sky irradiance (E HD ) should be greater than 0 and less than 0.5 E HE ; the direct beam irradiance (E NB ) should be greater than or equal to 0 and less than the extraterrestrial beam irradiance (E NE ).
Level 3: For sites with direct, diffuse, and global measurements, E HG should be within (E HD + E NB sinα S ) ± 15%; for sites with global and diffuse measurements only, E HD should not be greater than E HG . Level 4: The ratio of PV energy output to its capacity (r), defined as the ratio of the energy output to the energy output at STC, should be greater than 0.01 and less than 1.
Level 5: The efficiency of the PV panel, defined as the ratio of the energy output to the estimated solar irradiance on the panel, should be less than 0.3. Figure 1 provides an overview of the current research. Firstly, the structural complexity of the RTree, determined by L min , was optimized to avoid overfitting. The importance of each potential input variable was then studied by the RTree at the optimized complexity level. The contributions of the input variables to the output estimation were quantified, and the model performance with different input combinations was tested. The selected variables of high importance were used to develop polynomials to estimate the PV energy output by multi-variable regression.
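As an illustration of how such level-based screening might be applied to a measured dataset, the sketch below implements the thresholds of Levels 0–4 as a row filter. The column names, the data layout, and the handling of the 15% closure tolerance are assumptions made for this sketch and are not part of the original processing pipeline.

```python
import numpy as np
import pandas as pd

def quality_control(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the Level 0-4 screening thresholds to raw records.

    Assumed columns:
      alt   - solar altitude angle, degrees
      E_HG  - horizontal global irradiance, W/m^2
      E_HD  - horizontal diffuse irradiance, W/m^2
      E_NB  - direct (beam) normal irradiance, W/m^2
      E_HE  - extraterrestrial horizontal irradiance, W/m^2
      E_NE  - extraterrestrial beam irradiance, W/m^2
      r     - PV energy output normalized by the STC capacity
    """
    keep = (
        (df["alt"] > 4)                                    # Levels 0-1: daytime, altitude > 4 deg
        & (df["E_HG"] > 20)                                # Level 1: global irradiance > 20 W/m^2
        & (df["E_HG"] < df["E_HE"])                        # Level 2: below extraterrestrial limit
        & (df["E_HD"] > 0) & (df["E_HD"] < 0.5 * df["E_HE"])
        & (df["E_NB"] >= 0) & (df["E_NB"] < df["E_NE"])
        & (df["r"] > 0.01) & (df["r"] < 1)                 # Level 4: plausible output rate
    )
    # Level 3 closure check: global should match diffuse plus projected beam within 15%.
    closure = df["E_HD"] + df["E_NB"] * np.sin(np.radians(df["alt"]))
    keep &= (df["E_HG"] > 0.85 * closure) & (df["E_HG"] < 1.15 * closure)
    return df[keep]
```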
Methodologies
The RTree algorithm used a sequence of binary partitions (splits) to separate the datasets into various groups according to the input variables (x 1 , x 2 , . . . , x n ). Figure 2 illustrates a split that divided the N A datasets of Node A into two child groups of Nodes B and C by the threshold of a variable (x j ).
x j and its threshold were determined to minimise the variance of the output. Equation (1) defines the reduction of variance owing to the split, where Var indicates the variance of the datasets in each node. r A , r B and r C are the average energy output rates for the datasets of Nodes A, B, and C, respectively. In the case of missing data, a substitute variable for x j can be determined as the surrogate. The variance of a node denotes how far the datasets are from their averages, which can be reduced by repeating the binary split several times. The approach classifies the datasets with a similar output of r into the same terminal node and represents them using their average value. The splitting stops when certain criteria, such as the datasets in the terminal node (leaf) being less than a minimum size (L min ), are met. A lower L min would lead to a more complicated RTree, which performs in-depth classifications for less output variance in the terminal node. However, an overly complicated model may be over-fitted and misled by the measurement errors and features that are not universal. Therefore, the RTree performance was tested by setting L min as 20, 40, . . . , 100, 200, . . . , 500, 1000, . . . , 2500, 5000, . . . , 10,000 for the model performances at different complexity levels.
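The split criterion can be sketched as follows. This is a generic implementation of the variance-reduction score described around Equation (1), written with illustrative names; it is not the authors' code, and the exact weighting used in the paper may differ.

```python
import numpy as np

def variance_reduction(r_parent: np.ndarray, mask_left: np.ndarray) -> float:
    """Reduction in output variance when Node A is split into Nodes B and C.

    r_parent  - energy output rates of the datasets in Node A
    mask_left - boolean mask selecting the datasets that go to Node B
    """
    r_B, r_C = r_parent[mask_left], r_parent[~mask_left]
    n_A, n_B, n_C = len(r_parent), len(r_B), len(r_C)
    # Weighted child variances; the split is chosen to maximize this reduction.
    return np.var(r_parent) - (n_B / n_A) * np.var(r_B) - (n_C / n_A) * np.var(r_C)

def best_split(x: np.ndarray, r: np.ndarray):
    """Scan candidate thresholds of one input variable x_j and return the best one."""
    best_t, best_gain = None, 0.0
    for t in np.unique(x)[:-1]:
        gain = variance_reduction(r, x <= t)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain
```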
Selecting the appropriate input variables for estimation is another critical issue. Using fewer input variables can reduce the model complexity and save the data measurement cost for other users. It is essential to develop the RTree model using input variables carrying equivalent "knowledge" of the PV at different tilt angles and directions, to ensure that the RTree model can adapt to a maximum range of projects. Fortunately, the variable importance can be estimated from the developed RTree models according to the variance reduction given in Equation (1). A variable x i (or its surrogate) may determine various splits of the developed RTree, and the total variance reduction from such splits indicates the contribution of x i (or its surrogate) to the RTree. This implies that all surrogating variables can gain importance when they contribute to a split. A variable is more critical to the RTree if it is the criterion of many splits and contributes to significant output variance reductions. Alternatively, testing the model performance using a part of the input variables can be a more straightforward way to evaluate the variable importance. Table 4 presents the input variable combinations for the test, where E cell represents the global solar irradiance on the PV panel, and K cell is the diffuse fraction of the global irradiance on the PV panel (E cell ). E cell and K cell were determined by the well-acknowledged Perez 1990 model [35]. Z S is the solar zenith angle and σ is the solar incidence angle on the PV panel. The variables of Case 1 in Table 4 are irrelevant to the PV panel direction, which represented the initial project stage when the PV installation details could not be fully specified. Cases 2 and 3 compared the performance of models that were developed using the solar irradiance and clearness index on the PV panel against those using the variables on the horizontal ground.
Because the weather data may not be fully available for many places, Cases 4 to 8 tested the model performance when several variables of the weather data were removed during the RTree development. Cases 4 to 6 tested the accuracy of the model that was developed without either the air temperature T air or the wind velocity, or both as the input variable. Case 7 evaluated the model performance when only the global horizontal irradiance was available as the fundamental solar radiation measurement. The solar irradiance on the tilted surface could not be determined accurately in such a case. Case 8 evaluated the model when the solar radiation data was not entirely available. For all tests, the solar altitude angle was assumed to be always accessible and was determined by the local time, latitude, and longitude.
Note: Y indicates that the variable was used in the case, and N indicates that the variable was not used in this case.
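For readers who prefer an off-the-shelf tool, the same importance analysis can be sketched with scikit-learn, where min_samples_leaf plays the role of L min and feature_importances_ returns the normalized total variance reduction attributed to each input. The feature matrix below is synthetic and only illustrates the mechanics; it is not the project database used in this work.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic feature matrix: columns stand for [E_cell, K_cell, cosZ_S, cos_sigma, T_air];
# r is the normalized energy output rate (placeholder relationship for illustration).
rng = np.random.default_rng(0)
X = rng.random((5000, 5))
r = 0.6 * X[:, 0] + 0.1 * X[:, 4] + 0.05 * rng.random(5000)

# min_samples_leaf acts like L_min: no terminal node may hold fewer datasets.
tree = DecisionTreeRegressor(min_samples_leaf=1000).fit(X, r)

# Importances are the normalized total variance reductions contributed by each variable.
names = ["E_cell", "K_cell", "cosZ_S", "cos_sigma", "T_air"]
for name, imp in zip(names, tree.feature_importances_):
    print(f"{name:10s} {imp:.3f}")
```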
One issue faced by the RTree was the model validity for new data, which may be lower than expected if the training data was insufficient. A database for training should be comprehensive enough so that the developed model can perform well for new data. The PV panels may have various installation angles in different climate zones and operate in different seasons. It is essential to study the RTree performance for new PV panels at angular directions that are different from those in the projects under study. This work used cross-validation to evaluate the model accuracy. For each of the 17 PV projects, the energy output rate (r) was estimated by the RTree model that was developed using the other 16 projects. Model performance evaluations for different L min and input variable combinations were enhanced by bootstrapping tests [36] for less uncertainty due to the random input and output database selection. The performance of the model was evaluated by the ratio of the root mean square error to the measurement average (%RMSE) given in Equation (2) and the coefficient of determination (R 2 ) given in Equation (3). R 2 shows the percentage of the output variance that can be estimated from the input data using the derived models. R 2 can take zero as minimum and one as maximum, and it identifies the model accuracy in a straightforward manner.
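Equations (2) and (3) are not reproduced in the text above. The sketch below uses the standard definitions of %RMSE (RMSE divided by the measurement mean) and the coefficient of determination, which match the verbal description given here; the exact notation in the paper may differ.

```python
import numpy as np

def pct_rmse(measured: np.ndarray, estimated: np.ndarray) -> float:
    """Root mean square error expressed as a percentage of the measurement average."""
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))
    return 100.0 * rmse / np.mean(measured)

def r_squared(measured: np.ndarray, estimated: np.ndarray) -> float:
    """Coefficient of determination: share of the output variance explained by the model."""
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot
```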
The RTree with the optimised r variance in the terminal nodes can still be highly complex, consisting of many splits and coefficients. Pruning the developed RTrees may remove the excessive branches that contain overwhelming coefficient quantities but make few contributions to the model accuracy. A classical approach to removing an RTree branch is to balance the reduction in RTree model complexity against the potential error. The reduction of the RTree complexity was denoted as the number of terminal nodes in the branch to be removed. The ratio of the extra error to the number of terminal nodes for an RTree branch to be removed was defined as the complexity parameter α for the node. Pruning starting with the low α would remove the RTree branches that were more complex, thereby resulting in minimal error in the output. Figure 3 illustrates the R 2 and %RMSE of the PV energy output rate (r) for different L min settings. The bottom and top box edges in the figure represent the 25th percentile (q 1 ) and 75th percentile (q 3 ) [37] of the 100 model developments by bootstrapping for each L min setting. The bottom and top whisker edges represent the far outside boundaries of the bootstrapping results, which are defined as q 1 − 3(q 3 − q 1 ) and q 3 + 3(q 3 − q 1 ) [38], respectively. Such a boundary definition will cover more than 99.5% of the results of the bootstrapping tests if the R 2 and %RMSE values are normally distributed. Thus, results outside the whisker edges can be considered outliers and were not plotted. The figure indicates an improvement of the model accuracy when L min increased from 20 to approximately 1000, after which the accuracy decreased gradually as L min increased further. The models developed with L min = 500 and 1000 were similar in accuracy, yet the latter was simpler. The 1000 datasets accounted for 0.373% of the entire database. R 2 was approximately 0.745 for the RTree developed with an L min of 1000, indicating that approximately 74.5% of the data could be explained. The variation trend of %RMSE for the different L min settings was opposite to that of R 2 . The minimum %RMSE of the RTree developed by setting L min as 1000 was approximately 37.7%, considering an average r of 0.3546 for the datasets of all 17 stations. The figure implies that L min = 1000 is appropriate for the subsequent RTree developments and performance evaluations.
Figure 4 illustrates the contributions of the input variables in estimating the PV energy output rate according to the RTree with and without surrogates. All input variables were assumed to be available, and the process was repeated by conducting 5000 bootstrapping tests. The contributions, as observed in the two figures, were different but exhibited a few consistencies. Figure 4a,b indicates that E cell provided the highest contributions to the RTree model, which were 94.5% (Figure 4a) and 22% (Figure 4b). This disparity implies that E cell can be partly replaced by other variables, such as E HG and E NG , whose importance was less than 0.4% in Figure 4a in comparison with that of E cell in Figure 4b. It was not surprising that the contributions of E cell and K cell were higher than those of E HG , E HD , and E NB because the former two were more closely related to the PV panel. The solar incidence angle cosσ was of a lower importance compared to other variables in Figure 4a. The variable for the RTree with a surrogate in Figure 4b was of moderate importance, probably because E cell was not directly available. The contributions of E HD and v were low for the RTree models developed either with or without surrogates. T air was of good accessibility by routine measurements; however, its contribution in estimating r was either moderate or low for the RTree. This is because the PV cell temperature was vastly affected by both T air and solar radiation.
Results and Discussion
Figure 5 shows the R 2 of the RTree through 100 bootstrapping tests using a few of the input variables. L min was set as 1000 on the basis of the data presented in Figure 3. The R 2 values of Cases 3 to 6 were greater than 0.76, and Cases 3 and 5 exhibited the best performances. R 2 was approximately 0.51 for Case 8, when solar radiation was completely unavailable, which was considerably less than the lower limit shown in the figure. The difference in R 2 between Cases 1 and 3 exceeded 0.06 (Figure 5); this indicated that it was difficult to estimate the PV performance without specifying its directions in the initial design stage. The R 2 of Case 3 was higher than that of Case 2; the difference was approximately 0.015. The best performances were exhibited by Cases 3 and 5 because the RTrees were developed on the basis of the irradiance variables with reference to the PV panel. The R 2 of Case 2 was close to those of Cases 3 and 5, probably because the PV panels of most projects were similar to each other; furthermore, the on-panel irradiance could be estimated from the horizontal solar irradiance via the RTree model structure. The R 2 values of Cases 4, 5, and 6 in Figure 5 show that the air temperature might have slightly affected the model, whereas the wind speed can be neglected to save data measurement costs without influencing the accuracy of the model. Case 7 shows that approximately 74% of the data can be estimated from global horizontal solar irradiance measurements using the RTree model.
Figure 5. R 2 of the RTree developed by the variable combinations of Table 4 when a few of the variables were available; the R 2 of Case 8 was approximately 0.51, far less than the lower limit of 0.68.
Finally, as Case 5 indicated, the five variables of E cell , K cell , Z S , σ, and T air were used to develop the models required for estimating the real-time PV energy output rate. In addition, models developed using E HG , Z S , σ, and T air of Case 7, without direct or diffuse components, were also tested for data accessibility.
The polynomials were developed using the identified variables of high importance to evaluate the PV performance. Variables of low importance were neglected to simplify the equations. Equations (4) and (5) were developed using the five variables (E cell , K cell , cos(Z S ), cosσ, and T air ) of Case 5 from projects 1 to 17, and Equations (6) and (7) were developed using the four variables (E HG , cos(Z S ), cosσ, and T air ) of Case 7. The latter were essential when the direct and diffuse solar irradiance components were not available. The input variables were standardized using Z-score normalization as summarized in Table 5, and X 1 to X 5 represent the standardized variables. The coefficients of the second order polynomials are listed in Table 6. The second order coefficients (C i,j ) were close to zero for the five-variable polynomial, and those of X 3 2 , X 4 2 , and X 5 2 were zero. The low D i,j values implied that the correlation was evidently linear.
Table 5. Variables for model development, standardized by Z-score normalization.
Table 6. Coefficients C i,j of the second order polynomial.
Figure 6 demonstrates the average r when each input variable was within a series of local ranges represented by their medians, on the basis of PV projects 1 to 17. The output r could take different values when an input variable was held constant while the other variables were not. In this connection, the r values for each subplot were averaged 100 times; each time, 1% of the local data was used. The values of r obtained from the four-variable model (Case 7) were plotted; the five-variable model (Case 5) exhibited better performance.
The figures show the dependency of the PV energy output on the input variables and the estimation accuracies. Figure 6d also presents the variation trend of E cell with T air . Figure 6 depicts that r, estimated using the second order polynomial, is in good agreement with the practical measurements. The efficiencies at E HG greater than 1000 W/m 2 were overestimated; however, E HG rarely exceeded 1000 W/m 2 . Cases with E HG of approximately 1000 accounted for only 3.8% of the total datasets, and the extremely high values of E HG were measured during short summer periods. The smoothed r increased significantly over the ranges of E HG and cosZ S , as shown in Figure 6a,b, and moderately over the cosσ range, as shown in Figure 6c. Such trends were consistent with their levels of importance shown in Figures 4 and 5. Figure 6a reveals that the smoothed r increased from 0 to 0.8 as E HG increased from 0 to over 1000 W/m 2 , probably because the PV cells were insensitive to low sunlight. However, r reduced to 0.6 as E HG increased further, possibly because of the high panel temperature. Figure 6b,c illustrates that the smoothed average r reduced from 0.8 and 0.6, respectively, to less than 0.1 when the solar zenith and incidence angles increased from less than 10° to 90°. According to Figure 6b, the energy output rate peaked at cos(Z S ) = 0.975, which corresponded to Z S = 13°, probably because most data were obtained from PV cells with tilt angles less than 30°; many PV cells were horizontally installed. In addition, the solar irradiance was stronger at high cosZ S (i.e., low air mass) compared with that at low cosZ S . Figure 6d shows a gradual increase of r from 0.2 to 0.5 when T air increased from 0 to 30 °C, indicating a relatively low contribution of T air to the model. High air temperatures over 30 °C corresponded to substantial solar irradiance over 700 W/m 2 , which led to high energy outputs. However, the r at 700 W/m 2 shown in Figure 6d was not as significant as that shown in Figure 6a because of the high cell temperature. Figure 7 shows the accuracies (R 2 ) of the first and second order polynomial equation models for different PV projects. According to Figure 7a, the accuracies of the linear (first-order) and second-order polynomials were comparable when the five variables of Case 5 (E cell , K cell , cosZ S , cosσ, and T air ) were available. Compared with the first-order polynomial, the second-order polynomial slightly increased the accuracies of projects 2, 4, 5, 13, 17, and 18 of moderate and high tilt angles; however, it reduced the accuracies of the horizontal PVs of projects 6 and 8. Figure 7b shows the RTree and polynomial performances developed by E HG , cosZ S , cosσ, and T air . E cell and K cell , which could be determined from the direct and diffuse components, were not available, and E HG was used as an alternative. The second order polynomials evidently improved the accuracies for PV projects 4, 5, and 14–17.
For the horizontal PV cells of projects 6, 8, and 9, however, the universal polynomials were invalid when E cell and K cell were not available. This was probably because the polynomials focused on the PV projects where the tilt angles were approximately 20°–40°, which accounted for most of the datasets for the model development. The polynomials exhibited inconsistent performance for PV cells where the tilt angle exceeded 60°, as the R 2 was higher than 0.7 for project 10, yet lower than 0.4 for project 4. Equations (8) and (9) were developed, in this connection, from data obtained from projects 6, 8, and 9 for horizontal PV panels only. The overall R 2 values of Equations (8) and (9) for projects 6, 8, and 9 were 0.70 and 0.72, respectively. The results can be compared to a classical model given in Appendix A.
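To make the polynomial form introduced above concrete, the sketch below standardizes the inputs by Z-score normalization and evaluates a second-order polynomial r = C0 + Σ Ci·Xi + Σ Cij·Xi·Xj. The means, standard deviations, and coefficients are placeholders for illustration only; the fitted values of Tables 5 and 6 are not reproduced here.

```python
import numpy as np

# Placeholder normalization constants and coefficients -- NOT the fitted values
# from Tables 5 and 6 of this work.
means = np.array([500.0, 0.4, 0.7, 0.8, 18.0])   # E_cell, K_cell, cos(Z_S), cos(sigma), T_air
stds  = np.array([280.0, 0.2, 0.2, 0.2, 8.0])

C0  = 0.30                                        # intercept
Ci  = np.array([0.25, -0.03, 0.05, 0.04, -0.02])  # first-order coefficients
Cij = np.zeros((5, 5))                            # second-order coefficients (mostly near zero)
Cij[0, 0] = -0.02                                 # e.g. a small E_cell^2 term

def energy_output_rate(raw_inputs: np.ndarray) -> float:
    """Evaluate r = C0 + sum(Ci*Xi) + sum(Cij*Xi*Xj) on Z-score-standardized inputs."""
    X = (raw_inputs - means) / stds
    return C0 + Ci @ X + X @ Cij @ X

# Example call with plausible (but invented) conditions.
print(energy_output_rate(np.array([750.0, 0.3, 0.9, 0.95, 25.0])))
```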
Figure 7. R 2 of the first- and second-order polynomials that were developed by (a) the five variables (E cell , K cell , cosZ S , cosσ, T air ) of Case 5; (b) the four variables (E HG , cosZ S , cosσ, and T air ) of Case 7. 'Tilt' means the tilt angle in degrees, and 'Prj.' stands for project, as described in Table 3.
Figure 8a–d presents the measured and estimated real-time r series of PV panels that were installed horizontally (project 8) and tilted by 20°, 30°, and 63° (projects 7, 14, and 10, respectively) on a typical day in 2018. The measured and estimated r of the independent testing dataset of UNNC are plotted in Figure 8e. These projects were selected to represent those with a similar tilt angle. The four variables of Case 7, including E HG , cosZ S , cosσ, and T air , were the input variables for the second order polynomial of Equations (7) and (9). Models developed by the five variables of Case 5 should be of higher accuracy on the basis of Figure 7. The period considered was between the end of spring and the beginning of summer. In all graphs, there were discontinuities at a few data points because the data were either missing or rejected by the data quality control; only a few data points were removed during the plotted period. The figures show that the second order polynomial correctly estimated the r variation features for PV panels with different tilt angles and at various locations using solely the four readily accessible variables. The PV panels produced more solar energy around noon, owing to the abundant solar radiation and lower air mass. Figure 8a shows that r was overestimated by the polynomial that was developed using the entire dataset of projects 1 to 17. In such cases, Equation (9) should be used to accurately estimate the r of horizontal PV projects. This implies that the polynomial can be somewhat limited for complicated problems that involve PV cells of different features. The solar irradiance in Figure 8a,d fluctuated evidently, and the estimates were slightly less accurate than those shown in Figure 8b,c,e. The r shown in Figure 8a,d was probably affected by various factors such as cloud coverage, indicating that the sky condition can help evaluate the real-time PV energy output. As shown in Figure 8d, the energy output was underestimated in the afternoon; slight overestimations were observed in the morning and at noon in Figure 8e.
Conclusions
We developed polynomials to estimate the energy output of silicon crystalline PV panels in different locations and at various tilt angles. The input variables deemed crucial to the model estimation were identified using the RTree for model simplicity. The important variables included the solar irradiance and diffuse fraction on the PV panel (E cell and K cell ), the cosine of the solar zenith and incidence angles (cosZ S and cosσ), and the air temperature (T air ). The horizontal global solar irradiance could be used as an alternative to E cell and K cell because their values are unavailable in many places around the world. The R 2 values of the polynomials developed with the five most relevant variables were greater than 0.65 for 14 and greater than 0.7 for 11 of the 18 projects, which covered different climates in medium-latitude regions. The model accuracy was slightly sacrificed when replacing E cell and K cell with the more accessible horizontal global solar irradiance E HG ; 14 out of the 18 PV projects still had R 2 over 0.65 when their r values were estimated using the second order polynomial. However, separate polynomials had to be developed for the solely horizontal PV projects. It is thus concluded that the polynomial model is generally not case-sensitive and should reliably estimate the energy output of new silicon PV panels with low and medium tilt angles, facing various directions including southeast, south, and southwest. The proposed models could accurately estimate the long-term energy production of silicon crystalline PV panels, typically in places where a meteorological year database is accessible. The work provides essential knowledge regarding the design of energy saving and renewable energy projects. In addition, it demonstrates an approach of using the outcomes of machine learning to develop polynomial equations. The findings are applicable to silicon crystalline PV cells only, which, however, represent most engineering projects and commercial uses nowadays.
Acknowledgments:
The authors want to thank Dr. Isaac Yu Fat Lun from the University of Nottingham, Ningbo, China (UNNC), for helping us get access to the data available at the Centre for Sustainable Energy Technologies (CSET).
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A Performance of a Classical Model
Equation (A1) gives a classical model that estimates the effect of the environment on the PV efficiency. This model needs the nominal operating cell temperature (T NOCT ), which is tested by the manufacturer using 800 W/m 2 solar irradiance on the cell (E NOCT ), a 20 °C surrounding air temperature, a 1 m/s wind speed, and an open back side installation. In Equation (A1), η ref is the nameplate efficiency of the PV, η is the real-time efficiency determined by the environment, E cell is the real-time irradiance on the PV panel, and T ref is the PV temperature at the standard test condition, which should be 25 °C. The coefficient β is set at 0.0045, as recommended by reference [14], according to a number of models. The estimation of E cell needs the direct beam and sky diffuse radiation, which are less accessible than the horizontal global irradiance (E HG ). The solar incidence angle on the plane (σ) is needed as well. η ref and T NOCT identify the energy production and thermal features of the PV panel and were determined from the product catalogues. The performance of Case 1 was not given because the panel model was not specified.
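Equation (A1) itself is not reproduced above. The sketch below implements the widely used NOCT-based form of this classical model, consistent with the variables defined in this appendix; the exact formulation used in the paper may differ slightly, and the example numbers are illustrative.

```python
def classical_pv_efficiency(e_cell, t_air, eta_ref, t_noct,
                            e_noct=800.0, t_ref=25.0, beta=0.0045):
    """Classical NOCT-based estimate of the real-time PV efficiency.

    e_cell  - real-time irradiance on the PV panel, W/m^2
    t_air   - ambient air temperature, deg C
    eta_ref - nameplate efficiency at standard test conditions
    t_noct  - nominal operating cell temperature from the product catalogue, deg C
    """
    # Cell temperature rises above ambient in proportion to the irradiance.
    t_cell = t_air + (t_noct - 20.0) * e_cell / e_noct
    # Efficiency falls linearly with cell temperature above the 25 deg C reference.
    return eta_ref * (1.0 - beta * (t_cell - t_ref))

# Example: a 19%-efficient panel with T_NOCT = 45 deg C under 700 W/m^2 at 30 deg C air.
print(classical_pv_efficiency(700.0, 30.0, 0.19, 45.0))
```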
The R 2 values in Table A1 show that the model was valid for most projects under study, yet became invalid for the others, and the R 2 values of six projects were less than 0.5. An R 2 lower than 0 means that the model estimation led to more uncertainty than the measurement average. The classical model outperformed the proposed equations (including those for tilted and horizontal cells) for projects 4 and 17 only, and was generally less accurate than the other cases. For project 18, which was not used in developing the new equations, the R 2 of the classical model was 0.57, lower than the R 2 of the proposed case, which almost reached 0.8. This indicates that the proposed model, in the form of one or two simple equations, shows good robustness for PV projects of different tilt angles.
"Environmental Science",
"Engineering"
] |
Measuring Surroundings Awareness using Different Visual Parameters in Virtual Reality
—Due to the popularity of digital games, there is a growing interest in using games as therapeutic interventions. The ability of games to capture attention can be beneficial to distract patients from pain. In this paper, we investigate the impact of visual parameters (color, shapes, and animation) on users’ awareness of their surroundings in virtual reality. We conducted a user study in which experiments included a visual search task using a virtual reality game. Through the game, the participants were asked to find a target among distraction objects. The results showed that the different visual representations of the target among distraction objects could affect the users’ awareness of their surroundings. The least awareness of the surroundings occurred when the target and distractors shared similar features. Further, the conjunction of low similarity between distractors-distractors and high similarity between target-distractors provided less awareness of the surroundings. Additionally, results revealed that there is a strong positive correlation between search time and awareness of the surroundings. Less awareness of the surroundings while playing a game implies that users are positively engaged in that game. These results offered a set of criteria that can be applied to future virtual reality interventions for medical pain distraction.
I. INTRODUCTION
Game playing is recognized as one of the most attractive activities for individuals all over the world. The success of games is highly dependent on their ability to keep their players engaged [1]. Despite some of the negative aspects of playing games, there is growing research demonstrating the positive effects that game playing can provide. If games succeed in activating users' attentional engagement and motivation, they can provide valuable therapeutic benefits [2]. Playing games offers a promising non-pharmacological distraction technique for pain control by diverting attention away from painful stimuli [3]. The logic behind distraction is that pain requires attention and humans have limited information-processing resources [4]. Therefore, the more attentional resources a distraction intervention consumes, the fewer resources are available for pain perception [3]. The research work conducted in [5] highlights that high attentional engagement during games demonstrates a significant analgesic effect on pain distraction.
Distraction interventions can engage one or more sensory modalities, and each modality can influence the level of distraction needed to attract focused attention. Among the sensory modalities, vision is the most important in its capacity and utility in terms of perception [6]. Moreover, distraction using visual tasks has been confirmed to significantly reduce pain compared to other modalities [7], [8]. Numerous studies have demonstrated that the majority of information processing comes from the visual modality (visual dominance) [9]. A wide range of research supports that vision captures the largest share of our attentional resources [10]. Recently, virtual reality (VR) has become one of the major tools that affect visual perception by offering a highly immersive and tangible interaction experience [11]. Experimental research showed that immersive environments build stronger user engagement compared to screen-based environments [11]. VR technologies provide a higher degree of presence through increased interactivity and hence increase the user's attention to the virtual environment [11]. These unique characteristics of VR have encouraged researchers to conduct numerous studies, [12], [13], investigating the effect of VR as a non-pharmacological tool for pain management.
We believe that understanding the visual parameters affecting humans' awareness of their surroundings in VR is valuable. Less awareness of the surroundings indicates greater engagement and immersion in the visual activity. High engagement demands a greater amount of the user's focused attention. This is critical for many applications such as game-based learning [14], engagement in games [15], and pain distraction [16]. Tasks that require subjects to detect a particular target among distractors have gained great interest in vision and attention research [17]. The results of these tasks depend on the features of the target and distractors. Different features may have different capabilities to guide users' attention in a given task [17]. The efficiency of a visual search can be assessed from changes in performance such as search time or accuracy [17]. Many studies [18], [19] have presented theories discussing the factors that affect visual search efficiency in 2D, such as the feature integration theory and the similarity theory.
In this paper, we present a user study measuring the impact of different visual parameters on users' awareness of their surroundings and on task search time in VR. The experiments in this study focus on the principle of visual search using a simple VR game. Numerous studies have shown that the illusion of VR significantly reduces the perception of pain. However, to our knowledge, no studies have determined the impact of the visual parameters that affect users' awareness of their surroundings. Lower awareness of the surroundings implies that the VR intervention engages much of a user's attentional capacity. This is valuable, especially when developing VR interventions for pain distraction. Results from this study will inform the design of future VR interventions for medical pain distraction.
The rest of this paper is organized as follows. Section II discusses the background of VR distraction for managing pain. Section III presents the conducted user study. Furthermore, Section IV summarizes the main findings of the study. Finally, the discussion and conclusion are included in Sections V and VI, respectively.
II. BACKGROUND
This section discusses in detail the related work of using VR distraction for pain management.Section II(A) provides some background on VR technology.Section II(B) discusses some applied VR research for pain distraction and reduction.
A. Virtual Reality
With advances in wearable technology, VR has become popular across various industries due to its ability to engage users in a multisensory environment. There are several definitions of VR, but the most appropriate one is: "a real or simulated environment in which a user experiences telepresence". This definition is chosen as it describes VR without any implications about technology [20]. VR offers a combination of three effects: 1) full immersion, where users wear a headset that visually isolates them from the real world; 2) stereoscopic vision, the simulation of the real world in three dimensions; and 3) motion capture, which allows tracking of the head position and controllers with three or six degrees of freedom [21]. These effects enable VR to provide users with a unique visualization tool to explore, manipulate, and interact with their data.
VR is unique in that it allows a multisensory experience involving visual, auditory, and tangible senses [22]. The VR characteristics of immersion, presence, and interactivity provide users with a greater sense of engagement. These three factors may subsequently prompt better distraction outcomes from a VR intervention via increased engagement. Presence describes the subjective experience of being in one place or environment, even when one is physically situated in another [23]. There are two points of view on the definition of immersion: the first is based on the state of mind (feeling caught up in and absorbed by the virtual world), and the second is based on the technological capability of a VR system: 1) fully immersive, using a head-mounted display (HMD); 2) semi-immersive, using large projection screens; and 3) minimally immersive, using a window-based display [24]. Interactivity describes the degree to which users can influence the content of the virtual environment [24]. According to [25], continuing improvements in hardware and software make VR technology more affordable for the scientific and commercial community.
B. Virtual Reality Distraction
Various studies have shown that VR distraction is more effective in reducing pain and anxiety than typical distraction techniques such as deep breathing, listening to music, watching a favorite video, and hypnosis [13].The last decade has witnessed exponential growth in using VR interventions for pain management with encouraging results that recommend VR distraction to enhance treatment outcomes [26].Numerous studies were conducted to investigate the effectiveness of VR distraction in reducing different types of pain.Research in this area [16], [27], indicates that VR is a promising adjunct for controlling acute and chronic pain.
Patients with severe burn injuries frequently experience extreme pain related to the injury itself or its wound care procedures. VR distraction has provided a strong non-opioid pain control technique for both pediatric and adult burn patients, even in the Intensive Care Unit [28]. Also, adding VR to the rehabilitation program of pediatric burn patients had a significant effect on decreasing pain [29]. The main finding from these studies is that VR significantly reduces pain and other uncomfortable symptoms experienced by burn patients [30]. The findings of chronic pain studies also support the efficacy of VR distraction: patients reported a significant decrease in pain ratings when using VR interventions compared to the control condition [31]. However, studies focusing on the use of VR for chronic pain are few [13], and further investigation is needed to ensure its feasibility.
Moreover, VR has succeeded in offering a powerful distraction tool for patients who suffer from cancer pain. Cancer patients experience pain associated with the disease itself and/or pain caused by examinations and treatments such as chemotherapy. Several studies showed that VR interventions are effective in reducing pain and other chemotherapy-related symptoms in both adult and pediatric patients suffering from different types of cancer [26], [32]. Patients receiving VR during their chemotherapy session reported spending less time thinking about pain and also underestimated the duration of the treatment session [33]. VR distraction can help cancer patients accept and tolerate treatment procedures and hence accelerate the recovery process. Beyond the studies that investigated the efficacy of VR distraction, other studies investigated whether VR distraction provides a larger analgesic effect when used repeatedly during treatment sessions [28], [34]-[36]. Results indicated that VR efficacy did not diminish with repeated use and that pain intensity levels dropped significantly. Previous research, as indicated by both [34] and [36], suggested that receiving VR for a longer treatment duration is more effective than a shorter one.
Moreover, a few studies have investigated the impact of low-cost VR technology on pain tolerance [37], [38]. In both studies, participants suffered from severe burn injuries, and the results showed that low-cost VR technology achieved a promising level of pain reduction. This key finding opens the door to further research on generalizing cost-effective technology to more patients and different types of pain. Finally, VR interventions can also be used effectively to distract young children aged less than four years during wound care procedures [39]. This study used a projector-based VR system, and the results indicated that VR significantly reduced children's acute pain.
Results from the scientific literature support the adjunctive use of VR distraction for pain management. However, the impact of different visual parameters such as color, shape, and animation on users' awareness of their surroundings remains unclear. Understanding the impact of such parameters on awareness of the surroundings is valuable for developing powerful VR therapeutic interventions. To this aim, we conducted a study that included a VR game where players searched for a target among distractors. One study has investigated the impact of color congruency on task search time in VR. That study measured search task performance when the target and distractors varied with respect to congruency [40]. Moreover, the study examined the impact of using a flanker item as a visual distraction from completing the task flow. Participants were asked to search for daily items on a virtual kitchen countertop and to ignore flanker items. Results indicated that search time became longer when the target and distractors shared a color. Regarding 2D applications, many studies have examined the impact of visual representations of the target and distractors on task search time. Treisman et al. [18] provided a framework that explained the hypotheses of the feature integration theory. This theory hypothesized that the search task will proceed slowly when the target and distractors share features. The theory suggested that the human visual system maintains a set of feature maps for different visual attributes (such as color or shape). When the target has a unique feature, only one feature map needs to be accessed, leading to a fast response time. Another research work suggested that the amount of difference between target-distractors and distractors-distractors affects search time [19]. This theory hypothesizes that if the target-distractors similarity is high, then search efficiency decreases and search time increases. Likewise, if the distractors-distractors similarity is low, then search efficiency decreases and search time increases. These studies showed that the presented visual features can significantly affect users' focus of attention and engagement in search tasks. However, one important gap in the literature on virtual reality analgesia is that no studies have explored the impact of different visual parameters such as color, shape, and animation on users' awareness of their surroundings. To address this gap, we performed a user study to determine the impact of these visual parameters on users' awareness of their surroundings and on task search time. Moreover, we are also interested in exploring whether there is a correlation between task search time and awareness of the surroundings. The results from this study will help to develop effective VR interventions that engage most of the patient's attention so that the patient feels less pain.
III. METHODS
This study was designed to help in developing a future VR game for medical pain distraction.To provide more distraction effects, it is valuable to determine the impact of different visual parameters on users' awareness of their surroundings while in VR.We found that visual components such as color, shape, and animation are commonly included in any digital game.Accordingly, we focused on determining the impact of these visual components on users' awareness of their surroundings.Experiments in this study focus on the principle of visual search using a simple VR game.We examined the impact of different visual representations of the target and distractors on the search performance and awareness of the surroundings.
A. User Study
The study used a within-subject design where each participant experienced different conditions. These conditions varied with respect to visual parameters such as color, shape, and animation. The participants were asked to perform a primary visual task: finding a target cube among distraction objects as fast as possible. Moreover, external disruptions (audio or vibration) were generated randomly while the search task was being performed. These disruptions were used to measure the participants' awareness of their surroundings without affecting the completion of the primary task. The number of these disruptions was fixed among participants, but their order was randomized. Each condition consisted of twenty-five trials and had a duration of about three minutes. At the end of each condition, a compulsory break was offered. During the break, each participant had to fill in a simple questionnaire about the time duration and the observed disruptions, e.g., "How long did you feel the condition took?" and "Did you observe any disruption?". If they answered "yes" to the latter question, they had to report which disruption types occurred and how often. The experiment took around forty-five minutes per participant including breaks. For all conditions, we recorded the completion time and the generated disruptions. Through the study, we examined task completion time and disturbance awareness for each of the study's conditions. We measured the awareness and illusion errors reported by the participants: the awareness error is the number of missed disruptions, while the illusion error is the number of reported disruptions that never happened. These data were collected to measure the participants' awareness of the surrounding environment.
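As a minimal sketch (not part of the original study materials) of how the awareness and illusion errors defined above can be scored, assuming hypothetical disruption labels for the generated and participant-reported events:

```python
from collections import Counter

def score_errors(generated, reported):
    """Compare generated vs. participant-reported disruptions for one condition.

    generated, reported: lists of disruption labels (hypothetical names), e.g.
        ["ringtone", "beep", "short_vibration", "long_vibration"]
    Returns (awareness_error, illusion_error):
        awareness_error = disruptions that occurred but were not reported (missed)
        illusion_error  = disruptions reported although they never occurred
    """
    gen, rep = Counter(generated), Counter(reported)
    awareness_error = sum((gen - rep).values())   # missed occurrences
    illusion_error = sum((rep - gen).values())    # reported but never generated
    return awareness_error, illusion_error

# Example: 4 generated disruptions; the participant missed a beep and a short
# vibration, and reported a long vibration that never happened.
print(score_errors(
    ["ringtone", "beep", "beep", "short_vibration"],
    ["ringtone", "beep", "long_vibration"],
))  # -> (2, 1)
```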
B. Participants
A total of 31 undergraduate students (19 females, 12 males) aged 18-24 years participated in the study. Participants were recruited via a university announcement for voluntary inclusion in the study. All participants were eligible with respect to the criteria determined for the study (normal vision and normal color vision). Five participants were excluded as they suffered from VR-induced motion sickness, so a total of 26 healthy participants were included in the study analysis. Informed consent for the publication of identifying information/images in an online open-access platform and for participation in the study was provided by all participants before the experiment. The study was approved by the ethical committee of the Faculty of Computers and Artificial Intelligence, Benha University. All methods were carried out in accordance with relevant guidelines and regulations. The flow of participants is shown in the flow diagram (see Fig. 1).
C. Equipment
We carried out the experiments of the study in a controlled room. Participants received the VR experience using a Xiaomi MI VR headset with a Samsung Galaxy Note3 phone and a handheld controller as an interaction device. We used this head-mounted display to be able to use a variety of mobile phones. The Samsung Galaxy Note3 phone was attached to the HMD and was used for recording the completion time of the condition. Another mobile phone was attached to the participant's left arm and was used to generate and save the random disruptions. The participants were able to look around and navigate the virtual environment using their heads. The VR application was developed using the Unity engine and the Android application using Android Studio. The condition started once the participant wore the HMD and ended when the twenty-five trials were completed. Fig. 2 shows the hardware components used in the experiment along with one of the participant's trials.
D. Stimuli
Participants were situated in a full virtual closed room surrounded by a number of distractive objects.There was only one real target cube which was always spawned randomly at the eye level of the user.The size of the objects was scaled according to their distance away from the camera's location (size was around 10 % of the display width and 22 % of its height).The distractive objects were spawned at fixed random locations within the room.Colors were defined by the following RGB values: target (R=0, G=255, B=118), in conditions 2 and 5 distractor's red color (R=255, G=0, B=0), in conditions 3, 4, and 6 the colors of the cubes were generated randomly while spheres in condition 6 used the same target color.The environment was populated with 20 distractive objects.The experiment took around forty-five minutes per participant.Each participant performed 150 trials: 6 conditions × 25 trials per condition.
E. Study Design
The study was carried out in immersive VR and included six different conditions with the same task. The participants were asked to find a target object in the presented VR condition. Each condition included a different visual representation of the target and distraction objects, which is our study's independent variable. This differentiation among conditions is used to investigate the different visual parameters and their impact on users' awareness of their surroundings. The study's conditions are: 1) Condition 1: Single-cube; 2) Condition 2: One-color cubes; 3) Condition 3: Multi-color cubes; 4) Condition 4: Animated multi-color cubes; 5) Condition 5: Spheres; and 6) Condition 6: Cubes-spheres.
All conditions included one target green cube along with other distraction objects spheres or cubes.The order of exploring the conditions was randomized among the participants.We previously conducted a pilot study to determine the best visual design for the conditions.Also, the pilot study helped us to determine the break time duration which was set to five minutes.
Fig. 3 shows our study's six conditions, where the visual representation of the target and distraction objects varied between them.Fig. 3(a) shows "Condition1" that included the target green cube only with no distraction objects.Fig. 3(b) shows "Condition2" with one-color distraction objects.
Participants were asked to find the target green cube which was allocated randomly with the fixed distraction red cubes.Fig. 3(c) shows "Condition3" with a fixed multi-color distraction objects.The participants had to find the target green cube out of the multi-color presented cubes.Fig. 3(d) shows "Condi-tion4" with animated multi-color cubes as distraction objects.Condition4 included an extra effect which was animation.Fig. 3(e) shows "Condition5" in which participants had to find the target green cube hidden between fixed red spheres.Condition5 included different shapes as distraction objects.Fig. 3(f) shows "Condition6" in which participants had to find the target green cube which was surrounded by fixed one-color spheres (the same target color) and fixed multi-color cubes.
The participants were asked to complete the six conditions.The completion time and the generated disruptions were recorded for each condition by the application.The completion time was automatically recorded from the start of the condition till the participant finished the twenty-five trials.The completion time represented the total time taken to finish the condition.Moreover, we recorded the response time taken to find the target in each trial.We calculated the search time for the condition as the average of the twenty-five trials' response time.The completion time was saved on the mobile attached to the HMD, while the generated disruptions were saved on the other mobile phone.On the other hand, the estimated time and errors were reported by the participants via the questionnaire that was filled out after each condition.All of the data was recorded and transcribed to a computer spreadsheet for later analysis.
F. Procedure
Prior to running the study, we had explained many rules and cautions to the participants.We told them to take off the HMD and stop running the condition if they felt any VRinduced motion sickness during their running.We showed an example display of the conditions to explain how to play.The participants heard the audio disruptions and felt the vibrations to get familiar with them.We had two types of audio disruptions (ringtone and beep) and two types of vibrations that varied in duration (short: one second and long: three seconds).After getting ready the participant wore the HMD, hold the handheld controller, and attached the other mobile phone to his/her left arm to start the condition.The participants used the handheld controller to press on the target green cube when found.
The primary purpose of the study was to determine the impact of different visual parameters on users' awareness of their surroundings in VR. The conditions varied according to several parameters such as color, shape, and animation. The main task was to search for the green target cube and find it as fast as possible. When the target fell into the participant's view, he/she focused the cursor of the handheld controller on the target and placed a single click. Finding the target marked the end of a trial and the start of a new one. After the click, the target disappeared and a new target object was randomly placed at another location within the same room. During each condition, the mobile phone attached to the participant's left arm randomly generated four disruptions. The type and occurrence frequency of these four disruptions were used to measure both the awareness and illusion errors. We provided different types of disruption to measure the participants' awareness of the surrounding environment. The number of disruptions (four) was fixed across conditions and participants. After twenty-five trials, the current condition ended and a compulsory break followed.
During the break, the participants were invited to take off the HMD and were asked to fill out the user-experience questionnaire. They were asked to report the estimated duration of the condition in minutes and the disruptions they observed while running the condition. After the break, the participants returned to continue the study and repeated the same procedure with a new condition. Each participant completed a block of six conditions, and the order of experiencing the conditions was randomized among the participants.
IV. EXPERIMENTAL RESULTS
Analyses of the sample data (N = 26) were conducted using IBM SPSS Statistics v25.For all analyses, an alpha level of 0.05 was used unless otherwise specified.
A. Completion Time
We ran the study to measure whether the visual parameters affect the task completion time and/or users' awareness of the surroundings in VR. We calculated the time difference values by subtracting the completion time (automatically recorded) from the estimated time (reported by the participant). This time difference was used to generate the less-than and greater-than values: less than indicates that the participant reported a time shorter than the actual time, while greater than indicates the opposite. The less-than values are the negative time differences and the greater-than values are the positive ones; we used absolute values for all time differences, less-than, and greater-than values. For each condition, we ran a paired-samples t-test between the less-than and greater-than values. The results showed a significant difference between the less-than and greater-than values, with a higher mean for the greater-than values (see Fig. 4).
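A short sketch of the arithmetic described above, with hypothetical duration values and a paired t-test run on toy summary arrays (the paper's actual analysis was done in SPSS; this is only an illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant durations (minutes) for one condition
estimated = np.array([4.0, 5.0, 2.5, 6.0, 3.0])   # reported by participants
actual    = np.array([3.1, 3.0, 3.2, 3.0, 2.9])   # recorded by the application

diff = estimated - actual                   # signed time difference
abs_diff = np.abs(diff)                     # values analysed in the ANOVA below
less_than = np.abs(diff[diff < 0])          # absolute under-estimations
greater_than = np.abs(diff[diff > 0])       # absolute over-estimations

# A paired-samples t-test needs paired, equal-length samples; shown here on
# toy per-participant summary values of the two estimation-error types.
t_stat, p_val = stats.ttest_rel([1.2, 0.9, 1.4, 1.1], [0.4, 0.6, 0.5, 0.7])
print(abs_diff, t_stat, p_val)
```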
We ran the time difference values through a one-way repeated measures ANOVA. Mauchly's test indicated that the assumption of sphericity had not been violated, χ2(14) = 19.06, p = 0.165; had it been violated, the Greenhouse-Geisser corrected tests would have been reported. The results showed a significant main effect of the different visual parameters (color, shape, and animation) on the time difference, F(5, 125) = 2.51, p = 0.034. Fisher's Least Significant Difference (LSD) post hoc analysis showed that participants significantly overestimated the time duration of the cubes-spheres condition (mean = 1.39; SD = 1.53) compared to the single-cube condition (mean = 0.78; SD = 0.80; p = 0.018) and the spheres condition (mean = 0.93; SD = 0.99; p = 0.020). There was no significant difference between the other pairs of conditions. Fig. 5 shows the mean values by condition for the time difference.
We ran the study's search time values through a one-way repeated measures ANOVA. Mauchly's test indicated that the assumption of sphericity was met, χ2(14) = 21.72, p = 0.086. The results showed that there was no significant main effect of the different visual parameters on search time, F(5, 125) = 0.393, p = 0.853. Fig. 6 shows the mean values by condition for the search time.
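The authors ran these ANOVAs in SPSS. As a rough illustration only (not the authors' code), an equivalent one-way repeated-measures ANOVA can be run in Python with statsmodels' AnovaRM on a long-format table of toy values (note that AnovaRM does not itself report Mauchly's sphericity test):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per participant x condition (toy values)
data = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "condition":   ["single_cube", "cubes_spheres"] * 3,
    "search_time": [1.8, 2.4, 1.6, 2.1, 2.0, 2.6],
})

# One-way repeated-measures ANOVA on search time
res = AnovaRM(data, depvar="search_time", subject="participant",
              within=["condition"]).fit()
print(res)
```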
B. Task Errors
In the following two subsections we present the analysis results for the awareness and illusion errors reported by the participants. The awareness error is the number of missed disruptions, while the illusion error is the number of reported disruptions that never happened. These errors were collected to measure the participants' awareness of the surrounding environment.
1) Awareness error: For each condition, we ran the awareness error through a one-way repeated measures ANOVA. Mauchly's test indicated that the assumption of sphericity was met, χ2(14) = 15.19, p = 0.368. The results showed no significant main effect of the different visual parameters on the awareness error, F(5, 125) = 1.29, p = 0.273. However, LSD post hoc analysis showed that participants were significantly less aware of their surroundings in the cubes-spheres condition (mean = 1.38; SD = 1.28) than in the single-cube condition (mean = 0.81; SD = 0.89, p = 0.037).
2) Illusion error: For each condition, we ran the illusion error through a one-way repeated measures ANOVA. Mauchly's test indicated that the assumption of sphericity was met, χ2(14) = 16.57, p = 0.282. The results showed no significant main effect of the different visual parameters on the illusion error, F(5, 125) = 1.77, p = 0.124. However, LSD post hoc analysis showed that participants were significantly less aware of their surroundings in the cubes-spheres condition (mean = 0.88; SD = 1.14) compared to the single-cube condition (mean = 0.38; SD = 0.63, p = 0.030) and the one-color cubes condition (mean = 0.31; SD = 0.54, p = 0.041). Fig. 7 shows the mean values of awareness and illusion errors by condition.
C. Search Time and Awareness of Surroundings
We ran Spearman's correlation test to determine whether there is a relationship between participants' search time and the awareness error. The results showed that search time and awareness error have a statistically significant relationship (r_s = 0.886, p = 0.019). The direction of the relationship is positive: search time and awareness error are positively correlated, meaning that these variables tend to increase together. The magnitude of the association is strong (0.5 < |r| < 1.0). A Spearman's correlation test was again computed to assess the relationship between participants' search time and the illusion error. There was a positive correlation between the two variables (r_s = 0.829, p = 0.042). Overall, there was a strong, positive correlation between search time and illusion error: increases in search time were correlated with increases in illusion error.
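As an illustration of the reported correlation analysis (not the authors' code), Spearman's rank correlation can be computed with SciPy; the values below are toy condition-level means, not the study data:

```python
import numpy as np
from scipy.stats import spearmanr

# Per-condition means (toy values): search time vs. awareness error
search_time     = np.array([1.9, 2.0, 2.1, 2.2, 2.0, 2.6])
awareness_error = np.array([0.81, 0.90, 1.00, 1.10, 0.95, 1.38])

rho, p = spearmanr(search_time, awareness_error)
print(f"r_s = {rho:.3f}, p = {p:.3f}")
```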
V. DISCUSSION
The goal of this experiment was to determine the impact of different visual parameters on the users' awareness of their surroundings.The least awareness of the surroundings means that the user is highly engaged in the VR content.High engagement increases the levels of immersive experience, thus, increasing the impact of future VR interventions for medical pain distraction.
A. Completion Time
Based on the study analysis, the time difference (estimated - actual) data revealed a significant difference between the cubes-spheres condition and both the single-cube and the spheres conditions. These results indicate that more attentional resources were engaged in the cubes-spheres condition and hence the passage of time became distorted. This implies that the cubes-spheres condition affected users' perception of time in VR. Due to the visual representation of the target and distractors in the cubes-spheres condition, participants focused their attention on detecting the target and hence failed to judge the time duration. We further measured the search time of each condition. The analysis showed no significant difference between the six conditions. However, the cubes-spheres condition required more search time to detect the target than the remaining conditions. The results indicate that search time increases when it is hard to identify the target among distractor objects, implying that the visual representations of the target and distractors affect search time in VR. The spheres condition produced the shortest search times, whereas the cubes-spheres condition produced the longest. The target in the spheres condition had unique features (color and shape), so participants were able to detect it easily. The cubes-spheres condition took the longest because the target shared multiple features (color and shape) with the distracting objects. Sharing similar features between target and distractors increased the response time for the search task. This finding is in line with research conducted on other 2D and 3D platforms [40], [18]. Fig. 6 shows the mean values of search time for the six conditions.
Another important issue is that the similarity between target-distractors and distractors-distractors also affected the search time. In the spheres condition, the similarity between the target and distractors was low and the similarity between distractors was high, so search time decreased. On the contrary, the target-distractors similarity in the cubes-spheres condition was high and the distractors-distractors similarity was low. Thus, participants in the cubes-spheres condition required more time and also paid more attention to the target. This finding is aligned with the finding presented in [19]. Overall, our findings indicate that search time was higher when the target and distractors shared similar features. Further, the conjunction of low similarity between distractors-distractors and high similarity between target-distractors produced long search times. This shows a similarity in results between task search time in VR and on desktop 2D platforms.
Finally, it may be worth considering the qualitative data.It is interesting to state that the comments from the participants further supported the statistical findings.For the overestimation of time, participants claimed that the task completion time influenced their judgment of the estimated time.Therefore, as the task became longer the total time was assumed to become longer.This explains why participants overestimated the time duration of the conditions.
B. Task Errors
The analysis of the awareness error showed a significant difference between the cubes-spheres condition and the single-cube condition. As shown in Fig. 7, the cubes-spheres condition has the highest awareness error rate. Participants in this condition required more of their focused attention to identify the target among distractors. Therefore, they were less aware of their surroundings in this condition compared to the others, and they missed many of the auditory and vibration disruptions. As indicated, the cubes-spheres condition included conjunctions of many visual factors that led to increased engagement. Sharing similar features between the target and distractors demanded more attentional resources and hence decreased the participants' awareness of their surroundings. Moreover, participants were less aware of their surroundings when the similarity between target-distractors was high and the similarity between distractors-distractors was low. In line with the previous analysis, the condition that provides the most distraction from the surroundings is the one with the longest search time and the highest awareness error.
Regarding the illusion error, the results showed that participants in the cubes-spheres condition were less aware of their surroundings compared to the single-cube condition and the one-color cube condition.Participants in the cubes-spheres condition reported a significant number of disruptions that never happened, which implies that their focus attention was affected by the visual representations of this condition.Similar to awareness error, participants' attention was significantly engaged in the cubes-spheres condition.The superiority of that condition as an influence on awareness and illusion errors strongly suggests that this condition effectively decreased the participants' awareness of their surroundings.Participants in the cubes-spheres condition consumed a high capacity of attentional resources to detect the target compared to other conditions.This implies that the participants were highly engaged in that condition.In our VR game, the least awareness of the surroundings was provided when the target shared similar features with distractors along with the conjunction of low similarity between distractors-distractors and high similarity between target-distractors.Thus, these visual representations of the target and distraction objects should be considered when developing a VR intervention for pain distraction.
C. Search Time and Awareness of Surroundings
Regarding the results of search time and task errors (awareness and illusion), there was a strong positive correlation between search time and awareness of the surroundings in immersive VR. When the search task requires more time, awareness of the surroundings decreases. Awareness of the surroundings is represented by the awareness error and the illusion error. Notably, participants lost awareness of their surroundings when the search task required more of their focused attention; therefore, they missed many of the disruptions generated while playing the game. The logic behind this is that humans' attentional resources are limited. When the search task requires more time, a larger share of these resources is captured to perform the task, so less attention is available to process incoming signals from the surroundings. This finding offers potential, especially for medical applications that can benefit from using VR interventions for pain distraction. Future VR interventions should employ these findings to maximize distraction effects and provide a greater reduction in pain intensity.
We would like to highlight the following limitations, which should be thoughtfully considered within the context of the study's findings. Firstly, our participant pool comprised individuals aged 18 to 24 years who had minimal experience using VR. To enhance the generalizability of our findings, future research should encompass more diverse participant populations, spanning various age groups and educational backgrounds. Furthermore, future studies should examine gender and age differences, as the outcomes may be linked to such variables. Another limitation is the relatively short duration of each condition, which led participants to overestimate the time duration; to address this, future studies should consider implementing longer VR interventions. Lastly, this study was limited to examining the main visual components individually; future work should examine combinations of these components.
VI. CONCLUSION
There is solid evidence from controlled research that VR distraction is effective for pain distraction. To our knowledge, none of the previous controlled studies has examined the impact of different visual parameters on users' awareness of their surroundings in VR, so our study examined in depth the visual parameters and their impact on users' awareness of their surroundings. Results showed that when the search task required more time, the awareness and illusion errors were high. High errors indicate less awareness of the surroundings and more engagement in the game. Moreover, the results revealed that visual features that affect search time and capture a viewer's focus of attention in 2D games are also effective in VR. This key finding, in addition to immersion, renders VR an effective tool for pain distraction. This is an elementary study conducted to determine the visual representation of target and distractors that provides the least awareness of the surroundings in VR. This visual representation will be employed in our next VR game designed for distracting patients from pain. By using VR technology, we may take a significant step towards increasing the therapeutic benefits of VR for pain management.
Fig. 7. Mean values for the awareness and illusion errors by condition.
"Computer Science",
"Engineering"
] |
Fuzzy Synchronization Control for Fractional-Order Chaotic Systems With Different Structures
This paper discusses the synchronization problem for a class of unknown fractional-order chaotic systems (FOCSs) with indeterminate external disturbances and non-symmetrical control gain. A parallel adaptive fuzzy synchronization controller is constructed in which the indeterminate non-linear functions are approximated by fuzzy logic systems; based on fractional-order Lyapunov stability criteria, fractional-order parameter adaptive laws are designed to regulate the corresponding parameters of the fuzzy systems. The proposed method guarantees the boundedness of all signals in the closed-loop system and, at the same time, ensures the convergence of the synchronization error between the master and slave FOCSs. Finally, the feasibility is demonstrated by simulation studies.
INTRODUCTION
Fractional calculus appeared in the same era as classical integer-order calculus, but because fractional-order calculus long lacked clear practical applications and its theory is complex, it was rarely investigated by scholars. Recently, it has been shown that fractional calculus not only provides new mathematical methods for practical systems but is also especially well-suited for describing certain dynamical behaviors of physical systems [1][2][3][4][5]. Consequently, fractional-order calculus has been employed to describe rheology and thermal systems, signal processing and system identification [6,7], control and robotics [8][9][10][11], and many other real-world systems. Since fractional-order calculus has memory, a model built on fractional-order calculus describes a complex dynamic system more accurately than an integer-order one. The study of the fractional-order chaotic system (FOCS) has thus gradually become a hot research topic.
It is well-known that chaotic systems (integer-order or fractional-order) are sensitive to initial state values, i.e., the stability of the system changes obviously with small changes in initial values; thus, the synchronization control of FOCSs is challenging work. Methods such as PD control [12], PID control [13][14][15], adaptive fuzzy backstepping control [16][17][18][19][20], sliding mode control [21][22][23][24][25], Lyapunov direct methods [26][27][28], and adaptive neural network control [29][30][31][32] have been used to control or synchronize fractional chaotic systems. Chen et al. [21] investigated the adaptive synchronization of FOCSs, where different structures of the master and slave FOCSs and the existence of external disturbances are ignored. In Wang et al. [33], the synchronization of FOCSs accompanied by external disturbances was studied. To handle unmatched disturbances, a robust synchronization method with non-linear input was proposed in He et al. [30], but its control cost was very high. It should be mentioned that, in the above literature, the stability analysis of the synchronization of FOCSs still relies on ideas from linear systems. Generally speaking, the synchronization of FOCSs with unknown factors and external disturbances needs further research.
Motivated by the above discussion, this paper aims to design a synchronization controller for master and slave fractional-order chaotic systems (FOCSs) subject to different structures and external disturbances. The control gain matrix is assumed to be unknown. Fuzzy logic systems are used to approximate the unknown non-linear functions, and fractional-order parameter adaptive laws are designed to update the fuzzy parameters. The main contributions of this work are summarized as follows.
(1) The non-symmetrical control gain matrix and external disturbances in FOCSs are considered. Besides, unlike some prior works, such as Liu et al. [16] and Pan et al. [31], the sequence-leading minor of the control gain matrix is not assumed to be zero. (2) Based on the Lyapunov stability theorem, fractional-order fuzzy parameter adaptive laws are designed.
PRELIMINARIES
The ν-th fractional-order integral and the ν-th Caputo derivative are defined in terms of the Gamma function Γ(·), where n is an integer satisfying n − 1 ≤ ν < n. The Laplace transform of Caputo's fractional-order derivative (3) can be expressed as in Li et al. [2]. For simplicity, we suppose that ν ∈ (0, 1) in the rest of this paper. The following conclusions are given in advance.
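The displayed equations for these definitions appear to have been lost during text extraction. For reference only, the standard forms that the text presumably intends (written in the paper's time variable k and fractional order ν) are:

```latex
% Standard definitions (reference only): Riemann-Liouville integral, Gamma
% function, Caputo derivative, and its Laplace transform, in time variable k.
\begin{align}
  {}_{0}I_{k}^{\nu} f(k) &= \frac{1}{\Gamma(\nu)} \int_{0}^{k} (k-\tau)^{\nu-1} f(\tau)\,\mathrm{d}\tau ,
    \qquad \Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\,\mathrm{d}t , \\
  {}_{0}^{C}D_{k}^{\nu} f(k) &= \frac{1}{\Gamma(n-\nu)} \int_{0}^{k} (k-\tau)^{n-\nu-1} f^{(n)}(\tau)\,\mathrm{d}\tau ,
    \qquad n-1 \le \nu < n , \\
  \mathcal{L}\bigl\{ {}_{0}^{C}D_{k}^{\nu} f(k) \bigr\} &= s^{\nu} F(s) - \sum_{i=0}^{n-1} s^{\nu-i-1} f^{(i)}(0) .
\end{align}
```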
Definition 1 (Podlubny [3]). The Mittag-Leffler function E_{ν,ξ}(z) is defined for ν, ξ > 0 and z ∈ C, together with its Laplace transform. Lemma 1 (Podlubny [3]). If m(k) ∈ C^1[0, T] (T > 0), where the symbol C^1 means that the function has a continuous derivative, then the ν-th fractional integral of the ν-th Caputo derivative of m(k) equals m(k) − m(0). Lemma 2 (Lyapunov's second fractional-order method [34]). Suppose that e(k) = 0 is an equilibrium point of the FOCS (9), where e(k) ∈ R^n is the system state and h(e(k)) ∈ R^n is a non-linear function satisfying a local Lipschitz condition. If there exist a Lyapunov function V(k, e(k)) and positive parameters a_1, a_2, a_3 such that a_1 ||e(k)|| ≤ V(k, e(k)) ≤ a_2 ||e(k)|| and C_0 D^ν_k V(k, e(k)) ≤ −a_3 ||e(k)||, then system (9) is asymptotically stable.
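The explicit formulas in Definition 1 and Lemma 1 were likewise lost in extraction. Presented only as a reference, the standard Mittag-Leffler function, its Laplace pair, and the Lemma 1 identity (for ν ∈ (0, 1)) are:

```latex
% Standard Mittag-Leffler function, its Laplace pair, and the Lemma 1 identity
% (reference only; presumably the forms intended by the text).
\begin{align}
  E_{\nu,\xi}(z) &= \sum_{j=0}^{\infty} \frac{z^{j}}{\Gamma(\nu j + \xi)} , \qquad \nu,\ \xi > 0 ,\ z \in \mathbb{C} , \\
  \mathcal{L}\bigl\{ k^{\xi-1} E_{\nu,\xi}(-a k^{\nu}) \bigr\} &= \frac{s^{\nu-\xi}}{s^{\nu}+a} , \\
  {}_{0}I_{k}^{\nu}\, {}_{0}^{C}D_{k}^{\nu} m(k) &= m(k) - m(0) , \qquad 0 < \nu < 1 .
\end{align}
```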
Lemma 3 (Aguila-Camacho et al. [35]). Suppose that e(k) ∈ R^n is a continuous and differentiable function; then (1/2) C_0 D^ν_k [e^T(k) e(k)] ≤ e^T(k) C_0 D^ν_k e(k). Lemma 4 (Costa et al. [36] and Liu et al. [37]). Let the matrix G ∈ R^{n×n} have non-zero sequence-leading minors; then G can be factorized as G = G_1 A_g T_g, where G_1 ∈ R^{n×n} is a positive definite matrix, A_g ∈ R^{n×n} is a diagonal matrix whose diagonal entries are +1 or −1 (the sign of each entry is determined by the sign of the corresponding sequence-leading minor of G), and T_g ∈ R^{n×n} is an upper triangular matrix.
System Dynamics
Suppose that the drive and response FOCSs are respectively defined as C_0 D^ν_k x(k) = h(x(k)) and C_0 D^ν_k y(k) = p(y(k)) + G u(k) + D(k), where x(k) = [x_1(k), x_2(k), ..., x_n(k)]^T ∈ R^n and y(k) = [y_1(k), y_2(k), ..., y_n(k)]^T ∈ R^n are the measurable state vectors of the drive system and the response system, h, p : R^n → R^n are uncertain non-linear continuous functions, G ∈ R^{n×n} is an unknown constant matrix, D(k) ∈ R^n is an indeterminate external disturbance, and u(k) ∈ R^n is the control input.
Introduction of a Fuzzy System
A fuzzy logic system consists of a knowledge base, a fuzzifier, a fuzzy inference engine based on the fuzzy rules, and a defuzzifier [38][39][40][41].
The j-th fuzzy rule takes the form: IF x_1(k) is F_1^j and ... and x_n(k) is F_n^j, THEN ĥ(x(k)) is C^j, where x(k) = [x_1(k), x_2(k), ..., x_n(k)]^T ∈ R^n and ĥ(x(k)) ∈ R are respectively the input and the output of the fuzzy logic system. The output is expressed through the points θ_j(k) at which the membership function µ_{C^j} attains its maximum; generally, we can take µ_{C^j}(θ_j(k)) = 1 and define the corresponding fuzzy basis functions. The output of the fuzzy logic system can then be written as in (16).

Suppose that h(x) is a continuous function defined on a compact set Ω. For any constant ε > 0, there exists a fuzzy logic system of the form (16) approximating h(x) with approximation error bounded by ε, where θ̂ is an estimate of the optimal vector θ*.
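The explicit expressions for the fuzzy basis functions and the approximation bound were also lost in extraction. A standard form consistent with the surrounding description (singleton fuzzifier, product inference, center-average defuzzifier), presented here only as a reference and not necessarily the paper's exact equation (16), is:

```latex
% Generic adaptive-fuzzy approximator (reference only).
\begin{align}
  \hat{h}\bigl(\boldsymbol{x}(k)\bigr) &= \boldsymbol{\theta}^{T} \boldsymbol{\varphi}\bigl(\boldsymbol{x}(k)\bigr) ,
  \qquad
  \varphi_{j}(\boldsymbol{x}) = \frac{\prod_{i=1}^{n} \mu_{F_{i}^{j}}(x_{i})}
                                     {\sum_{l=1}^{N} \prod_{i=1}^{n} \mu_{F_{i}^{l}}(x_{i})} , \\
  \sup_{\boldsymbol{x} \in \Omega} \bigl| h(\boldsymbol{x}) - \boldsymbol{\theta}^{*T} \boldsymbol{\varphi}(\boldsymbol{x}) \bigr| &\le \varepsilon .
\end{align}
```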
Control Objective
The synchronization error is defined as e(k) = y(k) − x(k). Our control objective is to design an adaptive controller such that the synchronization error tends to zero asymptotically (i.e., lim_{k→∞} ||e(k)|| = 0).
CONTROLLER DESIGN AND STABILITY ANALYSIS
Assumption 1. The control gain matrix G has non-zero sequence-leading minors whose signs are known.
Remark 1. Assumption 1 is not strict. In fact, the gain matrices of some actual systems (such as a visual servo or a vehicle thermal management system [42]) are non-symmetrical. According to Lemma 4, one can factorize G as G = G_1 A T, where G_1 is an unknown positive definite matrix, A is a known diagonal matrix whose diagonal entries are +1 or −1 with A A = I_{n×n} (I_{n×n} is the n-order identity matrix), and T is an uncertain upper triangular matrix. Assumption 2. The product of the external disturbance D(k) and the positive definite matrix G_1^{−1} is bounded, i.e., for each component there exists an unknown constant M_i > 0 bounding it. Remark 2. Assumption 2 is not restrictive, and it is used in similar literature, for example, Liu et al. [9], Rahmani et al. [10], and Ferdaus et al. [11]. In fact, most commonly used disturbances satisfy Assumption 2.
To facilitate the coming stability analysis, let us display some results in advance.
Lemma 5. Let e(k) be continuous on [0, +∞). If C_0 D^ν_k e(k) ≤ 0, then e(k) ≤ e(0) on [0, +∞); if C_0 D^ν_k e(k) ≥ 0, then e(k) ≥ e(0) on [0, +∞).

Proof. We only verify the first condition (the second is analogous). Because C_0 D^ν_k e(k) ≤ 0, there exists a non-negative function h(k) satisfying C_0 D^ν_k e(k) + h(k) = 0 (31). Taking the Laplace transform of both sides of equation (31), we get s^ν E(s) − s^{ν−1} e(0) + F(s) = 0, where E(s) and F(s) are respectively the Laplace transforms of e(k) and h(k). This simplifies to E(s) = e(0)/s − F(s)/s^ν (32). Taking the inverse Laplace transform of both sides of equation (32), we obtain e(k) = e(0) − [D^{−ν} h](k). By the fractional-order integral (1), we have [D^{−ν} h](k) ≥ 0. Hence, e(k) ≤ e(0) on [0, +∞).
Remark 3. Lemma 5 shows the difference between a fractional-order derivative and an integer-order derivative, but it cannot be strengthened to: if C_0 D^ν_k e(k) ≤ 0, then e(k) is monotonically decreasing on [0, +∞); if C_0 D^ν_k e(k) ≥ 0, then e(k) is monotonically increasing on [0, +∞). To explain this, a counterexample is given as follows.
Lemma 6. Suppose that e(k) ∈ R^n is continuous and has a continuous first-order derivative; then (1/2) C_0 D^ν_k [e^T(k) Q e(k)] ≤ e^T(k) Q C_0 D^ν_k e(k), where Q is an arbitrary n-order positive definite matrix.

Proof. Since Q is positive definite, it admits the factorization Q = Q^{1/2} Q^{1/2}. Applying Lemma 3 to Q^{1/2} e(k) gives (1/2) C_0 D^ν_k [e^T(k) Q e(k)] ≤ (Q^{1/2} e(k))^T C_0 D^ν_k (Q^{1/2} e(k)) = e^T(k) Q C_0 D^ν_k e(k).
Lemma 7. Suppose that V(k) = (1/2) x^T(k) x(k) + (1/2) y^T(k) y(k), where x(k), y(k) ∈ R^n are continuous and have continuous first-order derivatives. If there exists a constant q > 0 such that C_0 D^ν_k V(k) ≤ −q x^T(k) x(k), then ||x(k)|| and ||y(k)|| are both bounded and x(k) tends to zero asymptotically, where || · || denotes the Euclidean norm.
Proof. Taking the ν-th integral of both sides of the inequality and using the structure of V(k), we have x^T(k) x(k) ≤ 2V(k), and furthermore V(k) is bounded. It follows from (39) that there exists a non-negative function M(k) such that the inequality becomes an equality (40). Taking the Laplace transform of (40), we obtain (41), where X(s) and M(s) are respectively the Laplace transforms of x(k) and M(k). Taking the inverse Laplace transform of both sides of equation (41), the solution is (42), where * denotes convolution. Since k^{−1} and E_{ν,0}(−2qk^ν) are both non-negative functions, x^T(k) x(k) ≤ 2V(0) E_{ν,1}(−2qk^ν). From Li et al. [2], we know that x(k) is Mittag-Leffler stable and tends to zero asymptotically, i.e., lim_{k→∞} ||x(k)|| = 0.
Lemma 8. Suppose that V(k) = (1/2) z^T(k) Q_1 z(k) + (1/2) d^T(k) Q_2 d(k), where z(k), d(k) ∈ R^n and Q_1, Q_2 ∈ R^{n×n} are positive definite matrices. If there exist a positive definite matrix Q_3 and a constant q_0 > 0 satisfying C_0 D^ν_k V(k) ≤ −q_0 z^T(k) Q_3 z(k), then ||z(k)|| and ||d(k)|| are bounded and z(k) tends to zero asymptotically (i.e., lim_{k→∞} ||z(k)|| = 0).
The main results of the paper are given as follows.
Theorem 2.
Under Assumption 1 and Assumption 2, synchronization between the drive system (13) and the response system (14) is achieved under the adaptive fuzzy controller (26) and the fractional-order adaptive laws (27), (28), and (29). In addition, all the signals of the closed-loop system are bounded.
Proof. Since A A = I_{n×n}, substituting the controller (26) into the error dynamic equation (21) gives (45). Then, e^T(k) Q e(k) follows as in (46). Because the ν-order Caputo derivative of a constant is zero, the derivatives of the parameter estimation errors equal those of the estimates, i = 1, 2, ..., n. By Lemma 3 and Lemma 6, taking the ν-order derivative of V(k) in equality (47) yields (48). Substituting (27), (28), and (29) into (48) gives (49), where l_0 = min{l_1, l_2, ..., l_n} and λ_max is the maximal eigenvalue of the positive definite matrix Q. From Lemma 8 and inequality (49), we know that the synchronization error satisfies lim_{k→∞} ||e(k)|| = 0. Since the drive system (13) is a chaotic system, x(k) is bounded. The convergent error e(k) is also bounded, which implies that y(k) is bounded, too. Consequently, by (26), u(k) is bounded. Thereby, all the signals in the closed-loop system are bounded.
Remark 4. For the response system (14), when G = E (the identity matrix), the synchronization of uncertain FOCSs was solved in Liu et al. [44]; however, that solution cannot handle the synchronization of systems with an uncertain non-symmetrical control gain. When D(k) = 0, Ha et al. [45] studied the synchronization of FOCSs with an indeterminate non-symmetrical control gain, but did not solve the synchronization problem for systems with unknown disturbances. In contrast, by considering both conditions, this paper addresses the synchronization problem for systems with an uncertain non-symmetrical control gain and unknown disturbances.
NUMERICAL SIMULATION
In the simulation, the effectiveness of the controller is tested by researching the synchronization between the fractional-order Newton-Leipnik system [46,47] and the fractional-order Lü system [48,49]. The fractional-order Newton-Leipnik system is given as follows.
It is easy to verify that the required inequality holds, and the initial values of the drive and response systems are chosen accordingly. In the numerical simulation, the input variables of the fuzzy system are x(k), y(k), and u(k). To reduce the computation of the fuzzy logic system, we replace x(k) and y(k) by e(k). For e_1(k), e_2(k), and e_3(k), we select five Gaussian membership functions whose mathematical expectations are −4, −2, 0, 2, and 4, respectively, with parameters ([1.2], [−4, −2, 0, 2, 4]), uniformly distributed over the interval [−4, 4] for each input. Therefore, the number of rules produced by the fuzzy logic approximator is 5^3 = 125. To better test the effectiveness of the controller, the adjustable parameters θ_1(0), θ_2(0), and θ_3(0) are chosen as random vectors in 125 dimensions.
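As an illustration only, a small Python sketch of the membership-function setup described above, assuming the listed parameters are Gaussian centers with a common spread of 1.2 (the exact parameterization is not fully specified in the text):

```python
import numpy as np
from itertools import product

centers = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])   # expectations given in the text
width = 1.2                                        # assumed spread parameter

def gaussian_mf(x, c, sigma=width):
    """Gaussian membership value of scalar input x for the set centred at c."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def fuzzy_basis(e):
    """Normalized fuzzy basis vector for e = (e1, e2, e3); 5^3 = 125 rules."""
    e = np.asarray(e, dtype=float)
    firing = np.array([
        np.prod([gaussian_mf(e[i], c[i]) for i in range(3)])
        for c in product(centers, repeat=3)
    ])
    return firing / firing.sum()

phi = fuzzy_basis([0.5, -1.0, 2.3])
print(phi.shape, phi.sum())   # (125,) 1.0
```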
The other parameters of the controller are defined as l_i = 5, λ_i = 500, ξ_i = 0.5, and µ_i = 0.5, and the estimated values of the fuzzy logic system approximation error are ε̂*_1(0) = ε̂*_3(0) = 1. The simulation results clearly show that the convergence rate of the synchronization error is fast when l_i is chosen reasonably. Figures 1, 4 show the error trend: the error is large at first, then becomes smaller and smaller over time, finally tending to zero asymptotically. Furthermore, from case 1 and case 2, we know that a small change in the initial values can have an obvious effect on the error but does not affect its eventual convergence. This implies that the fuzzy system proposed in this paper has good approximation performance. Figures 3, 4 display the evolution of the control variables and the fuzzy logic system parameters, which conforms to our expectations. In addition, a chattering phenomenon can be seen in the simulation results because a discontinuous sign function is used in the synchronization controller.
CONCLUSION
In this paper, a robust adaptive fuzzy controller for indeterminate FOCSs with uncertain external disturbances and non-symmetrical control gain is proposed. The proposed method works well on the condition that each sequence-leading minor of the uncertain non-symmetrical gain matrix is non-zero and that the upper bound of the product of the external disturbance and the positive definite matrix factorized from the gain matrix is known. The stability of the closed-loop system is analyzed using a fractional-order Lyapunov method and quadratic Lyapunov functions.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
JX and XZ contributed to the conception and design of the study. XQ and NL organized the literature. JX designed the figures and wrote the first draft of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.
"Mathematics"
] |
Control of the first wave of COVID-19: Some economic freedom-related success factors
This research aims to study some economic freedom-related factors with explanatory power over countries' success in controlling the first wave of COVID-19. Our selected factors include the economic, business, labour, monetary, trade, investment, financial, press, human, and personal freedom indexes. Our dependent variables include the government's daily average stringency index, the outbreak response time, the daily average of cases per million, the daily average of deaths per million, and the daily average of COVID-19 tests per thousand. We find that countries with superior degrees of freedom suffered a more severe impact of the outbreak, as confirmed by the highest daily averages of cases and deaths per million. This severe impact occurred while governments mounted a softer response to the outbreak, as verified by the lowest daily average stringency index. However, these countries were more effective at controlling the first wave of COVID-19, as measured by a shorter outbreak response time and a higher daily average of COVID-19 tests per thousand.
INTRODUCTION
The COVID-19 outbreak constitutes the most severe global crisis of the 21st century after the 9/11 terror attacks and the 2008 financial crisis. However, since the outbreak is not under control yet, it will probably represent one of the most challenging crises in modern human history. The pandemic will produce significant long-term political, economic, and social effects that are difficult to anticipate. For some countries, this crisis has already produced unprecedented statistics. For example, Lambert (2020) informs that more Americans have died of COVID-19 than the number of American lives lost in World War I (116,516), in the 1968 pandemic (100,000), in the Vietnam conflict (58,220), in the Korean War (36,574), and all US military conflicts in the Middle East (9,353).
The International Labour Organization (ILO 2020) estimated a decline in working hours of about 10.7 percent in the second quarter of 2020 compared to the same quarter in 2019, which is equivalent to 305 million full-time jobs. The World Bank (WB 2020) estimates a 5.2 percent contraction in the global Gross Domestic Product (GDP) for 2020, considered the deepest global recession in decades. The World Bank also estimates that most countries will enter into a recession in 2020, resulting in a per capita income contraction of about 7 percent for the most substantial fraction of countries globally since 1870. The United Nations (UN 2020) states that the COVID-19 outbreak has led to the most massive disruption of education ever, with at least 40 million children worldwide missing education in their critical pre-school year. These are just a few unprecedented records set by the COVID-19 crisis.
This article aims to study some economic freedom-related factors that may explain the success of some countries at controlling the first wave of COVID-19. Our sample of sixty-five countries was analysed using generalized linear models, generalized binomial models, and weighted least squares models. Our dependent variables include the government's daily average stringency index, the outbreak response time, the daily average of cases per million, the daily average of deaths per million, and the daily average of COVID-19 tests per thousand. We find that countries with superior degrees of freedom suffered more severe effects of the outbreak as measured by higher daily averages of cases and deaths per million. These grave effects resulted from a softer government pandemic response, as verified by a lower daily average stringency index. However, these countries could control the outbreak more effectively, as evidenced by a shorter outbreak response time and a higher daily average of COVID-19 tests per thousand. We also find that the probability of a country controlling COVID-19 successfully is negatively related to its business freedom but positively related to its monetary and press freedom. The limitations of our study include, but are not limited to, differences in how countries record COVID-19 deaths, differences in testing efforts, differences in health services, possibly unreliable data from countries with tightly controlled political systems, and many demographic variables affecting the pandemic's spread, such as average age, population density, urban versus rural population, and age structure. The rest of this article is organized as follows: literature review, data and methodology, results, interpretations and limitations, conclusions, and references.
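The paper names its model families but does not give their exact specifications. As an illustration only, assuming a hypothetical country-level table (covid_freedom_panel.csv) with illustrative column names, such GLM, binomial GLM, and weighted least squares fits could be run in Python with statsmodels roughly as follows:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical country-level table; file and column names are illustrative only
df = pd.read_csv("covid_freedom_panel.csv")
X = sm.add_constant(df[["economic_freedom", "press_freedom", "monetary_freedom"]])

# Generalized linear model for the daily average of cases per million
glm = sm.GLM(df["avg_cases_per_million"], X,
             family=sm.families.Gaussian()).fit()

# Binomial GLM for a binary "controlled the first wave" indicator
logit = sm.GLM(df["controlled_first_wave"], X,
               family=sm.families.Binomial()).fit()

# Weighted least squares, e.g. weighting countries by population
wls = sm.WLS(df["avg_deaths_per_million"], X, weights=df["population"]).fit()

print(glm.summary(), logit.summary(), wls.summary(), sep="\n")
```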
COVID-19 outbreak
Previous research works on COVID-19 are primarily focused on medical sciences and healthcare-related disciplines. A few academic articles have used similar dependent or independent variables as the ones considered in this study. Erdem (2020) studies investors' reaction to coronavirus data announcements from seventy-five countries, controlling for their scores on the Freedom House's 2019 freedom index. This index is based on the political rights and civil liberties existing in each country. He finds significant negative effects in stock markets resulting from the number of cases per million. He also finds that announcements about increases in the number of cases per million generate lower stock market returns and higher volatility in less-free countries. Herren et al. (2020) study some factors affecting non-pharmaceutical interventions (NPIs), defined as the efforts to decrease social mobility to reduce the spread of COVID-19. They find that GDP per capita, the country-specific outbreak trajectory, and the democracy index are relevant factors in determining a given population's acceptance of NPIs. Mazzucchelli et al. (2020) study some political-risk factors with significant explanatory power on the variability of COVID-19 mortality among European countries. They analyze the democracy index and its components, including each country's political system and corruption index. They find that the democracy index and its components, the political system, and the corruption index all have a statistically significant and positive relationship with COVID-19 mortality. In other words, high scores on democracy indexes are associated with high scores on pandemic mortality. Alon, Farrell, and Li (2020) compare China's and Taiwan's COVID-19 responses using a multi-case approach. They find that transparent and open communications, a characteristic of democratic countries, allowed Taiwan's response to be more effective and less invasive than China's. They argue that the slow response to the epidemic of politicians in Italy, Spain, the US, and other democracies has caused heavy costs to their countries and the world. They also state that in democracies, voters distrust politicians in responding to crises, which increases the difficulty of implementing any policies.
Economic freedom
Regarding our independent variables, some previous studies have found evidence supporting some relationships also identified in this article. Yevdokimov et al. (2018) study the influence of economic freedom on macroeconomic stability. They find that countries with open institutions have democratic political systems and high GDP per capita. They also find a positive and significant relationship between economic freedom and macroeconomic stability. Bjørnskov (2016) studies the relationship between the degree of capitalism, as measured by economic freedom, and the risk and characteristics of economic crises. He finds that the magnitude of the economic contraction during an economic crisis, measured by the peak-to-trough ratio of real GDP per capita, has a negative and significant relationship with the initial economic freedom. Peev and Mueller (2012) study twenty-four post-communist economies over the period 1990-2007 and find that trade and monetary freedom, as well as freedom from corruption indexes, are the most significant variables that can explain economic growth for the studied countries. Finally, Cepaluni et al. (2020) study the relationship between political institutions and deaths during the first 100 days of the COVID-19 pandemic. They find that countries with more democratic political institutions experienced more and earlier deaths per capita than those with less democratic institutions.
No previous research article has studied economic freedom-related factors that may explain countries' success in controlling the first wave of COVID-19. Therefore, this study's original contribution is to determine the explanatory power of freedom-related variables regarding outbreak control. Our results may be valuable for multilateral entities and international non-governmental organizations when designing aid policies to fight the pandemic.
DATA & METHODOLOGY
Our sample includes countries with available data for our dependent and independent variables. We excluded countries with a population of less than a quarter-million people to avoid outliers in our dependent variables. We also excluded countries with internal conflicts (Libya, Yemen, and Syria) and countries with external political conflict affecting their capacity to control the COVID-19 outbreak (Iran and Venezuela). Our final sample consists of one hundred and fifty-six countries.
Our dependent variables include the government's daily average stringency index (DV1), the outbreak response time (DV2), the daily average of cases per million (DV3), the daily average of deaths per million (DV4), and the daily average of COVID-19 tests per thousand (DV5). As in Erdem (2020) and Herren et al. (2020), the data for our dependent variables were compiled by Hannah et al. (2020) and retrieved from Our World in Data. The starting outbreak date varies from country to country; however, no country has a beginning date earlier than December 31, 2019. Nevertheless, all countries in our sample have the same ending date of July 10, 2020. Our last dependent variable is a binary variable (DV6) that takes the value of one if the graph of the 5-day moving average of the reported daily new cases is concave down with only one maximum and a partially symmetric shape (kurtosis between -3 and +3). The same dependent variable equals zero if the same graph shows a growing linear, exponential, or logarithmic trend. Countries experiencing a second wave of COVID-19 were excluded from this analysis. This narrow selection criterion produced a subsample of sixty-five countries that we analyzed using the generalized binomial models explained below.
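A minimal sketch of how the DV6 classification described above could be implemented is shown below. This is our own illustration rather than the authors' code: the function name, input handling, and the final trend check are assumptions, while the 5-day smoothing, the single-interior-maximum test, and the kurtosis bounds follow the description in the text.

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis

def classify_first_wave(daily_new_cases: pd.Series) -> int:
    """Return 1 if the smoothed curve looks like a single controlled wave
    (concave down, one interior maximum, kurtosis between -3 and +3), else 0."""
    smooth = daily_new_cases.rolling(window=5).mean().dropna()
    if smooth.empty:
        return 0
    peak_pos = int(np.argmax(smooth.values))
    single_interior_peak = 0 < peak_pos < len(smooth) - 1
    symmetric_enough = -3 <= kurtosis(smooth) <= 3
    # Crude proxy (assumption) for a still-growing linear/exponential/log trend:
    still_growing = smooth.iloc[-1] >= 0.9 * smooth.max()
    return int(single_interior_peak and symmetric_enough and not still_growing)

# Example with a synthetic bell-shaped outbreak curve
days = np.arange(120)
cases = pd.Series(1000.0 * np.exp(-0.5 * ((days - 50) / 12) ** 2))
print(classify_first_wave(cases))
```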
The government's daily average stringency index is a combined measure based on nine response scores, including, but not limited to, school closures, workplace closures, and travel bans. This variable is measured on a scale from zero to one hundred, where one hundred represents the strictest government reaction to COVID-19. The daily average for this stringency index is calculated from the first reported case's date until July 10, 2020. The outbreak response time is the number of days between the first reported case's date and the date of the first maximum of the curve resulting from the 5-day moving average of the daily new cases. This methodology is similar to that of Bjørnskov (2016), who finds that the recovery time measured by the peak-to-trough ratio of real GDP per capita is negatively related to initial economic freedom, although he studies crises of an economic nature. The daily averages of cases and deaths per million and of tests per thousand were determined by dividing the cumulative totals of cases and deaths per million, and of tests per thousand, as of July 10, 2020, by the number of days since the first case's date. The mortality rate is calculated by dividing the total deaths per million on July 10, 2020, by the total cases per million on that date.
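The sketch below illustrates how these dependent variables could be computed for one country from an Our World in Data-style daily time series. The column names and the helper's structure are our own assumptions; the definitions (averaging window, response time, mortality rate) follow the text above.

```python
import pandas as pd

def outbreak_metrics(country: pd.DataFrame, end_date: str = "2020-07-10") -> dict:
    """Compute DV1-DV5 and the mortality rate for one country.

    Assumed columns: date, new_cases, total_cases_per_million,
    total_deaths_per_million, total_tests_per_thousand, stringency_index.
    """
    country = country.copy()
    country["date"] = pd.to_datetime(country["date"])
    country = country[country["date"] <= pd.Timestamp(end_date)]

    first_case = country.loc[country["total_cases_per_million"] > 0, "date"].min()
    window = country[country["date"] >= first_case]
    n_days = len(window)

    smooth = window["new_cases"].rolling(5).mean()
    peak_date = window.loc[smooth.idxmax(), "date"]

    return {
        "DV1_avg_stringency": window["stringency_index"].mean(),
        "DV2_response_time_days": (peak_date - first_case).days,
        "DV3_daily_cases_per_million": window["total_cases_per_million"].iloc[-1] / n_days,
        "DV4_daily_deaths_per_million": window["total_deaths_per_million"].iloc[-1] / n_days,
        "DV5_daily_tests_per_thousand": window["total_tests_per_thousand"].iloc[-1] / n_days,
        "mortality_rate": window["total_deaths_per_million"].iloc[-1]
                          / window["total_cases_per_million"].iloc[-1],
    }
```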
Data for our independent variables were retrieved from the individual components of the 2020 Index of Economic Freedom (IV1) published by the Heritage Foundation (HF 2020). This index is based on twelve quantitative factors, grouped into four broad categories, including the rule of law (which includes property rights, government integrity, and judiciary effectiveness) and government size (which includes government spending, tax burden, and fiscal health). The other two categories are regulatory efficiency (which includes business freedom, labor freedom, and monetary freedom) and open markets (which includes trade freedom, investment freedom, and financial freedom). Of the four categories listed above, only the two measuring different types of freedom were analyzed in this study. These freedom-related independent variables include the business freedom index (IV2), the labor freedom index (IV3), the monetary freedom index (IV4), the trade freedom index (IV5), the investment freedom index (IV6), and the financial freedom index (IV7).
The business freedom index measures the degree of constraint imposed by national regulatory and infrastructure environments on businesses' efficient operation. This index includes several factors associated with starting and closing a business: the procedures, time, cost, minimum capital, and recovery rate. The index also includes the procedures, time, and cost of obtaining a license and getting electricity. Each of these elements is converted to a scale of zero to one hundred, and their average is the country's business freedom score.
The labor freedom index includes several elements of the legal and regulatory framework of a country's labor market. The index includes the ratio of the minimum wage to the average value added per worker, hindrance to hiring additional workers, rigidity of hours, difficulty of firing employees, legally required notice period, mandatory severance pay, and labor force participation rate. These factors are transformed into a scale of zero to one hundred, and their average is the labor freedom score.
The monetary freedom index quantifies inflation and government activities that may result in distorted prices of goods and services. The index comprises the most recent three-year weighted average inflation rate and a qualitative judgement about the impact of governments' controls or subsidies aimed at manipulation of market prices. The trade freedom index measures the impact of tariff and non-tariff barriers on imports and exports of goods and services. This index includes the trade-weighted average tariff rate and a qualitative assessment for non-tariff barriers.
The investment freedom index measures the quality and quantity of national restrictions to the flow of investment capital, such as restrictions on access to foreign exchange, payments, transfers, and capital transactions. This index is calculated based on an ideal national score of one hundred, reflecting a scenario of people and companies allowed to allocate their resources in and out of economic activities, both nationally and internationally, without any government restriction. Points are deducted from this perfect score for every restriction associated with the national treatment of foreign investment, foreign investment code, restrictions on land ownership, sectorial investment restrictions, expropriation of investments without fair compensation, foreign exchange controls, and capital controls.
The financial freedom index measures the degree of a country's banking efficiency and independence from government control and interference in the financial sector. This index quantifies five broad areas, namely the degree of government regulation of financial services, the extent of government influence in the financial sector through direct and indirect ownership, government influence on credit allocation, the degree of financial and capital market development, and the country's openness to foreign competition in the financial sector.
We complement the freedom-related variables listed above with three additional independent variables. Reporters Without Borders (RWB 2020) compiles the press freedom index (IV8). This index is based on a questionnaire that assesses six broad criteria, namely pluralism, media independence, media environment and self-censorship, legislative framework, transparency, and quality of the infrastructure for news and information production and broadcasting, including the free flow of information on the Internet. This index moves in the opposite direction to the other indexes, namely, the higher the press index's score, the lower the press freedom.
Additionally, the Cato Institute (CI 2020) is a public policy research organization that compiles the 2019 Human Freedom Index (IV9). This index is an overall score based on seventy-six different indicators of personal, civil, and economic freedom including but not limited to the following areas: the size of government, legal system and property rights, access to sound money, freedom to trade internationally, regulations of credit, labor, and business. A narrower sub-index, also compiled by the Cato Institute, is the Personal Freedom Index (IV10). This sub-index quantifies specific personal freedoms like movement, religion, association, assembly, and civil society, expression and information, identity and relationships, the rule of law, and security and protection.
We analyze our data using generalized linear models, which are made up of a linear predictor $\eta_i = \beta_0 + \beta_1 IV_1 + \dots + \beta_k IV_k$ (1), and two functions: a link function that describes how the mean, $E(DV_i) = \mu_i$, depends on the linear predictor, $g(\mu_i) = \eta_i$; and a variance function that describes how the variance, $\mathrm{var}(DV_i)$, depends on the mean, $\mathrm{var}(DV_i) = \phi\, V(\mu_i)$, where the dispersion parameter $\phi$ is a constant.
In the case of the general linear model with $u_i \sim N(0, \sigma^2)$, we have the linear predictor specified above, the identity link function $g(\mu_i) = \mu_i$, and the variance function $V(\mu_i) = 1$. In the case of a generalized binomial model, the link function is $g(\mu_i) = \Phi^{-1}(\mu_i)$, where $\Phi$ is the cumulative distribution function. We also use weighted least squares models to study the explanatory power of our independent variables over our dependent ones. When a model specification $DV_i = \beta_0 + \beta_1 IV_1 + \beta_2 IV_2 + \dots + \beta_k IV_k + u_i$ (2) has a heteroskedastic variance $\mathrm{var}(u_i) = \sigma_i^2$, we can divide each term by the weight $w_i = 1/\sigma_i$ to adjust the variables and transform the original equation into (3), which is the same as $DV_i^* = \beta_0 x_0^* + \beta_1 IV_{1i}^* + \beta_2 IV_{2i}^* + \dots + \beta_k IV_{ki}^* + u_i^*$ (4), but with homoscedastic variance $\mathrm{var}(u_i^*) = \mathrm{var}(u_i/\sigma_i) = \sigma_i^2/\sigma_i^2 = 1$.
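For readers who wish to reproduce this type of specification, a minimal sketch using Python's statsmodels is shown below. The data are synthetic and all variable names are placeholders; only the model families (a Gaussian GLM and weighted least squares with weights proportional to the inverse error variance) mirror the methodology described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the country-level dataset (hypothetical columns).
rng = np.random.default_rng(0)
n = 65
df = pd.DataFrame({
    "IV2": rng.uniform(40, 95, n),   # business freedom index
    "IV4": rng.uniform(40, 95, n),   # monetary freedom index
    "IV7": rng.uniform(20, 90, n),   # financial freedom index
})
sigma_sq = 1.0 + 0.05 * df["IV2"]                                    # assumed error variances
df["DV3"] = 5 + 0.3 * df["IV2"] + rng.normal(0, np.sqrt(sigma_sq))   # daily cases per million

X = sm.add_constant(df[["IV2", "IV4", "IV7"]])

# Generalized linear model (Gaussian family, identity link), eq. (1)
glm_fit = sm.GLM(df["DV3"], X, family=sm.families.Gaussian()).fit()

# Weighted least squares, eqs. (2)-(4): statsmodels expects weights ~ 1/sigma_i^2
wls_fit = sm.WLS(df["DV3"], X, weights=1.0 / sigma_sq).fit()

print(glm_fit.params, wls_fit.params, sep="\n")
```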
EMPIRICAL RESULTS
We classified our sample using our dependent variables from lowest to highest and identified the first (lowest) and fourth (highest) quartiles for each of them. A table with these results was not included in this article. However, we find that countries with the lowest government's daily average stringency indexes (37.48) have less than half the average of those with the highest indexes (81.9). Similarly, countries with the fastest COVID-19 response took an average of 27 days, while those without control of the outbreak have faced it during an average of 129 days until July 10, 2020. Likewise, those countries with the highest averages of daily cases and deaths per million (51.27 and 1.99) have experienced about 120 times more cases and deaths per million than countries with the lowest averages (0.55 and 0.02). Correspondingly, those countries with the highest daily average of COVID-19 tests per thousand (1.13) have applied about 36 times more tests than those with the lowest averages (0.031). Table 1 shows the independent sample tests of our independent variables for the first and fourth quartiles of our dependent variables obtained in the way we explained before. The statistically significant results show that countries with the lowest daily average stringency index (ceteris paribus) also have the highest economic, business, labor, monetary, trade, investment, financial, press, human, and personal freedoms as measured by their corresponding indexes. The significant results also show a similar outcome for the outbreak response time, but excluding financial freedom. These results suggest that, on average, those countries enjoying the highest degree of freedom did not impose strict restrictions on their citizens to control the first wave of COVID-19. However, these same countries were faster in controlling the outbreak than those with the lowest degrees of freedom. Table 1 also contains statistically significant results showing that countries with the lowest daily average of cases and deaths per million (ceteris paribus) have the lowest economic, business, monetary, trade, investment, financial, human, and personal freedom measured by the corresponding indexes. Nevertheless, the press freedom index is significant only for the deaths per million, but this result is similar in that those countries with the lowest press freedom experience the lowest average daily deaths per million. These results suggest that those countries with the lowest degree of freedom experienced a less severe outbreak impact as measured by the daily average of cases and deaths per million.
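A sketch of this quartile-comparison procedure is shown below. It is illustrative only: the paper does not state whether pooled-variance or Welch tests were used, so the Welch variant and all column names here are assumptions.

```python
import pandas as pd
from scipy import stats

def quartile_tests(df: pd.DataFrame, dep: str, indep_vars: list) -> pd.DataFrame:
    """Independent-samples tests comparing the independent variables between the
    first (lowest) and fourth (highest) quartiles of one dependent variable."""
    q1_cut, q4_cut = df[dep].quantile([0.25, 0.75])
    low, high = df[df[dep] <= q1_cut], df[df[dep] >= q4_cut]
    rows = []
    for iv in indep_vars:
        t, p = stats.ttest_ind(low[iv], high[iv], equal_var=False, nan_policy="omit")
        rows.append({"variable": iv, "mean_Q1": low[iv].mean(),
                     "mean_Q4": high[iv].mean(), "t": t, "p": p})
    return pd.DataFrame(rows)

# Usage (assuming a country-level DataFrame `data` with these hypothetical columns):
# print(quartile_tests(data, "DV1_avg_stringency", ["IV1", "IV2", "IV4", "IV8"]))
```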
Table 1 also shows that those countries with the lowest average daily tests per thousand (ceteris paribus) were the ones with the lowest economic, business, monetary, trade, investment, financial, press, human, and personal freedoms measured by the corresponding indexes. The table also shows marginally significant results regarding countries with high daily average tests per thousand and high labor freedom index, but these results are not significant at conventional levels of confidence. These results suggest that, on average, countries with the highest degrees of freedom have superior testing efforts.
These results are consistent with those of the previous studies mentioned before (Cepaluni et al. 2020, and Mazzucchelli et al. 2020). To verify this consistency, we organized our sample by the 2019 Democracy Index compiled by the Economist's Intelligence Unit (TE-IU 2020), from lowest to highest, and compared the most democratic and least democratic quartiles. Table 2 shows the average values of our independent variables for each quartile; in the table, ****, ***, ** and * denote statistical significance at the 0.1%, 1%, 5%, and 10% significance level, respectively. We repeated the same tests using the logarithmic transformations of our dependent and independent variables with the same results in terms of statistical significance, but these additional tests were omitted in this report. Table 2 thus presents the independent samples tests of our independent variables for our sample of countries organized by their 2019 Democracy Index. The statistically significant results show that countries with superior democracy indexes exhibit (ceteris paribus) superior economic, business, labor, monetary, trade, investment, financial, press, human, and personal freedom indexes. These results are consistent with those of Peev and Mueller (2012), who found that countries with democratic institutions are associated with higher economic freedoms.

Table 3 shows our generalized linear regression models using the logarithmic weighted transformations of our dependent and independent variables. According to the results, the government's daily average stringency index has a negative relationship with the business freedom index. Besides, the outbreak response time has a negative and significant relationship with the monetary freedom index and the financial freedom index, but a positive and significant relationship with the press freedom index. The daily average of cases per million has a positive and significant relationship with the business freedom index, the monetary freedom index, and the financial freedom index, but a negative and significant relationship with the labor freedom index. Similarly, the daily average of deaths per million has a positive and significant relationship with the business, trade, and financial freedom indexes. Finally, no independent variable has significant explanatory power on the daily average of COVID-19 tests per thousand.

Notes to table 3: ****, ***, ** and * denote statistical significance at the 0.1%, 1%, 5%, and 10% significance level, respectively. The table contains the t-statistics and their corresponding p-values below in brackets. To avoid problems of multicollinearity, the variables IV9 and IV10 were included separately in the same regression models, but IV9 did not provide significant results, so these insignificant results were omitted in this report. We applied the Breusch-Pagan-Godfrey and the White tests for heteroskedasticity. The same models were also analyzed using generalized linear weighted models with the same significant results and signs, but these additional results were omitted in this article.
All these significant results of table 3 support those of table 1, except for the relation between the average of cases per million and labor freedom, which is insignificant in table 1. The other exception is the relationship between the outbreak response time and the financial freedom index, which is insignificant in table 1. Overall, these results suggest that countries with superior degrees of freedom experienced a more acute impact of COVID-19, corroborated by a higher daily average of cases and deaths per million. This relation might be explained by the fact that these countries also did not impose strict restrictions compared with those with lower scores on their freedom indexes. However, these countries could control the first wave of the outbreak faster, as suggested by the shorter outbreak response time and the larger daily average of COVID-19 tests per thousand. Table 4 shows the results of our generalized binomial models using both logit and probit models. The significant results show that the probability that a country succeeds in controlling the first wave of COVID-19 is negatively related to its business freedom index but positively related to its monetary freedom and press freedom. That probability also has a significant relationship with the personal freedom index, but only in the probit model. Table 1 shows that (ceteris paribus) countries with higher degrees of economic, business, labor, monetary, trade, investment, financial, press, human, and personal freedom did not impose severe restrictions to control the first wave of COVID-19. As a result, these freer countries (excluding labor freedom) suffered a more severe impact of the outbreak measured by the daily average of cases and deaths per million. However, these same countries with higher scores on the freedom indexes were more effective at controlling the first wave of COVID-19, as confirmed by a shorter outbreak response time (excluding the financial freedom index) and a higher daily average of COVID-19 tests per thousand (excluding the labor freedom index).
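To illustrate how such binomial specifications could be estimated and interpreted, the short statsmodels sketch below fits logit and probit models for a binary "first wave controlled" outcome and reports average marginal effects. The data and variable names are synthetic placeholders, not the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 65
df = pd.DataFrame({
    "IV2": rng.uniform(40, 95, n),   # business freedom index
    "IV4": rng.uniform(40, 95, n),   # monetary freedom index
    "IV8": rng.uniform(10, 80, n),   # press freedom index (higher score = less free)
})
# Synthetic binary outcome: 1 = first wave controlled (DV6)
latent = -0.04 * df["IV2"] + 0.05 * df["IV4"] + 0.02 * df["IV8"]
df["DV6"] = (latent + rng.normal(0, 1, n) > latent.mean()).astype(int)

X = sm.add_constant(df[["IV2", "IV4", "IV8"]])
logit_fit = sm.Logit(df["DV6"], X).fit(disp=0)
probit_fit = sm.Probit(df["DV6"], X).fit(disp=0)

# Average marginal effects: change in P(DV6 = 1) per unit change of each index
print(logit_fit.get_margeff().summary())
print(probit_fit.get_margeff().summary())
```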
DISCUSSIONS & LIMITATIONS
The results of table 3 suggest that business freedom has a significant positive effect on the severity of the outbreak's impact measured by the daily average of cases and deaths per million. The table also shows that this index has a significant negative impact on a country's response to the pandemic as measured by the government's daily average stringency index. Indeed, countries with excellent business freedom and no business constraints imposed by national governments face significant challenges in imposing business restrictions to control COVID-19. The resistance of a country's business sector to control measures like business lockdowns results in higher average daily cases and deaths per million. These results are consistent with those of Díaz-Casero et al. (2012), who find that almost all components of the Economic Freedom Index have a significant relationship with entrepreneurial activity.
Table 3 also shows that the monetary freedom index has a significant negative impact on a country's response speed to COVID-19 measured by the outbreak response time, but a significant positive effect on the harshness of the outbreak measured by the daily average of cases per million. Indeed, during the first months of the outbreak, countries worldwide experienced shortages and price spikes for many goods. These price increases and shortages were the direct consequence of frozen factories, crops rotting in fields, international supply chain disruptions, etcetera. A representative example was the call of the World Health Organization (2020, March) asking for industry and government action to control rising demand, panic buying, speculation, and hoarding, particularly for medical supplies. Countries enjoying great monetary freedom during the pandemic experienced significant hardship implementing controls aimed at avoiding distortions in the marketplace caused by COVID-19. These difficulties resulted in slower response times and a more significant number of cases. For countries with low monetary freedom, the control of distortions in the marketplace was easier, resulting in faster response times and fewer cases. Table 3 also shows that the financial freedom index has a significant positive relationship with the daily average of cases and deaths per million, but a negative and significant relationship with the outbreak response time. The impact of COVID-19 on financial institutions has resulted in many challenges, including defaults on credit cards, loans, mortgages, etcetera. As a result, most central banks worldwide have implemented guidelines and regulations to help financial institutions deal with these challenges. Let us consider the case of the US, a country with a very high financial freedom index. On August 3, 2020, the Federal Financial Examination Council (FFEC 2020) issued a set of guidelines suggesting that financial institutions 'work prudently' with borrowers who may be unable to honor their payment obligations due to COVID-19. Similarly, the Board of Governors of the Federal Reserve (BGFD 2020) issued a regulatory action related to COVID-19, where it 'encourages' financial institutions to 'work constructively' with borrowers affected by COVID-19.
China is an example of a country with a low financial freedom index. Ali (2020) reports that four of the five largest banks in the world by total assets are state-owned Chinese banks. Besides, Chong (2020) reported that China engaged in a massive forbearance effort coordinated by regulators and banks to deal with a wave of COVID-19-related defaults. Accordingly, Chinese companies and individuals were allowed to defer loan principal and interest payments until June 30, 2020. These two examples contrast the prospects of US borrowers hoping to work 'constructively' with their lenders in the middle of the outbreak against the certainty of the forbearance efforts implemented by the Chinese regulators. Certainly, any government effort to control COVID-19, including business lockdowns, will face higher resistance in the US than in China, at least from a borrowers' perspective. Therefore, countries with low financial freedom and controls intended to reduce the financial burden on individual and business borrowers will find less resistance when implementing lockdown restrictions and, in turn, a lower daily average of cases and deaths per million.
Regarding the more prolonged outbreak response time in countries with superior financial freedom, the explanation comes from the difference between private and public financial institutions in terms of efficiency. Indeed, Cull, Martinez, and Verrier (2018) state that the agency costs in state-owned banks lead to operational inefficiencies and low intermediation quality, especially in developing countries. Similarly, Chortareas, Girardone, and Ventouric (2013) find a positive and significant relationship between an economy's financial freedom and banks' cost advantages and overall efficiency, especially in countries with freer political systems.
Table 3 also shows that labor market freedom has a significant and negative relationship with the daily average of cases per million. Countries with high scores on labor freedom have experienced a more substantial increase in unemployment rates than those with low labor freedom scores. State intervention and government controls over the labor market reduce the labor freedom index. These actions are precisely what many countries have been doing to minimize the negative impact of COVID-19 on their national labor markets. According to the Organization for Economic Co-operation and Development (OECD 2020), a large number of countries have implemented a wide range of policies to preserve existing jobs, such as job retention schemes and administrative suspensions of dismissals. They find that those countries with a large proportion of their labor market covered by these policies have experienced smaller increases in unemployment rates between early March and end-April 2020, compared to countries with narrower policy scope and budget. Countries with high unemployment rates resulting from low government intervention in the labor market (high labor market freedom) have faced significant challenges at persuading people to stay at home. By contrast, countries that implemented broad policies and regulations to protect their national labor markets have experienced better acceptance of restrictions aimed at controlling the first wave of COVID-19 and, in turn, a lower daily average of cases per million. Table 3 also shows a positive and significant relationship between the daily average of deaths per million and the trade freedom index. Indeed, on April 23, 2020, the World Trade Organization (WTO 2020b) reported that a growing number of export prohibitions and restrictions were introduced by many countries to mitigate critical shortages of food and medical supplies. These restrictions reduced trade freedom and allowed many countries to secure their existing medical supplies and reduce their daily average of deaths per million (WHO 2020a). Finally, table 3 also shows a positive and significant relationship between the outbreak response time and the press freedom index. Countries with superior press freedom enjoy many benefits, but also face a significant challenge, namely fake news. Indeed, the World Health Organization (WHO 2020, May) joined the UK government in the awareness campaign called 'Stop The Spread', which aimed to raise awareness about the risks of misinformation around COVID-19. Fake news and misinformation about the outbreak on social media have been a challenge for countries with greater press freedom. Therefore, those countries with media restrictions and poor press freedom, particularly those censoring fake news and misinformation flowing on the Internet, have a shorter outbreak response time.
Finally, table 4 shows that the probability that a country controls the first wave of COVID-19 is negatively and significantly related to the business freedom index. The rationale behind this relationship is the same as that described above. Countries with superior business freedom face significant difficulties in imposing restrictions on business operations aimed at controlling the outbreak. Similarly, table 4 shows that the probability that a country controls the outbreak is positively and significantly related to the monetary, press, and personal freedom indexes. The rationale behind these relationships is the same as that described above. Countries with great monetary freedom experience severe difficulties in imposing market controls aimed at avoiding price distortions caused by COVID-19. Likewise, countries with excellent press freedom and severe problems of outbreak-related misinformation, particularly fake news flowing on the Internet, have a longer outbreak response time. Finally, the positive relationship between the probability of controlling the first wave of COVID-19 and the personal freedom index can be explained by its consistency with previous studies (Cepaluni et al. 2020, Dempere 2021, and Mazzucchelli et al. 2020). Indeed, we confirmed in table 2 that countries with higher democracy indexes, and therefore higher personal freedom, have a much better probability of controlling the first wave of COVID-19.
Regarding the limitations of this study, we can mention that our dependent variables do not enjoy universal consensus as valid metrics for studying countries' effectiveness in controlling the first wave of COVID-19. A similar limitation arises for our independent variables. For example, the Heritage Foundation (HF 2020) and the Fraser Institute are the most quoted sources of economic freedom. However, Ram (2014) finds that country rankings from the two sources show significant differences in several cases and warns that users should exercise caution in drawing inferences when using these classifications. Therefore, repeating our analysis using data from the Fraser Institute might produce different results.
Finally, Morris and Reuben (2020) also identify several limitations when trying to make an international comparison of the outbreak. They mention differences in how countries record COVID-19 deaths, differences in testing efforts, differences in health services, possibly unreliable data from countries with tightly controlled political systems, and many demographic variables affecting the pandemic spread, such as average age, population density, urban versus rural population, age structure, etcetera.
CONCLUSIONS
This study tries to identify some country-specific economic freedom-related factors with explanatory power over the success in controlling the first wave of the COVID-19 outbreak. Our sample of one hundred and fifty-six countries suggests that those with superior economic, business, labor, monetary, trade, investment, financial, press, human, and personal freedom exhibited (ceteris paribus) the lowest daily average stringency index. These same countries also exhibited the quickest response time and the highest daily average of tests per thousand, excluding financial and labor freedom, respectively. We also find that countries with superior degrees of freedom suffered a more severe impact of the outbreak as measured by the daily average of cases and deaths per million (excluding labor freedom). However, these countries were more effective in controlling the first wave of COVID-19 as measured by a shorter outbreak response time (excluding the financial freedom index) and a higher daily average of COVID-19 tests per thousand (excluding the labor freedom index).
We also find a positive and significant relationship between a country's business, monetary, and financial freedom indexes and its daily average of cases and deaths per million, and between a country's press freedom and its outbreak response time. We also find a negative and significant relationship between a country's monetary freedom and its pandemic response time, and between a country's labor market freedom and its daily average of cases per million. We also find significant results showing that the probability that a country succeeds in controlling the first wave of COVID-19 has a negative relationship with its business freedom index, but a positive one with its monetary freedom and press freedom.
Our results suggest that during the early stages of the COVID-19 crisis, countries with superior business freedom may have experienced significant resistance from business entities to government-imposed restrictions. Similarly, governments of countries with high monetary freedom may have faced difficulties implementing controls to minimize market distortions caused by the pandemic. Likewise, financial regulators (e.g., central banks) in countries with a high degree of financial freedom may have endured problems trying to modify the conditions of lender-borrower relationships during the beginning of the outbreak. The lack of quick and significant changes in the creditor-debtor relationship may have placed pressure on borrowers to continue performing their economic activities with minimum interference from any restrictive government policy. Correspondingly, governments in countries with low scores of labor freedom may have imposed government controls over the labor market more quickly to preserve existing jobs and minimize the population's economic hardship resulting from policies like lockdowns or business closures. Nations with high unemployment rates resulting from low government intervention in the labor market may have faced significant challenges encouraging people to stay at home.
Equally, our results suggest that countries with inferior trade freedom may have imposed trade restrictions quickly to protect their existing medical supplies and reduce their daily average of COVID-19 deaths per million. In the same way, countries with superior press freedom face the challenge of fake news. The WHO (2019) has highlighted public health communication and community engagement as vital government health policies. Notably, the WHO has cautioned about the risk of infodemic defined as the COVID-19 information overload (some accurate and some fake), making it problematic for people to recognize truthful sources of information and dependable guidance when needed. Our results suggest that countries with media restrictions and poor press freedom may have quickly censored fake news and misinformation flowing on the Internet, resulting in a shorter outbreak response time.
Systematics of the $\alpha'$ Expansion in F-Theory
Extracting reliable low-energy information from string compactifications notoriously requires a detailed understanding of the UV sensitivity of the corresponding effective field theories. Despite past efforts in computing perturbative string corrections to the tree-level action, neither a systematic approach nor a unified framework has emerged yet. We make progress in this direction, focusing on the moduli dependence of perturbative corrections to the 4D scalar potential of type IIB Calabi-Yau orientifold compactifications. We proceed by employing two strategies. First, we use two rescaling symmetries of type IIB string theory to infer the dependence of any perturbative correction on both the dilaton and the Calabi-Yau volume. Second, we use F/M-theory duality to conclude that KK reductions on elliptically-fibred Calabi-Yau fourfolds of the M-theory action at any order in the derivative expansion can only generate $(\alpha')^{\rm even}$ corrections to the 4D scalar potential, which, moreover, all vanish for trivial fibrations. We finally give evidence that $(\alpha')^{\rm odd}$ effects arise from integrating out KK and winding modes on the elliptic fibration and argue that the leading no-scale breaking effects at string tree-level arise from $(\alpha')^3$ effects, modulo potential logarithmic corrections.
Introduction
Effective field theories (EFT) have been the subject of recent debates regarding their relative importance for a UV complete theory of gravity. On the one hand, based on the outstanding success of EFTs in describing all kinds of physical phenomena [1], a common bottom-up attitude is to concentrate fully on EFTs at low energies, assuming that their self-consistency is enough to expect that they can be completed in the UV. On the other hand, the swampland programme argues that most EFTs cannot be UV completed, and concentrates on conjectures that could eliminate general classes of EFTs [2]. In this paper we take an alternative, more traditional, top-down approach where we perform a systematic study of $\alpha'$ corrections to the 4D effective action of compactified string theories, which automatically provide a UV completion.
From the topological understanding of Calabi-Yau (CY) compactifications, direct dimensional reduction, supersymmetry and scaling symmetries, we have very good control over tree-level effective actions for N = 1 supersymmetric compactifications in terms of the Kähler potential K and superpotential W for moduli and matter fields. Focussing on type IIB compactifications, these EFTs are of the no-scale type in the sense that the corresponding 4D scalar potential for the Kähler moduli vanishes identically when the other moduli are fixed supersymmetrically, since $K^{i\bar{j}} K_i K_{\bar{j}} = 3$. Given that this is a tree-level property, $\alpha'$ and string loop corrections are in general expected to lift these flat directions and to play a crucial role in stabilising moduli.
The challenge is further complicated by the fact that string theory does not have free parameters since the higher derivative and string loop expansions are controlled respectively by the vacuum expectation values of the CY volume modulus V and the imaginary part of the axio-dilaton τ. Hence it is only after determining their values that we can assess if the expansion parameters are small enough to trust the calculations. Furthermore, the fact that free 10D string theory is always a solution already indicates that the determination of other vacua will never be under full computational control since the scalar potential for V and τ runs away towards their value at infinity. This is the well-known Dine-Seiberg problem [3]. It is essentially the price string theory has to pay for not having free parameters and is a fully general situation independent of any scenario of moduli stabilisation.
Not having arbitrarily good control of perturbative expansions is not a string theory disease but a condition we have to live with. Fortunately there are extra parameters, arising from the nature of the compactification, which can play an important role in allowing non-trivial moduli stabilisation at couplings which are weak enough to trust the perturbative expansions. These are usually discrete parameters such as the CY Euler number, the rank of condensing gauge groups and the many integer fluxes which are ubiquitous in string compactifications. The derivation of non-trivial vacua necessarily involves a combination of these discrete parameters with perturbative and non-perturbative corrections to the 4D scalar potential.
Together with the dilaton, every Kähler modulus, which measures the size of a 4-cycle, can be considered as an expansion parameter, since it determines the gauge coupling of the EFT on D7-branes wrapped on the corresponding 4-cycle. Thus the 4D EFT has many expansion parameters which on the one hand make the calculations more involved, since each of them has to be stabilised within the regime of validity of the approximations. On the other hand, however, they allow the moduli to be stabilised at weak coupling since, as happens in the Large Volume Scenario (LVS) [4,5], a vacuum can arise from balancing terms of two different expansions without causing a breakdown of perturbation theory.
On top of moduli stabilisation, identifying the leading no-scale breaking effects beyond the tree-level approximation is crucial to shed light on several important implications of string vacua for cosmology and particle phenomenology. Promising inflationary models based on Kähler moduli [6][7][8][9][10][11][12] feature a shallow potential which is protected by approximate non-compact rescaling shift symmetries [13,14] that are broken by no-scale breaking effects. As shown in [15], leading-order perturbative corrections to the Kähler potential are in general also crucial to determine the mass spectrum of the Kähler moduli. Moreover, in sequestered models with D3-branes at singularities, the mass scale of the soft terms is set by the dominant no-scale breaking effect at perturbative level [16][17][18].
In this article we present a systematic analysis of $\alpha'$ corrections to the 4D scalar potential of type IIB string compactifications. It is well-known that in 10D type IIB string theory the leading higher derivative corrections arise only at order $(\alpha')^3$. These include the $R^4$ correction to the Einstein-Hilbert action plus its supersymmetric extensions. This property is inherited by N = 2 CY compactifications where the corresponding $(\alpha')^3$ correction to the Kähler potential has been computed in [19]. Additional N = 2 string loop corrections to K at $O((\alpha')^2)$ and $O((\alpha')^4)$ have been computed in [20][21][22], and in the F-theory context in [23], but they yield subdominant contributions to the scalar potential due to a cancellation of $O((\alpha')^2)$ terms named 'extended no-scale' in [24]. Backreaction of $(\alpha')^3$ effects on the internal geometry has been considered in [25], which however found only moduli redefinitions. Further $O((\alpha')^3)$ terms have been shown in [26,27] to give rise to contributions to the 4D scalar potential at $F^4$ order, where F denotes the F-term of the moduli fields.
Genuine N = 1 corrections are less understood. Different papers found shifts of the CY Euler number induced by $O((\alpha')^3)$ corrections at tree- [28] and loop-level [29][30][31][32]. Using M/F-theory duality, novel $O((\alpha')^2)$ effects were found in [33,34], which can however be affected by field redefinitions of the 11D fields [35]. More corrections in the N = 1 4D effective action of F-theory were discussed in [36], which were further constrained recently in [37] by studying infinite distance limits. A full understanding of $\alpha'$ corrections to the type IIB N = 1 effective action is not available yet. In particular, any correction that would dominate over the $(\alpha')^3$ ones may play an important role in moduli stabilisation scenarios such as LVS [4,5] and KKLT [38]. They may shift the minimum, provide a potential de Sitter uplift or destabilise the original vacuum. Moreover, subdominant higher derivative or string loop corrections can still be relevant for lifting leading order flat directions in scenarios with more than one Kähler modulus [12,24,39,40].
A complete analysis of $\alpha'$ corrections is too ambitious to be achievable. Here we will extract information on the moduli dependence of the low-energy scalar potential by combining techniques that rely either on symmetries of 10D type IIB string theory or on dimensional analysis in M/F-theory compactifications, which come along with a rich web of dualities summarised pictorially in Fig. 1. Our analysis is simplified by concentrating only on the dilaton and overall volume dependence of perturbative corrections. Even if a full dependence of arbitrary $\alpha'$ and $g_s$ effects on all the Kähler moduli is beyond our reach, it is the V-dependence that is the most relevant information for moduli stabilisation.
In practice, investigations of F-theory compactifications start from M-theory by reducing the 11D action on a CY fourfold $Y_4$, which leads to 3D gauged N = 2 supergravity [41][42][43][44][45]. Under the assumption that $Y_4$ is elliptically fibred over a 6D Kähler base manifold $B_3$, one takes the point-wise limit of vanishing fibre volume, $v_f \to 0$, thereby decompactifying a single direction and giving rise to F-theory in 4D, that is type IIB compactified on the base manifold $B_3$ [46][47][48][49][50]. The elliptic curve in $Y_4$ effectively keeps track of the dynamics of the axio-dilaton. The singular loci of the fibre are associated with 7-branes on the base whose precise realisation within $Y_4$ specifies the gauge algebra (see [51] for a recent review). In the weak coupling limit $g_s \ll 1$ of F-theory, the so-called Sen limit [52], one recovers perturbative type IIB orientifold compactifications on the double cover of the base $B_3$ [49]. The existence of an elliptic fibre leads to an SL(2, Z) symmetry acting on the axio-dilaton τ in type IIB. More precisely, the strong coupling dynamics of string theory is accessible via dualities even if the microscopic origin remains elusive. The most basic duality is M-theory compactified on a circle $S^1_A$ giving rise to 10D type IIA supergravity. After a subsequent reduction on another circle $S^1_B$, we can use T-duality [53,54] to obtain type IIB supergravity on a circle. This is equivalent to compactifying M-theory on a torus $T^2 = S^1_A \times S^1_B$ and taking the ${\rm Vol}(T^2) \to 0$ limit [55], as we said above. Recently an effective 12D approach (indicated by dashed lines in Fig. 1) has been put forward in [28] which in principle allows for a new access to $\alpha'$ effects in F-theory. In this paper we will take advantage of these dualities together with scaling symmetries to extract direct information on $\alpha'$ corrections to the 4D scalar potential. This paper is structured as follows. In Sec. 2 we follow [15] and use the symmetries of the 10D type IIB action to organise different perturbative corrections to the N = 1 4D effective action, concentrating on the dilaton and volume dependence of each order in the $\alpha'$ and string loop expansions. We make use of the fact that each of the two expansions is directly related to the existence of two scaling symmetries of the 10D action. In particular we present the general expression of the 4D scalar potential including each order in the $\alpha'$ and $g_s$ expansions as well as the number of powers of F-terms of the low-energy moduli, which corresponds to an expansion in terms of inverse powers of the Kaluza-Klein (KK) scale [56], as is typical of KK compactifications. We recover all known corrections that have been computed so far as particular cases of our general expression.
In Sec. 3 we see how the absence of $(\alpha')^1$ corrections at string tree-level to either the 10D bulk action or to the 8D action of localised sources, combined with our symmetry considerations and the extended no-scale structure, allows us to infer that the leading no-scale breaking effects at tree-level in $g_s$ should arise from $(\alpha')^3$ effects. We confirm this claim by dimensional reduction and dimensional analysis, considering all potential sources for these corrections: bulk terms, brane effects and backreaction.
Sec. 4 is the core of the paper, where we utilise F/M-theory duality techniques as well as a dimensional analysis to extract systematic information on the $\alpha'$ expansion of the 4D scalar potential. We present the rules to perform the F-theory limit by first considering the 3D EFT obtained via a fourfold compactification of 11D M-theory and then taking the vanishing fibre volume limit to extract information on 4D compactifications of F-theory. In particular, using a very general ansatz for the metric of an elliptically fibred fourfold, we constrain the moduli dependence of higher derivative corrections to the 4D scalar potential. We find that conventional KK reductions on elliptically fibred CY fourfolds of the 11D supergravity action, corrected at arbitrary order in the derivative expansion, can generate only $(\alpha')^{\rm even}$ corrections to the 4D scalar potential. We come to the conclusion that only a certain class of higher-order terms in the 11D Planck length $\ell_M$ gives rise to a finite contribution in the F-theory limit. Remarkably, this class of 11D higher derivative structures precisely falls into the general pattern of the M-theory higher derivative expansion as conjectured by [57], using symmetry constraints from the Kac-Moody algebra $E_{10}$.$^1$ Furthermore, for the case of trivial fibrations, we find that all such higher derivative corrections give vanishing contribution in 4D.
In contrast, we argue that $(\alpha')^{\rm odd}$ effects arise from a proper process of integrating out KK and winding states on the elliptic fibration, which we outline in Sec. 5. There we provide evidence in favour of this claim by focusing on the simple case of trivial fibrations, where we manage to show that our approach based on dimensional analysis allows us to reproduce, from 11D loops, known $(\alpha')^3$ corrections at different orders in the low-energy F-term expansion.
We present our conclusions and outlook in Sec. 6 and leave some technical aspects to the appendices. In App. A we collect some results on higher curvature terms for elliptic fibrations. For completeness, in App. B we explore the effects that potential loop corrections at order $(\alpha')^1$ could have, if they existed, on moduli stabilisation. Interestingly, we find that they could give rise to new dS vacua in a regime where the EFT is under relatively good control.
Perturbative corrections from symmetries in type IIB
In this section we show how perturbative corrections to the 4D EFT of type IIB string theory can be constrained using the symmetries of the underlying 10D theory.

$^1$ More precisely, we show that higher derivative corrections in M-theory should appear only at order $\ell_M^{6p}$, with $\ell_M$ the 11D fundamental length and $p \in \mathbb{N}$, assuming that they contribute to the effective action in the F-theory limit.
Tree-level effective action
10D perspective
The low energy description of string theory can be obtained by computing scattering amplitudes of massless string excitations. This gives rise to a 10D EFT whose action can be written as $S_{\rm IIB} = S_{\rm bulk} + S_{\rm loc}$, where $S_{\rm bulk}$ describes the dynamics of the bulk degrees of freedom while $S_{\rm loc}$ is associated to objects localised in the extra dimensions, like D-branes and O-planes. The bosonic bulk action at tree-level and in Einstein frame reads [58]: where $\tilde g_{MN}$ is the 10D Einstein frame metric, $\tau = C_0 + i\, e^{-\phi}$ is the axio-dilaton whose imaginary part controls the string coupling ($e^\phi = g_s$), and $G_3 = F_3 - \tau H_3$ is the 3-form background flux with: In addition to the equations of motion, the 5-form flux must satisfy the self-duality condition $\tilde F_5 = \star_{10} \tilde F_5$. Beyond general coordinate invariance, N = 2 supersymmetry and the gauge symmetries of the p-forms, the tree-level bulk action (2.1) enjoys the following accidental symmetries:
• SL(2, R) invariance
This symmetry is broken by $\alpha'$ and $g_s$ corrections. However two subgroups survive at higher order: the axionic shift symmetry of $C_0$ is unbroken at perturbative level, while SL(2, Z) is an exact symmetry of the whole non-perturbative theory.
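Schematically, and in standard conventions that may differ from [58] in overall normalisations, the tree-level Einstein-frame bulk action and the SL(2, R) action on the fields take the form
$$S_{\rm bulk} \simeq \frac{1}{2\kappa_{10}^2}\int d^{10}x\,\sqrt{-\tilde g}\left(\tilde R - \frac{\partial_M\tau\,\partial^M\bar\tau}{2\,({\rm Im}\,\tau)^2} - \frac{|G_3|^2}{2\,{\rm Im}\,\tau} - \frac{|\tilde F_5|^2}{4}\right) + \frac{1}{8i\kappa_{10}^2}\int \frac{C_4\wedge G_3\wedge \bar G_3}{{\rm Im}\,\tau}\,,$$
$$\tau \to \frac{a\tau + b}{c\tau + d}\,,\qquad G_3 \to \frac{G_3}{c\tau + d}\,,\qquad ad - bc = 1\,,$$
with the Einstein-frame metric and $\tilde F_5$ invariant under SL(2, R).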
• Scale invariance
Scaling the bosonic fields with two arbitrary weights ω and ν as [15]: showing that it enjoys two families of classical scale invariance that are expected to be broken by corrections beyond tree-level. Notice that for ν = 0 the equations of motion are still invariant even if S bulk is not, while the case with ν = 0 reproduces the scale invariance included in SL(2, R) for b = c = 0 and a = 1/d.
Let us stress that the existence of two scaling symmetries is closely related to the fact that the EFT features two independent perturbative expansions: in terms of $g_s$ controlled by the dilaton (corresponding to worldsheet topologies/loops in the spacetime theory), and $\alpha'$ controlled by the metric (associated to loops in the worldsheet theory/higher derivative terms from the spacetime point of view). This property is shared by all five different 10D superstring theories but it does not hold for the effective action of 11D supergravity since its massless spectrum does not include a dilaton field. This implies that in this case there is just a single perturbative expansion which is reflected in the existence of a single scaling symmetry. In fact, all terms of the 11D supergravity action scale homogeneously as $S_{11} \to \lambda^{9\omega} S_{11}$ under the 1-family rescalings $g^{(11)}_{MN} \to \lambda^{2\omega} g^{(11)}_{MN}$ and $C_3 \to \lambda^{3\omega} C_3$. Coming back to the type IIB action, let us now include localised sources in 10D. The action of a Dp-brane contains a DBI and a Wess-Zumino (WZ) contribution. It can be shown that, under the rescalings (2.4), both of them scale with a common weight ρ [15]. Given that ρ = 4ν does not hold for generic ω and ν, we realise that the Dp-brane action breaks the 2-family scale invariance of the bulk action down to a 1-family scaling symmetry parametrised by the relation 2(p − 5)ν = (p − 3)ω. This can be easily understood by noticing that the 10D string frame metric $\hat g_{MN} = \tilde g_{MN}/\sqrt{{\rm Im}\,\tau}$ scales with weight 2ν − ω. Hence choosing ω = 2ν, the 10D string frame metric does not rescale and ρ = 2ν ∀p. In this case $S_{\rm loc}$ scales with the same weight for all p, showing that $S_{\rm loc}$ can be seen as a higher order effect in the expansion of the action in powers of $g_s$ that breaks one of the two scaling symmetries enjoyed by the leading expression. This remaining scale invariance is then expected to be broken by additional $g_s$ and $\alpha'$ corrections.
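Returning to the homogeneous 11D scaling quoted above, a quick term-by-term check (our own computation, independent of [15]) confirms the common weight:
$$\sqrt{-g^{(11)}}\,R^{(11)} \;\to\; \lambda^{11\omega}\,\lambda^{-2\omega}\,\sqrt{-g^{(11)}}\,R^{(11)} \;=\; \lambda^{9\omega}\,\sqrt{-g^{(11)}}\,R^{(11)}\,,$$
$$\sqrt{-g^{(11)}}\,|G_4|^2 \;\to\; \lambda^{11\omega}\,\lambda^{-8\omega}\,\lambda^{6\omega}\,\sqrt{-g^{(11)}}\,|G_4|^2 \;=\; \lambda^{9\omega}\,\sqrt{-g^{(11)}}\,|G_4|^2\,,\qquad C_3\wedge G_4\wedge G_4 \;\to\; \lambda^{9\omega}\, C_3\wedge G_4\wedge G_4\,,$$
where the three factors in the flux kinetic term come from $\sqrt{-g^{(11)}}$, the four inverse metrics contracting $G_4 = dC_3$, and the two powers of $G_4$, respectively, so that indeed $S_{11} \to \lambda^{9\omega} S_{11}$.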
4D perspective
Type IIB string theory compactified on a CY threefold $X_3$ yields an N = 2 4D EFT which can be broken down to N = 1 by the inclusion of O-planes and D-branes. The scaling properties of the 4D fields inherited from the higher dimensional theory can be understood by looking at the decomposition of the 10D metric: where we ignored the warp factor since it does not scale. Thus we realise that (2.4) implies that the 4D metric scales as the 10D one, $\tilde g_{\mu\nu} \to \lambda^\nu \tilde g_{\mu\nu}$, while the Einstein frame CY volume scales as in (2.10), where we measured V in units of the string length $\ell_s = 2\pi\sqrt{\alpha'}$. Therefore the 4D Einstein frame metric and Lagrangian scale as in (2.11), where the scaling of L is fixed by the Einstein-Hilbert term just by knowing the scaling of $g_{\mu\nu}$. The scaling of the overall volume (2.10) implies that also the Kähler moduli rescale since V can be rewritten as in (2.12), where J is the Kähler form which we expanded in a basis $\hat D_\alpha$ of $H^{1,1}_+$, the $t^\alpha$, $\alpha = 1, \dots, h^{1,1}_+$, are 2-cycle volumes and $k_{\alpha\beta\gamma}$ are the triple intersection numbers given by (2.13). Given that the Kähler moduli are defined as $T_\alpha = b_\alpha + i\,\tau_\alpha$ with $b_\alpha = \int C_4 \wedge \hat D_\alpha$ and $\tau_\alpha = \tfrac{1}{2} k_{\alpha\beta\gamma} t^\beta t^\gamma$, (2.4) and (2.10) imply $T_\alpha \to \lambda^{2\nu} T_\alpha\ \forall\alpha$. Since the Kähler moduli are the scalar components of $h^{1,1}_+$ chiral superfields $T_\alpha = T_\alpha + \sqrt{2}\,\theta\psi_\alpha + \theta^2 F_\alpha$, the superspace coordinate θ has to rescale as $\theta \to \lambda^\nu \theta$ together with $\psi_\alpha \to \lambda^\nu \psi_\alpha$ to ensure that the fermionic kinetic term $\bar\psi e^\mu_a \gamma^a \partial_\mu \psi$ scales as the bosonic one $g^{\mu\nu}\partial_\mu T \partial_\nu \bar T$. The $h^{1,2}_-$ complex structure moduli $Z^i$ instead do not rescale.
The implications of these scaling symmetries can be easily understood by using the superconformal formalism, which allows us to write the Lagrangian in terms of the chiral compensator Φ as in (2.14) (ignoring the contribution from the gauge kinetic function), where K and W denote respectively the Kähler potential and the superpotential. Using (2.11) together with dθ → λ^{−ν} dθ, we obtain the two relations (2.15) and (2.16), where ω_L is the weight of the Lagrangian, with ω_L = 4ν at tree-level. These two relations can be used to derive the dependence of K on two combinations of rescaling fields (due to the presence of two scaling symmetries) once the weight of W is known. This can be deduced from direct dimensional reduction, which yields the tree-level flux superpotential W_0 = ∫_{X_3} G_3 ∧ Ω. Since Ω is a function of the complex structure moduli which do not rescale, W_0 scales as G_3, whose weight is ω (see the weight of C_2 in (2.4)). Thus (2.16) can be used to fix the weight of the chiral compensator, which in turn determines the weight of the tree-level Kähler potential K_0 from (2.15) as in (2.17). Using the scaling properties of the dilaton and the volume mode, together with the fact that axionic shift symmetries forbid a dependence of the tree-level K_0 on the C_0 and C_4-axions, the relation (2.17) allows us to fix K_0 up to a scale invariant combination A of all other 4D fields. Notice that this expression reproduces the one obtained by direct dimensional reduction. Thus we have seen that the dependence of K_0 on φ and V can be fixed without the need to perform any computation, just by symmetry arguments via a combination of supersymmetry, scale invariance and shift symmetries. As shown in [15], these symmetry considerations are also enough to infer that the 4D EFT enjoys a no-scale cancellation, where the associated flat direction, the volume mode, corresponds to the Goldstone boson of one of the two scaling symmetries which is spontaneously broken by the vacuum expectation value of the metric. The other scaling symmetry is also spontaneously broken by the vacuum expectation value of the dilaton. However the corresponding would-be Goldstone mode, the dilaton, becomes massive in the presence of non-zero 3-form flux quanta which break the rescaling symmetry explicitly. In fact, as can be seen from (2.4), G_3 rescales with weight ω, and so any 4D EFT with G_3 fixed at a non-zero background value necessarily breaks this symmetry explicitly.
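For orientation, the standard tree-level Kähler potential obtained by direct dimensional reduction, which the symmetry argument above is expected to reproduce (quoted here in the conventions most common in the type IIB literature, as an illustration rather than as the original equation), is
\[
K_0 = -2\ln\mathcal V-\ln\!\big(-i(\tau-\bar\tau)\big)-\ln\!\Big(-i\int_{X_3}\Omega\wedge\bar\Omega\Big),
\]
so that e^{-K_0} is a monomial in V and Im τ and therefore rescales homogeneously under both scaling families, with the complex structure factor inert.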
10D perspective
As already stressed above, the two rescaling symmetries of the bulk tree-level action are expected to be broken by higher order g_s and α' effects (we have already seen that any Dp-brane action already breaks one of these two scale invariances). However these breaking effects arise in a controllable manner, since the parameters which control these two perturbative expansions are two fields, φ and V, which rescale with a non-trivial weight. We thus expect to be able to infer the dependence on φ and V of any perturbative correction to K at all orders in g_s and α'. This can be achieved by exploiting again the superconformal chiral compensator formalism together with the scaling properties of the 10D and 4D EFT. Before seeing how this works, let us remind the reader that the 10D type IIB supergravity action can in general be expanded as in (2.19). Notice that, because of N = 2 supersymmetry, the first higher derivative corrections to the bulk action arise only at order (α')^3. Corrections to the action of localised sources are instead expected to emerge only at (α')^2 order. The higher derivative corrections in (2.19) can be obtained from string amplitudes [59][60][61][62][63], the pure spinor formalism [64,65], via duality to M-theory [27,[66][67][68][69][70][71]] or supersymmetry [72][73][74][75][76][77][78][79] (see also [80][81][82][83][84]). While R^2 corrections can arise in the heterotic string, in type II theories the greater degree of supersymmetry forbids R^2, R^3 as well as all other terms at order α' and (α')^2 [58]. At order (α')^3, one finds schematically 8-derivative couplings such as R^4, R^3 G_3^2 and R^2 |∇G_3|^2 [5]. In general, an arbitrary correction to the 10D bulk action in string frame at order (α')^m g_s^n involving the dilaton, the curvature and the 3-form flux H_3 can be written as in (2.22), where • denotes the appropriate index structure and m = p + r, since each power of R and of H_3^2 contains two derivatives. In (2.22) we ignored potential contributions from gradients of the dilaton, since φ is set to be constant by the equations of motion (except in the vicinity of localised sources). Higher derivative corrections are expected to introduce a dependence of the dilaton on the internal coordinates but, given that explicit computations have shown that this dependence can be rewritten in terms of the curvature [19], we expect this effect to be captured by (2.22). Notice that (2.22) is generic enough to describe also contributions of the form R^{p+1} (∇G_3)^{2r}, since they would scale as R^{p+r+1} G_3^{2r}. Moreover in (2.22) we neglected potential F̃_5-dependent higher derivative corrections since F̃_5 = 0 in the absence of warping. Writing H_3 in terms of G_3 and converting (2.22) to Einstein frame via ĝ_{MN} = g̃_{MN}/√(Im τ), we end up with (2.23). Notice that for n = m = 0, (2.23) reproduces the correct scaling of two terms in (2.1): the Einstein-Hilbert term for p = r = 0, and the kinetic term of G_3 for p = −1 and r = 1. Using (2.4), we can easily infer that the generic O((α')^m g_s^n) correction (2.23) rescales as in (2.24).
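The identification m = p + r is simply derivative counting. Writing the correction schematically as R^{p+1} G_3^{2r} (our shorthand, consistent with the special cases quoted above), one has
\[
\#\partial\big(R^{\,p+1}G_3^{2r}\big)=2(p+1)+2r=2+2(p+r)\,,
\]
i.e. m = p + r extra powers of α' relative to the 2-derivative action, reproducing m = 0 for the Einstein-Hilbert term (p = r = 0) and the G_3 kinetic term (p = −1, r = 1), and m = 3 for R^4 (p = 3, r = 0) or R^3 G_3^2 (p = 2, r = 1).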
4D perspective
Non-renormalisation theorems ensure that the superpotential receives only tree-level and non-perturbative contributions, whereas the Kähler potential can be corrected at all orders in α' and g_s. The 10D action (2.23) is therefore expected to yield a perturbative correction to the 4D Kähler potential. Using again the two scaling symmetries of the classical action and the chiral compensator formalism, we can work out the dilaton and volume mode dependence of a generic O((α')^m g_s^n) perturbative correction to K. Combining (2.15) with (2.16) for ω_L = 4ν − 2n(ω − ν) + m(ω − 2ν) from (2.24), we realise that:
(2.25)
This rescaling property, together with τ → λ^{2(ω−ν)} τ, V → λ^{3ν} V and the axionic shift symmetries, implies that the perturbative Kähler potential has to take the form (2.26) [15], where the A_(n,m) are scale invariant combinations involving other non-axionic fields. Interestingly, supersymmetry dictates that the quantity which is corrected at a given order in α' and g_s is e^{−K/3} and not directly K. This observation explains why some corrections beyond tree-level which break scale invariance can still satisfy a generalised no-scale condition [15], which accounts for the presence of an extended no-scale structure [24]. The expression (2.26) is valid for m = p + r, where p controls the number of curvature contributions while r counts the factors of G_3^2 in 10D. Since a non-zero G_3 gives rise to the 4D superpotential W_0 = ∫_{X_3} G_3 ∧ Ω, when r ≠ 0 the combination A_(n,m) should be proportional to W_0^{2r}. Knowing that the weight of W_0 is ω, it is easy to deduce the corresponding scale invariant combination (2.27), where Â_(n,m) is another scale invariant combination. As shown in [56], the ratio appearing in (2.27) corresponds exactly to the parameter which controls the 4D superspace derivative expansion, namely gF/M_KK, where F denotes the F-term of the light fields and g is the coupling between heavy KK modes and light states. Thus in the regime where the superspace derivative expansion is under control, i.e. when gF/M_KK ≪ 1, the leading correction at fixed order in α' is expected to be the one corresponding to r = 0. Notice that these higher F-term corrections might not be incorporated into K, but they might induce directly a correction to the scalar potential. This difference does not matter for our scaling arguments (which can be applied equally well by extending (2.14) to the more general case of corrections under the d⁴θ integral), and so we shall consider them as 'effective' corrections to K.
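As an illustration of the kind of combination that can appear in (2.27) (our own example, not a quote of the original equation), a ratio built from W_0, the dilaton and the volume which is invariant under both rescalings is
\[
\frac{W_0^2}{\mathrm{Im}\,\tau\;\mathcal V^{2/3}}\,,\qquad
\text{weight: } 2\omega-2(\omega-\nu)-2\nu=0\,,
\]
which, up to numerical factors, is of order (m_{3/2}/M_{KK})^2, since m_{3/2} ∼ g_s^{1/2} W_0/V and M_{KK} ∼ V^{−2/3} in Planck units; this matches the interpretation of the ratio in (2.27) as the superspace derivative expansion parameter.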
Perturbative g_s and α' contributions to the scalar potential of the 4D EFT can be obtained by plugging W = W_0 and the Kähler potential given by (2.26) and (2.27) into the general N = 1 supergravity expression V = e^K ( K^{I J̄} D_I W D̄_{J̄} W̄ − 3|W|² ) = e^K ( K^{α β̄} D_α W D̄_{β̄} W̄ − 3|W|² ), where α and β run only over the Kähler moduli and, in the second equality, the dilaton and the complex structure moduli have been fixed supersymmetrically. This yields a generic O((α')^m g_s^n) correction at O(F^{2r}) of the form (2.30).
Notice that (2.30) displays clearly the 3 expansion parameters of the EFT associated to higher F-terms, string loops and α' effects. The relation (2.31) implies also that an arbitrary contribution of the form V ∼ g_s^s W_0^{2r} V^{−q} must satisfy 3q = 4 + m + 2r, which is the content of the fundamental relation (2.32) used repeatedly below. From p ≥ −1 one finds also r = m − p ≤ m + 1, which implies that at a fixed (α')^m order one can have higher F-term corrections up to F^{2(m+1)}. Moreover, the expression (2.31) reproduces several known perturbative effects:

1. m = n = 0 and r = 1 ⇒ p = −1: this is the standard tree-level scalar potential arising from the 10D G̃_3^2 term, scaling as V^{−2}. The coefficient of this term is zero due to the no-scale cancellation: Â_(0,0,1) = 0.
2. n = 0, m = 3 and r = 1 ⇒ p = 2: (α')^3 correction at O(F^2), scaling as V^{−3}, like the one computed in [19], which should arise from 10D terms like R^3 G̃_3^2 and R^2 |∇G_3|^2. Notice that the dilaton dependence of this correction, when written in terms of the number of closed string loops, reproduces the scaling expected from modular invariance.

3. n = 0, m = 3 and r = 2 ⇒ p = 1: (α')^3 contribution at O(F^4), scaling as V^{−11/3}, like those derived in [26], which should come from 10D terms like R^2 G̃_3^4 and (|∇G_3|^2)^2.

4. n = 2, m = 2 and r = 1 ⇒ p = 1: (α')^2 string 1-loop effects at O(F^2), scaling as V^{−8/3}, like those worked out in [20] which, from the closed string viewpoint, can be seen as due to the tree-level exchange of KK modes between parallel stacks of branes. However Â_(2,2,1) = 0 due to the extended no-scale cancellation.
5. n = 2, m = 4 and r = 1 ⇒ p = 3: (α')^4 open string 1-loop effects at O(F^2), scaling as V^{−10/3}, like those derived in [20], which can be interpreted as the tree-level exchange of winding modes between intersecting stacks of branes.

This shows that the leading no-scale breaking effects in a large volume expansion seem to be (α')^3 corrections at O(F^2), like the one derived in [19,28]. Interestingly, our scaling analysis combined with generalised no-scale relations is powerful enough to argue that (α')^2 corrections should be absent at any order in g_s [15] (unless they come along with ln V factors [36]). On the other hand, (α')^1 effects, if they existed at some order in the g_s expansion, would dominate over the (α')^3 correction (2.34) for V ≫ 1, since they would scale as V^{−7/3}. However also these perturbative effects might not be generated. In fact, in Sec. 3 we provide evidence for the absence at tree-level in g_s (n = 0) of any correction to the 4D scalar potential which scales as V^{−7/3}. In App. B we discuss instead the effect on moduli stabilisation of potential (α')^1 corrections arising at loop level.
Leading no-scale breaking effects in type IIB
In this section we shall try to understand what the leading order no-scale breaking contribution to the 4D scalar potential is in the limit where the EFT is under control, i.e. for V ≫ 1 and at tree-level in the string loop expansion. We shall first exploit the symmetry considerations of Sec. 2.2, and we shall then confirm our findings with a combination of dimensional reduction and dimensional analysis.
Symmetry considerations
Symmetry arguments led us to the fundamental result (2.32), which implies:

1. Any contribution to the scalar potential at O(F^{2r}) should feature r ≥ 1, which implies q = (4 + m + 2r)/3 ≥ 2 + m/3. At tree-level, i.e. m = 0, one has q ≥ 2, and so the first dangerous higher derivative correction arises at order (α')^1, i.e. m = 1, corresponding to q ≥ 7/3.
2. A term scaling as V^{−7/3} can arise only at order (α')^m F^{2r} with m = 3 − 2r. For r = 0 one has m = 3, corresponding to the O((α')^3) R^4 term. However r = 0 implies F^0, and so no contribution to the 4D potential. This fits with the fact that the integral of R^4 over a CY threefold gives zero. For r = 1 one has instead m = 1 at O(F^2), potentially at different orders in the string loop expansion counted by the powers of g_s. However, given that at tree-level there are no (α')^1 corrections, since the bulk action starts being corrected at O((α')^3) while the brane action at O((α')^2), no V^{−7/3} term can be generated at tree-level in g_s. For r = 2, m becomes negative, which is inconsistent.

3. A correction which scales as V^{−8/3} would correspond to m = 4 − 2r. For r = 1, we have m = 2, and so an (α')^2 F^2 term which however should come with a zero coefficient due to the extended no-scale cancellation, regardless of the order in the g_s expansion.² This can be easily seen from the fact that, in a supersymmetric theory, such a term should come from a V-independent correction c to the Kähler potential of the form e^{−K/3} = V^{2/3} + c, which would however satisfy a generalised no-scale relation [15]. For r = 2, m = 0, which would be an F^4 term at tree-level. This would correspond to the V^{−8/3} term used in T-brane uplifting scenarios [87], since it is a tree-level effect that scales in terms of F-terms of matter fields as (F_matter)^2, and these can be easily seen to be related to the F-terms of the Kähler moduli.
4. A perturbative correction which scales as V^{−3} features m = 5 − 2r. For r = 1, one has m = 3, and so standard (α')^3 corrections at O(F^2) [19,28]. For r = 2 one would have instead m = 1, but we have just recalled that there are no (α')^1 corrections in 10D at tree-level. The r = 3 case can instead be safely ignored since the α' order would become negative.
This analysis, just based on symmetries and the known absence of (α')^1 corrections at tree-level, implies that the leading no-scale breaking effect in the 4D scalar potential at tree-level should arise from (α')^3 effects and should scale as V^{−3}.
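A quick numerical cross-check of the bookkeeping above (a Python sketch written for this discussion; it only encodes the relation 3q = 4 + m + 2r from (2.32) together with the catalogue of corrections quoted in Sec. 2.2, so all names and values below are ours):

def volume_scaling_exponent(m, r):
    """Exponent q in V ~ W_0^(2r) / V^q for an (alpha')^m correction at O(F^(2r)),
    following 3q = 4 + m + 2r, Eq. (2.32)."""
    return (4 + m + 2 * r) / 3

# Known cases quoted in the text, (m, r) -> expected exponent q
checks = {
    (0, 1): 2,       # tree-level flux potential, V^-2 (no-scale: coefficient vanishes)
    (3, 1): 3,       # (alpha')^3 at O(F^2) [19,28], V^-3
    (3, 2): 11 / 3,  # (alpha')^3 at O(F^4) [26], V^-11/3
    (2, 1): 8 / 3,   # (alpha')^2 at O(F^2), V^-8/3 (extended no-scale cancellation)
    (4, 1): 10 / 3,  # (alpha')^4 1-loop winding effects [20], V^-10/3
    (1, 1): 7 / 3,   # would-be (alpha')^1 at O(F^2): the dangerous V^-7/3 term
}
for (m, r), q in checks.items():
    assert abs(volume_scaling_exponent(m, r) - q) < 1e-12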
Arguments from dimensional analysis
Let us now provide further evidence in favour of this claim from arguments based on a dimensional analysis combined with dimensional reduction. As we have seen above, the order in α' and the number of F-terms is dictated by the V and W_0 dependence of a generic perturbative correction. This has been derived in (2.31) using symmetry arguments and it agrees with the expectations from direct dimensional reduction. In fact, when all components of tensors and derivatives are taken along internal directions,³ the generic O((α')^m g_s^n) 10D correction (2.23) generates a contribution to the 4D scalar potential whose V dependence can be inferred as follows [5]: (i) the Weyl rescaling to 4D Einstein frame yields a V^{−2} factor; (ii) the integration over X_3 brings a V contribution; (iii) as can be seen from (2.10), each inverse metric factor introduces a V^{−1/3} dependence.
The number of F-terms and the associated W_0 dependence can instead be easily deduced from the number of G_3 factors in 10D. Hence dimensional reduction is expected to produce schematically

V ∼ W_0^{2r} V^{−(1 + λ/3)} ,   (3.1)

where λ counts the net number of inverse metric factors, and its expression in terms of r and p follows from (2.23). This formula can further be motivated as follows: flux quantisation implies G̃_3 ∼ O(α') as the leading order solution to the 10D equations of motion. At a fixed order m in the 10D α' expansion, one has λ = 2r + m + 1 net factors of inverse metrics. Hence, for dimensional reasons, each power of (G_3)^2 introduces an additional V^{−2/3} power in (3.1).⁴ Using p = m − r it is straightforward to realise that the V dependence in (3.1) agrees with the one in (2.31). We summarise in Tab. 1 the volume scaling and the F-term order of different α' contributions to the 4D scalar potential arising from various 10D terms. For completeness we include also higher derivative terms like R^2 and R^3, which are forbidden in the type IIB action due to supersymmetry [58], and R^4, even if it does not contribute to the 4D scalar potential due to Ricci-flatness and Kählerity of the underlying manifold [88] (see App. A for details).
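As a worked instance of rules (i)-(iii) (our own illustration of how (3.1) is applied), consider the (α')^3 term R^3 G̃_3^2, which carries λ = 2r + m + 1 = 6 net inverse metrics:
\[
V\;\sim\;\underbrace{\mathcal V^{-2}}_{\text{Weyl}}\;
\underbrace{\mathcal V^{+1}}_{\int_{X_3}}\;
\underbrace{\mathcal V^{-6/3}}_{\lambda=6}\;W_0^2
\;=\;\frac{W_0^2}{\mathcal V^{3}}\,,
\]
reproducing the (α')^3, O(F^2) scaling discussed above, while the tree-level |G̃_3|^2 term (λ = 3) gives back the familiar W_0^2/V^2.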
Coming back to the leading order no-scale breaking effects, we now apply the dimensional analysis to argue against the presence of V −7/3 corrections at string tree-level in the 4D scalar potential. The starting points are two higher dimensional actions: the 10D bulk type IIB action and the 8D D7/O7 DBI and WZ actions. In this type of compactifications, localised D5-brane sources are projected out by the orientifold. Localised D3-branes are instead relevant for the 4D 2-derivative effective action as far as their backreaction on the closed string background is concerned, but higher derivative couplings on their worldvolume can clearly be ignored. We will come back to D3-branes later. Upon dimensional reduction (on CY threefolds and on Kähler twofolds respectively), these two actions potentially give rise to a plethora of perturbative corrections to the 4D scalar potential which we now discuss schematically.
Bulk corrections
The 32 supercharges characterising the 10D bulk theory force the first higher derivative corrections to arise only at order (α')^3. Schematically, one finds the structure (3.2), where L collects all possible 8-derivative couplings, and we have neglected the classical Chern-Simons term since it is irrelevant for the present discussion. Notice that the axio-dilaton can appear either in P (with gradients involved) or in the modular functions multiplying the various kinematic structures. Moreover all terms in L contain an even number of G̃_3's and F̃_5's due to parity invariance. Recall also that, due to Ricci flatness and Kählerity, terms in (3.2) involving only powers of R give vanishing contributions to the 4D scalar potential when integrated on the internal manifold.⁵ The same holds true for CY fourfold compactifications of M-theory down to 3D, where a 3D scalar potential can be generated only for a non-vanishing G_4 flux [42,89]. Considering an elliptically fibred fourfold and performing the F-theory limit, we therefore conclude that a 4D scalar potential can be generated only by turning on either G_3 or F̃_5 in the bulk, or F_2 on D7-branes. This is a crucial statement since terms like R^4 or P^{2n} (∇P)^m R^{4−n−m} with 1 ≤ n ≤ 4, 0 ≤ m ≤ 4, if they were contributing to the 4D scalar potential, would produce corrections which scale as V^{−7/3}. This is easy to see: each power of R and of ∇P, and each pair of P's, needs one net factor of inverse metric of the CY threefold to give rise to a Lorentz invariant. Hence λ = 4 and (3.1) yields V ∼ V^{−7/3}.
As we have already seen, the leading corrections beyond the tree-level V^{−2} term coming from |G̃_3|^2 originate from reductions of terms like R^3 G̃_3^2 and R^2 |∇G̃_3|^2, which scale like V^{−3} with λ = 6 in (3.1) (corresponding to (α')^3 corrections at F^2 order). Every pair of G_3's that replaces a power of R introduces an additional V^{−2/3} suppression. Analogous considerations hold for higher derivative terms containing F̃_5, which start contributing at order V^{−11/3} and acquire an additional V^{−4/3} suppression each time a pair of F̃_5's replaces an R (recall footnote 4). Notice that, contrary to the purely gravitational sector, in N = 1 compactifications there is no reason to exclude contributions from terms of the form R^{3−n} G̃_3^2 P^{2n} with 1 ≤ n ≤ 3 (or analogous terms involving also ∇P). This is because the presence of D7-branes induces non-trivial gradients for the axio-dilaton.⁶ To summarise, the classical KK reduction of the 8-derivative bulk 10D action down to 4D on an orientifolded CY threefold gives rise to only (α')-odd corrections to the scalar potential, starting from (α')^3 at tree-level in g_s (sphere level), which yields V ∼ V^{−3}.
Brane corrections
The 16 supercharges of the 8D worldvolume theory of a stack of D7-branes (or O7-planes) fix to (α')^2 the order of the leading higher derivative corrections. Schematically, this amounts to the structure (3.3), where again we have ignored the classical Chern-Simons couplings to RR forms since they are irrelevant for our discussion. All bulk quantities in (3.3) are meant to be pulled back to the brane worldvolume, g denotes the determinant of the induced metric with R its curvature 2-form,⁷ F_2 is the gauge invariant worldvolume field-strength and Φ collectively denotes worldvolume scalars (possibly non-Abelian). The τ dependence is again due to the modular functions multiplying the various kinematic structures, and we have dropped all terms of the type R^2 since there is no 4D scalar potential generated purely by geometry. It is a common convention (T-duality friendly) to take worldvolume fields, like Φ and the gauge field A, to have mass dimension 1 (as opposed to bulk fields). Moreover, T-duality and gauge invariance force any possible correction to be written just in terms of the arguments of L.⁸ D7-brane tadpole cancellation guarantees that the classical tension does not contribute to the 4D scalar potential, which would have otherwise yielded a V^{−4/3} dependence from integrating the first term in (3.3). In fact, following the same logic which led to (3.1), with the only difference that now the internal integration gives a V^{2/3} instead of a V factor, we can easily infer that a generic term in the localised action (3.3) can in principle generate a contribution to the 4D scalar potential which scales as V ∼ V^{−(4+λ)/3}, where λ counts again the number of inverse metric factors. The classical tension would correspond to λ = 0.
The classical 4D scalar potential thus arises from integrating |F_2|^2 over the internal 4-cycle.⁹ If we have a non-Abelian stack and/or the brane has a non-trivial profile in the normal directions, further contributions to the 4D scalar potential come from integrating |DΦ|^2 and [Φ, Φ]^2 [90]. Notice that all of these terms produce a scalar potential which scales as V^{−2}. In fact they all have λ = 2 since they involve 2 pairs of indices (both longitudinal for |F_2|^2, both transverse for [Φ, Φ]^2, and one longitudinal and one transverse for |DΦ|^2), and hence need 2 inverse metric factors to give rise to a Lorentz invariant. The term proportional to F_2 is the well-known D-term scalar potential contribution from moduli-dependent Fayet-Iliopoulos (FI) terms [91].

⁶ Corrections of this type are e.g. those discussed in [28] from a 12D viewpoint.
⁷ Here it is not relevant to distinguish between curvature of the tangent and of the normal bundle.
⁸ With the only exception of the implicit dependence of bulk quantities on the normal coordinates √α' Φ, which is often used to encode backreaction effects of the branes on the closed string background.
⁹ More precisely, only the anti-self-dual part of F_2 generates a potential since the self-dual part contributes to D3-brane tadpole cancellation.
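To illustrate the localised counting rule V ∼ V^{−(4+λ)/3} (our own summary; the λ = 4 case is discussed next):
\[
\lambda=0:\ \mathcal V^{-4/3}\ \text{(classical tension, removed by tadpole cancellation)};\qquad
\lambda=2:\ \mathcal V^{-2}\ \text{(}|F_2|^2,\ |D\Phi|^2,\ [\Phi,\Phi]^2\text{)};\qquad
\lambda=4:\ \mathcal V^{-8/3}\ \text{(quartic }(\alpha')^2\ \text{couplings)}.
\]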
The leading higher derivative corrections to the brane action are all encoded in L and are all quartic in R, F_2, DΦ and [Φ, Φ].¹⁰ Thus each term necessarily involves 4 pairs of indices (which may be all longitudinal, all transverse, or mixed), which implies λ = 4 and V ∼ V^{−8/3}, as expected for (α')^2 corrections. As stressed above, supersymmetry and generalised no-scale relations should imply the absence of these corrections [15] (in the sense that they might just induce moduli redefinitions [33]). However, if they arise with an additional logarithmic dependence on the Kähler moduli, they might still represent the leading no-scale breaking effect [36]. Symmetries and scaling arguments are clearly not enough to provide a definite answer to this important issue.
To summarise, the classical KK reduction of the 8D higher derivative brane action down to 4D on a Kähler twofold gives rise to only (α')-even corrections to the scalar potential, starting from the (α')^2 level, which might yield at most a correction of the form V ∼ V^{−8/3} (starting at the disk and projective-plane level in the string coupling expansion).
Let us finally mention that we focused above only on stacks of D7-branes in isolation, whose physics is accurately described by the DBI and WZ actions. However in N = 1 compactifications D7-branes can also intersect in complex codimension-1 loci, where the 8D action fails to fully capture the physics due to the possible presence of massless matter at the intersection. Such special loci may be viewed as 6D defects of the 8D theory with their own EFT. Unfortunately, not much is known about the structure of higher derivative corrections to such a theory. However two intersecting stacks of D7-branes¹¹ can approximately be described as a single stack (of size the sum of the two sizes) with a non-trivial profile for the worldvolume scalars [92,93].¹² Such a profile encodes the information of the wavefunctions of localised fields, as can be seen by solving the D-term differential equations.¹³ Aside from the details, what this reasoning teaches us is that there cannot be higher derivative corrections on the defects which cannot be continuously extracted from corrections already present in the 8D worldvolume action.

¹⁰ Again terms of the type R^4 are not expected to contribute to the 4D potential. Moreover terms where a power of F_2 is replaced by a pair of D's give the same V dependence.
¹¹ We assume that the stacks wrap homotopically equivalent 4-cycles. We conjecture the same conclusions to hold in the more general case, where however we cannot use the continuity arguments employed here.
¹² The smaller the intersection angle, the more accurate this description is compared to the defect picture.
¹³ As an easy example in affine space, consider two D7-branes intersecting on z_1 = z_2 = 0 in C^2. This system can equivalently be described by a stack of two D7-branes on z_1 = 0 with a Higgs field given by Φ = diag(z_2, −z_2).
Backreaction
The analysis of the previous paragraphs does not take into account the effect of branes and fluxes on the bulk background. A clever way to capture at least some of these effects is to regard the bulk fields as functions of the brane worldvolume scalars and Taylor expand them.¹⁴ The couplings that arise induce new operators in the worldvolume field theory, which softly break the original 16 supercharges. This phenomenon makes also the D3-brane worldvolume theory contribute to the 4D scalar potential. Imaginary-anti-self-dual bulk fluxes indeed generate terms of order g_s (α')^2,¹⁵ where the power of g_s shows that such effects appear at 1-loop in string perturbation theory. Analogous terms are expected to pop up also on D7-branes and to contribute to the 4D potential after integration on the internal 4-cycle. An interesting example in the case of T-branes is a term which scales as V^{−8/3} that has been used to achieve dS vacua [87]. Given that the Taylor expansion does not require any metric contraction, the structure of all these terms is such that a Lorentz invariant can be constructed only in the presence of an even number of net inverse metric factors. Thus λ has to be even, and so no V^{−7/3} correction can be generated this way.
Another important backreaction effect is the generation of warping in the spacetime metric due to branes and fluxes [97]. Thanks to open/closed string duality, by solving the tree-level equations of motion for the warp factor in the closed string sector, we infer a 1-loop correction in the open string (and non-orientable closed string) sector. Following the discussion of [35], the V dependence of such a correction to the 4D scalar potential depends on the dimension of the D-branes/O-planes involved. In the type of compactifications we are analysing, this dependence is however bounded from below by V^{−8/3} (due to graviton exchange between D7-branes).
Final remarks
Altogether, the arguments given above lead us to state with reasonable certainty that V^{−7/3} corrections are absent in the 4D scalar potential at string tree-level. This is because all 4D corrections at string tree-level must already be present in the higher dimensional (and more supersymmetric) theories whose zero-mode reductions we have analysed in detail. It is starting from the string 1-loop level that new states (such as KK and winding modes) come into play and potentially contribute to amplitudes between low-energy states. Therefore we cannot guarantee that the reduction of supersymmetry down to 4 supercharges caused by compactification does not give rise to novel 4D perturbative corrections. Famous examples of such loop corrections due to the exchange of KK and winding modes are those computed in [20] for both N = 2 and N = 1 toroidal orientifolds. They appear at (α')-even order, but their origin as higher derivative corrections in the D7 worldvolume theory (or equivalently in M/F-theory) is still unclear.
Apart from the backreaction effects discussed above, little is known about the consequences of supersymmetry breaking on the starting bulk and brane actions. A hint in this direction might be obtained by analysing loop amplitudes of 11D supergravity compactified on elliptically fibred CY fourfolds. Such amplitudes, albeit in the case of toroidal reductions only, were shown in [68,71] to efficiently capture string loop and non-perturbative corrections.¹⁶ In order to be able to make an exact-in-g_s claim of absence of V^{−7/3} corrections, one should find a way to perform a dimensional analysis of the kinematic structures expected from loop amplitudes of 11D supergravity on non-toroidal backgrounds. This will be further motivated in Sec. 5. In the remainder of this paper we shall use M/F-theory techniques to infer the form of α' corrections at different orders in the 4D superspace derivative expansion, without however being able to shed too much light on the g_s expansion.

¹⁴ This method has been introduced in [90] and later used in [94][95][96] to compute soft supersymmetry breaking terms.
¹⁵ They arise by a first order Taylor expansion of the non-Abelian DBI coupling B[Φ, Φ] [94].
α' corrections from dimensional analysis in F/M-theory
Here we come to the core of our study. In Sec. 4.1 we outline the rules for connecting the 4D EFT to the intermediate 3D theory obtained after CY fourfold reductions of M-theory. In Sec. 4.2 we then apply these rules to the reduction of higher derivative structures appearing in 11D supergravity. This will allow us to make general statements about the ensuing α' corrections to the 4D F-theory effective action.
The F-theory limit
In this section we will discuss the rules to extract the 4D F-theory effective action from the 3D one arising from the associated M-theory reduction [50,98]. To do so, we will exploit F/M-theory duality, which can be roughly summarised as follows [49]. One compactifies M-theory on an elliptically fibred CY fourfold Y_4 and picks a basis of fundamental 1-cycles, called A- and B-cycles, on the generic smooth fibre. First of all one reduces the 11D theory on the A-cycle and then T-dualises it along the B-cycle, ending up with type IIB compactified on the threefold base of the fibration (over which the axio-dilaton has a non-trivial profile) times the circle T-dual to the B-cycle. The final step is the limit of vanishing volume of the original fibre, the so-called 'F-theory limit', which renders the effective theory Poincaré invariant in 4D by decompactifying the extra circle in IIB.¹⁷ The best strategy is to compare two actions in 3D: one obtained from a fourfold reduction of the 11D supergravity action, and the other from a preliminary threefold reduction of the type IIB supergravity action, followed by a circle reduction. We first make this comparison at the classical level, i.e. considering all terms at lowest order in ℓ_M and α', in order to derive all the formulas which are relevant to take the F-theory limit. These formulas are then used in Sec. 4.2 to discuss the structure of α' corrections. We will not pay attention to the exact derivation of all 2π factors, since our focus will instead be on volume factors and on how the duality between M and F-theory relates ℓ_M and α'.
Since fluxes will play a crucial rôle in what follows, let us first say a few preliminary words about them. According to the F/M-theory duality, Poincaré invariance in 4D forces the internal M-theory G_4 flux to have 1 leg along the fibre and 3 legs on the base [99].¹⁸ Depending on whether the 3 legs on the base are along a 3-cycle or a 3-chain, G_4 gives rise respectively to bulk type IIB F_3 and H_3 fluxes or to the D7-brane F_2 flux. The integral flux quanta on both sides of the duality must be the same, translating into the equality of vacuum expectation values (4.1). Here the 4-cycle C_4 is a circle bundle over the 3-cycle/3-chain C_3 with fibre p S^1_A + q S^1_B, where p, q are integers and S^1_{A,B} are the A- and B-cycle respectively. As a consequence, the flux-induced M2/D3 tadpole takes the form of an integral over the elliptic fibration Y_4 → B_3, with a combinatorial factor of 1/2. Hence one gets a match between the Chern-Simons terms of 11D and type IIB supergravity by requiring the condition (4.3). We will find below how to write G_4 in terms of F_3 and H_3, and C_4 in terms of C_3, in order to satisfy both (4.1) and (4.3).

¹⁶ Examples of this are the bulk couplings (α')^3 L in (3.2).
¹⁷ See [33,34] for earlier formulations of the F-theory limit. The present one differs from them only in the powers of the base volume V, which is a finite quantity in the F-theory limit. However, since one of the main goals of this paper is to estimate the behaviour of string corrections at large V, it is crucial to derive how the various terms in the action precisely scale with V after the F-theory limit.
On the CP-even side of the 11D action, the kinetic term of the G_4 flux combines with the 8-derivative curvature correction R^4 to give rise, upon using M2-brane tadpole cancellation [100][101][102], to a 3D scalar potential controlled by G_{4−}, the anti-self-dual part of G_4 [42,103]. Correspondingly, on the side of type IIB compactified on B_3, one has a scalar potential which originates from the kinetic term of the G_3 flux, after removing the contribution of its imaginary-self-dual part G_{3+}, fixed by the D3 tadpole. The F/M-theory duality in this case amounts to the statement (4.6) that these two flux potentials match. Moreover, according to the M/F-theory duality, Euclidean M5-branes in M-theory which are 'vertical', i.e. wrapped around a 6-cycle in Y_4 having the structure of an elliptic fibration over a 4-cycle in B_3, descend to Euclidean D3-branes wrapped on the same 4-cycle. This leads us to equating their tensions in the respective units, as in (4.7), where v_f and v are respectively the volumes of the fibre and of a 2-cycle of the base, computed with the M-theory metric in units of ℓ_M, whereas v_b is the volume of a 2-cycle of the base, computed with the type IIB Einstein frame metric in units of α'.¹⁹ After reducing the 11D classical action down to 3D on an elliptically fibred CY fourfold, one finds a 3D action where R̃^(3) denotes the Ricci scalar of the 3D metric g̃^(3)_{μν}, and V_4 is the volume of Y_4 in units of ℓ_M, which can be written in terms of the Kähler form J as in (4.9). To bring this action to the standard 3D Einstein frame, we have to rescale the metric, so that we obtain the Lagrangian (4.11) with a suitably defined 3D Planck mass. On the other hand, the reduction of the type IIB Einstein frame classical action down to 4D on the base B_3 of the elliptic fibration gives a 4D action where V is the volume of B_3 in units of α'. To bring this action to the standard 4D Einstein frame, the 4D metric has to be rescaled, and the 4D Planck mass is then identified via M_p^2 ≡ V/α'. Next, we dimensionally reduce this action to 3D on a circle of radius r√α', obtaining (4.15). In order to match (4.11) and (4.15), we first have to write both of them in terms of the same dynamical fields (in this case just the 3D metric). This leads us to perform another Weyl rescaling (4.16), which turns (4.15) into (4.17). Now we are ready to match the 3D Lagrangians (4.11) and (4.17). Using (4.6) and the definitions of the 3D and 4D Planck masses, we find the relations (4.18). Notice that the first of these equations is consistent with the expected relation between the M-theory (3D) and the type IIB (4D) Kähler potentials, up to a constant shift (which does not affect the field space metric). The second equation in (4.18) can also be rewritten as (4.20), relating ℓ_M and α'. Notice that these are the same relations valid in the trivial fibration case [67]. Moreover it can be easily shown that (4.20) holds also for compactifications to 5D and 7D, indicating that these are universal relations imposed by M/F-theory duality. It is also worth pointing out that the dimensionful volumes of any 2p-cycles (p = 1, 2, 3) of the base of the elliptic fibration (measured with the respective metric and fundamental scale) are the same on both sides of the duality, which is easy to verify using (4.7) and (4.20).
To conclude, we observe that, due to (4.20), one can write down the relation (4.22) expressing G_4 in terms of F_3 and H_3 which, again due to (4.20), guarantees that (4.3) is satisfied, provided that Y_B/√α' is the angular variable of the circle S̃^1_B T-dual to the B-cycle, normalised in such a way that (α')^{−1/2} ∮_{S̃^1_B} dY_B = 1. A few comments are now in order. Even though we talked about 3D actions, the actual duality match concerns the Lagrangians in 3D, which are quantities with the dimension of length^{−3}. This is why we obtained a relation between the two fundamental scales ℓ_M and α'. Internally, in contrast, we take the dimensionless coordinates used to parametrise the base of the elliptic fibration to be the same on the two sides of the duality; more precisely, we identify them as in (4.24). The differentials of these coordinates are used to expand the various forms in the respective contexts (e.g. G_4 in terms of dX^J_M, and F_3 and H_3 in terms of dX^J_IIB). For this reason, (4.24) combined with (4.22) allows us to derive formulas connecting the form coefficients on the two sides of the duality. These relations essentially state that dimensionless, metric-independent quantities are duality invariant, and this is particularly useful when estimating the volume behaviour of the terms in the 11D Lagrangian generating the low-energy scalar potential after compactification. In addition, using this observation, it is easy to prove (4.6) when the classical scalar potentials are written in terms of the form coefficients. In the following we shall exploit this result to analyse perturbative corrections to the 4D EFT.
General framework
In this section we propose a scheme to argue for or against the existence of certain α' corrections in the 4D F-theory effective action. In particular, we shall focus on corrections to the tree-level flux potential due to 8-derivative terms in the 11D M-theory action.²⁰ In full generality, the 3D action obtained from reducing M-theory on a fourfold Y_4 contains a scalar potential of the form (4.26). Up to order ℓ_M^6 the higher derivative couplings entering it are, schematically, those collected in (4.28). Strictly speaking, these are only the 8-derivative couplings appearing in the CP-even sector: we ignore CP-odd terms as derived in [62], which do not contribute to V^(M).²¹ Recalling that 4D Poincaré invariance requires the 4-form flux to have exactly 1 leg along the fibre, and applying the results of Sec. 4.1, the tree-level contribution in (4.26) takes the form (4.29), where the prefactor involving the circle radius disappears by undoing the Weyl rescaling (4.16) of the 3D metric. Furthermore, V^{−2} reproduces the correct volume scaling of the tree-level scalar potential in 4D Einstein frame. Finally, r√α' generates the 4-th dimension upon taking the F-theory limit, where y is the dimensionful coordinate along the circle; this is just the opposite process to the one in Eq. (4.15). All in all, the F-theory limit results in (4.30). The contribution to the scalar potential of the tree-level term in (4.29) is v_f-independent, therefore leading to (4.6). In what follows, we shall focus on metric contractions of the 8-derivative terms in (4.28) and on their behaviour under the F-theory limit to derive the volume scaling of α' corrections to the 4D scalar potential.
A metric ansatz for elliptically fibred CY fourfolds
For the subsequent dimensional analysis, and in contrast to [5], we require an ansatz for the internal metric in order to distinguish between the scaling with respect to the fibre and the base volume. The former determines the behaviour of a given metric contraction in the F-theory limit v_f → 0, whereas the latter specifies the V dependence of the corresponding correction in V^(F). In this sense, the upcoming analysis goes beyond the type IIB arguments of [5] by identifying all relevant M-theory structures responsible for α' effects in F-theory.²²

²¹ These couplings might however become relevant once our scheme is applied to deriving corrections to other 4D quantities.
²² Here we focus solely on the zero-mode KK reduction of the M-theory action which, as we argue below, is not sufficient to generate all α' effects in F-theory compactifications.

To start with, we recall that Y_4 is a Kähler manifold and both base and fibre are Kähler submanifolds of Y_4. For this reason, the various metric components are obtained from a Kähler potential which can be split into two pieces. We denote local complex coordinates on Y_4 as Z^A with A = 1, ..., 4, and divide them into the fibre coordinate ζ^a with a = 1, and base coordinates z^α with α = 1, 2, 3. Then the Kähler potential on Y_4 can be split as

K(Z, Z̄) = K_b(z, z̄) + K_f(ζ, ζ̄; z, z̄) ,   (4.31)
where K_b and K_f are respectively the Kähler potential of the base and the fibre, and the non-triviality of the fibration is encoded in the dependence of K_f on z and z̄. In the following, we assume that all metric components scale with integer powers of v_f and v, so that the two pieces can be written as K_f = v_f k_f and K_b = v k_b,²³ where k_f and k_b are two scale-independent functions. Our assumption is justified because the Kähler form can be expanded as J = v_f ω_f + v ω, where ω_f and ω are the harmonic (1,1)-forms Poincaré dual to the horizontal and vertical divisor respectively. In other words, there are no divisors wrapping only a 1-cycle of the fibre. Let us denote the metric components as in (4.33). Given that in the F-theory limit K^f_{ab̄} ∼ K^f_{aβ̄} ∼ K^f_{αβ̄} ∼ v_f and K^b_{αβ̄} ∼ v, the components of the metric and its inverse scale as in (4.34), implying (4.35). This result may also be obtained directly from (4.9), using ω_f^2 = −ω_f ∧ c_1(B_3) with c_1(B_3) the first Chern class of the base. For the same reason, (4.7) gets corrected accordingly, which implies the useful formula (4.37). Next we compute the connection coefficients and the Riemann tensor; up to symmetries and complex conjugation, the non-vanishing components on a generic Kähler manifold are the standard ones, and in App. A we give the details of the various components after the fibre/base split Z^A → (ζ^a, z^α). Concentrating only on the parametric volume dependence, we use (4.33) and (4.34) to find the scalings of the connection coefficients collected in (4.39), where the dots encode additional terms of higher order in v_f/v. Similarly, the non-vanishing components of the curvature tensor satisfy the scalings collected in (4.40).

²³ We use the fact that v_f does not vary as we move over the base since J is closed in a Kähler manifold. According to [104], however, deviations from Kählerity are possible due to backreaction effects, starting at order ℓ_M^9. The present analysis is purely classical and does not take into account such effects.

Table 2 (Parameter : Specification):
  λ_f    : net number of inverse fibre metrics g^{ab̄}
  λ_b    : net number of inverse base metrics g^{αβ̄}
  λ_mix  : net number of inverse mixed metrics g^{aβ̄}
  λ      : net number of inverse metrics
  x      : number of tensors with non-trivial scaling
  λ_crit : critical value of λ_f for finite results as v_f → 0
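A minimal sketch of the scalings referred to in (4.33)-(4.35), under the integer-power assumption above (our own schematic rendering; the precise expressions are in the original equations and in App. A):
\[
g_{a\bar b}\sim v_f\,,\quad g_{\alpha\bar\beta}\sim v\,,\qquad
g^{a\bar b}\sim v_f^{-1}\,,\quad g^{a\bar\beta}\sim g^{\alpha\bar\beta}\sim v^{-1}\,,\qquad
V_4=\frac{1}{4!}\int_{Y_4}J^4\;\sim\;v_f\,v^3\Big(1+\mathcal O(v_f/v)\Big)\,,
\]
where the last estimate uses J = v_f ω_f + v ω together with ω_f^2 = −ω_f ∧ c_1(B_3).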
No (α')-odd terms from dimensional reduction
We are now ready to perform our scaling analysis. The idea is to look at all 8-derivative Lorentz-invariant contractions allowed to appear in (4.28). We will actually be more general and, analogously to (2.23) for 10D type IIB string theory but ignoring dilaton factors, we schematically denote any 11D higher derivative term as in (4.41), with all indices taken along internal directions. As for the type IIB case, given that we are interested just in scaling considerations, we can set L = 0 without loss of generality, since each power of (∇G_4)^2 scales as R G_4^2. We now play a multi-parameter game counting all possible contractions of (4.41), using the various parameters summarised in Tab. 2. We want to build contractions using all possible metric components. The total number of inverse metrics satisfies λ = λ_f + λ_b + λ_mix (4.42), which we use to eliminate λ_b in favour of the other parameters. Contrary to the type IIB discussion in Sec. 2.2, we have to take into account the possible non-trivial scaling of the various connection coefficients and Riemann tensor components as in (4.39) and (4.40). We count these scaling factors with an additional parameter x.
A standard KK reduction of the various 8-derivative contractions results in a scaling of the 4D scalar potential V with respect to base and fibre volume which, using (4.34), (4.35) and (4.37) and setting M_p = 1, takes the form (4.43), where the bracket encodes an expansion in powers of v_f^{3/2}/V^{1/3} and λ_mix drops out since g^{aβ̄} ∼ g^{αβ̄}. Naively, from (4.43) one may worry that the reduction of the general 11D term (4.41) might give rise to divergent contributions in the F-theory limit v_f → 0. However our scaling analysis does not allow us to determine the coefficients of α' corrections to the 4D scalar potential arising from 11D higher derivative terms. We therefore assume that all apparently divergent terms in (4.43) come along with vanishing coefficients, so that M/F-theory duality holds at all orders in α'. Below we provide concrete evidence for this assumption. Let us stress that the only input in (4.41) is parity invariance, which constrains all terms to be even in powers of G_4. Moreover, only particular kinematic structures are expected to appear in the 11D action, which is however not fully known yet, even at the 8-derivative level. The special nature of the compact geometry also plays a crucial rôle in determining which terms survive after reduction.
For the above reasons, the terms in (4.43) that are amenable to give non-trivial contributions to the 4D scalar potential are those independent of v_f. This allows us to deduce a critical value for the number of inverse fibre metrics λ_f. If a finite term in the F-theory limit arises at order o in the expansion in v_f^{3/2}/V^{1/3}, such a critical value is given by (4.44). This relation implies that (λ − 1) must be a multiple of 3 in order to have λ_crit ∈ N,²⁴ i.e. λ = 1 + 3n with n ∈ N. Given that the ℓ_M order of the generic term (4.41) is counted by ℓ = 2(P + R + 2L), using (4.42) we can easily infer ℓ = 6(n − R − L), which implies that higher derivative corrections in M-theory should appear only at order ℓ_M^{6p} with p ∈ N, assuming they contribute in the F-theory limit. Remarkably, this is exactly what follows from the general structure of M-theory higher derivative couplings conjectured by [57].
If we now plug (4.44) back into (4.43), we obtain (4.45). Analogously to the type IIB result (3.1), corrections to the 4D scalar potential do not depend on the detailed choice of contractions, but only on the net number of inverse metric factors λ = 1 + 3n, which is fixed for a given 11D term. Combining (4.45) with (2.32) for q = (2/3)(n + 2), we realise that the (α')^m order is given by (4.46), where we have used (4.42) and we have set r = R, since the order F^{2r} or D^{2r} of the F- or D-term expansion of the 4D EFT is counted by the number of G_4 powers in 11D, given that F-term contributions arise when G_4 reduces to G_3 while D-term effects emerge when G_4 reduces to F_2. The result (4.46) shows clearly that the classical KK reduction of the M-theory action can only give rise to (α')-even corrections to the 4D scalar potential at different F- or D-term order. This is in agreement with [33,34,36,85]. Notice the crucial factor of 2/3 in (4.46), which implies an important difference in the counting of the (α')^m order in comparison with the type IIB analysis performed in Sec. 2.2 (setting L = 0), as expressed in (4.47), where the only allowed values of P and R are those that satisfy 4R + P = 3n with n ∈ N. Primary examples of contributions to the α' expansion of the 4D scalar potential from the classical KK reduction of generic 11D terms are: (i) the tree-level 11D term G_4^2 (P = −1, R = 1 and L = 0) which, according to (4.42) and (4.46), gives λ = 4, m = 0 and V ∼ V^{−2}, corresponding either to the classical flux potential at order F^2 or to tree-level moduli-dependent FI terms at order D^2; (ii) the ℓ_M^6 11D term R^4 (P = 3, R = 0 and L = 0) which would yield λ = 4, m = 2 and V ∼ V^{−2}, and so a potential (α')^2 correction which however would not contribute to the 4D scalar potential, since it is cancelled by the self-dual part of the flux kinetic term [33,42]; (iii) the ℓ_M^6 11D term R^3 G_4^2 (P = 2, R = 1 and L = 0) which gives λ = 7, m = 2 and V ∼ V^{−8/3}, corresponding to (α')^2 corrections at O(F^2) (or potential (α')^2 corrections to FI-terms), in agreement with explicit reductions performed in [34,36,85]. Whether these (α')^2 effects correct the scalar potential or give rise just to moduli redefinitions is still an open issue. Interestingly, (4.47) implies that the 10D higher derivative term R^3 G_3^2 is not naively related to the corresponding 11D R^3 G_4^2 term by a classical reduction, since the first corresponds to (α')^3 effects while the second would generate (α')^2 corrections. Results for different 11D terms are summarised in Tab. 3. Let us now comment on our metric ansatz (4.31). We worked with a general K_f which is not necessarily flat. One might have assumed instead an ansatz for K_f which is quadratic in ζ, as in (4.48),²⁵ which would yield a so-called semi-flat fourfold metric which is flat when restricted to the fibre [106]. In particular, since K_f is only quadratic in ζ, it satisfies ∂_c g_{ab̄} = 0. Looking at the expressions listed in App. A, this implies that the components R^α_{acd}, R^a_{bce} and the related ones with mostly fibre indices are suppressed by an additional factor of v_f/v. Combined with (4.40), this suggests that components of R^a_{•••} and R^α_{•••} with more than one fibre index downstairs scale with a positive power of v_f. Ultimately, restricting to the ansatz (4.48) causes all corrections to the 3D scalar potential to vanish in the F-theory limit. However (4.48) is the correct expression for K_f only away from singular fibres.
Thus our analysis proves that the classical reduction of 11D terms captures only effects due to 7-branes in F-theory compactifications of M-theory.

Table 3: Summary of results for some (α')-even corrections to the 4D potential at different F- and D-term order, from classical reduction of higher derivative M-theory terms in the F-theory limit. For simplicity, we set x = o = 0 in λ_crit because the volume scaling in V^(F) is independent of both.
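A small numerical cross-check of the counting rules above (a Python sketch of ours; the parametrisation of λ in terms of (P, R, L), as well as the closed form m = 2(n − R), are our reading of (4.42)-(4.46), checked against examples (i)-(iii)):

def m_theory_scaling(P, R, L=0):
    """Volume scaling and (alpha')^m order from the classical reduction of an 11D term,
    schematically R^(P+1) G_4^(2R) (grad G_4)^(2L) with all indices internal.
    Assumes lambda = (P+1) + 4R + 5L net inverse metrics, lambda = 1 + 3n,
    q = 2(n+2)/3 and m = 2(n - R)."""
    lam = (P + 1) + 4 * R + 5 * L
    assert (lam - 1) % 3 == 0, "term does not survive the F-theory limit"
    n = (lam - 1) // 3
    q = 2 * (n + 2) / 3   # V ~ V^(-q)
    m = 2 * (n - R)       # (alpha')^m order of the resulting 4D correction
    return lam, m, q

# Examples (i)-(iii) quoted in the text
assert m_theory_scaling(-1, 1) == (4, 0, 2.0)   # G_4^2: classical flux/FI potential, V^-2
assert m_theory_scaling(3, 0) == (4, 2, 2.0)    # R^4: would-be (alpha')^2, cancelled by tadpole
assert m_theory_scaling(2, 1) == (7, 2, 8 / 3)  # R^3 G_4^2: (alpha')^2 at O(F^2), V^-8/3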
This is consistent with the discussion in Sec. 3, where we argued that (α')-even corrections are induced just by higher derivative couplings on D7-brane worldvolumes. On the other hand, the type IIB closed string degrees of freedom are not captured by classical reductions, since they would generate (α')-odd corrections to the 4D scalar potential. This raises the obvious question of how the well-known (α')^3 effects in type IIB CY threefold compactifications [19], and orientifold generalisations thereof [28], can actually be recovered from F/M-theory duality. We will discuss this issue in more detail in Sec. 5. Let us conclude by commenting on the limitations of our procedure. Here we are only able to predict the α' order of a given correction, and not whether this correction actually appears in the 4D EFT or not. Clearly, some of these terms could be washed away by applying field redefinitions [34,35]. Finally, given that our analysis is only classical, we are unable to account for possible loop effects in type IIB [107,108] and F-theory [36] which could generate ln V-type (α')^2 corrections in the 4D scalar potential.²⁶
Absence of divergences in known kinematic structures
In the previous analysis we assumed that all terms that would naively diverge in the v_f → 0 limit are actually multiplied by vanishing coefficients, to which our analysis is insensitive. This is essentially the requirement that the M/F-theory duality makes sense beyond tree-level in α'. In this section we give some evidence in this direction, following a logic that works for any Kähler metric in the compact space.
The 8-derivative R^4 couplings can be organised, as in (4.49), in terms of the 8D Euler density E_8 and a second kinematic structure J_0 (for definitions and conventions see App. A). We now argue that, even though our analysis predicts divergent terms as v_f → 0, they cancel among each other in the 3D scalar potential (effectively due to Kählerity of Y_4). First of all, J_0 can be expressed in terms of the Weyl tensor C_{MNPQ} [115,116]. We decompose J_0 into an internal and an external part. As already noted in [42], the external part vanishes because C_{MNPQ} = 0 in 3D. Furthermore, the relevant integral of the internal part vanishes for Ricci-flat Kähler manifolds (see App. A for details) [117,118]. Nonetheless, within J_0 there are contractions which would clearly be divergent according to (4.45), since λ_f = 3 > 1 = λ_crit. However the full kinematics proves that such terms must be multiplied by a vanishing coefficient. Therefore from (4.49) we realise that the only contributions to the 3D potential are those associated with the 8D Euler density E_8. Putting all legs along the internal directions, we recover the result of [33,42] in terms of the 4th Chern class defined in (A.8). By definition this quantity is topological and, in particular, finite in the v_f → 0 limit. In fact, it contributes to the M2-tadpole, thereby cancelling the self-dual part of G_4 in the tree-level scalar potential (4.4) [42,[100][101][102][103]]. We next investigate the higher derivative term R^3 G_4^2, whose 11D kinematics was determined in [62,119] in terms of the structures t_8 t_8 G_4^2 R^3 and ε_11 ε_11 R^3 G_4^2 (see App. A for definitions and conventions). As before, we compactify both terms on Y_4 and argue that there are no divergent terms stemming from (4.55). The term ε_11 ε_11 R^3 G_4^2 does not contribute to the 3D scalar potential V^(M) for dimensional reasons: ε_D vanishes identically when all of its indices are put along d < D directions. This suggests that only terms within t_8 t_8 G_4^2 R^3 are potentially dangerous in the F-theory limit. In App. A we managed to show that a cancellation of divergent terms does occur in a subset of terms within this kinematic structure. Full absence of divergences is achieved through additional assumptions about the metric ansatz. For instance, it turns out that imposing the condition (4.56)²⁷ guarantees that there are no divergent contractions stemming from R^3 G_4^2 or R^2 (∇G_4)^2, although there remain dangerous terms in e.g. R^4 and R^2 G_4^4. While (4.56) remains a conjecture, we stress again that the absence of divergences should really hold true due to M/F-theory duality.
α' corrections from 11D loops
In the previous section we found convincing evidence that classical KK compactifications on smooth elliptic fourfolds of the ℓ_M-corrected 11D supergravity action can lead, upon taking the F-theory limit, to only (α')-even corrections to the 4D scalar potential. This procedure however restricts to zero-modes only, ignoring loops of non-zero KK and winding modes. This implies that (α')-odd effects have to emerge from 11D loops, potentially together with additional (α')-even corrections.
This agrees with the findings of [68,71,120], where it has been shown that closed string degrees of freedom at higher order in the α' expansion are encoded in non-zero winding states on the T^2. For instance, the well-known coefficient of the type IIB R^4 coupling is invisible to a classical T^2 reduction of M-theory, while it can be derived by a 1-loop calculation in the 11D superparticle formalism [68].²⁸ A direct comparison to string amplitudes in [112] led to the observation that such 1-loop amplitudes in 11D supergravity contain complete information about both perturbative and non-perturbative corrections in g_s. This derivation takes into account the interplay between field theoretic loop effects and stringy winding modes when compactifying to lower dimensions, as opposed to the standard classical procedure of simply ignoring non-zero KK and winding states.
We will argue that generalising the computation of 11D loops to elliptic CY fourfold compactifications of M-theory is a way to recover (α')-odd corrections in 4D upon taking the F-theory limit. Performing this computation explicitly is a difficult task, since deriving corrections to the 4D scalar potential would require investigating purely internal 1-loop amplitudes in a non-trivially curved background.²⁹ As usual, this brings along all sorts of complications, such as the factor ordering in the Hamiltonian. In addition, the torus is now the elliptic fibre over a base manifold, and so τ becomes a monodromic function of the base coordinates. A complete evaluation of the amplitude is therefore beyond the scope of this paper.

²⁷ This condition is trivially satisfied for the ansatz (4.48).
²⁸ Specifically, this is a Schwinger-type computation based on a string-inspired formalism [121] applied to the 11D superparticle [68]. A famous example is the Brink-Schwarz superparticle [122,123] as a zero-mode approximation of the Green-Schwarz superstring [124,125]. Rather than using covariant quantisation in the pure spinor formulation [126,127], the calculus is based on light cone quantisation [71]. The framework of a Brink-Schwarz-like superparticle was shown to be equivalent to the 11D pure spinor formalism in [128] (see also [129,130] for discussions).
²⁹ See [121,131] for brief discussions of backgrounds more intricate than T^2.
Nevertheless, determining the volume dependence of the 4D scalar potential is in principle possible. For simplicity we shall focus on 8-derivative terms which arise at 1-loop level for the simple case of trivial fibrations, where all classical contributions vanish in the F-theory limit according to our findings in Sec. 4.3. Recall that the fibre volume is constant as a function of the base since the fibre itself is a Kähler submanifold of Y_4 (ignoring backreaction effects violating Kählerity) [49]. From the 3D EFT perspective, Q-point effective vertices of the form (4.41) with Q = P + 1 + 2(R + L) read schematically: where Q = 4 + R at the 8-derivative level. The first term is the contribution of zero-modes associated with the classical KK reduction, and so generates the α' corrections discussed in Sec. 4.3. In the decompactification limit V_4 → ∞, it leads back to the 11D M-theory action (see footnote 30). The non-zero modes are instead encoded in the second term in (5.1), where the fibre volume dependence is partially due to the integral over Schwinger time. Moreover, p_f = 2R is the number of KK momenta along the fibre appearing in the vertex operators, while the functions F(τ, τ̄) are generalisations of the typical Eisenstein series appearing in type IIB (see footnote 31). The dots in formula (5.1) mean that in non-trivial fibrations we do not expect such amplitudes to take a factorised form, like the term with K_{PRL} multiplying the factor in brackets (which is instead the full answer for trivial fibrations). Such a factorised structure would also imply that in principle there might be divergent terms from contractions (see footnote 32). This is because we potentially multiply by additional negative powers of v_f at the non-zero winding level. Similarly to Sec. 4.3, we therefore assume that the coefficient of these terms has to vanish. Moreover, the base does not admit non-trivial 1-cycles, and so there is no obvious counterpart for multiple windings of the superparticle worldline around internal directions.
Still, nothing prevents us from applying these techniques to trivial fibrations. The procedure then becomes a 2-stage process where we initially compute the 1-loop amplitude for compactifications on a T^2, and subsequently reduce the 9D result on a CY threefold before taking the v_f → 0 limit. By the duality arguments presented in Fig. 1, the two scenarios: are of course equivalent for trivial fibrations. Proceeding as with the classical reduction in Sec. 4.3, we identify the volume scaling of a generic higher derivative correction to the 4D scalar potential at the non-zero winding level as in (5.3) (setting again M_p = 1), where we used (4.43). 30 As explained in [68], the numerical coefficient of this term cannot be determined by the 11D loop amplitude, and must be fixed by the UV completion of the theory, which is M-theory itself. 31 Formula (3.7) in [28] should contain one such generalisation, implicitly given as an integral over the base of the elliptic fibration. In that context, it pops up through a different (12D-inspired) derivation of (α')^3 corrections of the 4D EFT which makes no use of M-theory. 32 The factorised structure arises from tracing over fermionic zero modes, which is independent of the winding sector in T^2 compactifications. Since we are considering here internal contributions in 8D rather than external ones in 9D as in [68], we expect this problem to be alleviated once we examine in more detail the fermion and 11D vertex operators under their decomposition in SO(1,10) → SO(1,2) × SO(8).
Focusing on p_f = 2R, this gives rise to: Non-zero contributions to the 4D scalar potential arise when λ_f = λ_crit, implying from (5.3) that they would scale as: This analysis gives evidence that (α')-odd corrections to the 4D scalar potential at different F- or D-term order should arise from the quantum reduction of the M-theory action. Although these results look promising, this is certainly not the full picture since we did not investigate higher loops and non-trivial fibrations. For instance, the results of [23] suggest that (α')^2 corrections enjoy a non-trivial modular behaviour which can only arise from a proper treatment of KK and winding modes on an elliptically-fibred K3 manifold. We therefore expect that 11D loops, when computed on non-trivial fibrations, should also generate (α')-even effects. Moreover, we have not yet considered the presence of non-perturbative degrees of freedom from M2/M5-brane instantons, which certainly raises new challenges [132]. These are important questions for F-theory compactifications that deserve further scrutiny.
Conclusions
This paper provides a step towards a systematic understanding of the α' expansion in F-theory, with the final goal of classifying the moduli dependence of arbitrary perturbative corrections to the 4D scalar potential of type IIB string theory, where moduli stabilisation is best understood. Understanding at which order in α' and g_s the characteristic no-scale structure of these compactifications gets broken is fundamental for controlling moduli stabilisation, which is the primary goal in connecting string theory to low-energy particle physics and cosmology.
Table 4: Summary of (α')^3 corrections to the 4D potential at different F-term order from quantum reductions of M-theory compactified on trivially-fibred fourfolds. For convenience, we set x = o = 0 in λ_crit because the volume behaviour in V(V) is independent of both.
The first part was concerned with the picture of type IIB CY orientifold compactifications. By exploiting the two approximate scaling symmetries of the underlying 10D theory, combined with supersymmetry and shift symmetry, we managed to infer the dependence on the dilaton and the CY volume of an arbitrary perturbative correction in α' and g_s to the 4D scalar potential at different orders in the low-energy superspace derivative expansion. Due to the absence of (α')^1 corrections in 10D and 8D, and the fact that (α')^2 corrections enjoy an extended no-scale cancellation [24], we deduced that the dominant no-scale breaking effects at string tree-level arise from known (α')^3 corrections [19,28], modulo potential logarithmic corrections.
However, higher orders in g_s require further scrutiny. This is because we reduce higher dimensional theories with more than 16 supercharges on Kähler manifolds to 4D N = 1 supergravity theories with 4 supercharges by retaining only KK zero modes. At string tree-level, all 4D corrections originate from the higher dimensional effective actions. Starting from string 1-loop, however, additional states such as KK or winding states with non-vanishing charge become relevant by participating in amplitudes with low-energy states. Thus at the loop level it remains obscure whether the severe reduction of the number of supercharges yields additional α' corrections.
Such effects were for instance observed in [20] via loop corrections due to the exchange of KK and winding modes in N = 2 and N = 1 toroidal orientifold compactifications. Despite many efforts, their origin from the worldvolume theory of D7-branes wrapped on 4-cycles continues to be vague. A hint in this direction might come from analysing loop amplitudes of 11D supergravity compactified on elliptically fibred CY fourfolds, briefly introduced in Sec. 5. Such an undertaking would allow for an exact-in-g_s statement regarding (α')^1 corrections because these amplitudes efficiently capture string loop and non-perturbative corrections. In App. B we have however shown that (α')^1 loop effects, if present at all, instead of destabilising known LVS vacua, can give rise to new dS minima in a regime where the EFT can be under control.
In the second part of this paper we addressed instead the issue of α' corrections in the 4D F-theory effective action from compactifications of M-theory on elliptically fibred CY fourfolds Y_4. In the context of the F/M-theory duality, we derived scaling relations between the variables in the two duality frames. We utilised a general ansatz for the metric on Y_4 depending only on integer powers of the fibre volume v_f and of the 2-cycle volume v on the base. The split of the metric components along base and fibre directions allowed us to define the parametric volume scaling of various tensor components. Subsequently we performed an exhaustive dimensional analysis of a generic higher derivative 11D term constructed from R, G_4 and ∇G_4. This investigation showed that, in conventional KK reductions of M-theory on Y_4, only (α')-even corrections survive in the 4D F-theory limit. This procedure does not allow us to make statements about possible cancellation effects, as some surviving terms may be identically zero. However, we can state which contributions vanish in the limit v_f → 0. In particular, we found that all corrections in 4D necessarily disappear for trivial fibrations and even for the semi-flat ansatz of [105], since they are killed by the F-theory limit.
Overall these findings provide convincing evidence that our treatment of F-theory to extract the low energy effective action needs to be revised in order to capture (α')-odd effects. Historically this might not really come as a surprise given that the (α')^3-corrected 10D type IIB action (2.20) cannot simply be recovered from classical KK reductions of M-theory on a T^2, but only when winding modes along the torus are properly integrated out [68]. In other words, the type IIB bulk or closed string degrees of freedom are associated with winding states on the T^2 in the Vol(T^2) → 0 limit. Therefore we argued that incorporating KK and winding states on the elliptic fibration is crucial in understanding the full range of α' corrections in F-theory compactifications. We hope to address some of these issues in the future.
A Compactifications on elliptically fibred Calabi-Yau manifolds
In this appendix we summarise useful definitions and identities relevant for the bulk of this paper. We compute Riemann tensor components for an elliptic fibration in order to determine their non-trivial volume scaling in Sec. 4.2. Furthermore, we expand on the discussion in Sec. 4.3 about the absence of divergent terms in the higher derivative structures R^4 and R^3 G_4^2.
A.1 Definitions and conventions
We start by giving some definitions and conventions for the various tensor structures encountered in the bulk of the paper. For 11D coordinates, we use capital letters M, N, P, . . . as indices. We mostly work with quantities along the internal directions of an elliptically fibred CY manifold. We denote n-dimensional complex coordinates Z^A with capital letters A, B, . . . = 1, . . . , n. Similarly, complex coordinates on the base are defined as z^α using Greek indices α, β, . . . = 1, . . . , n − 1, and on the fibre as ζ^a with small letters a, b, . . . = 1.
Hermitian, Kähler and Calabi-Yau manifolds
Let X be a compact Hermitian manifold of complex dimension n with real coordinates {x^1, . . . , x^{2n}}. We define complex coordinates Z^A, A = 1, . . . , n, as: where √g = det(g_{AB̄}) and J is the Kähler form: The non-vanishing connection coefficients and curvature tensor components are (together with the corresponding complex conjugates): Furthermore, the curvature 2-form is defined as: A Hermitian manifold X is Kähler if its Kähler form J is closed, dJ = 0. The associated metric g_{AB̄} is referred to as the Kähler metric. In local coordinates Z^A, it is obtained from a Kähler potential K via: Since dJ = 0 implies ∂_A g_{BC̄} = ∂_B g_{AC̄}, the connection coefficients and Riemann tensor components enjoy the additional symmetries: Finally, we call X a CY manifold if its canonical bundle is trivial. Then, the 4th Chern class is given in terms of the curvature 2-form by: where:
Higher derivative structures
At the 8-derivative level, the M-theory action contains higher derivative corrections of the schematic form summarised in (4.28). In the CP-even sector, the corresponding index structures are nicely encoded in terms of the tensor t_8 as well as the totally anti-symmetric Levi-Civita symbol ε_D in D dimensions. The tensor t_8 is defined as [118,133]: for an anti-symmetric matrix M. It is furthermore symmetric under the exchange of pairs of indices, while anti-symmetric within each pair of indices, i.e.: In Lorentzian space, we use a convention for the totally anti-symmetric tensor in an orthonormal frame where ε_{0 1 2 ... 10} = +1. In terms of the generalised Kronecker δ, we write: with s = 1 (s = 0) in Lorentzian (Euclidean) signature. The higher derivative corrections (4.49) to the Einstein-Hilbert term are encoded in the two quantities [67,68,111-114]: Furthermore, formula (4.55) expanded reads [62,119]:
A.2 Details on the dimensional analysis
In this appendix we present additional material in support of the analysis in Sec. 4.
Absence of divergences
We now want to prove formula (4.52). This becomes clear when considering the tensor: which for Ricci-flat (but not necessarily Kähler) manifolds is related to J_0 as J_0 = g^{UT} Z_{UT}. The authors of [88] showed that Z_{UT} ≡ 0 on Kähler spaces. This can easily be seen by switching to complex coordinates (see footnote 33), where: For Kähler manifolds, we can further use: to rewrite the first term in (A.24) in such a way that: Since for Kähler manifolds R_{B̄FGĒ} = R_{B̄GFĒ} is symmetric under the exchange of the labels G, F, we find: This implies that J_0 vanishes in compactifications of both type IIB on CY threefolds and M-theory on CY fourfolds. In the former case, J_0 encodes the full R^4 dependence of the 10D action, which is why there is no contribution to the scalar potential from R^4. Now we turn our attention to the kinematic structure t_8 t_8 R^3 G_4^2. We are going to show that at least a specific subset of terms contained within this structure is free of divergences when reduced on Y_4. Indeed, starting from the definition (A.17) and switching to complex coordinates as above, one can show that on Kähler spaces: + c.c.
33 Here, we make use of g_{MN} = g_{AB̄} + g_{ĀB} and R_{MNPQ} = R_{AB̄CD̄} + R_{ĀBC̄D} + R_{AB̄C̄D} + R_{ĀBCD̄}, which holds on any Kähler manifold. In particular, one finds the useful identities R_{ABMN} = R_{ĀB̄MN} = 0.
Although proving this claim in full generality without making any further assumptions about the metric seems out of reach, we may be able to provide clear evidence. We assume that the cancellation of divergences must be manifest for all (p, q)-types of fluxes independently. Further, we observe that only (A.30) contains contributions from (4, 0)-flux, for which (A.28) reduces to: Each G_4 has at most one index along the fibre such that: where for the (4, 0)/(0, 4)-components:
(G^2)_{aᾱbβ} = G_{āᾱ}^{σρ} G_{bβσρ} = g^{σλ} g^{ρμ} G_{āᾱλμ} G_{bβσρ} ,   (A.34)
(G^2)_{aᾱβγ} = 2 G_{āᾱ}^{λb} G_{βγλb} = 2 g^{λμ} g^{bν} G_{āᾱμν} G_{βγλb} ,   (A.35)
(G^2)_{γᾱδβ} = G_{γᾱ}^{aλ} G_{δβaλ} = g^{ab} g^{λμ} G_{γᾱbμ} G_{δβaλ} + g^{aμ} g^{λb} G_{γᾱbμ} G_{δβaλ} .   (A.36)
B Vacua from potential (α')^1 loop effects
In Sec. 3 we have seen that the leading no-scale breaking effects at tree-level in g_s should arise from (α')^3 corrections. These effects scale as V^{-3} and are crucial to give rise to LVS vacua [4,39]. Interestingly, V^{-8/3} corrections cannot come from (α')^2 10D effects at any order in g_s due to the extended no-scale structure [21,22,24,39] (see however [107,108]), while they could emerge at tree-level from non-zero F-terms of matter fields, corresponding to T-brane uplifting contributions [87]. Potentially dangerous V^{-7/3} corrections can instead arise just at O((α')^1) at string loop level g_s^n with n > 0. In this appendix we show that, if present at any order n > 0, these corrections would not destroy LVS vacua but would lead to a new class of vacua with potentially interesting phenomenological properties.
Scalar potentials with (α')^1 loop corrections
We focus on the simple model X_3 = CP^4_{[1,1,1,6,9]} [18] with 2 Kähler moduli and volume form: The Kähler potential including (α')^k F^2 corrections with k = 1, 2, 3 reads: where, according to our previous discussion, α̂ = α g_s^n/√g_s with n > 0 (the other powers of g_s can be identified from the scaling arguments of Sec. 2.2), β̂ = β/g_s and ξ̂ = ξ/g_s^{3/2}. After setting the axion to its VEV, the scalar potential obtained from (2.29) and (B.2) in the limit where α̂/V^{1/3} ≪ 1 and a_s τ_s ≫ 1 reads: with (setting e^{K_cs} = 1): λ_2 = 2 a_s A_s g_s , λ_3 = g_s/8 , λ_4 = 6 λ_3 , (B.6)
34 As discussed in [28], the value of ξ is corrected by contributions from O7-planes/D7-branes so that in the weak coupling limit χ(X_3) → χ(X_3) + 2 ∫_{X_3} D_{O7}^3. Since one typically works with Fano bases in F-theory which have an ample anti-canonical bundle, the integral contributes with a positive sign, see e.g. footnote 5 in [134]. Crucially, LVS requires ξ > 0 and hence χ(X_3) < 0, which could be spoilt here, although this has not been observed in most examples discussed in the literature [135-140].
Notice that the term ∼ β̂/V^{8/3} is absent due to the extended no-scale structure, which is why β̂ appears at leading order only inside ξ̂. Clearly, if α̂ ≠ 0, the usual balance of terms in LVS is destroyed. Of course, this does not mean that all hope is lost, as we now discuss.
Minimisation
We can derive simple conditions for the existence of minima of (B.5) by requiring ∂V/∂V = ∂V/∂τ_s = 0, which leads to (in the a_s τ_s ≫ 1 limit): The second equation is identical to the LVS condition: In the α → 0 limit this relation reproduces the standard LVS result τ_s = (9√2 ξ̂)^{2/3} ∼ 1/g_s [4], while for α ≠ 0 we obtain: showing that the volume at the minimum is not exponentially large anymore, unless α̂ ≪ 1. The stationary points of the full potential (B.5) can be obtained by looking at the intersection between (B.10) and (B.12). In the remainder of this appendix, we discuss two classes of minima depending on the sign of α.
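As a purely illustrative aid (not the authors' code), the sketch below shows how stationary points of a potential with the schematic scalings quoted in this appendix can be located by a brute-force scan over (V, τ_s). Since the explicit expressions (B.5), (B.10) and (B.12) are not reproduced in this excerpt, the functional form and all coefficients lam1..lam4 below are hypothetical; as discussed in the text, whether one finds an AdS or a dS minimum depends on the sign and size of the V^(-7/3) term.

```python
# Illustrative brute-force search for stationary points of a toy two-field potential.
import numpy as np

a_s = np.pi
lam1, lam2, lam3, lam4 = 1.0, 1.0, 1.0, -1e-4   # hypothetical coefficients

def V_toy(vol, tau_s):
    """Toy potential with the volume scalings V^-1 e^-2a\tau, V^-2 e^-a\tau, V^-3, V^-7/3."""
    return (lam1 * np.sqrt(tau_s) * np.exp(-2 * a_s * tau_s) / vol
            - lam2 * tau_s * np.exp(-a_s * tau_s) / vol**2
            + lam3 / vol**3
            + lam4 / vol**(7.0 / 3.0))           # assumed (alpha')^1 loop term

vols = np.logspace(2, 8, 400)        # scan of the CY volume
taus = np.linspace(1.0, 10.0, 400)   # scan of the small-cycle modulus
VV, TT = np.meshgrid(vols, taus)
grid = V_toy(VV, TT)
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print(f"grid minimum at V ~ {VV[i, j]:.3e}, tau_s ~ {TT[i, j]:.2f}, V_pot ~ {grid[i, j]:.3e}")
```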
AdS vacua for α > 0
We begin our analysis with explicit examples for α > 0. To find the values of V and τ_s at the minimum, we compute the intersection of (B.10) and (B.12) numerically. For illustrative purposes, we focus on the following choice of underlying parameters: The potential is shown in Fig. 2. The fact that the vacuum energy has to be negative can be easily inferred from the fact that the first 3 terms in (B.5) scale as V^{-3} after substituting (B.10), while the last term scales as V^{-7/3}. Hence for V → ∞ the potential approaches zero from below since the V^{-7/3} term has a negative coefficient. Notice that the minimum (B.14) satisfies our approximations since α̂/V^{1/3} ≃ 0.015 and (a_s τ_s)^{-1} ≃ 0.09.
dS vacua for α < 0
Let us now analyse the parameter regime α < 0. In this case the minimum can be dS since the potential (B.5) approaches zero from above for V → ∞, given that the coefficient of the V^{-7/3} term is now positive. Hence different choices of the microscopic parameters can give rise to either an AdS or a dS minimum followed by a maximum (or better a saddle point from the 2-field perspective) at larger V-values. The presence of two stationary points can be verified numerically by the existence of two intersections between (B.10) and (B.12) for α < 0. Let us illustrate this situation with two choices of underlying parameters. In the first case we choose: The two minima are shown respectively in Figs. 3 and 4 and are both in the regime where our approximations are trustable. Finally, it is worth stressing that this dS minimum exists only in a finely tuned regime of values for α. Nonetheless, given the current debate about the existence of metastable dS vacua in string compactifications [141,142], it is important to find new examples of dS vacua which do not rely on additional sources such as anti-branes. We have therefore found that potential (α')^1 loop effects, rather than being a danger for moduli stabilisation, can provide new ways to achieve dS vacua at values of the CY volume which are not exponentially large, but still large enough to keep the EFT under control. In fact, even if we are balancing (α')^1 against (α')^3 corrections, the α' expansion is still under control due to the fact that (α')^1 effects arise at loop level while (α')^3 terms are at tree level in g_s. | 24,327.8 | 2021-06-08T00:00:00.000 | [
"Mathematics"
] |
Auditory Inspired Convolutional Neural Networks for Ship Type Classification with Raw Hydrophone Data
Detecting and classifying ships based on their radiated noise provides practical guidelines for the reduction of the underwater noise footprint of shipping. In this paper, detection and classification are implemented by auditory inspired convolutional neural networks trained on raw underwater acoustic signals. The proposed model includes three parts. The first part is performed by a multi-scale 1D time convolutional layer initialized by auditory filter banks. Signals are decomposed into frequency components by the convolution operation. In the second part, the decomposed signals are converted into the frequency domain by a permute layer and an energy pooling layer to form the frequency distribution found in the auditory cortex. Then, 2D frequency convolutional layers are applied to discover spectro-temporal patterns, as well as to preserve locality and reduce spectral variations in ship noise. In the third part, the whole model is optimized with a classification objective function to obtain appropriate auditory filters and feature representations that are correlated with ship categories. The optimization reflects the plasticity of the auditory system. Experiments on five ship types and background noise show that the proposed approach achieved an overall classification accuracy of 79.2%, an improvement of 6% over conventional approaches. The auditory filter banks adapted in shape to improve classification accuracy.
Introduction
Ship radiated noise is one of the main sources of ocean ambient noise, especially in coastal waters. Hydrophones provide real-time acoustic measurement to monitor underwater noise in chosen areas. However, automatic detection and classification of ship radiated noise signals are still quite difficult at present because of multiple operating conditions of ships and complexity of sound propagation in shallow water. Various signal processing strategies have been applied to address these problems. Most of the efforts focus on extracting features and developing nonlinear classifiers.
Extracting appropriate ship radiated noise features has been an active area of research for many years. Hand-designed features typically describe ship radiated noise in terms of waveform, spectral and cepstral characteristics. Zero-crossing features and peak-to-peak amplitude features [1,2] were presented to describe the rotation of the propeller, but their performance is greatly reduced in noisy shallow seas. Features based on wavelet packets [3] were extracted, but it is difficult to determine the appropriate wavelet decomposition level. In addition, a multiscale entropy method [4] was proposed to detect and recognize ship targets. Spectral features [5] and cepstral coefficient features [5,6] have also been extracted. However, these methods always suffer from limited a priori knowledge of the signals.
Architecture of Auditory Inspired Convolutional Neural Network for Ship Type Classification
The human auditory system has a remarkable ability to deal with the task of ship radiated noise classification [21,22]. It is of practical significance to model the human auditory system mathematically. The human auditory system includes two basic regions of stimulus processing. The first region is the peripheral region [23]. In this region, the incoming acoustic signal is transmitted mechanically to the inner ear, where it is decomposed into frequency components at the cochlea. The second region of the system processes the neural signal to form auditory perception in the auditory cortex, so that the listener can discriminate between different sounds. The nature of an auditory model is to transform a raw acoustic signal into representations that are useful for auditory tasks [9].
In this paper, the two regions of the auditory system are simulated for ship type classification within a single model named the auditory inspired convolutional neural network. The structure of the proposed model is shown in Figure 1. The proposed model includes three parts: the first part is inspired by the cochlea and takes raw underwater acoustic data as input. This part is performed by a 1D time convolutional layer. The convolutional kernels are initialized by Gammatone filters based on research on the human cochlea. A collection of decomposed intrinsic modes is generated at the output of this layer. The second part is inspired by the auditory cortex and takes the output of the time convolutional layer as its input. This part includes a permute layer, an energy-pooling layer, 2D frequency convolutional layers and a fully connected layer. The permute layer and energy-pooling layer convert the decomposed signals into the frequency domain. The 2D frequency convolutional layers are applied to preserve locality and reduce spectral variations in ship noise. In the third part, the whole model is optimized with an objective function of ship type classification. A more general way to express the process is: the time convolutional layer yields different simple intrinsic modes of ship noise that help the feature learning in deep layers and help ship target classification at the output layer. At the same time, the Gammatone filters and features are optimized by the CNN to obtain appropriate representations that are correlated with ship categories.
Figure 1. Auditory inspired convolutional neural network structure. In the time convolutional layer, four colors represent four groups of auditory filters with different center frequencies and impulse widths. In the permute layer and energy-pooling layer, decomposed signals are converted to frequency feature maps, each of which corresponds to a frame. In the frequency convolutional layers, convolution operations are implemented on both the time and frequency axes. At the end of the network, several fully connected layers and a target layer are used to predict targets.
Learned Auditory Filter Banks for Ship Radiated Noise Modeling
The response properties of cochlea have been studied extensively. In cochlea, signals are encoded with a set of kernels. The kernels can be viewed as an array of over-lapping band pass auditory filters that occur along basilar membrane. These filters' center frequencies increase from the apex to the base of the cochlea. In addition, their bandwidths are much narrower at lower frequencies [24,25]. This property is appropriate for describing ship radiated noise, since the energy of ship radiated noise is mainly concentrated in lower frequencies.
Auditory Filter Banks and Time Convolutional Layer
One common mathematical approximation of cochlear filter banks is the Gammatone kernel function (Gamma-modulated sinusoid) [26], a linear filter described by its impulse response. The Gammatone impulse response is given by: where f is the center frequency in Hz, φ is the phase of the carrier in radians, a is the amplitude, n is the filter's order, b is the bandwidth in Hz, and t is time in seconds. Center frequency and bandwidth are set according to an equivalent rectangular bandwidth (ERB) filter bank cochlea model, which is approximated by the following equation [27]: In this paper, 128 Gammatone filters with center frequencies ranging from 20 Hz to 8000 Hz are generated. Four Gammatone filters are shown in Figure 2a. Figure 2b shows the magnitude responses of the Gammatone filter banks, and Figure 2c shows the relationship between center frequencies and bandwidths. However, the Gammatone filters need to be optimized for the following reasons: (1) there is a fixed bandwidth for a given center frequency; this assumption is not matched by auditory reverse correlation data, which show a range of bandwidths at any given frequency [9]; (2) the ERB filter bank cochlea model provides linear filters, which do not account for nonlinear aspects of the auditory system [27]; (3) auditory filter banks designed from perceptual evidence always focus on the properties of signal description rather than the classification purpose [16].
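A minimal Python sketch of the filter bank described above: 128 Gammatone impulse responses with ERB-spaced centre frequencies between 20 Hz and 8000 Hz at a 16 kHz sampling rate. The standard fourth-order Gammatone form, the Glasberg-Moore ERB-rate spacing, the 1.019 bandwidth factor and the kernel length are assumptions made for illustration rather than details taken from the paper.

```python
import numpy as np

def erb_bandwidth(fc_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def gammatone_kernel(fc_hz, fs_hz=16000, length=800, order=4, phase=0.0):
    """Finite-length Gammatone impulse response, L2-normalised."""
    t = np.arange(length) / fs_hz
    b = 1.019 * erb_bandwidth(fc_hz)   # bandwidth scaling factor (assumption)
    g = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc_hz * t + phase)
    return g / np.linalg.norm(g)

def erb_spaced_centres(f_lo=20.0, f_hi=8000.0, n_filters=128):
    """Centre frequencies spaced uniformly on the ERB-rate scale."""
    erb_rate = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)
    inv_erb_rate = lambda e: (10**(e / 21.4) - 1.0) / 4.37e-3
    return inv_erb_rate(np.linspace(erb_rate(f_lo), erb_rate(f_hi), n_filters))

filter_bank = np.stack([gammatone_kernel(fc) for fc in erb_spaced_centres()])
print(filter_bank.shape)   # (128, 800)
```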
The first layer in the proposed CNN architecture is a time convolutional layer over the raw time domain waveform. A CNN is a kind of artificial neural network which performs a series of convolutions over input signals. We use a physiologically derived set of Gammatone filters to initialize this layer for sound representation. The output of each filter is mathematically expressed as the convolution of the input with its impulse response. The convolutional kernels can then be interpreted as representing a population of auditory nerve spikes. As shown in Equation (4), the waveform x is convolved with the trainable Gammatone kernel k_j and put through the activation function f to form the output feature map y_j. Each output feature map is given an additive bias b_j. The time convolutional operation acts only on the time axis. There is one output for each kernel and the dimensionality of each output is identical to that of the input: To optimize the kernel functions, a gradient-based algorithm is derived to update them along with the parameters in deeper layers. The optimized convolutional kernels can be viewed as a set of band-pass finite impulse response filters which correspond to different locations of the basilar membrane. The relationship between center frequencies and bandwidths is optimized to match the classification task.
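A hedged sketch of the time convolutional layer: a trainable Keras Conv1D whose kernels are overwritten with the Gammatone impulse responses built in the previous sketch (filter_bank). The activation function and layer configuration are assumptions; the paper's own implementation is not reproduced here.

```python
import numpy as np
import tensorflow as tf

n_filters, kernel_len = filter_bank.shape          # (128, 800) from the sketch above

time_conv = tf.keras.layers.Conv1D(
    filters=n_filters,
    kernel_size=kernel_len,
    padding="same",            # output length equals input length, as stated in the text
    activation="relu",         # the activation f is not specified in this excerpt
)

frame = tf.random.normal((1, 4096, 1))             # one 256 ms frame at 16 kHz
decomposed = time_conv(frame)                      # builds the layer: shape (1, 4096, 128)

# Overwrite the random initial kernels with the Gammatone impulse responses;
# the layer stays trainable, so the kernels are updated by back-propagation.
kernel = filter_bank.T[:, np.newaxis, :].astype("float32")   # (kernel_len, 1, n_filters)
bias = np.zeros(n_filters, dtype="float32")
time_conv.set_weights([kernel, bias])
```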
Multi-Scale Convolutional Kernels
Gammatone filters with similar center frequencies are more correlated with each other and always have similar impulse widths. The Gamma envelopes of the Gammatone filters are shown in Figure 3a,b. The relationship between impulse widths and center frequencies is shown in Figure 3c. The impulse widths range from 50 to 800 points at a 16 kHz sampling frequency. The impulse widths become wider at lower frequencies. As suggested by Arora [28], in a layer-by-layer construction, the correlation statistics of each layer should be analyzed by clustering it into groups of units with high correlation. In this paper, filters with similar impulse widths are clustered into one group. The grouping of filters is performed by quartering, with each of the groups having the same number of filters. The widths of the four groups are set to 100, 200, 400, and 800 points, respectively. In each group, we create multiple shifted copies of each filter's impulse response. Another parameter to be selected for the time convolutional layer is the number of kernel functions. Filter banks with more than 16 Gammatone kernels have more channels than strictly necessary, but increasing the number allows greater spectral precision [29]. We use a set of 32 kernel functions in each group.
The multi-scale convolutional kernels have several advantages: first, convolutional kernels with varying lengths could cover multi-scale reception field to provide a better description of sounds. Second, correlation statistics of signal components can be analyzed by filter bank groups. Third, fewer parameters in multi-scale kernels can prevent overfitting and save on computing resources, especially for filters with narrower impulse width.
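The following sketch illustrates one possible realisation of the multi-scale arrangement: four parallel 1D convolution groups of 32 trainable kernels each, with impulse widths of 100, 200, 400 and 800 samples, concatenated along the channel axis. Initialising each group with the corresponding Gammatone sub-bank (as in the previous sketch) is omitted for brevity.

```python
import tensorflow as tf

def multi_scale_time_conv(inputs, widths=(100, 200, 400, 800), filters_per_group=32):
    """Apply four kernel-width groups in parallel and stack their outputs."""
    branches = [
        tf.keras.layers.Conv1D(filters_per_group, kernel_size=w, padding="same",
                               activation="relu", name=f"time_conv_w{w}")(inputs)
        for w in widths
    ]
    return tf.keras.layers.Concatenate(axis=-1)(branches)   # (batch, time, 128)

frame_in = tf.keras.Input(shape=(4096, 1))   # one 256 ms frame at 16 kHz
time_features = multi_scale_time_conv(frame_in)
```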
Auditory Cortex Inspired Discriminative Learning for Ship Type Classification
In the auditory system, cochlear nerve fibers at the periphery are narrowly tuned in frequency [30]. In the proposed model, the time convolutional layer yields representations that correspond to different frequencies. The auditory cortex is involved in tasks such as segregating and identifying auditory "objects". Neurons in the primary cortex have been shown to be sensitive to specific spectro-temporal patterns in sounds [30], and they are likely to reflect the fact that the cochlea is arranged according to frequency. Inspired by this property, we propose the permute layer, energy-pooling layer and frequency convolutional layer.
Permute Layer and Energy-Pooling Layer
After the time convolutional layer, the rest of the network is constructed by first converting each output into a time-frequency distribution. This is one way of describing the information our brains get from our ears. Assume that the output of the time convolutional layer has dimensions l × n × m, where l is the frame length, and n and m represent the number of frames and time feature maps, respectively. This output is permuted into dimensions l × m × n in the permute layer; thus we get n output feature maps, each of which corresponds to a frame and has dimensions l × m. Then, each output feature map is pooled over the entire frame length by computing the root-mean-square energy in the energy-pooling layer, so that the energy of each signal component is summed up within regular time bins. Layer normalization is applied to normalize the energy sequences. Figure 4 illustrates the decomposition of a time domain waveform and the time-frequency conversion, using underwater noise radiated from a passenger ship. The filters' outputs show that the waveform is decomposed into the corresponding frequency components. The bottom right part of Figure 4 shows that each component is converted into the frequency domain.
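A minimal NumPy sketch of the permute and energy-pooling stages for the 12-frame, 128-channel case used later in the paper: each decomposed frame is reduced to its per-channel root-mean-square energy and the resulting energy sequences are layer-normalised. The shapes and the small constant eps are illustrative.

```python
import numpy as np

def energy_pooling(decomposed, eps=1e-8):
    """decomposed: array of shape (n_frames, frame_len, n_channels)."""
    rms = np.sqrt(np.mean(decomposed**2, axis=1))      # (n_frames, n_channels)
    # layer normalisation over the channel axis of each frame
    mu = rms.mean(axis=1, keepdims=True)
    sigma = rms.std(axis=1, keepdims=True) + eps
    return (rms - mu) / sigma                          # time-frequency map

tf_map = energy_pooling(np.random.randn(12, 4096, 128))   # 12 frames of 256 ms
print(tf_map.shape)                                        # (12, 128)
```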
Frequency Convolutional Layer and Target Layer
Neurons in the primary auditory cortex have complex patterns of sound-feature selectivity. These patterns indicate sensitivity to stimulus edges in frequency or in time, stimulus transitions in frequency or intensity, and feature conjunctions [31]. Thus, in the proposed model, several 2D frequency convolutional layers are applied to discover the time-frequency edges of the ship radiated noise based on the output of the pooling layer. Convolution operations are performed on both the time and frequency axes. The dimension of each convolutional kernel is 3 × 3. The design of the frequency convolutional layers is matched to the processing characteristics of auditory cortical neurons. These layers also preserve locality and reduce spectral variations of the line spectrum in ship radiated noise.
The output from the last frequency convolutional layer is flattened to form the input of a fully connected layer. To obtain a probability over every ship type for each sample, the end of the network is a softmax target layer and the loss function is the categorical cross entropy. The output y is computed by applying the softmax function to the weighted sums of the hidden layer activations s. The ith output y_i is: The cross entropy loss function E for multi-class output is: where t is the target vector. The parameters of the time convolutional layer, the frequency convolutional layers and the fully connected layers are optimized jointly with the softmax target layer. Both the auditory filter banks and the feature representations are thus optimized to be correlated with ship category by an optimization algorithm that reflects the plasticity of the auditory system.
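A hedged assembly of the remaining stages: 3 × 3 2D frequency convolutions over the (frame, frequency) map, a flatten, fully connected layers and a softmax target layer trained with the categorical cross entropy. The depth, widths and pooling choices below are assumptions, since Table 1 is not reproduced in this excerpt; only the 3 × 3 kernels, the softmax output and the loss follow the text.

```python
import tensorflow as tf

n_classes = 6                                   # 5 ship types + background noise
tf_map_in = tf.keras.Input(shape=(12, 128, 1))  # (frames, frequency channels, 1)

x = tf_map_in
for n_kernels in (32, 64):                      # assumed depth and width
    x = tf.keras.layers.Conv2D(n_kernels, (3, 3), padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D((1, 2))(x) # pool only along the frequency axis

x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)

classifier = tf.keras.Model(tf_map_in, outputs)
classifier.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```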
Experimental Dataset
Our experiments were performed on 64 h of measured ship radiated noise acquired by the Ocean Networks Canada observatory. Acoustic data were measured using an Ocean Sonics icListen AF hydrophone placed at latitude 49.00811°, longitude −123.33906° and 144 m below sea level. The sampling frequency of the signal was 32 kHz and it was down-sampled to 16 kHz in our experiments. The acquired acoustic data were combined with Automatic Identification System data. Ships in normal operating conditions present within a 2 km radius of the hydrophone deployment site were recorded. To minimize noise generated by other ships, no other ships were present within a 3 km radius of the hydrophone deployment site for each recording. The ship categories of interest are Cargo, Passenger ship, Pleasure craft, Tanker and Tug. Classification experiments were performed on the five ship categories and background noise. Spectrograms of signals for these classes are shown in Figure 5. The acquired original dataset has 474 recordings. Each recording can be sliced into several segments to make up the input of the neural network. The length of the segments used for classification can be adjusted according to the acquired signal; the input layer of the network is then adjusted accordingly. Every segment was classified independently. The classification results obtained on 3 s segments were more stable and accurate than those obtained with shorter segments. This may be because burst noise in the acquired signal has a greater negative impact on the recognition of short segments. For a given network structure, longer segments result in greater space complexity. Owing to the limitation of memory capacity, training the network with longer segments requires a smaller batch size and can even cause out-of-memory errors. Considering the computational cost and classification accuracy, the experiments in this paper were performed on segments of 3 s duration. Thus, the dataset consists of 76,918 segments. For each category, about 10,000 segments were used for training and 2500 segments were used for testing. In order to simulate a real application situation, segments from one recording were never split between the training dataset and the test dataset. Each signal was divided into short frames of 256 ms, so each sample is a 4096 × 12 data matrix.
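A minimal sketch of the segmentation described above: each 16 kHz recording is cut into segments of 12 non-overlapping 256 ms frames (4096 samples each), so that every sample presented to the network is a 4096 × 12 matrix. The helper name and the mock recording are illustrative.

```python
import numpy as np

FS = 16000                    # Hz, after down-sampling
FRAME_LEN = 4096              # 256 ms frames
FRAMES_PER_SEGMENT = 12       # ~3 s per segment, matching the 4096 x 12 input size
SEGMENT_LEN = FRAME_LEN * FRAMES_PER_SEGMENT

def recording_to_segments(signal):
    """Slice one recording into an array of shape (n_segments, FRAME_LEN, FRAMES_PER_SEGMENT)."""
    n_segments = len(signal) // SEGMENT_LEN
    trimmed = signal[: n_segments * SEGMENT_LEN]
    segments = trimmed.reshape(n_segments, FRAMES_PER_SEGMENT, FRAME_LEN)
    return segments.transpose(0, 2, 1)         # each sample is a 4096 x 12 matrix

samples = recording_to_segments(np.random.randn(FS * 60))   # a mock 60 s recording
print(samples.shape)                                         # (19, 4096, 12)
```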
Classification Experiments
The classification performance of the proposed method was compared to CNNs with the same structure but with a randomly initialized time convolutional layer or an untrainable Gammatone-initialized time convolutional layer. The proposed method was also compared to CNNs trained on hand-designed features. These hand-designed features included waveform features, wavelet features, MFCC, Mel-frequency features, nonlinear auditory features, and spectral and cepstral features. The two-pass split window (TPSW) [32] is applied after the short-time fast Fourier transform to enhance the signal-to-noise ratio (SNR). The TPSW filtering scheme provides a mechanism for obtaining smooth local-mean estimates of the signal. Mel-frequency features were extracted by calculating the log Mel-frequency magnitude. Nonlinear auditory filters [33] with 128 channels were utilized to extract nonlinear auditory features. The first 512 cepstral coefficients were extracted as cepstral features. The other features were described in our previous paper [20]. Each signal was windowed into 256 ms frames before extracting features. The extracted features on frames were stacked to create a feature vector. The TensorFlow Python library, running on an NVIDIA GTX 1080 graphics card (Santa Clara, CA, USA), was used to perform the bulk of the computations. Table 1 shows the CNN structures used in the classification experiments. Table 2 shows the hyper-parameters of the proposed model. Table 3 shows the classification performance of the different approaches. The proposed model achieved the highest accuracy of 79.2%. The baseline system, a CNN trained on spectral features, had a classification accuracy of 73.2%. The proposed method therefore gave a 6% improvement in accuracy compared to the baseline. The accuracy of the proposed model was clearly higher than that of CNNs trained on other hand-designed features. The CNNs with a randomly initialized time convolutional layer and with an untrainable Gammatone-initialized time convolutional layer gave accuracies of 60.8% and 75.3%, respectively. The results indicate that auditory filter banks together with the back-propagation algorithm helped the CNN to discover better features. Table 4 shows the precision, recall and F1-score obtained from the confusion matrix of the proposed model. The background noise class had the highest recall value of 0.94, while its precision was only 0.73. This indicates that ships could not be detected when they were far away from the hydrophone. The ship classes with the best results were Cargo and Passenger, with F1-scores of 0.87 and 0.86, respectively. The poorest results were obtained for Tug, with a precision of 0.73, recall of 0.54 and F1-score of 0.62. This may be because tugs have a mechanical system similar to that of other classes, or because some tugs were towing other ships during the recording period. Receiver operating characteristic (ROC) curves were constructed from the output of the softmax layer on the test data, assuming in turn that one class was positive and the other classes were negative. Figure 6 shows the ROC curves and area under the curve (AUC) values obtained by the different approaches. The performance of the proposed model, shown in Figure 6j, was significantly better than the other methods for almost all classes. The accuracies for background noise were always higher than for the other classes, which indicates that detecting a ship's presence is easier than classifying its type.
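A hedged sketch of how the one-vs-rest ROC curves and AUC values can be obtained from the softmax outputs on the test set with scikit-learn; the variable names (one-hot test labels and predicted class probabilities) are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def one_vs_rest_roc(y_true_onehot, y_score):
    """y_true_onehot, y_score: arrays of shape (n_samples, n_classes)."""
    results = {}
    for k in range(y_true_onehot.shape[1]):
        fpr, tpr, _ = roc_curve(y_true_onehot[:, k], y_score[:, k])
        results[k] = (fpr, tpr, auc(fpr, tpr))
    return results

# y_score = classifier.predict(test_samples)   # softmax probabilities (hypothetical names)
# roc_by_class = one_vs_rest_roc(test_labels_onehot, y_score)
```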
Visualization and Analysis of Learned Filters
Initializing the CNN weights with Gammatone filters, the CNN optimized the impulse responses during its training process. Figure 7 shows the optimized Gammatone kernels for ship radiated noise in the proposed model. The algorithm modified the amplitude and impulse width of the Gammatone filters, but the temporal asymmetry and gradual decay of the envelope that match the physiological filtering properties of auditory nerves were preserved. We can also compare population properties of the optimized Gammatone filters with those of conventional Gammatone filters. To illustrate the spectral properties of the optimized filters, we zero-padded every time convolutional kernel w_i to 800 entries, and then calculated the magnitude spectrum W_i: The center frequency f_c^i was calculated as the position of the maximum magnitude: where f_s is the sampling frequency. The bandwidth f_b^i of each filter can be calculated as the equivalent noise bandwidth (9). Figure 8 shows a scatter plot of the bandwidths against center frequencies for the Gammatone filters and for the optimized Gammatone filters. The optimized kernels show a range of bandwidths at any given frequency. The frequencies of the optimized Gammatone kernels no longer have an exact linear correlation with the bandwidths, in contrast to the conventional Gammatone filters. The nonlinearities are located at low frequencies. The energy of ship radiated noise is mainly concentrated below 1 kHz. Differences in the dominant frequency of the radiated noise are related to ship type [34]. The causes of the distinct spectral characteristics are unknown, but they can be reflected in the learned filters to extract the differences between ship types.
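A minimal sketch of the filter analysis described above: each learned kernel is zero-padded to 800 entries, its magnitude spectrum is computed, the centre frequency is read off as the location of the spectral peak, and the bandwidth is estimated as the equivalent noise bandwidth. Function and variable names are illustrative.

```python
import numpy as np

def analyse_kernel(w, fs=16000, n_fft=800):
    """Return (centre frequency, equivalent noise bandwidth) of one time-domain kernel."""
    w_padded = np.zeros(n_fft)
    w_padded[: len(w)] = w                                   # zero-pad to 800 entries
    mag = np.abs(np.fft.rfft(w_padded))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    fc = freqs[np.argmax(mag)]                               # spectral peak position
    enb = np.sum(mag**2) / (np.max(mag) ** 2) * (fs / n_fft) # equivalent noise bandwidth
    return fc, enb

# Example with one conventional Gammatone kernel from the earlier sketch:
# fc, bw = analyse_kernel(filter_bank[0])
```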
Feature Visualization and Cluster Analysis
The feature visualization method t-distributed stochastic neighbor embedding (t-SNE) [35] was used to inspect the learned features. One thousand samples selected randomly from the test dataset were used to perform the experiments. The outputs of the last fully connected layer were extracted as the learned features. The results are shown in Figure 9. The features of the proposed model form a map in which most classes are separated from the others, except for tugs. This result is consistent with the previous classification results. In contrast, there are large overlaps between many classes for features learned from hand-designed features. The results indicate that the features of the proposed model provide better insight into the class structure of the ocean ambient noise data.
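A hedged sketch of the feature visualisation: activations of the last fully connected layer for a random subset of test samples are embedded into 2D with scikit-learn's t-SNE and coloured by class. The feature-extractor construction in the commented lines is hypothetical.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features, labels, perplexity=30, seed=0):
    """features: (n_samples, n_dims) activations; labels: (n_samples,) class indices."""
    emb = TSNE(n_components=2, perplexity=perplexity, random_state=seed).fit_transform(features)
    plt.figure(figsize=(6, 5))
    for k in np.unique(labels):
        sel = labels == k
        plt.scatter(emb[sel, 0], emb[sel, 1], s=5, label=str(k))
    plt.legend()
    plt.tight_layout()
    plt.show()

# feature_extractor = tf.keras.Model(classifier.input, classifier.layers[-2].output)  # hypothetical
# plot_tsne(feature_extractor.predict(test_subset), test_subset_labels)
```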
Conclusions
In this work, we proposed an auditory inspired convolutional neural network for ship radiated noise recognition on raw time-domain waveforms in an end-to-end manner. The convolutional kernels in the time convolutional layer are initialized by cochlea-inspired auditory filters. The choice of auditory filter banks biases our model to decompose the signal into frequency components and reveal the intrinsic information of the targets. The correlation statistics of the signal components are analyzed by constructing a multi-scale time convolutional layer. The auditory filters are optimized for the ship radiated noise recognition task. The signal components are converted to the frequency domain by the permute layer and energy pooling layer to form the "frequency map" of the auditory cortex. The whole model is discriminatively trained to optimize the auditory filters and deep features with the ship classification objective function.
The experimental results show that, during the training of a convolutional neural network, filter banks are adaptive in shape to improve the classification accuracy. The optimization of the auditory filter banks shape is reflected in the relationship between center frequencies and bandwidths.
The proposed approach can yield better recognition performance when compared to conventional ship radiated noise recognition approaches.
Our study developed a robust ship detection and classification model by fusing ship traffic data and underwater acoustic measurements. This work facilitates the development of a unique platform which could monitor underwater noise in chosen ocean areas, with automatic detection and classification capability to identify the contribution of different sources in real time.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,223.4 | 2018-12-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Oscillations of 2D ESTER models. I. The adiabatic case
Recent numerical and theoretical considerations have shown that low-degree acoustic modes in rapidly rotating stars follow an asymptotic formula, and recent observations of pulsations in rapidly rotating δ Scuti stars seem to match these expectations. However, a key question is whether strong gradients or discontinuities can adversely affect this pattern to the point of hindering its identification. Other important questions are how rotational splittings are affected by the 2D rotation profiles expected from baroclinic effects and whether it is possible to probe the rotation profile using these splittings. Accordingly, we numerically calculate pulsation modes in continuous and discontinuous rapidly rotating models produced by the 2D ESTER (Evolution STEllaire en Rotation) code. This spectral multi-domain code self-consistently calculates the rotation profile based on baroclinic effects and allows us to introduce discontinuities without loss of numerical accuracy. Pulsations are calculated using an adiabatic version of the Two-dimensional Oscillation Program (TOP) code. The variational principle is used to confirm the high accuracy of the pulsation frequencies and to derive an integral formula that closely matches the generalised rotational splittings, except when modes are involved in avoided crossings. This potentially allows us to probe the rotation profile using inverse theory. Acoustic glitch theory, applied along the island-mode orbit deduced from ray dynamics, can correctly predict the periodicity of the glitch frequency pattern produced by a discontinuity or by the Γ_1 dip related to the He II ionisation zone in some of the models. The asymptotic frequency pattern remains sufficiently well preserved to potentially allow its detection in observed stars.
Introduction
Much effort has gone into producing realistic models of rapidly rotating stars. This includes the pioneering works by Roxburgh et al. (1965), Ostriker & Mark (1968), and Jackson (1970) and continues on in the present with various 1D codes (e.g. Palacios et al. 2003, Marques et al. 2013) as well as 2D codes such as the one from the Evolution STEllaire en Rotation (ESTER) project (Rieutord & Espinosa Lara 2009, Espinosa Lara & Rieutord 2013, Rieutord et al. 2016). An extensive monograph on the effects of rotation on stellar structure and evolution has also recently been published (Maeder 2009). In parallel, much work has gone into calculating pulsation spectra in such models in order to interpret observations from recent space missions such as CoRoT (Baglin et al. 2009), Kepler (Borucki et al. 2009), and TESS (Ricker et al. 2015). Some of the most recent works include Lovekin & Deupree (2008), Lovekin et al. (2009), Lignières & Georgeot (2008, 2009), Ballot et al. (2010), Reese et al. (2009, 2013), Ouazzani et al. (2015), and Ouazzani et al. (2017). Of these works, only Ouazzani et al. (2015) addresses pulsations in baroclinic stellar models, that is, models in which surfaces of constant pressure, temperature, or density do not coincide. This is a major ingredient of realistic models, as rotating stars are expected to be baroclinic (e.g. Zahn 1992). The work by Ouazzani et al. (2015) used stellar models from Roxburgh (2006) in which the rotation profile is imposed beforehand rather than being calculated in a self-consistent way using energy conservation. In contrast, the ESTER code deduces the rotation profile in a self-consistent way when constructing stellar models. Hence, it is important to study pulsation modes in such models.
One of the first signatures of rotation on stellar pulsations is rotational splittings, the frequency differences between consecutive modes with the same radial order and harmonic degree but different azimuthal orders. At slow rotation rates, rotational splittings can be used to invert 1D or 2D rotation profiles using a first-order perturbative approach (e.g. Deheuvels et al. 2014, Schou et al. 1998, Thompson et al. 2003). At high rotation rates, higher-order effects come into play and must be addressed before meaningful information on the rotation profile can be deduced (e.g. Soufi et al. 1998, Suárez et al. 2009). In this context, a particularly interesting quantity to investigate is the generalised rotational splitting, namely the frequency difference between prograde modes and their retrograde counterparts. In particular, Ouazzani & Goupil (2012) showed that it is possible to distinguish between third-order effects of rotation and latitudinal differential rotation in such splittings. At higher rotation rates, Reese et al. (2009) showed that such splittings are a weighted integral of the rotation profile, provided the degree of differential rotation is not too large. This would potentially provide the basis for carrying out rotation inversions in such stars. This work, however, was restricted to cylindrical rotation profiles and furthermore neglected the influence of the Coriolis force in the integrals. This raises the open questions of whether such weighted integrals can be generalised to general 2D rotation profiles, and if so, how accurate they are.
Another important consideration concerns frequency separations. Indeed, a number of recent studies have shown that the pulsation frequencies of low-degree acoustic modes of rapidly rotating stars follow an asymptotic formula. Such a formula was first explored on an empirical basis before being justified using ray dynamics (Lignières & Georgeot 2008, 2009; Pasek et al. 2011). Reese et al. (2017) studied theoretical pulsation spectra with realistic mode visibilities in rapidly rotating 1.8 and 2 M_⊙ stellar models based on the self-consistent field (SCF) method (Jackson et al. 2005, MacGregor et al. 2007). They showed that it may be possible, depending on the configuration, to detect the rotating counterpart of the large frequency separation, or half its value, as well as frequency spacings corresponding to multiples of the rotation rate. More recently, Mirouh et al. (2019) set up a machine learning algorithm to automatically identify to which class a given mode belongs. They went on to characterise the large frequency separation in a large set of models at different rotation rates and with different core compositions (thus mimicking the effects of stellar evolution), and showed a tight scaling relation between it and the stellar mean density. From an observational point of view, recurrent frequency spacings have been detected in a number of δ Scuti stars (Mantegazza et al. 2012, Suárez et al. 2014, García Hernández et al. 2009, Paparó et al. 2016), including a very recent study involving interferometry, spectroscopy, and space photometry (Bouchaud et al. 2020), and interpreted as the large frequency separation or half its value. García Hernández et al. (2015) studied a number of δ Scuti pulsators in binary systems, for which independent estimates of the mass and radius are available, and have shown that this separation scales with the mean density, as expected based on the calculations in Reese et al. (2008). Ensemble asteroseismology has recently been applied to CoRoT δ Scuti stars by Michel et al. (2017), who also found regular patterns related to the large separation, although we note that Bowman & Kurtz (2018) applied a similar strategy to Kepler δ Scuti stars without the same degree of success. Finally, in the very recent work by Bedding et al. (2020), the pulsation spectra of 57 δ Scuti stars observed by TESS and three by Kepler were matched to axisymmetric ℓ = 0 and ℓ = 1 modes from non-rotating models via echelle diagrams. Such modes were shown to be relatively invariant as a function of rotation rate up to ∼ 0.5 Ω_K, apart from a scale factor related to the mean density, using pulsation calculations in SCF models, thus justifying the use of non-rotating models.
However, it is unclear to what extent the asymptotic formula would hold in the presence of discontinuities within the stellar model. Based on results previously obtained in non-rotating models with sharp gradients (e.g. Monteiro et al. 1994), one can expect the asymptotic formula to still apply albeit with a supplementary oscillatory component. However, it is not clear how strong this component is, how it behaves in the presence of rapid rotation, and whether it can hinder the interpretation of observed oscillation spectra in rapidly rotating stars, as discussed in Breger et al. (2012). The recent works by Bouabid et al. (2013) and Ouazzani et al. (2017) have shown, using the traditional approximation and full 2D pulsation calculations, respectively, how a sharp gradient around the core of rotating γ Dor stars affects g-modes. In particular, they show the presence of a periodic component in the period spacings of such modes, analogous to what was found in the non-rotating case (Miglio et al. 2008), in agreement with observations from the Kepler mission (e.g. Van Reeth et al. 2015). Likewise, similar observations in SPB stars have also revealed an oscillatory behaviour in the period spacing of their g-modes (e.g. Pápics et al. 2017). A similar study is needed for acoustic modes in rapidly rotating stars.
In order to address the above questions, we investigate low-degree acoustic modes in rapidly rotating stellar models from the ESTER code. One of the advantages of the ESTER code is its multi-domain spectral approach, ideal for introducing discontinuities while retaining a high numerical accuracy. The pulsation modes are calculated using a multi-domain spectral version of the Two-dimensional Oscillation Program (TOP; Reese et al. 2006, 2009). The article is organised as follows: the following section describes stellar models based on the ESTER code. This is then followed by a description of the pulsation calculations as well as the variational principle, with a particular emphasis on the effects of discontinuities. Section 4 deals with generalised rotational splittings. Section 5 then goes on to describe the effects of discontinuities, both on the pulsation frequencies and on the eigenmodes. This is then followed by the conclusion.
Stellar models based on the ESTER code
The aim of the ESTER project is to produce and evolve selfconsistent stellar models of rapidly rotating stars. Consequently, a fully 2D approach is used in order to solve the relevant fluid equations while taking into account energy conservation when modelling the stationary structure of the star. This leads to centrifugal deformation of the stellar structure, as well as more subtle effects, namely differential rotation and meridional circulation, resulting from baroclinicity. Consequently, the rotation profile depends on both the radial coordinate and colatitude, and the isobars, isochores, and isotherms are distinct.
In terms of microphysics, it is possible to apply various equations of state (EOS) in ESTER. These include: the ideal gas law with or without radiation pressure, the OPAL EOS (Rogers & Nayfonov 2002), and FreeEOS 1 (Irwin 2012). In what follows, we applied the ideal gas law (without radiation pressure) in the discontinuous models (see below) and one of the continuous models, in order to avoid introducing numerical errors coming from a tabulated EOS, and the OPAL EOS in the other continuous model for the sake of realism. In terms of opacities, there are two options currently implemented: Kramer's opacities and OPAL opacities (Iglesias & Rogers 1996). We used Kramer's opacities in conjunction with the ideal gas law to rely entirely on analytical expressions thus reducing numerical errors, and OPAL opacities with the OPAL EOS for the sake of realism and consistency. Models with Kramer's opacity have significantly larger radii and hence lower mean densities.
Currently, the ESTER code has some limitations. Firstly, it is unable to simulate convective envelopes. Indeed, applying a strong entropy diffusion as is done in the convective core is too approximate for the envelope. Various numerical difficulties have so far prevented the code from converging to a convective solution in such regions. Accordingly, ESTER is currently not suitable for stars with masses below ∼ 1.6 M ⊙ . Secondly, the ESTER code is unable to simulate time evolution using a full chain of nuclear reactions. However, it is possible to alter the core composition in order to mimic the effects of stellar evolution or to include a rudimentary implementation of hydrogen combustion.
From a numerical point of view, the star is divided into multiple domains in the radial direction. There are two main reasons for doing this. First, this allows us to overcome the limitations inherent to using a spectral approach with its imposed collocation grid. In particular, it enables us to have a high resolution near the surface where it is needed. The second reason is that one can place a discontinuity between two domains without losing spectral accuracy. This goes hand in hand with the use of a dedicated coordinate system, (ζ, θ, φ), where ζ is a surface-fitting radial coordinate that is constant across the stellar surface and across the surfaces which delimit the boundary between consecutive domains (see Rieutord et al. 2016, for more details).
In this study, we use 2 M ⊙ stellar models at 70% of the Keplerian break-up rotation rate. We note that this value is not too far from the rotation rates of Rasalhague (α Oph) for which Ω ∼ 0.64Ω K (see e.g. Deupree 2011, Mirouh et al. 2017, and references therein) and Altair for which Ω = 0.74Ω K (Bouchaud et al. 2020), two well-studied δ Scuti stars with photometric observations from the space missions MOST and WIRE respectively. These models use a spectral approach based on Chebyshev polynomials in the radial direction, and spherical harmonics in the horizontal directions. The radial direction is subdivided into eight domains, the resolution in each domain being 30, 55, 45, 40, 40, 50, 70, and 70, that is, a total of 400 radial points. In the horizontal directions, 22 or 32 points on half of a Gauss-Legendre collocation grid are used depending on the model, thus corresponding to 22 or 32 spherical harmonics with even ℓ values. Density discontinuities are achieved by modifying the hydrogen content abruptly. Table 1 gives the characteristics of the five models (Mreal, M, M6, M7, and M7b) involved in this study. In all of the models, Z = 0.02 everywhere. Figure 1 shows where the discontinuity is located in model M6, and Fig. 2 gives the density and sound velocity profiles in the M, M6, and M7 models.
Models Mreal and M are our most realistic models, and serve as a reference, since they do not feature ad hoc discontinuities. Models M6 and M7 include a drop in density near the surface at two different radii, while M7b includes an increase in density near the surface. In realistic models, such as Mreal, such discontinuities are not expected. Instead, more subtle phenomena, such as dips in the Γ1 profile due to the hydrogen and helium ionisation zones, occur near the stellar surface and can lead to a glitch pattern in the frequencies. However, it is still useful to test models with discontinuities as they exaggerate the phenomenon we wish to study, namely acoustic glitches, and should thus make it easier to detect its signature in the pulsation spectrum. Furthermore, one can easily modify the different parameters related to the discontinuity, such as its depth and intensity, in order to study its impact on the frequencies. Finally, the lack of radiation pressure in these models leads to flat Γ1 profiles, meaning that the only glitch signatures expected are those arising from the discontinuities, thus simplifying the subsequent analysis. Nonetheless, the realistic model also allows us to test acoustic glitches related to the Γ1 profile. We do note that stars can have a discontinuity around the core due to the depletion of hydrogen by nuclear reactions. However, acoustic modes are sensitive to the near-surface layers of stars and are thus not the most suitable for studying such discontinuities. This is particularly true of island modes, as the ray trajectory orbits around which they are concentrated remain far away from the convective core for Ω ≳ 0.2 Ω_K (at least for the models in this study). In order to probe such discontinuities, it is more useful to look at gravity-mode glitches (e.g. Miglio et al. 2008, Ouazzani et al. 2017), which is beyond the scope of the present article. In model M7b, the denser layer is on top. At first sight, this may seem unrealistic, but density inversions can occur in the near-surface layers of stars, such as the one shown in Fig. 3 for a 2 M⊙ non-rotating main sequence model from grid B of Marques et al. (2008). Such density inversions typically occur as a result of a sharp temperature drop in low density regions of the star (Marques, private communication).
Table 1. Characteristics of the models used in this study. X_int and X_ext are the hydrogen contents below and above the discontinuity, respectively. R_disc/R_eq gives the equatorial radius at the discontinuity, normalised by the star's equatorial radius. We note that models Mreal and M are smooth.
Pulsation calculations
The pulsation modes are calculated using the Two-dimensional Oscillation Program (TOP, Reese et al. 2006, 2009). This program fully takes into account the centrifugal deformation and has been set up to apply a multi-domain spectral approach, in accordance with the models from ESTER. The next subsections describe the set of pulsation equations, the interface conditions that apply between different domains, the boundary conditions, and the numerical approach.
Pulsation equations
The following set of equations is used to calculate pulsation modes. They are, respectively, the continuity equation, Euler's equation, the adiabatic relation, and Poisson's equation: where quantities with the subscript '0' are equilibrium quantities, those preceded by 'δ' are Lagrangian perturbations, ρ is the density, P the pressure, Ψ the Eulerian gravitational potential perturbation, ξ the Lagrangian displacement, Ω the rotation profile (which depends on ζ, the surface-fitting radial coordinate, and θ, the co-latitude), Λ = 4πG, G the gravitational constant, s the distance from the rotation axis, and e_s the associated unit vector. The term in square brackets (last line of Eq. (2)) does not cancel, since the stellar model is not barotropic. The Lagrangian density perturbation is eliminated in favour of the Lagrangian pressure perturbation using Eq. (3). In the above set of equations, we have neglected the meridional circulation, given that it is expected to have a negligible effect on the pulsation modes.
Non-dimensionalisation
The following reference length, pressure, and density scales are used: where R eq is the equatorial radius and M the mass. As a result of this choice of reference scales, the frequencies are nondimensionalised by the inverse of the dynamic time scale: Using this non-dimensionalisation leads to the same set of pulsation equations as previously (Eqs. (1)-(4)) except that Λ is now equal to 4π.
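As a simple illustration of this choice of units, the Python sketch below converts a dimensionless frequency into a physical one. The specific reference scales (ρ_ref = M/R_eq³, p_ref = GM²/R_eq⁴, t_ref = √(R_eq³/GM)) and the stellar parameters used here are assumptions for illustration, chosen to be consistent with Λ = 4π; they are not quoted from the ESTER/TOP sources.

```python
# Minimal sketch (assumed reference scales, not a transcription of the code used here):
#   rho_ref = M / R_eq^3,  p_ref = G M^2 / R_eq^4,  t_ref = sqrt(R_eq^3 / (G M)),
# so that dimensionless frequencies are omega * t_ref and Lambda = 4 * pi.
import numpy as np

G = 6.674e-8            # gravitational constant (cgs)
M_SUN = 1.989e33        # g
R_SUN = 6.957e10        # cm

def reference_scales(mass_g, r_eq_cm):
    """Return (t_ref, rho_ref, p_ref) for this non-dimensionalisation."""
    t_ref = np.sqrt(r_eq_cm**3 / (G * mass_g))     # dynamical time scale
    rho_ref = mass_g / r_eq_cm**3
    p_ref = G * mass_g**2 / r_eq_cm**4
    return t_ref, rho_ref, p_ref

# Hypothetical 2 M_sun model with an assumed equatorial radius of 2.5 R_sun.
t_ref, rho_ref, p_ref = reference_scales(2.0 * M_SUN, 2.5 * R_SUN)
omega_dimless = 10.0                                # example dimensionless frequency
nu_muHz = omega_dimless / (2 * np.pi * t_ref) * 1e6
print(f"t_ref = {t_ref:.1f} s, nu = {nu_muHz:.1f} microHz")
```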
Interface conditions
Given that ESTER models are calculated over multiple domains, interface conditions are needed to describe the relation between various quantities on either side of the different boundaries. Furthermore, care is needed when expressing these conditions given that some of the models contain discontinuities. The first condition is simply that the fluid domain is continuous. In other words, the deformation of the boundary must be the same on either side. This yields the following first order expression:
ξ_− · n = ξ_+ · n, (7)
where n is the normal to the unperturbed surface, and the subscripts '−' and '+' denote quantities below and above the boundary. This condition does allow the fluid to 'slip' along the boundary. A more detailed derivation is given in App. A.1. A second condition is that the pressure remains continuous across the perturbed boundary. This condition is simply expressed as follows (see App. A.2):
δp_− = δp_+. (8)
The third condition is the continuity of Ψ and its gradient across the perturbed boundary. This is enforced by the following conditions (see App. A.3):
Boundary conditions
As usual, various boundary conditions are needed to complete the system. In the centre, the solutions need to be regular. At the surface, we apply the simple mechanical boundary condition δp = 0. We note that in Reese et al. (2013), a more complex condition was imposed in order to have a non-zero value for δT/T 0 at the surface, useful for mode visibility calculations. However, with such a condition, the pulsation equations do not derive from a variational principle. Here, since we are seeking to obtain accurate frequencies, we prefer the simpler boundary condition (δp = 0), so that we can then apply the variational principle as a supplementary check on the accuracy. Finally, the gravitational potential must match a vacuum potential at infinity. This is achieved by extending the gravitational potential, thanks to Eqs. (9) and (10), into an external domain which encompasses the star and has a spherical outer boundary. The outer boundary condition is then (see Reese et al. 2006): where r ζ = ∂ ζ r = 1 − ε, r ext = 2, and where we have used a harmonic decomposition of Ψ, ℓ being the spherical harmonic degree.
Numerical approach
The above system of equations, as well as the boundary and interface conditions, is discretised using the spherical harmonic basis for the angular coordinates (θ, φ), and using Chebyshev polynomials in the radial direction. This leads to a generalised matrix eigenvalue problem of the form Ax = λBx. This problem is modified using a shift-invert approach to target frequencies around a given shift, σ, before being solved through the Arnoldi-Chebyshev approach (e.g. Braconnier 1993, Chatelin 2012). The multi-domain spectral approach used in the radial direction leads to matrices A and B which are block tri-diagonal. The matrix A − σB (which intervenes in the shift-invert approach) can be efficiently factorised using successive factorisations of the diagonal blocks (including a corrective term from the non-diagonal blocks).
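The shift-invert strategy itself can be illustrated on a toy problem. The sketch below uses SciPy's Arnoldi-based eigensolver to target eigenvalues near a shift σ; it is not the TOP implementation (which relies on its own block factorisation of A − σB), and the tridiagonal operator A is only a stand-in for the discretised pulsation operator.

```python
# Illustrative sketch: solve A x = lambda B x for eigenvalues near a shift sigma
# using SciPy's shift-invert Arnoldi. A and B are toy matrices.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")   # stand-in operator
B = sp.identity(n, format="csc")                            # stand-in "mass" matrix

sigma = 0.05   # target eigenvalues near this shift (analogue of targeting frequencies)
vals, vecs = spla.eigs(A, k=6, M=B, sigma=sigma, which="LM")
print(np.sort(vals.real))
```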
Various numerical resolutions
In order to check the accuracy of the frequencies, it is useful to recalculate the pulsation modes using different radial resolutions or numbers of spherical harmonics. Accordingly, we recalculated 28 to 30 axisymmetric (m = 0) modes in three of the models, using various resolutions. Table 2 gives the maximum relative differences on the pulsation frequencies. The first column corresponds to a 50 % increase in the number of spherical harmonics in the pulsation calculations, that is, the pulsation modes are calculated with N_θ = 60 rather than N_θ = 40 spherical harmonics. The second column corresponds to a ∼ 50 % increase of the radial resolution in the pulsation calculations (after having interpolated the model). Specifically, the resolutions in the eight domains are 45, 85, 70, 60, 60, 75, 105, and 105, that is, a total of N_r = 605 points. Finally, the third column corresponds to a ∼ 50 % increase of the radial resolution both in the model (that is, the model is calculated with ESTER using an increased radial resolution rather than being interpolated) and in the pulsation calculations.
Table 2. Maximum relative differences on pulsation frequencies using various resolutions in three of the models.
Two trends can be seen in Table 2. First, modifying the resolution in both the model and the pulsation calculations has a greater impact than only modifying the resolution of the pulsation calculations. This is expected as the higher resolution will be taken into account in the ESTER convergence process when calculating the model in the former case. We note that no value is provided in the last column of Table 2 for model M6 since ESTER was unable to converge in that situation. Secondly, modifying the number of spherical harmonics in the pulsation calculations has a greater impact than modifying the radial resolution. This probably simply illustrates the need for a sufficient harmonic resolution to resolve the intricate island mode geometry, particularly in model M6. Overall, these differences remain small (especially bearing in mind these are the maximal differences), except possibly for the differences related to the harmonic resolution in model M6.
Variational principle
Another way of checking the accuracy of the pulsation calculations consists in comparing the numerical frequencies with those obtained using a variational formula. Such a formula is an integral relation between the frequencies and their associated eigenfunctions. According to the variational principle, the error on the variational frequency scales as the square of the error on the eigenfunctions (e.g. Christensen-Dalsgaard 1982). A general formulation of the variational principle in differentially rotating bodies has previously been obtained by Lynden-Bell & Ostriker (1967). However, the formulation of some of the terms, notably the use of Green's theorem for the gravitational potential, is not the most suitable for numerical implementation. Previous numerically-friendly expressions, similar to those in Unno et al. (1989), have been obtained in Reese et al. (2006) and Reese et al. (2009), but these expressions were only obtained for uniform or cylindrical rotation profiles, assumed that the star is barotropic, and did not include the effects of discontinuities. In App. B, we give a full derivation for baroclinic models with 2D rotation profiles and discontinuities. The final expression is: where Λ is 4πG, or 4π in the dimensionless case, V_i are the different domains over which the stellar model is continuous, S_i are the surfaces of the discontinuities (including the stellar surface), the subscripts '+' and '−' represent quantities right above and below the discontinuities, respectively (∇P₀⁺ = 0 at the stellar surface), and V_∞ is infinite space (including the star). Figure 4 shows the relative differences between the numerical and variational frequencies for our models. In each case a set of 168 modes with quantum numbers ñ = 19 to 30, l̃ = 0 to 1, m = −3 to 3, was used. We recall that ñ is the number of nodes along an island mode's orbit whereas l̃ is the number of nodes perpendicular to it. These are related to the usual quantum numbers, (n, ℓ, m), of pulsation modes in the non-rotating case via the relations ñ = 2n + ε and l̃ = (ℓ − |m| − ε)/2, where ε ≡ (ℓ + m) mod 2 ≡ ñ mod 2 corresponds to the mode's parity, that is, its symmetry with respect to the equatorial plane (Reese 2008). As can be seen in the figure, relative differences range from 10⁻¹¹ to 10⁻⁴, apart from an outlier in model M6. This compares quite favourably with the typical accuracy obtained with space missions. For instance, Kepler observations spanned up to four years during the main mission, thus leading to a Rayleigh resolution of 0.008 µHz. The frequency at maximum amplitude of δ Scutis can reach approximately 700 µHz (e.g. Bowman & Kurtz 2018), thus leading to a relative precision as low as 10⁻⁵ in the best cases. This is higher than the errors on most of the variational frequencies, except for model M6.
The very high accuracy which is reached in a number of cases is due to the use of spectral methods. Such an accuracy was not reached straight away but rather by repeating the calculation using the numerical frequency as the shift, σ, in the second calculation and refining the solution through supplementary iterations. Factors that decrease the accuracy (even in the second calculation) are the presence of a discontinuity in the stellar model, especially if it is sharp, and the occurrence of avoided crossings (during an avoided crossing between two modes, the frequencies do not cross but the modes progressively exchange their geometric characteristics, thereby leading to a mixture of the two geometries when the frequencies are closest; Figure 3 of Espinosa et al. 2004 provides a nice illustration of a rotationally induced avoided crossing), which lead to island modes that are 'polluted' by contributions from neighbouring modes.
An approximate rotation kernel
In order to understand the approximate effects of differential rotation on pulsation frequencies, we consider a prograde acoustic mode and its retrograde counterpart. The azimuthal orders of these modes will be denoted −m and m, respectively (we are using the 'retrograde' convention, that is, retrograde modes have positive azimuthal orders). Furthermore, the subscript '+' will designate the prograde mode and '−' the retrograde mode. The variational principle can be expressed in the following approximate form for these two modes: where we have used the following definitions/approximations. If the two modes are of sufficiently high frequency so that the Coriolis force only has a small impact, and if the rotation profile is not too differential, then the two modes will be close to symmetric. This means that by taking the difference between Eq. (13) applied to the prograde mode, and the same equation applied to the retrograde mode, the terms 'rest₊' and 'rest₋' nearly cancel. Neglecting the difference between these two terms leads to Eq. (18). This can be re-expressed as Eq. (19). Taking the square root of both sides and assuming C_± ≪ (ω_± ∓ |m| Ω^eff_±) leads to Eq. (20). This equation can finally be rearranged to yield Eq. (21). This equation is particularly interesting because it provides a linear relation between the generalised rotational splitting, which only depends on the frequencies of the modes, and the rotation profile. The weighting function that intervenes in the integral is known as the rotation kernel and only depends on the eigenfunctions. If the Coriolis force is neglected, this equation reduces to the linearised version of Eq. (32) from Reese et al. (2009). Figure 5 shows what a typical rotation kernel will look like for an island mode. As can be seen, the rotation kernel closely follows the geometry of the island mode, much like in Reese et al. (2009). Accordingly, these modes are especially sensitive to the rotation rate in this region, in particular near the surface at mid-latitudes. In Figs. 6 and 7, we compare the generalised splittings with the right-hand sides of Eq. (21) for models M and Mreal, respectively. The latter is for a much more extensive set of modes. As can be seen, a good agreement is obtained in most cases, but there are some notable exceptions. Such exceptions typically occur for avoided crossings. Indeed, the geometry of the modes changes rapidly as a function of the rotation rate during avoided crossings, thereby causing prograde modes and their retrograde counterparts to be at different parts of their avoided crossings and to have different geometric structures. Figure 8 provides an example of such modes. As a result, the terms 'rest₊' and 'rest₋' do not cancel each other out. This interpretation is confirmed in Table 3, which provides a detailed comparison between modes in this situation (Solutions 3 and 4) and those which are not undergoing an avoided crossing (Solutions 1 and 2). By including the difference between the terms 'rest₊' and 'rest₋', it is possible to correct Eq. (21) and improve the agreement by a factor of 20 for Solutions 3 and 4. The supplementary rows in this Table also show that the approximations given in Eqs. (15) and (17) are well justified. Hence, apart from the cases involving avoided crossings, the agreement between the generalised splittings and the weighted integrals of the rotation profile (that is, the right-hand side of Eq. (21)) is excellent, thus potentially providing the basis for probing the rotation profile via inversions.
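The comparison carried out in Figs. 6 and 7 can be summarised schematically as follows. The sketch below checks that a generalised splitting matches the kernel-weighted average of the rotation profile; the kernel, rotation profile, quadrature weights, and sign convention for the splitting are all toy assumptions — real kernels come from the 2D eigenfunctions and the full expression of Eq. (21).

```python
# Schematic check of the splitting-versus-kernel relation (conventions assumed):
# the generalised splitting should match the rotation profile averaged with the
# mode's rotation kernel.
import numpy as np

nz, nt = 100, 80
zeta = np.linspace(0.0, 1.0, nz)
theta = np.linspace(0.0, np.pi, nt)
Z, T = np.meshgrid(zeta, theta, indexing="ij")

# Toy differential rotation profile and a kernel peaked near the surface at mid-latitudes.
Omega = 0.7 * (1.0 + 0.05 * Z**2 * np.sin(T) ** 2)
kernel = np.exp(-((Z - 0.9) ** 2) / 0.01) * np.exp(-((T - np.pi / 4) ** 2) / 0.1)
weights = np.gradient(zeta)[:, None] * np.gradient(theta)[None, :]   # crude quadrature weights
kernel /= np.sum(kernel * weights)                                    # normalise the kernel

omega_eff = np.sum(kernel * Omega * weights)      # kernel-weighted rotation rate

m = 2
omega_pro = 11.83                                 # toy prograde frequency
omega_retro = omega_pro + 2 * m * omega_eff       # retrograde counterpart, consistent by construction
splitting = (omega_retro - omega_pro) / (2 * m)   # one common definition of the generalised splitting
print(splitting, omega_eff)                       # identical here by construction
```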
Acoustic glitches
We now turn our attention to pulsations in the discontinuous models and focus on acoustic glitches. We recall that glitches are regions in the star with a strong gradient or near discontinuity, which can lead to an oscillatory behaviour in the pulsation spectrum (e.g. Monteiro et al. 1994). Figure 9 shows the pulsation frequencies obtained for the various models for modes with ñ = 19 to 30, l̃ = 0 to 1, and m = −3 to 3. As can be seen, these frequencies follow fairly closely the asymptotic formula given in Reese et al. (2009). However, a closer look reveals irregularities in the pulsation spectra of the discontinuous models. This is brought out more clearly with the frequency separations Δñ = ω_{ñ+1,l̃,m} − ω_{ñ,l̃,m}. In Fig. 10, we plot averaged large separations, Δñ = ω̄_{ñ+1,l̃} − ω̄_{ñ,l̃}, where ω̄_{ñ,l̃} is the pulsation frequency averaged over the azimuthal orders m = −3 to 3. This is done in order to reduce the effects of avoided crossings, which tend to be more numerous in the discontinuous models and tend to mask the frequency variations caused by the glitch. Even then, the averaged large separations in the discontinuous models are more irregular than in the continuous model. This raises the question of whether these variations can be explained by glitch theory.
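The averaging procedure behind Fig. 10 can be expressed compactly. The sketch below assumes the frequency table is organised as a mapping from (ñ, l̃, m) to frequency; the toy spectrum used at the end is purely illustrative.

```python
# Sketch of the averaged large separations: frequencies are first averaged over
# m = -3..3 for fixed (n_tilde, l_tilde), then differenced in n_tilde.
import numpy as np

def averaged_large_separations(freqs):
    """freqs: dict mapping (n_tilde, l_tilde, m) -> frequency."""
    mean = {}
    for (n, l, m), w in freqs.items():
        mean.setdefault((n, l), []).append(w)
    mean = {key: np.mean(vals) for key, vals in mean.items()}
    seps = {}
    for (n, l), w in mean.items():
        if (n + 1, l) in mean:
            seps[(n, l)] = mean[(n + 1, l)] - w
    return seps

# Toy spectrum: equidistant frequencies plus a small sinusoidal glitch signature.
freqs = {(n, l, m): 0.5 * n + 0.25 * l - 0.02 * m + 0.01 * np.sin(0.8 * n)
         for n in range(19, 31) for l in (0, 1) for m in range(-3, 4)}
print(averaged_large_separations(freqs))
```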
Glitch analysis and ray dynamics
In order to investigate the behaviour of the frequencies in a more detailed way, we carried out a simplified ray dynamics analysis.
We used the following dispersion relation, valid for axisymmetric modes in the high frequency limit: ω² = c²k², where k is the norm of the wave-vector. A simple reflection was used at the stellar surface, rather than a more realistic but complex approach involving the cut-off frequency (e.g. Lignières & Georgeot 2009). Furthermore, we applied the Snell-Descartes refraction law at the discontinuity: sin(ϑ₊)/c₊ = sin(ϑ₋)/c₋, where ϑ is the angle between the surface normal and the wave-vector, c the local sound velocity, and the subscripts '+' and '−' denote the upper and lower domains at the discontinuity. We neglect the partial wave reflection at the discontinuity, since we are only searching for the island mode periodic orbit. A more complete description of the ray dynamics is provided in App. C. Figure 11 shows the periodic orbit for island modes superimposed on an island mode in model M6. As can be seen, the orbit reproduces very well the location of the mode. Figure 12 then shows the sound velocity, density, and perturbed pressure (δp/√P₀) profiles calculated along the periodic orbit, both as a function of distance along the orbit and of acoustic travel time. As expected, a sharp transition in wavelength occurs at the discontinuity. Furthermore, when plotted as a function of acoustic travel time, the wave takes on a nearly sinusoidal behaviour, as indicated by the comparison with the simple sine curve, apart from a phase shift at the discontinuity and a variable amplitude.
Table 3. Generalised splittings versus weighted integrals of the rotation profile, and different terms from Eq. (13) for two pairs of prograde and retrograde modes. δω_var/ω corresponds to the relative error on the variational frequency, and δω_approx.var/ω is the same error when ω_var is calculated using the approximations in Eqs. (15) and (17). The modes from the second pair, Solutions 3 and 4, are undergoing avoided crossings, whereas the other two modes are not.
These observations provide the basis for a simple toy model, which is described in App. D. According to this model, the frequencies are given to first order by Eq. (24), where τ_T = ∫_surface^equator dr/c is the acoustic travel time from the surface to the equator along the ray path, and τ_1 = ∫_surface^disc. dr/c the acoustic travel time from the surface to the discontinuity, as illustrated in Fig. 11. The quantity ε, defined in App. D (Eq. (D.11)), measures the relative jump in wavenumber across the discontinuity and is treated as a small parameter. As shown in App. D, even for ε = 0.39 (for model M7), Eq. (24) gives an accurate estimate of the glitch period and a rough idea of its amplitude. However, we do not expect the toy model to give an accurate idea of the phase of the glitch pattern on the oscillation frequencies, as this would require fully treating surface effects. Table 4 provides the acoustic travel times τ_T and τ_1 for the different models in our study. Although model Mreal is continuous, we included the τ_1 value for the He II ionisation zone. Indeed, the Γ1 profile undergoes a dip in that region, as illustrated in Fig. 13. Based on these values, Fig. 14 compares the predictions from the toy model with the (l̃, m) = (0, 0) frequencies minus a second or third-order polynomial fit, in order to isolate the glitch pattern. Indeed, using the large separations rather than the frequencies would tend to amplify the impact of avoided crossings, thus making it harder to see the glitch pattern. We note that a second rather than third-order polynomial fit was used for model M7b, given the relatively long period of the glitch pattern, which can be mimicked to some extent by third- or higher-order polynomials. An ad hoc phase was added to the glitch pattern from the toy model, given that this model is not expected to correctly predict the phase, as described above. This allows us to focus on the period and amplitude of the glitch pattern to see how accurate the predictions are. As can be seen from Fig. 14, a nice agreement is obtained for models M7b, Mreal, and to a lesser extent M7. This confirms that the toy model is able to correctly predict the periodicity of the glitch pattern, at least in some cases. The agreement on the amplitude is satisfactory for M7b but rather poor for M7. For model Mreal, an ad hoc amplitude was used for the predicted glitch pattern. Indeed, the toy model was specifically constructed for discontinuities and is therefore unable to predict the amplitude of the glitch pattern for a smoother transition such as what takes place in an ionisation zone. It is nonetheless interesting to note that the amplitude of this glitch pattern decreases at higher frequencies, as would be expected for such a transition. Model M is not expected to show a glitch pattern since it contains no discontinuities and the Γ1 profile is very close to 5/3 throughout the star, as a result of the ideal gas equation of state. The plot shows what is likely to be a fourth-order polynomial residual, as expected when subtracting a third-order polynomial fit, as confirmed by the much smaller scale of the y-axis. In contrast, no agreement is found between the toy model and the glitch pattern for model M6. The reasons for this lack of agreement are not entirely understood, but we do note that most of its island modes are undergoing avoided crossings, in contrast to the other models.
Table 4. Acoustic travel times in various models. The quantities τ_T and τ_1 are illustrated in Fig. 11.
Avoided crossings typically cause the frequencies to deviate from their asymptotic values and could therefore easily mask a glitch pattern.
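The procedure used to isolate the glitch pattern in Fig. 14 is simple enough to be summarised in a few lines. The sketch below removes a low-order polynomial trend from a toy (l̃, m) = (0, 0) frequency sequence and estimates the residual's periodicity; the toy frequencies and the Fourier-based period estimate are assumptions for illustration only.

```python
# Sketch of a Fig. 14-style analysis (details assumed): isolate the glitch pattern
# by removing a low-order polynomial trend, then estimate its periodicity.
import numpy as np

n_tilde = np.arange(19, 31)
# Toy frequencies: smooth trend plus a sinusoidal glitch component.
freq = 0.52 * n_tilde + 1.3 + 5e-3 * np.sin(2 * np.pi * n_tilde / 4.2)

coeffs = np.polyfit(n_tilde, freq, 3)          # third-order fit (second-order for long-period glitches)
residual = freq - np.polyval(coeffs, n_tilde)  # isolated glitch pattern

# Crude period estimate from the discrete Fourier transform of the residual.
spectrum = np.abs(np.fft.rfft(residual))
k = np.argmax(spectrum[1:]) + 1
period_in_n = len(n_tilde) / k
print(residual, period_in_n)
```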
Pulsation mode geometry at the discontinuity
We now investigate in a detailed way the local geometric properties of the island modes in the region where the periodic orbit intersects the discontinuity. Specifically, we check whether the wave amplitudes match the predictions from a local analysis, and whether the angle between the discontinuity and the orbit matches a numerical estimate based on the island mode.
As recalled in App. E, the pulsation mode, including the reflected and refracted waves, can locally be approximated as a superposition of plane waves, where the superscripts '+' and '−' designate the upper and lower domains, respectively, k₁^± and k₂^± the wave vectors, and where the amplitudes, A₁^± and A₂^±, are related via Eq. (27), in which k_⊥^± is the wave vector component perpendicular to the surface. When the tangential component, k_∥, satisfies k_∥ ≪ k_⊥, the factor η reduces to the approximate expression given in Eq. (E.12). We then investigate several m = 0 island modes in different models to extract the amplitudes of the refracted and reflected waves and verify the above equations. We start by extracting the δp/P₀ profile as well as its horizontal and vertical gradients, ∇_∥(δp/P₀) and (∇_⊥(δp/P₀))^±, just above and below the discontinuity. Since we are focusing on axisymmetric modes, the horizontal gradient is in the meridional plane; there is no component in the e_φ direction. Figure 15 shows a zoom on part of an island mode in model M6 and Fig. 16 shows the extracted profiles. The amplitudes of these profiles are estimated thanks to their maximum absolute values. Given that the (∇_⊥(δp/P₀))^± profiles have the opposite sign to the ∇_∥(δp/P₀) profile, these have negative amplitudes. The tangential wave vector component (which is the same above and below) is estimated by calculating the ratio between the amplitudes of ∇_∥(δp/P₀) and δp/P₀. The normal components above and below the discontinuity are obtained via the dispersion relation, thus enforcing Snell-Descartes' law. The individual wave amplitudes are obtained by calculating appropriate linear combinations of (∇_⊥(δp/P₀))^±/k_⊥ and ∇_∥(δp/P₀)/k_∥. Table 5 gives the wave vector components and amplitudes for island modes in three of the models. Given that the mode amplitude is arbitrary, we normalised the amplitudes by A₂^−. The quantities 'A₁^+ (theo)' and 'A₂^+ (theo)' correspond to the amplitudes deduced from A₁^− and A₂^− via Eq. (27). Apart from A₁^+ (theo) for model M6, these values accurately reproduce the numerically obtained amplitudes, A₁^+ and A₂^+, thus showing that the relationship between the amplitudes is respected. The quantities k_∥ and k_⊥^± are the horizontal and vertical components of the wave vectors. The quantities k_∥/k_⊥^+ and k_∥/k_⊥^− are given a negative sign since the amplitudes A₂^± (corresponding to the wave vectors k₂^± = k_∥ e_∥ − k_⊥^± e_⊥) are larger (in absolute value) than the amplitudes A₁^±.
Another comparison carried out in Table 5 is between the incidence/departure angles of the wave vectors and the predictions from the ray dynamics analysis. The quantities k_∥/k_⊥^± correspond to the numerically determined values of tan ϑ^±, where ϑ^± are the angles between the surface normal and the wave vector. The quantities 'k_∥/k_⊥^± (rays)' are determined via ray dynamics. A comparison between the two shows some discrepancies, but the values remain of the same order. These differences are likely due to the limited accuracy of our approach for extracting the wave vector components, and to the fact that the mode behaviour is more complex than what is predicted by ray dynamics.
Finally, as can be seen from the values of A₁^±, the amplitudes of the secondary waves, although smaller, are not negligible. Hence, these can be expected to affect the phase shift that occurs at the discontinuity in the primary wave and may possibly lead to increased coupling with other modes due to the modified mode geometry, thereby leading to more avoided crossings.
Frequency patterns
We now briefly address the question of whether discontinuities can adversely affect frequency patterns to the point of hindering their detection in observed stars. A tool frequently used in solar-like stars is the so-called echelle diagram (e.g. Bedding et al. 2020), in which the frequencies are plotted as a function of the frequencies modulo the large separation. Due to the nearly equidistant frequency patterns in such stars, modes with the same spherical degree line up on vertical ridges in echelle diagrams. In Fig. 17, we produce similar echelle diagrams using the pseudo large separation, Δñ, and only plotting m = 0 modes for the sake of clarity. Although the discontinuities lead to a more irregular behaviour, clear ridges remain for the different l̃ values. Reese et al. (2014) also reached a similar conclusion using histograms of frequency differences for 3 M⊙ models with discontinuities. Indeed, they found that the pseudo large separation could still be identified in the discontinuous models.
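The construction of such a diagram is straightforward. The sketch below folds toy m = 0 frequencies by the pseudo large separation; the frequency values and the pseudo large separation are placeholders, not the ones from Fig. 17.

```python
# Minimal echelle-diagram construction: plot each m = 0 frequency against the same
# frequency modulo the pseudo large separation.
import numpy as np
import matplotlib.pyplot as plt

delta_n = 0.52                                        # pseudo large separation (toy value)
n_tilde = np.arange(19, 31)
freqs_l0 = 0.52 * n_tilde + 1.30                      # toy l_tilde = 0 ridge
freqs_l1 = 0.52 * n_tilde + 1.55                      # toy l_tilde = 1 ridge

for freqs, label in [(freqs_l0, "l=0"), (freqs_l1, "l=1")]:
    plt.plot(np.mod(freqs, delta_n), freqs, "o", label=label)
plt.xlabel("frequency mod pseudo large separation")
plt.ylabel("frequency")
plt.legend()
plt.savefig("echelle.png")
```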
We also carry out a more quantitative comparison between the pulsation frequencies and a fit based on a simplified version of the asymptotic formula for island mode frequencies (e.g. Reese et al. 2009): ω_{ñ,l̃,m} ≃ ñ Δ_ñ + l̃ Δ_l̃ + |m| Δ_m − m Ω_fit + α̃, where Δ_ñ, Δ_l̃, Δ_m, and α̃ are various parameters related to the stellar structure (Lignières & Georgeot 2009, Pasek et al. 2012), and Ω_fit an average value of the rotation rate appropriate for the set of modes under consideration. We therefore fit these parameters to reproduce the pulsation spectra of our models for the same set of modes as above. Table 6 provides the root mean square differences and maximal differences between the numerical and asymptotic frequencies, normalised by the pseudo large separation, Δ_ñ. As expected, the frequencies of model M are the closest to the asymptotic formula. The mean difference in the realistic model is intermediate between the best and worst models. In terms of maximal differences, model Mreal is among the best whereas model M6 is the worst, very likely as a result of the increased number of avoided crossings affecting the modes. In all cases, the differences are a few percent of the pseudo large separation (which itself is half the classical large separation), meaning that the frequency pattern is still well preserved and should be possible to identify with a suitable analysis. Nonetheless, other factors may hinder finding the above frequency pattern. Indeed, the presence of chaotic modes with their own independent semi-random frequency organisation (Lignières & Georgeot 2009, Evano et al. 2019), or the lack of a clear understanding of the mechanisms responsible for mode selection and pulsation amplitudes, both contribute to masking the frequency pattern associated with acoustic island modes.
Fig. 15. Zoom in on the island mode shown in Fig. 11 (in model M6). The discontinuity is shown using the dotted line, and the solid line corresponds to the island mode orbit. As can be seen, the wavelength decreases just above the discontinuity.
Fig. 16. Extracted δp/P₀, ∇_∥(δp/P₀), and (∇_⊥(δp/P₀))^± profiles as a function of θ along the discontinuity shown in Fig. 15. The vertical dashed line shows the colatitude where the island mode periodic orbit crosses the discontinuity.
Conclusion
In this work, we calculated, thanks to an adiabatic version of the TOP code, acoustic pulsation modes in rapidly rotating continuous and discontinuous stellar models based on the ESTER code. This allowed us to investigate various topics, namely the variational principle for general 2D rotation profiles in discontinuous models, generalised rotational splittings, and acoustic glitches. Some of the important results are:
1. Generalised rotational splittings are well approximated via weighted integrals of the rotation profile using rotation kernels deduced from the variational principle, except for specific cases where avoided crossings lead to discrepancies. This raises the question as to how accurately the rotation profile can be recovered using inverse theory. In a forthcoming article, we plan to investigate this question using a variety of different rotation profiles. In this regard, the automatic mode classification algorithm described in Mirouh et al. (2019) can be used to efficiently identify pairs of prograde-retrograde modes.
2. Discontinuities alter the acoustic frequency patterns, but not to the point of preventing their detection in observed stars (especially taking into account the unrealistic nature of the discontinuities in our models), thus lending credence to recent detections of large frequency separations and ridges in echelle diagrams in δ Scuti stars (e.g. García Hernández et al. 2015, Bedding et al. 2020). The modifications to the frequency spectrum lead to glitch patterns, the periodicity of which can be calculated in a simple way. Nonetheless, the presence of avoided crossings and possibly partial wave reflection at the discontinuity cause deviations from theoretical expectations in some cases. Accordingly, it may be possible to determine acoustic depths of sharp transitions using glitch patterns in observed frequencies.
In a forthcoming work, we plan to investigate acoustic pulsations of ESTER models using a non-adiabatic version of TOP. This will allow us to investigate other topics such as mode excitation and mode behaviour near the stellar surface.
Appendix A.2: Condition on the pressure perturbation
The following condition ensures that the pressure remains continuous during the oscillatory movements: where we have kept the same notation as above and where P_Total is the total pressure (equilibrium + perturbation). This equation can be developed as follows: We can then use Eq. (A.2) to develop, say, the right-hand side: where we have neglected second order terms on the third line.
Combining this with the previous equation leads to: where we have used the continuity of the equilibrium pressure. If the boundary coincides with an isobar, the right-hand side of the above equation cancels out because the difference ξ_− − ξ_+ lies within the boundary. If the boundary is not an isobar, then in normal circumstances (that is, when the model is continuous), the difference ξ_− − ξ_+ will be 0. Either way, this leads to the final condition, Eq. (8). There are, however, cases where the right-hand side may not cancel, for instance at the boundary of a convective core with a different chemical composition than the rest of the star. In such a situation, baroclinic flows are set up within the equilibrium model (Espinosa Lara & Rieutord 2013), and this probably requires setting a specific condition which takes these flows into account. Nonetheless, it is interesting to note that the above condition is in fact symmetric with respect to either side of the boundary. Indeed, the term (ξ_− − ξ_+) · ∇P₀^+ could be replaced by (ξ_− − ξ_+) · ∇P₀^−, since it only involves the gradient of the pressure along the boundary and the pressure is continuous across the boundary.
Appendix A.3: Condition on the perturbation to gravitational potential
In much the same way as for the pressure, the gravitational potential and its gradient are kept continuous through the following relations: where Ψ_Total is the total gravitational potential (equilibrium + perturbation). At this point, however, we will take a different approach than above since we are dealing with the Eulerian rather than Lagrangian perturbation of the gravitational potential. Firstly, the sums r_+ + ξ_+ can be replaced by r_− + ξ_− or vice versa so as to have the same arguments everywhere. Therefore, in what follows we will use the generic notation r + ξ, which can be arbitrarily chosen as r_− + ξ_− or r_+ + ξ_+. Developing both sides of both equations, making use of the continuity of the equilibrium gravitational potential and its gradient to cancel zeroth order terms, and neglecting second order terms leads to the following equations:
Ψ_−(r, t) + ξ(r, t) · ∇Ψ₀^−(r, t) = Ψ_+(r, t) + ξ(r, t) · ∇Ψ₀^+(r, t), (A.10)
∇Ψ_−(r, t) + [ξ(r, t) · ∇](∇Ψ₀^−(r, t)) = ∇Ψ_+(r, t) + [ξ(r, t) · ∇](∇Ψ₀^+(r, t)). (A.11)
Given that ∇Ψ₀ is continuous, the first equation reduces to:
Ψ_−(r, t) = Ψ_+(r, t). (A.12)
In tensorial notation, the left-hand side of the second equation becomes: where E_i is the natural basis, ξ̃^i the components of ξ over that basis, and E^i the dual basis. Calculating the dot product of the above equation with E_i^− yields an expression involving Γ^k_{ij}, the Christoffel symbol of the second kind. With our choice of mapping, only Γ^ζ_{ζζ} is discontinuous across the boundary, hence its distinct values on either side, denoted with '−' and '+'. All of the other geometric quantities (E_i, E^i, and Γ^k_{ij} with (i, j, k) ≠ (ζ, ζ, ζ)) are continuous. Inserting this expression into the left-hand side of Eq. (A.11), and a similar expression into the right-hand side, and simplifying out continuous terms (geometric, Ψ₀, and ∇Ψ₀) yields the following three relations: where we have made use of the fact that ∂²_{ij}Ψ₀ is continuous if (i, j) ≠ (ζ, ζ). One will in fact notice that the latter two equations are also a direct consequence of Eq. (A.12).
At this point, it is useful to introduce Poisson's equation in tensorial notation: (A.17) where Λ = 4πG or 4π in the dimensional or dimensionless case, respectively, g^{ij} = E^i · E^j are the contravariant components of the metric tensor, and R is a sum of terms which are continuous across the boundary. This last expression can then be used to simplify Eq. (A.14). The terms R_− and R_+ cancel out since R is continuous across the boundary. The remaining equation is then obtained, where we have introduced ξ^ζ, the ζ component of ξ on the alternate basis (see, e.g., Eq. 31 of Reese et al. 2006). At this point, it is useful to recall that ξ̃^ζ, and hence ξ^ζ, are continuous across the boundary (see Eq. (A.4)). Hence, using r_− + ξ_− or r_+ + ξ_+ in Eq. (A.10) leads to the same results.
Appendix B: Variational principle
Appendix B.1: General formula
In order to derive the variational formula which relates pulsation frequencies and their associated eigenfunctions, we start by calculating the dot product between Euler's equation (Eq. (2)) and the product of the equilibrium density and the complex conjugate of a second displacement field, η*, which at this point can be different from ξ, and integrate the total over the stellar volume, V: At this point, the goal is to reformulate the above integral so that it is manifestly symmetric (in a Hermitian sense) with respect to ξ and η. In what follows, it is very important to bear in mind that Ω and its associated vector Ω = Ω e_z depend on ζ and θ. This is different from the approach taken in Lynden-Bell & Ostriker (1967), where Ω is constant (differential rotation is, instead, taken into account as a background velocity field, v₀). Furthermore, given that the equilibrium model may be discontinuous, it will be important, for some of the terms, to decompose the stellar volume into subdomains, V_i, such that the model is continuous in each subdomain. Obviously, the relation V = ∪_i V_i holds. Finally, when dealing with the gravitational potential, we will introduce the notation V_e to represent an external domain which comprises all of the space outside the star, and V_∞ to represent all of space, including the star. Terms I and II can easily be rearranged into the following symmetric forms: Term III can be rewritten as: where we have made use of the hydrostatic equilibrium equation, in which we have neglected viscosity and meridional circulation. It is important to note that on the second and third lines, the integral is carried out over ∪_i V_i. The reason for this is that ∇P₀ may be discontinuous, meaning that ∇(∇P₀) has to be calculated over each separate subdomain, V_i. The last two terms are symmetric. This can be seen, for instance, by expressing them explicitly in terms of their Cartesian coordinates: η* · [(ξ · ∇)(∇P₀)] = η^{i*} ξ^j ∂²_{ij}P₀. The first term cancels out with the last part of term VIII.
When developing term IV, it is important to treat each domain, V i , separately: where B i denotes the bounds of subdomain V i . It turns out that the surface terms cancel out. Indeed, there are two possible cases. In the first case, the surface corresponds to an internal discontinuity. As explained in the previous section, both the normal component of the displacement and the Lagrangian pressure perturbation remain continuous across the discontinuity. Furthermore, there will be two surface terms, one for the domain just below the discontinuity and the other for the domain just above. The vector dS takes opposite signs in both surface terms since it is directed outwards from the domain. As a result, the two terms cancel. In the second case, the surface term corresponds to the stellar surface. As explained in Sect. 3.4, we impose the simple mechanical boundary condition δp = 0, thereby cancelling this surface term. Terms IV and V may be combined as follows: where we have made use of the continuity equation and introduced the Lagrangian density perturbation, δρ associated with η.
Term VII is treated as follows: where we have once more made use of the continuity equation.
In the above calculations, the surface terms do not cancel, but they are symmetric since both the discontinuities and the stellar surface follow isobars. The term on the last line cancels out with the first part of term VIII. Last but not least, we deal with term VI. As has been shown in Unno et al. (1989) and Reese (2006), this term can be rearranged into an integral of the form (1/Λ) ∫_{V∞} ∇Ψ · ∇Φ* dV, where Λ = 4πG, or 4π in the dimensionless case, and Φ is the gravitational potential associated with the displacement field η. However, surface terms and terms arising from internal discontinuities were not dealt with in the above works. In what follows, we re-derive this expression, while keeping track of such terms: where ρ̃ is the Eulerian density perturbation associated with η, and the notation 'i + e' stands for the internal domains plus the external domain V_e. Various steps in the above developments need further explanation. Firstly, on the third line, the external domain was incorporated along with the internal domains. This step is justified because the supplementary terms are equal to zero. It must be noted that the surface associated with V_e only includes the lower bound, that is, the stellar surface. Secondly, the divergence (or Ostrogradsky's) theorem was used to transform the volume integrals on lines one and four into surface integrals. While straightforward in most cases, it is not as obvious on line four for the external domain V_e. Indeed, it is not clear if some external boundary at infinity should be included or not. This problem can be dealt with in a rigorous way by considering an external domain, Ṽ_e, which is bounded by a sphere of radius R_e, and then taking the limit as R_e goes to infinity. Such an approach was taken in Reese (2006), who showed that the external surface term goes to zero in such conditions. Finally, it is necessary to show that the surface terms cancel out. We first start by noting that the surface element, dS, is parallel to the vector E^ζ, and so may be written as dS E^ζ. Hence, the integrand in the surface terms may be written: Now, we recall that each internal boundary (either from a discontinuity or from the stellar surface) gives rise to two surface terms, one from the domain just below the boundary and the other from the domain just above. As can be seen in the above expression, these two surface terms will cancel out. Indeed, based on the interface conditions (Eqs. (9) and (10)) and the continuity of g^{ij} (for our choice of coordinate system), the first part of the above expression is continuous across the boundary. Only dS changes sign, given that the vector dS is always directed outwards from the corresponding domain.
In this section, we consider a 1D toy model representative of a sound wave travelling along an island mode ray path in the presence of a discontinuity. Figure D.1 illustrates half of this model, the other half being deduced by symmetry. For the sake of simplicity, constant sound velocities, denoted c 1 and c 2 , are used over the domains [0, x 1 [ and [x 1 , x T ] (as well as their symmetric counterparts), where x T = x 1 + x 2 . We assume that the density is discontinuous between the two domains whereas the pressure and first adiabatic exponent are continuous, in accordance with our stellar models.
We then consider the following set of simplified pulsation equations, along with the boundary conditions:
(δp/P₀)(x = 0) = 0 (D.4)
at the surface, and
(δp/P₀)(x = x_T) = 0 or d/dx (δp/P₀)(x = x_T) = 0 (D.5)
at the equator (that is, at x_T). The latter conditions come from the fact that pulsation modes are either antisymmetric or symmetric with respect to the equator. Finally, the following interface conditions apply at x = x₁: the continuity of δp/P₀ and of (1/ρ₀) d(δp/P₀)/dx, that is, of the displacement. The pressure perturbation then takes on the following form:
δp/P₀ = A₁ sin(k₁ x) for 0 ≤ x ≤ x₁, and δp/P₀ = A₂ sin(k₂(x − x_T)) or A₂ cos(k₂(x − x_T)) for x₁ ≤ x ≤ x_T, (D.7)
where k_i = ω/c_i. The two options for the solution in the [x₁, x_T] domain correspond to antisymmetric (or odd) and symmetric (or even) solutions, respectively. Enforcing the interface conditions then leads to the following discriminants, which define the eigenvalues:
sin(ωτ₁) cos(ωτ₂)/k₂ + sin(ωτ₂) cos(ωτ₁)/k₁ = 0 (D.8)
for antisymmetric modes, or
−sin(ωτ₁) sin(ωτ₂)/k₂ + cos(ωτ₂) cos(ωτ₁)/k₁ = 0 (D.9)
for symmetric modes, where τ_i = x_i/c_i. In the simple case where c₁ = c₂, the solutions are
ω_k = kπ/τ_T or ω_k = (k + 1/2)π/τ_T (D.10)
for odd and even modes respectively, and where τ_T = τ₁ + τ₂. When c₁ and c₂ differ, we can perform a first order perturbative analysis by introducing a small parameter ε as follows:
k₂ = k₁(1 + ε). (D.11)
This leads to the following corrections for odd and even modes, respectively:
δω = (−1)^k ε/(2τ_T) sin(ω₀(τ₁ − τ₂)), (D.12)
δω = (−1)^k ε/(2τ_T) cos(ω₀(τ₁ − τ₂)), (D.13)
where ω₀ corresponds to the unperturbed frequencies given in Eq. (D.10). Combining the even and odd cases and including both the zeroth and first order components yields Eq. (D.14), where odd values of n correspond to even solutions and vice versa. The period of the frequency perturbation is analogous to, but somewhat simplified compared with, the more general formula given in Monteiro et al. (1994) for non-rotating stars. Figure D.2 compares large frequency separations using the first order expression above (Eq. (D.14)) and those obtained from exact solutions to the discriminants given in Eqs. (D.8) and (D.9), for the values of τ₁ and τ_T from model M7, the most extreme case. As can be seen, the first order expression gives an accurate idea of the period of the frequency deviation, and a rough idea of its amplitude.
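The exact eigenfrequencies of this toy model are easy to compute numerically, which is how the comparison in Fig. D.2 can be reproduced in spirit. The sketch below brackets roots of the discriminants (D.8) and (D.9) around the unperturbed frequencies of Eq. (D.10); the sound speeds and domain sizes are assumed illustrative values, not those of model M7.

```python
# Numerical sketch of the toy model: solve the odd/even discriminants, Eqs. (D.8)
# and (D.9), with k_i = omega / c_i, by bracketing roots near Eq. (D.10).
import numpy as np
from scipy.optimize import brentq

c1, c2 = 1.0, 1.3          # assumed constant sound speeds in the two domains
x1, x2 = 0.3, 0.7          # assumed sizes of the two domains
tau1, tau2 = x1 / c1, x2 / c2
tauT = tau1 + tau2

def odd_disc(w):    # Eq. (D.8)
    k1, k2 = w / c1, w / c2
    return np.sin(w * tau1) * np.cos(w * tau2) / k2 + np.sin(w * tau2) * np.cos(w * tau1) / k1

def even_disc(w):   # Eq. (D.9)
    k1, k2 = w / c1, w / c2
    return -np.sin(w * tau1) * np.sin(w * tau2) / k2 + np.cos(w * tau2) * np.cos(w * tau1) / k1

for k in range(5, 10):
    w0_odd = k * np.pi / tauT                 # unperturbed odd frequency, Eq. (D.10)
    w_odd = brentq(odd_disc, w0_odd - 0.4, w0_odd + 0.4)
    w0_even = (k + 0.5) * np.pi / tauT        # unperturbed even frequency
    w_even = brentq(even_disc, w0_even - 0.4, w0_even + 0.4)
    print(k, w_odd - w0_odd, w_even - w0_even)   # frequency deviations caused by the glitch
```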
Appendix E: Wave refraction and reflection
In this section, we recall some of the basic principles behind the Snell-Descartes law, including partial wave reflection. A more complete treatment can be found in various textbooks such as Brekhovskikh (1980). We begin with a simple plane-parallel model using Cartesian coordinates. This can also be thought of as a local approximation to a more complex system. A discontinuity in density is located at z = 0. The media below and above this discontinuity are assumed to be uniform. Under these conditions, the fluid dynamic equations take on the following expressions: The interface conditions, as explained in App. A, ensure the continuity of δp/P₀ and ξ_z. Combining these equations leads to: Because of the partial reflection at the boundary, we cannot consider a plane-parallel wave in isolation but have to include the reflected wave. For the sake of generality, we consider such a combination both above and below the discontinuity. This leads to the following generic solution: where the superscripts '+' and '−' designate the upper and lower domains, respectively. The standing wave equivalent to the above solution would take on the expression:
(δp/P₀)^± = [A₁^± cos(k₁^± · x) + A₂^± cos(k₂^± · x)] exp(iωt). (E.6)
The wave vectors take on the following form: The horizontal wave vector, k_∥, is preserved between the two domains as a result of the continuity of the horizontal gradient of δp/P₀ at the discontinuity. When combined with the dispersion relation, this leads to Snell-Descartes' law.
The continuity of δp/P₀ leads to the relation: The continuity of ξ_z leads to: Combining these two equations leads to the following matrix relation between the amplitudes: When the wave vector is nearly perpendicular to the discontinuity, that is, k_∥ ≪ k_z, η takes on the following approximate expression: (E.12)
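The physics behind these relations can be illustrated with the standard acoustic reflection/transmission coefficients at a density discontinuity, which follow from the same interface conditions (continuity of δp/P₀ and ξ_z). The sketch below is a generic derivation under these assumptions, not a transcription of Eq. (E.11); the density jump and the incidence angle are toy values.

```python
# Standard acoustic reflection/transmission at a density discontinuity, assuming
# continuity of the pressure perturbation and of the normal displacement.
import numpy as np

def reflect_transmit(rho1, rho2, c1, c2, theta1):
    """Incident wave in medium 1 hitting the interface with medium 2.

    Returns (R, T, theta2): pressure-amplitude reflection and transmission
    coefficients and the refracted angle from the Snell-Descartes law.
    """
    k1, k2 = 1.0 / c1, 1.0 / c2            # wavenumbers up to a common factor omega
    k_par = k1 * np.sin(theta1)            # tangential component, preserved across the interface
    theta2 = np.arcsin(np.clip(k_par / k2, -1.0, 1.0))
    kz1, kz2 = k1 * np.cos(theta1), k2 * np.cos(theta2)
    # Continuity of pressure: 1 + R = T
    # Continuity of normal displacement: kz1 * (1 - R) / rho1 = kz2 * T / rho2
    R = (rho2 * kz1 - rho1 * kz2) / (rho2 * kz1 + rho1 * kz2)
    T = 1.0 + R
    return R, T, theta2

# Toy numbers: P0 and Gamma_1 continuous imply rho ~ 1/c^2 across the discontinuity.
c1, c2 = 1.0, 1.2
rho1, rho2 = 1.0 / c1**2, 1.0 / c2**2
print(reflect_transmit(rho1, rho2, c1, c2, np.deg2rad(25.0)))
```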
Real-Time Video Processing using Contour Numbers and Angles for Non-urban Road Marker Classification
Received May 9, 2018. Revised Jun 20, 2018. Accepted Jul 11, 2018.
Road users make vital decisions to safely maneuver their vehicles based on the road markers, which need to be correctly classified. Road marker classification is significantly important, especially for autonomous car technology. The current problems of extensive processing time and relatively low average accuracy when classifying up to five types of road markers are addressed in this paper. Two novel real-time video processing methods are proposed, extracting two formulated features, namely the contour number and the contour angle, θ, to classify the road markers. Initially, the camera position is calibrated to obtain the best Field of View (FOV) for identifying a customized Region of Interest (ROI). An adaptive smoothing algorithm is performed on the ROI before the contours of the road markers and the corresponding two features are determined. It is observed that the achievable accuracy of the proposed methods in several non-urban road scenarios is approximately 96%, and that the processing time per frame is significantly reduced as the video resolution increases, compared to that of the existing approach.
INTRODUCTION
The number of road accidents in Malaysia has increased at an alarming rate of 9.7% every year over the past thirty years [1], contributed to by road-user errors, a faulty road environment, and vehicle defects. The road marker is very important in guiding road users while driving. Different types of road markers on non-urban roads, such as double solid (DD), dashed (D), solid-dashed (SD), dashed-solid (DS) and single solid (SS), allow drivers to safely decide either to maintain their course in the middle of the lane or to overtake the vehicles in front. In earlier works, many efforts were invested in solving lane detection and tracking problems for the Auto-Assist Driving System (ADS) [2], [3]. These methods had been used as a warning system for the driver [4]-[8] and, surprisingly, lane detection was also used as part of a system to analyse driver behaviour [9]. Symbols, alphabets, and custom markers were also explored and used as information to alert drivers [10, 11].
However, road marker classification [12], [13] still remains an open question due to the varying types of road markers across the globe, with limited effort having been put into solving the road marker classification problem. Collado et al. [14] proposed a frequency analysis method for urban road marker classification, which uses inverse perspective mapping (IPM) to generate a bird's-eye view of the road and produce a modified Hough Transform (HT) for better lane detection. The method then applies a power-based frequency analysis. The work in [15] studied lane changing based on road marker recognition and the surrounding traffic situation, but the road markers classified were limited to only dashed and solid lines. In a work by Lindner et al. [16], an edge detector technique was designed to subsequently search for a group of four objects, namely lines, curves, parallel curves and closed objects, in detecting road markers. Although four types of road markers were classified, most of the road markers are dashed lines of different sizes. Meanwhile, another study, by Suchitra et al., classified three road markers, namely dashed, solid and zigzag, using a modular approach [17]. The road markers are classified as either dashed or solid using the Basic Lane Marking (BLM), which is based on continuity properties. This approach, however, applies temporal information in the classification operation, which renders detection slower when the road marker type changes whilst driving on the road. In addition to these methods, Nedevschi et al. [18] use a periodic histogram to determine the type of road markers. This ego-localization is observed to enable the classification of road markers into four types, namely single solid, double solid, dashed, and merged.
In a recent study, Paula et al. [19] presented an automatic classification technique to classify five types of road markers. The approach uses between three and five features extracted from the image, which are later fed to an artificial neural network. A two-stage method is modelled for the full classification, with the first stage applying a Bayesian classifier for dashed, single dashed and double solid markers, while the second stage differentiates between dashed-solid and solid-dashed lines for road marker detection. However, the results of the classification were found to change abruptly between frames, which caused inconsistent results while driving. Mathibela et al. [20] proposed a new approach using a unique set of geometric features within a probabilistic RUSBoost and Conditional Random Field (CRF) network to classify road markers into seven types, including single boundary, double boundary, separator, zig-zag, intersection, boxed junction and special lanes. Even though more types of road markers were classified using this method, the classification unfortunately only used static images of urban roads.
On the basis of this comprehensive literature review, the extensive classification of road markers has rarely been studied. It is interesting to look at the aforementioned different approaches for road marker classification, although no standard databases are available to allow a fair comparison. In addition, the dimensions, colour and size of the markers vary across the world.
This paper proposes a novel approach to classify these road markers using a customized Region of Interest (ROI) in a video acquired from a camera with a calibrated position. Two features are derived, namely the contour number and the contour angle, θ, which are later fed into a two-layer classifier. The first layer classifies the D and SS marker types based on the calculated contour-number values, while the second layer classifies the DD, DS or SD marker types by using the angle values, as shown in Figure 1. Temporal information integration is applied to improve the classification accuracy by validating the marker type transitions on the road. This algorithm has demonstrated an overall accuracy of approximately 95% and a processing time approximately 50% shorter than that of the existing method.
RESEARCH METHOD
2.1. Camera setup for region of interest selection
Many existing methods for selecting the ROI, such as vanishing-point estimation, area-based detection and area-based tracking, have been used in the past. One technique calculates the vanishing point using the Hough Transform (HT) by obtaining the intersection of the detected lines, and takes the lower half of the image as the ROI [21]. In [22], [23], the ROI spans the whole horizontal axis and a limited range along the vertical axis, before feature extraction and road marker or lane classification take place.
In this project, a new method of selecting the ROI based on the calibrated camera height and Field of View (FOV) is proposed. For the initial setup, a camera with a resolution of 1280x720, located at the center of the car, is calibrated in position with its FOV adjusted towards the planar road surface, as shown in Figure 2. The captured image is then divided into 40 equally sized sections, each of size 160x144 pixels, as shown in Figure 3(a) and Figure 3(b). The ROI selection method is applied to every video frame to choose the section r1(x,y) that contains the road marker as the ROI, as shown in Figure 3(c). It can be observed that r1(x,y) is the section nearest to the car, where the effect of road curvature on the road marker is low. In the case of lane departure, the ROI will contain no marker for a longer duration, which indicates that the vehicle has departed from its track. In our approach, r1, containing the RGB information, is converted to greyscale and smoothed with a Gaussian filter [24] before being binarised using the Otsu thresholding method [25]. The two features, the contour number and the contour angle, are then extracted to complete the classification, as discussed next.
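As an illustration of this preprocessing chain, the following is a minimal sketch in Python with OpenCV (the paper's implementation used Visual Studio C++ 2010 with OpenCV 2.4.11); the ROI grid geometry, the Gaussian kernel size and the function names are illustrative assumptions, not the authors' code.

```python
import cv2

# Illustrative ROI geometry: a 1280x720 frame split into an 8x5 grid of
# 160x144-pixel sections; (col, row) selects the section nearest the car.
SECTION_W, SECTION_H = 160, 144

def extract_roi(frame, col, row):
    """Cut one 160x144 grid section out of a 1280x720 BGR frame."""
    x0, y0 = col * SECTION_W, row * SECTION_H
    return frame[y0:y0 + SECTION_H, x0:x0 + SECTION_W]

def preprocess_roi(roi_bgr):
    """Greyscale -> Gaussian smoothing -> Otsu binarisation, as described in Section 2.1."""
    grey = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(grey, (5, 5), 0)          # kernel size is an assumption
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```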
The number of contours and the contour angle θ in Methods A and B
A contour is a line connecting all points of the same colour along the boundary of an object. The contour number in the ROI is calculated by counting all contours along its horizontal axis. This count is carried out in the ROI r1(x,y) for every frame in a complete cycle, which is defined as the time taken by all consecutive frames containing a complete road marker pattern. The pattern repeats itself after every complete cycle until the road marker type changes or ends, as in Figure 4.
The proposed road marker classification is based on the contour number and the contour angle θ, implemented using two variants of a two-layer classifier, as depicted in Figure 5. The formulation of these two features for both variants, named Method A and Method B, is presented next.
In Method A, road marker D is detected when the minimum contour number over the cycle is zero and the maximum is one. If the contour number equals one throughout a complete cycle, then the road marker is SS. Otherwise, if the maximum contour number over the cycle is two, the second-layer classification using the angle between the centroids is run to classify the marker as SD, DS or DD. The angle θ, measured between the line connecting the two centroids and the horizontal axis, is calculated only on the first frame of a set of frames showing a particular road marker pattern; the pattern repeats itself after the last frame of the set until the marker type changes. This first frame can be identified from the change in the contour number, i.e., from 1 to 0 or from 1 to 2. Three threshold angles, denoted here θT1, θT2 and θT3 and determined from the training dataset, are applied for the classification: if θ lies between θT2 and θT3, the road marker is DS; if θ lies between θT1 and θT2, it is SD; and if θ is above θT1 or below θT3, it is DD. In Method B, the significant difference lies in how DD is detected. Instead of detecting DD at the second layer using the angle θ, DD is detected at the first layer when the contour number is equal to two over a complete cycle. Hence, only one threshold angle, the boundary between the SD and DS groups, is used to separate DS from SD. The contour angle is found as illustrated in Figure 6(d): two centroids are calculated from the moments of the markers surrounded by the contours, as shown in Figure 6(a) and Figure 6(b), where the red lines represent the contours and the red dots denote the centroids. The moments are obtained by summing over the pixels of the region enclosed by each contour [26], and each centroid is given by the ratio of first-order to zeroth-order moments. In the next section, the threshold angles used to classify the road marker as DD, DS or SD are determined, with three thresholds for Method A and a single threshold for Method B.
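A compact sketch of the two-feature extraction and a Method B-style decision is given below (Python/OpenCV rather than the authors' C++; the per-cycle bookkeeping, the sign convention of the SD/DS threshold and the helper names are assumptions made for illustration).

```python
import math
import cv2

def contour_features(binary_roi):
    """Return the contour count and, when exactly two contours exist, the centroid angle."""
    contours, _ = cv2.findContours(binary_roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for cnt in contours:
        m = cv2.moments(cnt)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    angle = None
    if len(centroids) == 2:
        (x1, y1), (x2, y2) = centroids
        # Angle between the centroid-connecting line and the horizontal axis.
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return len(centroids), angle

def classify_cycle_method_b(counts, first_frame_angle, theta_sd_ds):
    """Method B-style first/second layer over one complete cycle of contour counts."""
    if max(counts) == 1 and min(counts) == 0:
        return "D"                       # dashed: a single contour appears and disappears
    if all(c == 1 for c in counts):
        return "SS"                      # single solid: one contour throughout the cycle
    if all(c == 2 for c in counts):
        return "DD"                      # double solid handled at the first layer
    # Second layer: one threshold separates solid-dashed from dashed-solid.
    return "SD" if first_frame_angle > theta_sd_ds else "DS"
```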
The threshold angle formulation for the second-layer classification in Methods A and B
In Method A, the threshold angles are formulated from the angles between the two centroids in each of 300 frames extracted from four video clips (Vid22_USB, Vid24_USB, Vid26_USB, Vid27_USB), as in Table 2. The angles are then plotted in Figure 7. For DD, the values are found to be distributed between negative angles (denoted DDlower) and positive angles (denoted DDupper), because the locations of the two centroids shift along a curvy road. The angles for the first frames in every set of frames for DS, SD and DD are found to be well separated from each other, as seen in Figure 7. It is clear from this figure that the threshold angles are the separating lines between the four groups of angles for Method A. The three threshold angles, represented by the blue horizontal lines in Figure 7, are calculated such that the angle values are separated into three marker types, namely DD, SD and DS. Each threshold value is taken as the midpoint between the maximum angle of one marker type and the minimum angle of the next neighbouring marker type. The minimum and maximum angles of DDupper, SD, DS and DDlower are stored as the first row (minimum angles) and the second row (maximum angles) of a matrix, respectively. Based on the values recorded in Figure 7, the corresponding matrix is given below.
From this matrix, the threshold angles are calculated using the midpoint formula (2); the resulting values, in degrees, are listed in Table 1(a) and used for making the classification decision in Method A. For Method B, only the single SD/DS threshold is applied for the classification decision. A sketch of the midpoint rule is given below.
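Reconstructed from the midpoint description above (the original typeset Equation (2) is not reproduced in this excerpt; the symbols and the group ordering are notational assumptions taken from Figure 7):

```latex
% Midpoint rule for a threshold angle separating two adjacent angle groups
% (a reconstruction of Eq. (2)); groups ordered as plotted in Figure 7:
% DDupper > SD > DS > DDlower.
\theta_{T,i} \;=\; \frac{\theta_{\max}^{\,\text{lower group}} + \theta_{\min}^{\,\text{upper group}}}{2},
\qquad i = 1, 2, 3
```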
Resolving errors in classification
Errors in road marker classification tend to occur during the marker transition period, and frame delay can also cause misclassification. Both cases can be addressed by using temporal information across the classification results during the transition. Markers tend to receive the same label across a set of adjacent frames. Exploiting this tendency, the previous set of classification results is stored and, if a transition of marker types is detected, the decision waits for the classification of the next set of frames for final validation, avoiding unnecessary classification errors. A minimal sketch of this validation step follows.
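The sketch below assumes that a "set" corresponds to one complete cycle of frames and that a single confirming set is enough to commit a transition; the class names and the exact bookkeeping in the authors' implementation may differ.

```python
class TransitionValidator:
    """Only commit a new marker type after it is confirmed by the next set of frames."""

    def __init__(self):
        self.current = None      # last validated marker type
        self.pending = None      # candidate type awaiting confirmation

    def update(self, set_label):
        """set_label: classification result for the latest complete set/cycle."""
        if set_label == self.current:
            self.pending = None              # nothing to validate
        elif set_label == self.pending:
            self.current = set_label         # confirmed by a second consecutive set
            self.pending = None
        else:
            self.pending = set_label         # hold until the next set agrees
        return self.current
```

For example, a per-cycle label sequence D, SD, SD would be reported as D, D, SD, suppressing a single-cycle glitch during the transition.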
RESULTS AND ANALYSIS
The proposed algorithm was tested with multiple video sequences captured by two different devices at different car velocities: a lower-range USB camera (Logitech C310) and a higher-range camera (GoPro Hero Silver). To evaluate the robustness of the new algorithm, additional sets of videos were acquired with different devices at different resolutions, with the vehicle velocity between 60 and 90 km/h for the test videos, which is within the speed limit of the suburban road area. The videos were also taken while the vehicle was moving on different road conditions, including flat, hilly and narrow roads. The videos used to determine the threshold angles were taken at a moderate vehicle speed between 50 and 60 km/h. The experiments were carried out on a desktop computer equipped with an Intel® Core™ i7-4790 CPU @ 3.60 GHz and 16 GB RAM. The proposed system was implemented using the Visual Studio C++ 2010 compiler with the OpenCV 2.4.11 library.
In the classification process, if one contour is detected, the software classifies the marker as either D or SS depending on the contour number found in the next set of frames. For example, if the next set of frames is detected with no contour, the system classifies it as type D, whereas if one contour is consistently detected over the complete cycle, the system classifies it as type SS. If the maximum contour number is two over the complete cycle, the road marker is classified as DS, SD or DD, using the angle value calculated at the first frame of the set of frames. The classification result is retained and is only revised if the contour number changes from 2 to 1. For the DD type, which expects two contours consistently, sampling up to the 10th frame is performed when only two contours are detected, and the classification result is updated using the angle value at the 11th frame.
The accuracy of the marker classification is calculated using Equation (3): the total number of frame sets classified correctly, Σρ, is divided by the total number of frame sets in the video, Σξ, excluding the number of first sets of frames in which a road marker transition happens, Στ1. The exclusion of Στ1 is due to the road marker transition, during which the angle calculation is temporarily paused. The clips and the accuracy results are shown in Table 2. The corresponding expression is written out below.
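Based on the description above, Equation (3) can be reconstructed as follows (a reconstruction, since the typeset equation is not reproduced in this excerpt):

```latex
% Accuracy of marker classification (reconstruction of Eq. (3)).
\text{Accuracy} \;=\; \frac{\sum \rho}{\sum \xi \,-\, \sum \tau_{1}} \times 100\%
```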
As shown in Table 1, clips 4 to 7 were used for training to identify the threshold angles. Clips 9 and 10 were recorded with the vehicle velocity between 60 and 80 km/h. The remaining test clips contain at least two types of road markers at different resolutions and frame rates. It can be observed that the accuracy achieved by Method B is better than that of Method A, as only one threshold angle is used. The processing time per frame for Method B is also slightly lower on average than for Method A. Table 1 further shows that some video compression formats, such as .wmv, lead to a longer processing time, as seen for clips 6 and 7. Our method was also tested against the existing approach presented by Paula et al. [19] using several videos at different resolutions; the resulting average processing time per frame is recorded in Table 3. Our approach has been observed to perform better at larger resolutions. This is contributed mainly by the smaller ROI applied in our approach, besides the two formulated features, the contour number and angle, used to classify the road marker types.
CONCLUSION
A novel approach to road marker classification is presented in this paper. A small ROI is used to process the video frames and extract the formulated features, the contour number and angle, as inputs for the proposed two-layer classifier. An average accuracy of ~96% has been achieved using the proposed algorithm, with a relatively lower processing time per frame at large video resolutions compared with the existing approach. The errors found in road marker classification are caused by abrupt changes in illumination, vanishing road markers, white elements on the road and markers blocked by other vehicles. Future work will focus on the illumination and faint-marker issues that affect feature extraction for classifying the road marker types.
"Engineering",
"Computer Science"
] |
Genetic Structure of a Local Population of the Anopheles gambiae Complex in Burkina Faso
Members of the Anopheles gambiae species complex are primary vectors of human malaria in Africa. Population heterogeneities for ecological and behavioral attributes expand and stabilize malaria transmission over space and time, and populations may change in response to vector control, urbanization and other factors. There is a need for approaches to comprehensively describe the structure and characteristics of a sympatric local mosquito population, because incomplete knowledge of vector population composition may hinder control efforts. To this end, we used a genome-wide custom SNP typing array to analyze a population collection from a single geographic region in West Africa. The combination of sample depth (n = 456) and marker density (n = 1536) unambiguously resolved population subgroups, which were also compared for their relative susceptibility to natural genotypes of Plasmodium falciparum malaria. The population subgroups display fluctuating patterns of differentiation or sharing across the genome. Analysis of linkage disequilibrium identified 19 new candidate genes for association with underlying population divergence between sister taxa, A. coluzzii (M-form) and A. gambiae (S-form).
Introduction
Throughout sub-Saharan Africa, members of the Anopheles gambiae species complex are primary vectors of the human malaria parasite, Plasmodium falciparum, which is responsible for extensive human morbidity and mortality. Heterogeneity within the A. gambiae complex for ecological preference, feeding behavior, and Plasmodium susceptibility stabilize and expand the malaria vectorial system in nature [1,2]. Phenotypic differences for these traits can vary between population subgroups or among individuals within a subgroup, and are influenced by genetic variation [3][4][5][6][7][8][9].
identification of population substructure. We present an approach to acquire genome-wide variation data from deep samples, while balancing cost and effort.
Comprehensive detection of subdivision in a local population
Using a custom designed SNP chip we analyzed population subdivision in a deeply sampled local vector population in Burkina Faso. We first hybridized a pilot (n = 96) and then an expanded (n = 384) set of samples. The first 96 samples were used to validate array performance and included duplicates (n = 24) to verify reproducibility of genotype calls. The 72 unique samples in the pilot set included indoor-resting collections of A. coluzzii (n = 11), A. gambiae (n = 12), larval collections of Goundry (n = 19) as well as sibling species A. arabiensis (n = 30). Importantly, the expanded set of 384 samples were chosen based solely on their participation in a successful experimental feeding on malaria-infective blood and thus, aside from taking a bloodmeal, constituted an unbiased set of population samples.
Genotypes generated by the uniformly-spaced genome-wide marker set revealed four distinct clusters when analyzed by principal component analysis (PCA). Overlay of species diagnostic results (Fig 1A) indicates the presence of A. coluzzii, A. gambiae, A. arabiensis, and a cluster where both A. coluzzii and A. gambiae species markers are present, the Goundry form, a discrete group with undetermined taxonomic status [14,28]. Behavioral metadata ( Fig 1B) indicate that the clusters of pure A. coluzzii and A. gambiae mosquitoes include individuals captured both from larval pools and as indoor-resting adults, while mosquitoes of the Goundry form were found in larval pools but were absent from collections of indoor-resting adults, consistent with their apparently exophilic behavior [14]. Samples were also overlaid with the karyotype of the paracentric 2La inversion (Fig 1C) as determined by a molecular diagnostic assay [33], and the genotype for the nucleotide mutation of the para gene associated with pyrethroid insecticide resistance (kdr, Fig 1D) [34]. The same four major population groups are detected using half the number of markers (n = 400 randomly chosen SNPs, Fig 2). Similarly, analysis of samples by individual year (i.e., malaria transmission season) yields the same population clusters (Fig 3) with no detectable difference in the relative proportions of the three population groups across the two transmission seasons (chi-square = 0.457, df = 2, p = 0.796). The stability of the PCA results indicates that identification of major subgroups for this local population is comprehensive, and that it is unlikely that other major genome wide subdivision is present in the population sample.
Genetic association for susceptibility to P. falciparum
The Goundry subgroup displays significantly higher susceptibility to infection with wild P. falciparum as compared to A. coluzzii and A. gambiae (p < 1 × 10−4), consistent with previous observations [14] but here confirmed with independent samples. We also find no difference for P. falciparum infection susceptibility between A. coluzzii and A. gambiae (p = 0.31), which is in accord with multiple published reports [35][36][37][38][39].
Genomic patterns of LD and recombination within population subgroups
Genome-wide marker density in the current study is substantially higher than the density of microsatellites previously employed in population-level studies using similarly large sample sizes [11,14], and consequently permits examination of finer patterns of genomic differentiation between taxa. Markers on chromosome 3 have been previously employed as essentially neutral loci to estimate genome-wide differentiation, independent of potentially confounding features such as inversions or major A. coluzzii/gambiae-related elements such as SI [10,11,14,40]. Non-overlapping sliding window analysis of uniformly spaced SNPs across chromosome 3 indicates that there is little or no differentiation between A. coluzzii and A. gambiae across most of the genome (Fig 4A and 4D), consistent with reports of extensive gene flow between them [14,16,17,32,41]. The greatest levels of differentiation between A. coluzzii and A. gambiae are localized in the centromeric SI (Fig 4B and 4C). In distinction, the Goundry group diverges sharply from A. coluzzii and A. gambiae across the genome, even in the windows that do not separate A. coluzzii and A. gambiae (Fig 4A and 4D).
We scanned the genomes of A. coluzzii and A. gambiae for signals of population genetic differentiation, in order to identify positions displaying long-range LD beyond the well-studied SI of the centromeric regions. Local correlation due to physical linkage on the chromosome is evident across centromeric regions (Fig 5, boxes), consistent with the low recombination rates in centromeres. Marked linkage disequilibrium is also detected across chromosomes between physically unlinked sites (Fig 5, circles), consistent with locations of the centromeric SI [42]. Because the X-chromosome SI is the main driver of the observed genome-wide disequilibrium between A. coluzzii and A. gambiae ([28] and Fig 5), we screened for genome-wide SNPs correlated with it outside the SIs (S2 Table). The candidates are distributed over 45 genes, thus some genes carry multiple SNPs. Of these, 24 SNPs in 20 genes lie outside the previously identified centromeric SIs. Only one of these genes (Tep3) has been previously implicated in A. gambiae/A. coluzzii differentiation [43], and thus the other 19 represent novel candidate genes associated with population differentiation between the two species. Known or predicted gene functional categories include immunity, nervous system and development (S2 Table), and offer multiple plausible candidates for follow-up studies, including testing within A. coluzzii and A. gambiae populations at other sites where they are sympatric. In contrast to the above among-subgroup analysis, LD signals within population subgroups appeared as expected for the SNP marker density, detectable mainly at centromeres and segregating inversions (Fig 6).
Fig 5. Signals of population differentiation between A. gambiae and A. coluzzii. We screened for genome-wide linkage disequilibrium (LD) outside the centromeric Speciation Islands (SI). The individual SNP that is the most informative for the observed genome-wide disequilibrium between A. coluzzii and A. gambiae is position X.23852135, located within the X-chromosome SI (see Methods). This SNP was tested for LD with all other genome-wide SNPs at r² > 0.5 and minor allele frequency ≥10%. The plot indicates SNPs highly correlated with X.23852135 under these parameters. 66 SNPs outside of the centromeric SI met selection and quality criteria as new candidate markers of subgroup/sister taxa differentiation (S2 Table). Circles highlight linkage patterns across chromosomes, while squares indicate the high-LD centromeric regions of each chromosome. doi:10.1371/journal.pone.0145308.g005
Candidate diagnostic SNPs for molecular attributes
We identified a set of 21 candidate SNPs that were highly informative for the detection of mosquito genetic attributes. Seven highly informative SNPs were identified for each attribute, i) karyotype of the 2La inversion, ii) genotype of the para gene kdr mutation associated with pyrethroid resistance, and iii) A. gambiae/A. coluzzii differentiation. Sequenom genotyping assays were developed and 80 individual samples were genotyped (S3 Table). Genotype calls from Illumina and Sequenom were highly concordant. The SNPs represent a candidate diagnostic set highly efficient for the local population in Burkina Faso, but as yet untested for samples from other geographic sites. Diagnostic utility of these candidate SNPs for the research community will thus require additional confirmation in other populations.
Population structure determined by local population sampling
We sampled a local West African mosquito population over time and genotyped it with a large number of genome-wide markers selected for information content, but without regard to gene functional category. This approach yielded a comprehensive characterization of local population substructure, an important prerequisite for accurate assessment of vector control interventions, as well as for association studies linking measured phenotype to underlying genotype. The use of 800 markers in a ~280 Mb genome was more than sufficient to detect the level of population subdivision that, if left undetected, would likely lead to spurious results in a genome-wide association study [45]. As few as 400 random markers (~2 markers/Mb) were adequate to detect the same major subdivisions.
Although whole-genome resequencing has become more accessible, resequencing >400 mosquitoes from one geographic site for a single project would still be costly. The SNP genotyping results obtained here have been used to identify small numbers of candidate ancestry-informative SNPs for different attributes (S3 Table). However, general applicability of this SNP set to other mosquito populations will require additional validation using samples collected over the species/attribute range. Ultimately, simplified and ideally field-deployable assays would allow routine acquisition of deep population-genetic information from large-scale field surveys conducted for biological studies or for the evaluation of vector control.
Regarding the Goundry form, the desirable SNPs for a diagnostic assay would be fixed differences present in Goundry and absent from non-Goundry individuals. SNPs identified in the current study were ascertained from available A. gambiae and A. coluzzii genome sequence. Some of these variants display under- or over-enrichment in Goundry and can be used for a partially efficient probabilistic assay, but by definition the Goundry fixed differences that would be most informative cannot be identified from non-Goundry sequence, and must await whole-genome sequences from Goundry mosquitoes.
New candidate loci for population differentiation between A. coluzzii and A. gambiae
The mechanisms of mating isolation and assortative mating between A. coluzzii and A. gambiae are not known, but appear to be largely prezygotic because the species hybridize in the laboratory [46,47]. The known genomic regions of highest genetic differentiation between A. coluzzii and A. gambiae are the SI in the centromeres [17,32], but this likely stems from ascertainment bias because previous studies used minimal marker density and/or sample depth, and under those conditions the power to detect differentiation is largely limited to regions of extended LD, such as centromeres. It is also likely that centromeric regions will retain a historic signal of differentiation longer due to the diminished rates of recombination. We now find 24 SNPs in 20 genes outside of the centromeric regions that highly correlate with the X chromosome diagnostic for A. coluzzii and A. gambiae. None of these SNPs occur in the 2R non-centromeric island published by Turner et al. [17]. Five of these SNPs occur in a single gene, Tep3, and a 100 kb genomic region containing Tep3 was previously highlighted as differentiated between A. gambiae and A. coluzzii by White et al. [43]. Thus, we report previously unrecognized cases of 19 genes that contain a significantly differentiated SNP and represent new candidate loci for association with population differentiation phenomena such as reproductive isolation and subgroup-specific adaptation between A. coluzzii and A. gambiae mosquitoes.
Fig 6. Genome-wide linkage disequilibrium within population subgroups. LD was measured by r² for A) A. coluzzii, B) A. gambiae and C) the Goundry form. At the study site, the 2La inversion is nearly fixed in A. coluzzii and A. gambiae but segregates in the Goundry form, hence the detectable LD across the 2La inversion only in Goundry. Also, the centromeric region of the second chromosome carrying the insecticide resistance mutation, kdr of the para gene [44], is largely fixed in A. gambiae but segregates in both A. coluzzii and Goundry forms. These plots include all SNPs that passed quality control and were not fixed within population taxa. doi:10.1371/journal.pone.0145308.g006
Of the 19 newly-identified non-centromeric genes (S2 Table), one has predicted function in wing imaginal disc development. There are reported differences in wing morphology between A. coluzzii and A. gambiae mosquitoes that are proposed to underlie the production of different wingbeat harmonic frequencies, thus permitting mate discrimination by A. coluzzii and A. gambiae mosquitoes [48,49]. Two new candidates have established roles in immunity (Toll1A, SRPN4 [50][51][52][53]), along with Tep 3. These immune genes could be associated with the previously hypothesized exposure of the population subgroups to distinct pathogen profiles in different ecological habitats [43,54,55]. Finally, four other candidates with predicted central nervous system functions could underlie observed behavioral differences tied to ecological specialization between A. coluzzii and A. gambiae for oviposition site choice, formation of mating swarms, or other phenotypes [21,23,56]. The twelve other candidate genes have little functional data. Together, these genes represent new candidate loci located outside the previously-studied centromeric SI intervals, potentially associated with features of population differentiation between A. coluzzii and A. gambiae. Because we analyzed sympatric mosquitoes collected from a single defined geographic region, geographic variables do not underlie the differentiation signal, although the results cannot necessarily be generalized to populations in other regions of West Africa without sampling and testing at other sympatric sites.
Materials and Methods
Mosquito sampling and P. falciparum infection
Mosquitoes were sampled as larvae using the standard dipping method or as adults by aspirator catch, as previously described in detail [14]. Mosquitoes were collected in the Sudan Savanna region of Burkina Faso in the village of Goundry (12°30´N, 1°20´W), 30 km N of the capital city, Ouagadougou, across months of the rainy season during the 2007 and 2008 malaria transmission seasons [57]. Permission was obtained from Goundry village authorities to collect mosquitoes in the village. Larval-caught A. gambiae species complex mosquitoes were brought to the insectary in Ouagadougou where they were raised under standard laboratory rearing conditions to adulthood. Following emergence, 3 day old adults were challenged with wild P. falciparum by experimental infection. Feeding was done on an artificial membrane in a water-jacketed feeding device as described previously, using gametocytemic blood obtained from study participants [35]. Unfed mosquitoes were excluded from analysis and infection levels for fed mosquitoes were determined by counting midgut oocysts 7-8 days post infection. Genomic DNA was extracted from carcasses for genotyping.
Illumina chip design and hybridization
To design the custom SNP chip, polymorphism data were combined from individual sources [54,55,58] as well as an analysis of the A. coluzzii and A. gambiae genome sequences available at VectorBase. At the time of the chip design, the A. coluzzii and A. gambiae genome assemblies were not available at VectorBase and raw sequence read data were used for SNP design. SNPs were identified by alignment of the A. coluzzii and A. gambiae sequence reads against the assembled genome of the PEST strain using BLAST. We summarized all high-confidence alignments in a simple frequency table. For every position in the PEST genome we recorded the number of A, G, C, T nucleotides observed for that position. To be considered viable for inclusion on the chip, a SNP had to meet the following criteria: i) have a minimum read depth of 10, ii) be surrounded by ~200 bp of SNP-free sequence, iii) be variable across any set of samples used for SNP ascertainment, iv) have a minor allele frequency of at least 15%. We submitted to Illumina 5995 candidates, 4840 from shotgun sequence and 1155 from 3 independent deep resequencing projects. The final catalog of 1536 SNPs was selected from 3394 SNPs that passed Illumina design criteria, 1358 from shotgun sequence and 178 from deep sequencing projects. The complete set of SNPs typed on the Illumina chip and their primers is available in S1 Table. The chip includes a uniformly-spaced genome-wide marker set (n = 812), as well as additional marker coverage (n = 724) within certain genomic features such as chromosomal Speciation Islands (SI). Overall, the chip types 1536 SNPs, with an average density of 1 marker every ~340 kb for the uniformly-spaced set. The chip is thus well-powered for accurate and comprehensive detection of population stratification and related genome features, although not for genome-wide association, given that linkage disequilibrium (LD) in A. gambiae decays to uninformative levels on average within <500 bp [54]. Hybridization of the chips was done using standard Illumina procedures in the Boston Children's Hospital Molecular Genetics Core Facility (IDDRC).
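As an illustration of these inclusion criteria, the following is a minimal Python sketch operating on a per-position nucleotide frequency table; the table layout and column names are assumptions, and the flanking-window check is applied among the remaining candidates as an approximation of the "~200 bp of SNP-free sequence" criterion (the original pipeline was built around BLAST alignments against the PEST assembly).

```python
import numpy as np
import pandas as pd

MIN_DEPTH = 10      # minimum read depth at the site
MIN_MAF = 0.15      # minimum minor allele frequency
FLANK = 200         # required SNP-free flank on each side (bp)

def candidate_snps(freq: pd.DataFrame) -> pd.DataFrame:
    """freq has one row per PEST reference position with columns
    ['chrom', 'pos', 'A', 'C', 'G', 'T'] holding observed nucleotide counts."""
    counts = freq[["A", "C", "G", "T"]].to_numpy(float)
    depth = counts.sum(axis=1)
    second_allele = np.sort(counts, axis=1)[:, -2]        # second most frequent base
    maf = np.divide(second_allele, depth, out=np.zeros_like(depth), where=depth > 0)
    variable = (counts > 0).sum(axis=1) >= 2              # at least two alleles observed
    snps = freq[(depth >= MIN_DEPTH) & variable & (maf >= MIN_MAF)].copy()

    # Require the nearest neighbouring candidate SNP to be more than FLANK bp away.
    kept = []
    for _, grp in snps.groupby("chrom"):
        grp = grp.sort_values("pos")
        gap_prev = grp["pos"].diff().fillna(FLANK + 1)
        gap_next = (-grp["pos"].diff(-1)).fillna(FLANK + 1)
        kept.append(grp[(gap_prev > FLANK) & (gap_next > FLANK)])
    return pd.concat(kept) if kept else snps
```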
Genotyping and data analysis
Due to the low quantity of DNA available from individual mosquitoes, all DNA samples were subjected to whole genome amplification (Genomiphi, GE Health Sciences) using supplied protocols. DNA was then ethanol precipitated, concentrations were determined by the Picogreen method [59], and 500 ng was submitted for Illumina chip hybridization. We used a two-stage approach, hybridizing a pilot (n = 96) and an expanded (n = 384) set of samples. The first 96 samples were used to validate array performance and included duplicates (n = 24) to verify reproducibility of genotype calls and provide quality control metrics. All mosquitoes genotyped in the larger expanded set of samples came from five successful experimental infections as defined previously [4,60]; briefly, sessions with oocyst infection prevalence ≥30% and oocyst intensity in at least one individual mosquito in the infected group of ≥10 oocysts. This infection quality-control cutoff assures that all analyzed individuals were exposed to an experimental infection with the power to distinguish levels of susceptibility, free from confounding technical or other factors influencing infection success. Of the 456 unique samples genotyped here, only 160 samples (35%) were previously genotyped and analyzed, using <10 microsatellites on chromosome 3 [14]. Thus, genotyping in the current study was carried out at much higher marker density than in the previous study.
Data were analyzed using the BeadStudio package (Illumina) following the manufacturer's guidelines [61]. Quality control was carried out in two steps: i) Manual curation. Following standards recommended by the manufacturer, boundaries of poorly clustered SNPs were either manually redefined or the SNPs were removed. Because we expected distinct population subgroups segregating within our overall sample, we used Hardy Weinberg Equilibrium (HWE) statistics as a trigger for manual inspection but we did not reject well-clustered SNPs violating HWE. In addition, samples with low call rate were removed, which left more than 88% of samples showing a call rate higher than 85%. ii) SNP call rate. SNPs were removed if they failed in more than 25% of the mosquitoes, which resulted in removal from the analysis of only 89 SNPs (~6%). After application of all QC filters, high-quality data remained for 422 mosquito samples for 1447 genome-wide SNPs, yielding a 94% SNP conversion rate. These 422 samples included 56 A. coluzzii, 52 A. gambiae, 284 Goundry form, and 30 A. arabiensis. The distribution of GenTrain scores, a metric of genotype quality for GoldenGate assays (produced by an algorithm implemented in the Illumina software application, BeadArray GenCall [61]) is shown for SNPs passing the above QC filters (S1 Fig). For PCA analyses presented in Figs 1-4, standard multidimensional scaling as implemented in R (cmdscale in the Stats package) was used for clustering.
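The clustering step can be sketched as follows; the authors used R's cmdscale, and the NumPy version below is an equivalent classical multidimensional scaling on a genotype distance matrix. The 25% SNP-failure threshold mirrors the QC description, while the 85% sample call-rate cutoff, the 0/1/2 genotype encoding and the mean imputation of missing calls are assumptions made for illustration.

```python
import numpy as np

def qc_filter(G, snp_max_missing=0.25, sample_min_call=0.85):
    """G: samples x SNPs matrix of genotypes coded 0/1/2, with np.nan for no-calls."""
    snp_ok = np.mean(np.isnan(G), axis=0) <= snp_max_missing
    G = G[:, snp_ok]
    sample_ok = np.mean(~np.isnan(G), axis=1) >= sample_min_call
    return G[sample_ok]

def classical_mds(G, k=2):
    """Equivalent of R's cmdscale: embed samples in k dimensions from
    pairwise Euclidean distances between genotype vectors."""
    X = np.where(np.isnan(G), np.nanmean(G, axis=0), G)     # mean-impute missing calls
    sq = np.sum(X * X, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)        # squared distance matrix
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                     # double-centering matrix
    B = -0.5 * J @ D2 @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

Plotting the first two coordinates of the filtered genotype matrix and colouring the points by the species diagnostic would reproduce, in spirit, the cluster plots shown in Figs 1-3.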
A subset of samples (n = 24) were hybridized in duplicate, and over 99% of called genotypes were concordant. For additional validation of genotype calls using an independent technology and to test a set of SNPs with high informative value for molecular attributes, a subset of 21 SNPs were converted to Sequenom assays and 80 mosquito samples genotyped by this independent method. Across all 21 SNPs, the genotype concordance between Illumina and Sequenom averaged 95.5%, ranging from 89% to 99% (S2 Fig). Sequenom Mass Array genotyping was done at the University of Minnesota Genomics Center.
Analysis of infection phenotypes
To test for differences in infection susceptibility across subgroups, analyses were carried out with infection as a blocking factor, and p-values were determined for each individual infection using the Chi Square test and combined p-values across infections via the method of R.A. Fisher [62]. Most of the individuals in the expanded sample set (n = 335) had accompanying infection phenotype data. The phenotyped sample set of 335 were generated from five independent experimental infections, with each infection averaging 67 individuals (range 39-89 individuals). Each experimental infection included individuals from each of the 3 population groups, A. gambiae, A. coluzzii and the Goundry form.
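A minimal sketch of this blocking analysis, assuming per-infection 2x3 contingency tables of infected/uninfected counts by subgroup (SciPy is used here in place of whatever software the authors employed):

```python
import numpy as np
from scipy.stats import chi2_contingency, combine_pvalues

def combined_infection_test(tables):
    """tables: one 2x3 array per experimental infection, rows = infected/uninfected,
    columns = A. gambiae, A. coluzzii, Goundry. Each infection is a blocking factor."""
    per_infection_p = [chi2_contingency(np.asarray(t))[1] for t in tables]
    stat, p_combined = combine_pvalues(per_infection_p, method="fisher")
    return per_infection_p, p_combined
```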
Population subgroup differentiation and detection of differentiated SNPs
Linkage disequilibrium (LD), as analyzed and depicted in Figs 5 and 6, was computed using the LD() function from the genetics package in the R statistical package. For plotting the LD map, the image() function was used. The scale bar was drawn with the function image.plot() from within the fields package in R.
To identify SNP genetic correlation across chromosomes as shown in the centromeric regions (boxes in Fig 5), a selection filter was applied to all A. coluzzii and A. gambiae mosquitoes. Centromeric regions were defined as ±5 Mb from the centromere, for a total area of 10 Mb, 5 Mb on each chromosome arm. Initially, we determined the individual SNP that was in LD (r² > 0.8) with the maximum number of other SNPs across the genome, imposing a SNP inclusion cutoff at minor allele frequency ≥10%. This SNP was on the X chromosome at position 23852135. This region of the X chromosome is the most informative for assignment of A. coluzzii and A. gambiae [28]. This SNP was then used in a second screen to find all other genome-wide SNPs in LD with this SNP (X.23852135) at r² > 0.5, minor allele frequency ≥10%. These SNPs, each individually highly correlated with X.23852135, are presented in S2 Table. The 66 SNPs that mark differentiation outside speciation islands were specifically quality-controlled by examining the distribution of their GenTrain scores, and there was no difference between the distribution of these 66 markers and the rest of the markers that passed controls (Wilcoxon rank test p = 0.26 and S1 Fig).
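The two-stage screen can be sketched as below, with r² computed as the squared Pearson correlation between 0/1/2 genotype vectors, a common approximation of composite LD; the authors used the genetics package in R, and this simplified restatement assumes complete genotype calls. Indices refer to the MAF-filtered marker set and would be mapped back to SNP IDs via np.where(keep)[0].

```python
import numpy as np

def r2_matrix(G):
    """G: samples x SNPs genotype matrix coded 0/1/2 (no missing values).
    Returns the SNP x SNP matrix of squared Pearson correlations."""
    return np.square(np.corrcoef(G, rowvar=False))

def ld_screen(G, maf_min=0.10, anchor_r2=0.8, screen_r2=0.5):
    p = G.mean(axis=0) / 2.0                       # frequency of the "1-coded" allele
    keep = np.minimum(p, 1.0 - p) >= maf_min       # MAF filter
    G = G[:, keep]
    r2 = r2_matrix(G)
    np.fill_diagonal(r2, 0.0)
    # Stage 1: the SNP in LD (r2 > 0.8) with the most other SNPs genome-wide.
    anchor = int(np.argmax((r2 > anchor_r2).sum(axis=1)))
    # Stage 2: all SNPs correlated with that anchor at r2 > 0.5.
    partners = np.where(r2[anchor] > screen_r2)[0]
    return anchor, partners
```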
Ethical considerations
For collection of blood from P. falciparum gametocyte carriers for experimental membrane feeder infection of mosquitoes, the study protocol was reviewed and approved by the national health ethical review board IRB (Commission Nationale d'Ethique en Santé) of Burkina Faso, which issued ethical protocol N°2006-032 for the described studies. The study procedures, benefits and risks were explained to subjects and their written informed consent was obtained. The consent procedure was approved by the IRB. Subjects who had given consent were brought to CNRFP the day of the experiment for gametocyte carrier screening. All children were followed and symptomatic subjects were treated with the combination of artemether-lumefantrine (Coartem) according to relevant regulations of the Burkina Faso Ministry of Health.
"Biology"
] |
Vitamin K Epoxide Reductase Complex Subunit 1-Like 1 (VKORC1L1) Inhibition Induces a Proliferative and Pro-inflammatory Vascular Smooth Muscle Cell Phenotype
Background: Vitamin K antagonists (VKA) are known to promote adverse cardiovascular remodeling. Contrarily, vitamin K supplementation has been discussed to decelerate cardiovascular disease. The recently described VKOR-isoenzyme Vitamin K epoxide reductase complex subunit 1-like 1 (VKORC1L1) is involved in vitamin K maintenance and exerts antioxidant properties. In this study, we sought to investigate the role of VKORC1L1 in neointima formation and on vascular smooth muscle cell (VSMC) function. Methods and Results: Treatment of wild-type mice with Warfarin, a well-known VKA, increased maladaptive neointima formation after carotid artery injury. This was accompanied by reduced vascular mRNA expression of VKORC1L1. In vitro, Warfarin was found to reduce VKORC1L1 mRNA expression in VSMC. VKORC1L1-downregulation by siRNA promoted viability, migration and formation of reactive oxygen species. VKORC1L1 knockdown further increased expression of key markers of vascular inflammation (NFκB, IL-6). Additionally, downregulation of the endoplasmic reticulum (ER) membrane resident VKORC1L1 increased expression of the main ER Stress moderator, glucose-regulated protein 78 kDa (GRP78). Moreover, treatment with the ER Stress inducer tunicamycin promoted VKORC1L1, but not VKORC1 expression. Finally, we sought to investigate, if treatment with vitamin K can exert protective properties on VSMC. Thus, we examined effects of menaquinone-7 (MK7) on VSMC phenotype switch. MK7 treatment dose-dependently alleviated PDGF-induced proliferation and migration. In addition, we detected a reduction in expression of inflammatory and ER Stress markers. Conclusion: VKA treatment promotes neointima formation after carotid wire injury. In addition, VKA treatment reduces aortal VKORC1L1 mRNA expression. VKORC1L1 inhibition contributes to an adverse VSMC phenotype, while MK7 restores VSMC function. Thus, MK7 supplementation might be a feasible therapeutic option to modulate vitamin K- and VKORC1L1-mediated vasculoprotection.
INTRODUCTION
Oxidative stress and inflammation contribute to adverse cardiovascular remodeling, eventually resulting in atherosclerosis and vascular dysfunction (1). Vascular smooth muscle cells (VSMC) participate in atherosclerotic plaque growth in different ways. In response to atherosclerotic stimuli, VSMC may turn hyper-proliferative and promote neointima formation. Neointima formation is the main driver of in-stent restenosis and is further enhanced by pro-inflammatory stimuli in the vasculature (2,3). The ability of VSMC to limit vascular inflammation and to preserve their physiological phenotype is essential to slow the progression of atherosclerosis and neointima formation in particular.
Vitamin K describes a group of fat-soluble vitamins that are required as co-factors for γ-carboxylation of proteins (4). Vitamin K1 and the K2 vitamins menaquinone-4 (MK4) and menaquinone-7 (MK7) are the best-known members of the vitamin K group. Among these, MK7 is the most potent K vitamin in human physiology (5). In the vitamin K cycle, vitamin K is recycled by reactions catalysed by vitamin K epoxide reductase complex subunit 1 (VKORC1) and its recently described isoenzyme, vitamin K epoxide reductase complex subunit 1-like 1 (VKORC1L1). These two enzymes serve as targets of vitamin K antagonists (VKA), such as Warfarin, that are used in therapeutic anticoagulation regimens (6,7). Evidence has emerged that VKA also contribute to cardiovascular damage (8)(9)(10). In addition, dietary supplementation with vitamin K was shown to be a safe and feasible option to decelerate vascular disease (11)(12)(13). Furthermore, vitamin K provides anti-oxidative properties and acts as a potent free-radical scavenger (14,15). Hence, elucidating the mechanism of VKA-induced vascular dysfunction and vitamin K-dependent vasculoprotection is of relevance. Compared to VKORC1, VKORC1L1 is expressed at lower levels in the liver, the place where coagulation factors are synthesised (16), and thus has a lower ability to γ-carboxylate the coagulation factors (7,17). Upon oxidative stress, VKORC1L1 is upregulated, while VKORC1 is downregulated (18). Furthermore, VKORC1L1-knockout cells are far more sensitive to oxidative stress as compared to VKORC1-knockout cells (19). Thus far, there is no data regarding the significance of VKORC1L1 in cardiovascular diseases, although a recent mRNA expression analysis described VKORC1L1 as a putative oxidative-stress-related gene in coronary artery disease (20).
Abbreviations: CHOP, C/EBP homologous protein; DCFDA, 2′,7′-dichlorofluorescein diacetate; ER, Endoplasmic reticulum; GRP78, Glucose-regulated protein 78 kDa; HCASMC, Human coronary artery smooth muscle cells; IL-6, Interleukin 6; MK7, Menaquinone-7; NFκB, Nuclear factor 'kappa-light-chain-enhancer' of activated B-cells; oxLDL, Oxidized low-density lipoprotein; PDGF, Platelet-derived growth factor; ROS, Reactive oxygen species; siRNA, Small interfering RNA; VKA, Vitamin K antagonists; VKORC1, Vitamin K epoxide reductase complex subunit 1; VKORC1L1, Vitamin K epoxide reductase complex subunit 1-like 1; VSMC, Vascular smooth muscle cells.
Both VKOR proteins are localized in the membrane of the endoplasmic reticulum (ER) (18). In the ER, proteins are folded under sensitive environmental conditions and at a precisely regulated redox state. Disturbances of these conditions constitute ER stress, a major contributor to vascular inflammation (21,22). The VKOR system is known to interact with protein folding (23), but the role of VKOR proteins in ER stress remains unclear.
Therefore, the aim of this study was to analyse the role of VKORC1L1 in neointima formation and on VSMC inflammation and proliferation.
Animal Procedures and Diets
Animal experiments were performed in accordance with the animal protection law stated in the German civil code and the National Office for Nature, Environment and Consumer Protection in Recklinghausen, North Rhine-Westphalia (Landesamt für Natur, Umwelt und Verbraucherschutz; LANUV). We used six-week-old C57BL/6 wild-type mice (Charles River, Sulzfeld, Germany). Animals were maintained in a 22 °C room with a 12 h light/dark cycle and received food and drinking water ad libitum. The surgical intervention was performed under a dissecting microscope (MZ6; Leica). For the carotid-injury procedure, mice were anesthetized with intraperitoneal injections of 150 mg/kg body weight ketamine hydrochloride (Ketanest, Riemser, Greifswald, Germany) and 0.1 mg/kg body weight xylazine hydrochloride (Ceva, Duesseldorf, Germany). Access to the carotid artery was obtained by performing a midline skin incision from directly below the mandible toward the sternum. Careful preparation of the left common carotid artery and carotid bifurcation was performed. Two filaments were placed in the proximal and distal segments of the external carotid artery, and the distal ligature was then pinched with clamps. The internal and common carotid arteries were temporarily occluded to perform a transverse arteriotomy between the ligatures of the external carotid artery and to insert a flexible wire (0.13 mm in diameter) that is slightly curved (30°) at the tip and completely fills out the vessel. For endothelial denudation, a pullback of the wire was performed in a rotating manner five times per animal, and the wire was then removed. The external carotid artery was then closed below the site of puncture with a ligature, and the blood flow of the common and internal carotid arteries was released. The skin was then sutured. The mice were postoperatively allowed to recover individually. During the recovery period, animals were kept in a warm environment by use of water-circulating heat pads and were closely observed. Food and water intake were monitored. The recovery period was four h long. Full recovery was confirmed by return of the righting reflex and stable respirations. Thereafter, the animals were returned to their littermates and randomized to three different diet groups. Group A received vehicle, group B received a vitamin K1 (1.5 mg/g food)-enriched diet, and group C received a vitamin K1 (1.5 mg/g food) and warfarin (2 mg/g food)-enriched diet. The composition of the food regimes was adapted according to Schurgers et al. (9). After 14 days, the mice were sacrificed. All mice were anesthetized using 2% isoflurane and an intraperitoneal injection of fentanyl (0.05 mg/kg) and midazolam (5 mg/kg), then euthanized by cervical dislocation. All tissue and blood samples were collected and processed immediately after the mice were sacrificed.
For factor X activity measurement, blood samples (100 µl) were collected, immediately added to citrate-containing tubes in a ratio of 10:1 (blood/citrate) and mixed. After centrifugation, the resulting plasma was diluted in factor X-deficient plasma. This step is necessary to ensure that factor X activity is the limiting factor in the following thromboplastin time measurement. The thromboplastin time corresponding to factor X activity was then measured and normalized as a percentage of a previously determined normal plasma. Factor X activity was measured at the Institute of Experimental Haematology and Transfusion Medicine at the University Hospital of Bonn, Germany.
Vitamin K1 and warfarin were purchased from Sigma-Aldrich (Cat# 47773 and A2250) and provided to Ssniff-Spezialdiäten, Soest, Germany. Ssniff-Spezialdiäten then mixed vitamin K1 or vitamin K1 and warfarin into the regular mice diet.
For siRNA transfection, the cells were transfected with HiPerFect Transfection Reagent (Qiagen, Netherlands; Cat# 301705), using the Reverse Transfection Protocol. Briefly, HiPerFect plus an siRNA directed against VKORC1L1 (Qiagen, # SI04138407) or a scrambled siRNA (Qiagen, # 1027280), not directed toward any mRNA, were diluted in serum-free media and mixed by vortexing. The transfection mixtures were incubated at room temperature for ten min and then transferred into empty cell-culture wells. In the meantime, cells were trypsinised and then seeded on top of the transfection complexes. The final siRNA concentration was 10 nM and 0.5% V/V for HiPerFect Transfection Reagent. Following transfection, the cells were incubated under the above-mentioned conditions for 46 h before a readout was taken.
Viability Assay
alamarBlue™ HS Cell Viability Reagent (Thermo Fisher Scientific, USA; Cat# A50100) was added onto pre-treated cells.
The viability reagent and cell mixtures were incubated for 4 h under standard conditions (37 °C, 5% CO2, 100% relative humidity) and protected from light. Then the absorbance was measured using an Infinite M200 Microplate Reader (Tecan, Switzerland).
Western Blot
Cells were lysed with RIPA Buffer (Sigma-Aldrich, Cat# R0278) containing 1:25 Protease Inhibitor Cocktail (Roche, Cat# 4693132001). The lysates were centrifuged at 13,000 × g for 10 min at 4 °C. Protein concentration in the supernatant was quantified by a Qubit Protein Assay (Thermo Fisher Scientific, Cat# Q33211) in a Qubit-4 Fluorometer (Thermo Fisher Scientific). 25 µg of the resulting protein were loaded onto an SDS-PAGE gel and electrophoresis was initiated on a Mini Protean system (Bio-Rad). Proteins were then transferred to a nitrocellulose membrane (Carl Roth, HP40.1) for western blotting. For blocking of the membrane, we used 5% BSA for one h at room temperature. Afterwards the membrane was incubated with the respective primary antibody overnight at 4 °C.
The next morning, the membrane was washed three times with tris-buffered saline containing 0.1% Tween 20 before addition of the secondary antibody. After a one-hour incubation at room temperature, the membrane was washed again three times before detection was performed with ECL Western Blot Detection Reagent (Sigma-Aldrich, Cat# RPN2232) on a ChemoCam HR16-3200 Imager (Intas).
Reactive Oxygen Species (ROS) Measurement
The formation of ROS was measured by using L-012 chemiluminescence and 2′,7′-dichlorofluorescein diacetate (DCFDA, Sigma-Aldrich, USA; Cat# D6883) assays. L-012, a luminol derivative, has a high sensitivity for superoxide radicals and does not undergo redox cycling itself. Chemiluminescence was determined over 15 min in a scintillation counter (Lumat LB 9501, Berthold) at one-min intervals.
For the DCFDA assay, cells were seeded one day prior to the experiment onto a dark-bottomed 96-well microplate protected from light and were incubated under standard conditions. After 24 h, the cells were stimulated with 75 µM hydrogen peroxide (H2O2) for one h. Since H2O2 is a potent inducer of free radical formation (24), cells were stimulated with H2O2 to measure ROS formation in response to an oxidative stressor. Then, a dilution of 50 µM DCFDA was added for 45 min. Finally, the DCFDA solution was removed and the cells were washed before the addition of PBS. Fluorescence was detected immediately using a microplate reader at maximum excitation and emission of 492 nm and 527 nm, respectively, as previously described (25,26).
To specifically measure the source of ROS, we utilized an Amplex™ Red Hydrogen Peroxide Assay Kit (Thermo Fisher Scientific, Cat# A22188). In the presence of HRP, Amplex™ Red reacts with H2O2 to form a fluorescent product. Cells were seeded onto 12-well plates and transfected with siRNA against VKORC1L1 or scrambled siRNA. After 46 h, 50 µl of the medium was collected and incubated for one h in the dark at 37 °C with 50 µl of a mixture of HRP (0.1 U/ml) and Amplex™ Red (50 µM). Thereafter, fluorescence was measured in an Infinite M200 Microplate Reader (Tecan, Switzerland; excitation 560 nm, emission 590 nm).
EdU Proliferation Assay
Cell proliferation was analysed using a Click-iT™ EdU Proliferation Assay for Microplates (Thermo Fisher Scientific, Cat# C10499). Briefly, the nucleoside analog EdU (5-ethynyl-2′-deoxyuridine) was added to live cells for 4 h. Then the cells were fixed and incubated with horseradish peroxidase (HRP), which becomes ligated to the DNA-incorporated EdU moiety by using click chemistry. Thereafter, Amplex™ UltraRed, a reagent that is converted to a fluorescent product in the presence of HRP, was added and the fluorescence was measured on a microplate reader at an excitation of 568 nm and an emission of 585 nm.
Scratch Assay
Cultured cells were reverse transfected in 12-well plates as described above. After 48 h, cells were at confluence and a vertical scratch was created using a sterile pipette tip (200 µl). Cells were washed and then cultured in reduced-serum medium (2.5%) to minimize proliferation without inducing extensive apoptosis. A specific position on each well was marked and images were taken using an Axiovert 200M microscope (Carl Zeiss, Germany) after 0, 4, 8, 12, and 24 h. The area of cell migration into the scratched region was measured and calculated as a percentage of the initially scratched area.
Quantitative Real-Time Polymerase Chain Reaction
Total RNA from the cells was isolated using TRIzol™ (Thermo Fisher Scientific, Cat# 15596018) and chloroform phase separation. The amount of RNA collected was quantified using a Nanodrop spectrophotometer (Nanodrop Technologies, USA). 0.5-2 µg of RNA were reverse transcribed using an Omniscript RT Kit (Qiagen, Cat# 205113). Finally, quantitative real-time PCR was performed on a 7500 HT Real-Time PCR machine (Applied Biosystems, USA) using TaqMan gene expression assays (Thermo Fisher Scientific) and Gene Expression Master Mix (Thermo Fisher Scientific, Cat# 4369542). Ct values up to 40 were used for analysis and all samples were run in triplicate. The values were analysed using the comparative Ct (2^−ΔΔCt) method, normalising to 18S ribosomal RNA.
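For reference, a minimal sketch of the 2^−ΔΔCt calculation implied here (function and variable names are illustrative; the study normalised target genes to 18S rRNA and expressed values relative to the control group):

```python
import numpy as np

def fold_change_ddct(ct_target, ct_18s, ct_target_ctrl, ct_18s_ctrl):
    """Relative expression by the 2^-ddCt method.
    ct_*: arrays of Ct values for treated samples; *_ctrl: control-group Ct values."""
    d_ct_treated = np.asarray(ct_target) - np.asarray(ct_18s)        # normalise to 18S
    d_ct_control = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_18s_ctrl))
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)                                           # fold change vs. control

# Example: mean fold change of VKORC1L1 mRNA after treatment, relative to control.
# fold = fold_change_ddct(ct_vkorc1l1_treated, ct_18s_treated,
#                         ct_vkorc1l1_control, ct_18s_control).mean()
```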
Menaquinone-7 Supplementation Experiments
Cells were seeded into cell-culture wells and allowed to adhere. After 48 h, MK7 (Santa Cruz Biotechnology, USA; Cat# sc-218691) dissolved in DMSO (0.5 mg/ml) was mixed with fresh cell medium and added onto the cells. The incubation time was 24 h and the final MK7 concentrations in the medium ranged from 1 to 10 µM. In some cases, cells were co-incubated with oxidized low-density lipoprotein (oxLDL, Thermo Fisher Scientific, Cat# L34357), tunicamycin (Sigma-Aldrich, Cat# T7765) or platelet-derived growth factor (PDGF, Thermo Fisher Scientific; Cat# PHG0044).
Statistical Analysis
Statistical analyses were performed using the software GraphPad Prism 9. Means of two groups were compared with an unpaired t-test. Means of more than two groups were compared by a one-way ANOVA followed by Bonferroni's multiple comparison test. The number of independent experiments as well as the applied tests for statistical significance are reported in the figure legends. All reported p-values are two-sided.
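A hedged sketch of that workflow in Python (the authors used GraphPad Prism 9; SciPy is substituted here, and the Bonferroni step is implemented as pairwise t-tests with an adjusted alpha, which approximates but may not exactly match Prism's post-test):

```python
from itertools import combinations
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: dict mapping group name -> list/array of measurements."""
    names = list(groups)
    if len(names) == 2:                       # two groups: unpaired t-test
        return {"t-test": stats.ttest_ind(groups[names[0]], groups[names[1]])}
    # More than two groups: one-way ANOVA, then Bonferroni-corrected pairwise t-tests.
    results = {"anova": stats.f_oneway(*groups.values())}
    pairs = list(combinations(names, 2))
    adjusted_alpha = alpha / len(pairs)       # Bonferroni correction
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        results[f"{a} vs {b}"] = (t, p, p < adjusted_alpha)
    return results
```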
Neointima Formation After Vascular Injury
In order to understand the possible role of VKORC1L1 in vascular remodeling, we utilised a murine vascular-injury model. C57BL/6 mice were randomized to receive vehicle, a vitamin K1 (1.5 mg/g food)-enriched diet, or a vitamin K1 (1.5 mg/g) and warfarin (2 mg/g)-enriched diet after injury of the carotid artery (Figure 1A). Vitamin K1 was co-administered with Warfarin in order to prevent the internal bleeding that is normally caused by warfarin and to assess warfarin's extrahepatic effect in isolation (27).
Since this is the first study investigating the role of VKORC1L1 in vascular cells, we first performed western blot analyses to confirm expression of VKORC1L1. Indeed, VKORC1L1 is clearly expressed in HCASMC (Figure 2B).
Consistent with the in vivo experiments, we found warfarin treatment to reduce VKORC1L1 mRNA expression in vitro.
FIGURE 1 | Vitamin K antagonism promotes vascular remodeling. (A) Schematic diagram of the design of the carotid artery injury experiment. After the procedure on 6-8 week-old C57BL/6J wild-type mice, they were randomized into three groups that received either vehicle (group A), a vitamin K1 (1.5 mg/g food)-enriched diet (group B), or a vitamin K1 (1.5 mg/g food) and warfarin (2 mg/g food)-enriched diet (group C), n = 6 per group. (B) Images of neointima formation 14 days after carotid artery injury. Vessels were subjected to histological analysis by H.E. staining. (C) Quantitative assessment of neointima formation expressed as the media/intima ratio. (D) VKORC1L1 mRNA expression in the abdominal aorta expressed as 2^−ΔΔCt relative to control. (E) Factor X activity measured in citrate plasma samples after 14 days of treatment. Data are presented as the mean ± SEM; *p < 0.05; one-way ANOVA + Bonferroni's multiple comparison test for (C-E).
Migration of SMC is another major contributor to neointima formation. Therefore, we investigated the effect of VKORC1L1 inhibition on HCASMC migration in vitro. To address this point, wound-scratch assays were performed with transfected HCASMC, and the migration of cells was measured by microscopic visualization after 8 and 12 h. Migration did not differ between VKORC1L1-siRNA-transfected cells and the control group after 8 h (0.32 ± 0.09 vs. 0.37 ± 0.09 migration into the wound scratch, p = 0.3), whereas migration was significantly enhanced in knockdown cells compared to control cells after 12 h (0.49 ± 0.08 vs. 0.60 ± 0.05 migration into the wound scratch, p = 0.02, Figures 2E,F). After 24 h, migration into the scratch was complete for both VKORC1L1-siRNA-transfected and control cells. Taken together, these data suggest that VKORC1L1 downregulation enhances HCASMC viability and migration, both of which are crucial steps in neointima formation.
VKORC1L1-Inhibition Promotes Reactive Oxygen Species (ROS) Formation
VKORC1L1 was found to exhibit anti-oxidative effects in human embryonic kidney cells, in which VKORC1L1 deficiency increased ROS formation (18,19). Thus, we sought to investigate whether VKORC1L1 plays a similar role in HCASMC.
VKORC1L1 Regulates Vascular Inflammation in HCASMC
ROS formation and inflammation are closely connected in the pathogenesis of cardiovascular diseases, including atherosclerosis. A pathological increase in ROS induces pro-inflammatory pathways, including NF-κB signalling (28,29). Consequently, we next investigated whether VKORC1L1 knockdown also results in increased inflammatory signalling.
To study the effect of VKORC1L1 on vascular inflammation, we measured the mRNA expression of the pro-inflammatory markers NF-κB and IL-6 by RT-PCR after transfection of siRNAs directed against VKORC1L1 or a scrambled control sequence. VKORC1L1 downregulation increased the mRNA expression of NF-κB (1.27-fold ± 0.32 vs. control, p = 0.04) and IL-6 (1.18-fold ± 0.12 vs. control, p = 0.02, Figure 4A).
Menaquinone-7 (MK7) Alleviates HCASMC Remodeling, Inflammation and ER Stress
MK7 is the most potent K vitamin in humans and is thought to exert protective effects against the progression of cardiovascular disease (5,30). Therefore, we sought to investigate its role in VSMC remodeling, inflammation and ER stress in vitro.
DISCUSSION
In this study, we investigated for the first time the role of the VKOR isoenzyme VKORC1L1 in vascular biology. We demonstrated that VKA-promoted adverse remodeling is accompanied by a downregulation of VKORC1L1 in vivo. In vitro, we showed that VKORC1L1 downregulation promotes maladaptive remodeling and inflammation in vascular smooth muscle cells. Moreover, we describe a novel connection between the vitamin K cycle and oxidative protein folding. VKORC1L1 downregulation promotes ER stress, a major driver of vascular inflammation. Finally, we present data showing that the vitamin K derivative menaquinone-7 is able to alleviate remodeling, inflammation and ER stress in VSMC.
Therapeutic anticoagulation with VKA has been described to be associated with progressive vascular calcification and vulnerable atherosclerotic plaques in both mice and humans (10,31,32). While the role of VKA in calcification by way of the matrix Gla-protein pathway has been extensively studied, further mechanisms of VKA-induced cardiovascular disease remain to be elucidated. Although warfarin intake is known to exert a complex influence on the immune system (33), the role of warfarin in vascular inflammation and anti-oxidation has not yet been resolved.
By utilising a widely used carotid-artery injury model, we could confirm that VKA promote maladaptive vascular remodeling in mice. Recent studies suggested that the VKOR isoenzyme VKORC1L1 has anti-oxidative properties and is less sensitive to VKA than VKORC1 (18,19). In our injury model, two weeks of VKA treatment led to reduced aortic VKORC1L1 mRNA expression. In vitro experiments confirmed that warfarin treatment reduces VKORC1L1 expression in VSMC. Our in vitro results hint at an involvement of VKORC1L1 downregulation in warfarin-induced vascular damage. Therefore, the inhibition of VKORC1L1 appears to be a plausible driver of neointima formation and restenosis.
NF-κB is a transcription factor regarded as one of the key regulators of inflammatory processes, including cardiovascular inflammation (34). We showed that siRNA-mediated VKORC1L1-downregulation increases NF-κB expression in HCASMC. Long-term oral intake of warfarin was previously described to be associated with increased IL-6 production in rats (35). In our study, we found that VKORC1L1-downregulation increased the mRNA expression of IL-6. Warfarin-induced vascular inflammation may therefore be at least partly mediated by VKORC1L1 and the NF-κB/IL-6 pathway.
The cellular localization of the VKOR proteins and the effect that VKORC1L1 has on anti-oxidation and inflammation strongly suggest that there is a connection to the UPR. Multiple studies have described that VKOR proteins interact with protein disulfide-isomerases (23,36). Herein, we present data linking VKORC1L1 to the UPR and ER stress pathways.
After VKORC1L1-downregulation, GRP78, a marker for general UPR activation, was found to be increased while CHOP expression remained unchanged. CHOP is mainly activated by the PERK/eIF2α/ATF4 pathway, while downstream activation of GRP78 is induced by the ATF6 pathway (37,38). Hence, specific UPR pathways are activated upon VKORC1L1 inhibition. Our results extend and support recent data of Furmanik et al., who showed that warfarin induces GRP78 expression and subsequent vascular calcification (39).
Finally, after describing putative mechanisms for VKA-induced vascular dysfunction, we also found that MK7 treatment can dose-dependently reduce vascular remodeling, inflammation and ER stress in vitro. K2 vitamins have previously been attributed anti-inflammatory effects (40,41). However, we show here for the first time that vitamin K can alleviate inflammation in vascular cells. As far as we are aware, this is also the first evidence for vitamin K-promoted repression of ER stress in cells of any type. A possible mechanism underlying the anti-oxidative effects of vitamin K is a vitamin K-induced increase in VKORC1L1 activity, which could then dampen oxidative stress and subsequent inflammation. Further studies are warranted to elucidate these mechanisms.
In conclusion, we found that VKA promote neointima formation and that this is at least partly mediated by VKORC1L1 inhibition. Conversely, MK7 supplementation reduces aberrant proliferation and inflammation in VSMC. Due to its favourable risk profile, MK7 may represent a feasible therapeutic option to reduce neointima formation. The main limitation of our study is the limited set of in vivo readouts; here, we mainly focussed on the in vitro functions of VKORC1L1 in VSMC. Future studies utilising VKORC1L1−/− and VKORC1L1/ApoE−/− mice are currently ongoing and will further elucidate the role of VKORC1L1 during vascular injury and its connection to ER stress. Additionally, we have not yet investigated the presumed in vivo effects of MK7 on inflammation and ER stress. The association of MK7 and VKORC1L1 deficiency will also be the subject of upcoming projects. These results will improve our understanding of the manifold effects of vitamin K on the cardiovascular system.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the National Office for Nature, Environment, and Consumer Protection in Recklinghausen, North Rhine-Westphalia (Landesamt für Natur, Umwelt und Verbraucherschutz; LANUV).
| 5,824 | 2021-10-01T00:00:00.000 | [ "Biology", "Medicine" ] |
A Facile Preparation of Multicolor Carbon Dots
Carbon dots (CDs) have raised broad interest because of their great potential in fluorescence-related fields, such as photocatalysis and bioimaging. CDs exhibit different optical properties when dissolved in various solvents. However, the effects of solvents during the preparation process on the fluorescence emission of CDs are still unclear. In this work, CDs were prepared by a simple one-pot solvothermal route. Citric acid and thiourea were used as precursors. By changing the volume ratio of water to N,N-dimethylformamide (DMF), we obtained color-tunable CDs, with the emission wavelength ranging from 450 to 640 nm. TEM images, Raman and XPS spectra indicate that the particle size of the CDs and the content of surface functional groups (C–N/C–S and C≡N bonds) increase with the increasing ratio of DMF to water, which results in a red shift of the optimal emission wavelength. The prepared multicolor CDs may have prospects in lighting applications. Supplementary Information: The online version contains supplementary material available at 10.1186/s11671-022-03661-z.
Introduction
Carbon dots (CDs) have attracted extensive interest in the past decades due to their distinct characteristics, such as abundant raw materials [1], ease of preparation, and low toxicity [2]. CDs also possess excellent luminescent properties, including tunable excitation and emission wavelengths [3,4]. These unique features endow CDs with great potential in optical and biological applications, such as light-emitting devices [5], photocatalysis [6], biosensing [7] and bioimaging [8]. To date, it is still difficult to prepare multicolor CDs, and researchers have put much effort into extending the emission across the entire visible spectrum. For example, Miao et al. have synthesized a kind of CDs with multiple color emission through controlling the extent of graphitization and surface functionalization [9]. With an increasing ratio of citric acid to urea and increasing reaction temperature, the emission wavelengths shift from blue to red, owing to the increasing conjugation length and quantity of surface functional groups. Zhu et al. prepared multifluorescence CDs via a magnetic hyperthermia method in three different cations [10]. Wang et al. obtained multicolor-emitting N-doped CDs under hydrothermal reaction from ascorbic acid and phenylenediamine precursors [11]. Besides, CDs with a constant chemical structure have also been reported to show multicolor luminescence when the precursor concentration and pH are changed [12].
On the other hand, solvents may play an important role in the photoluminescence (PL) of CDs. For example, Wu's group has developed a type of CDs with tunable, excitation-independent luminescence when dispersed in different solvents [13]. Similarly, Mei et al. have obtained amphipathic CDs with emission tunable from blue to green and excitation-independent behaviour when dissolved in different solvents [14]. Ding et al. successfully prepared CDs with a wide wavelength range by changing the solvent in the reactions and found that the solvent controlled the carbonization processes during the solvothermal reactions [15]. This tunable optical behaviour is ascribed to interactions between the surface groups of CDs and solvent molecules, including hydrogen bonding [16] and dipole-dipole interactions [17]. Bai et al. have synthesized multicolor CDs through a solvent-responsive strategy using r-CDs as the initiator; solvent adhesion or various emissive defects on the surface of CDs can produce tunable luminescence in the various solvents [18]. Tian et al. obtained multicolor CDs by controlling bandgap emission in three different solvents; the extents of decomposition and carbonization of the precursors lead to the emission wavelength shifting from blue to red [19]. Wang's group found that CDs could emit excitation-independent fluorescence from green to red when the as-prepared CDs were dispersed in different solvents, which is attributed to intramolecular charge transfer [20]. Wei et al. have also shown tunable emission when their as-synthesized N-doped CDs were dispersed in different solvents [21].
The above studies focused on the effects of solvent in the post-treatment of CDs. However, the effects of solvents during the preparation process on the fluorescence emission of CDs are still unclear. In this work, we prepared a kind of CDs using a simple one-pot solvothermal route. Citric acid and thiourea were adopted as precursors, and water and N,N-dimethylformamide (DMF) were used as solvents. We mainly investigated the influence of the solvents by changing the volume ratio of water to DMF. The obtained CDs are color tunable, with the emission wavelength ranging from blue to red. In addition, TEM images, Raman and XPS spectra were employed to characterize the particle size of the CDs and the content of surface functional groups.
Materials
Citric acid monohydrate (C6H10O8), thiourea (CH4N2S), DMF (C3H7NO), ethyl acetate (C4H8O2), and petroleum ether (30-60 ℃) were used in the preparation of CDs. All these reagents were purchased from Shanghai Aladdin Biochemical Technology Co. Ltd. Deionized water (18.2 MΩ·cm) was used in all experiments.
Synthesis of Multicolor CDs
Multicolor CDs were prepared by a one-pot solvothermal route (Fig. 1a) using a series of volume ratios of water to DMF. In detail, 1.26 g (0.2 mol/L) citric acid monohydrate and 1.37 g (0.6 mol/L) thiourea were dissolved in 30 ml of mixed solution with various volume ratios of H2O to DMF, namely, 1:0 (pure water), 1:1, 1:5, 1:9 and 0:1 (pure DMF). Then, each solution was transferred into a Teflon-lined stainless-steel autoclave, followed by heating at 160 ℃ for 4 h. The corresponding prepared CDs were denoted as b-CDs, g-CDs, y-CDs, o-CDs, and r-CDs (Fig. 1b), respectively. After that, the solutions were filtered through 0.22 μm membranes. Then, the purified solutions were added into a mixed solvent of petroleum ether and ethyl acetate to remove redundant DMF. Finally, the obtained CDs were used in the following characterizations.
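As a quick sanity check of the precursor amounts, the weighed masses can be recomputed from the stated molar concentrations and the 30 ml batch volume (a back-of-the-envelope Python snippet; the molar masses of citric acid monohydrate and thiourea, 210.14 and 76.12 g/mol, are standard values rather than figures quoted in this paper):

V = 0.030  # L, total solvent volume per batch

# mass = concentration (mol/L) * volume (L) * molar mass (g/mol)
m_citric   = 0.2 * V * 210.14   # citric acid monohydrate, C6H8O7·H2O
m_thiourea = 0.6 * V * 76.12    # thiourea, CH4N2S

print(f"citric acid monohydrate: {m_citric:.2f} g")   # ~1.26 g
print(f"thiourea:                {m_thiourea:.2f} g") # ~1.37 g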
Characterizations
The absorbance of the multicolor CDs was measured by a Shimadzu UV-25500 PC UV/Vis absorption spectrometer. The functional groups of the CDs were measured by Fourier transform infrared spectroscopy (FT-IR, Thermo Scientific Nicolet iS50, America) over the range of 800-4000 cm−1. All fluorescence spectra were recorded by a fluorescence spectrophotometer (F-4600, Hong Kong Tian Mei Co., Ltd.). Raman spectra were measured by a HORIBA Scientific LabRAM HR Evolution high-resolution Raman spectrometer with a 785 nm laser as the excitation source. X-ray photoelectron spectroscopy (XPS) experiments were performed on a ThermoFisher ESCALAB 250Xi spectrometer. Atomic force microscopy (AFM, Bruker, Multimode-8) was employed to obtain the heights and sizes of the CDs. X-ray diffraction (XRD) characterization of the CDs was conducted on a Rigaku Ultima IV. High-resolution transmission electron microscopy (HRTEM) micrographs were acquired at room temperature on an FEI F200C TEM operating at 200 kV.
Characterizations of b-, g-and r-CDs
Because the fluorescence of CDs is related to the particle size and the type and content of the functional groups, we performed a series of characterizations, taking b-CDs, g-CDs and r-CDs as examples. The size and morphology of the CDs were explored by TEM, as shown in Fig. 2a-c. Based on the histograms of size distribution, the average sizes of b-CDs, g-CDs and r-CDs are 2 ± 1 nm, 4 ± 1 nm and 6 ± 1 nm, respectively (Fig. 2d-f). In addition, the HRTEM images highlight that the b-CDs have a crystalline lattice fringe of 0.21 nm, corresponding to the (100) lattice plane of graphitic carbon [22]. The g-CDs and r-CDs possess crystalline lattice fringes of 0.21 nm and 0.32 nm, corresponding to the (100) and (002) lattice planes of graphitic carbon [23]. The XRD pattern of b-CDs exhibits a narrow peak centered at 6.8 Å (see Additional file 1: Figure S1a). The XRD patterns of g-CDs and r-CDs show not only a narrow peak located at 6.8 Å but also a broad peak centered at 3.4 Å. The XRD results indicate that the b-CDs, g-CDs and r-CDs consist of small crystalline cores with a disordered surface, similar to the graphite lattice spacing [14,24]. The AFM images present the height distributions of b-CDs, g-CDs and r-CDs (see Additional file 1: Figure S1(b-d)); the average height of b-CDs, g-CDs and r-CDs is approximately 3 nm. These results clearly show that the particle sizes of the CDs gradually become larger from b-CDs to r-CDs. Figure 3a shows the Raman spectra of b-CDs, g-CDs and r-CDs, in which the G band at 1573 cm−1 and the D band at 1342 cm−1 correspond to graphitic sp2 carbon structures and disordered sp3 carbon structures [25]. The ratios of IG/ID are 1.11, 1.20 and 1.24 for b-CDs, g-CDs and r-CDs, respectively. The order of the content of the oxygen-containing groups (especially O-H) is b-CDs > g-CDs > r-CDs, which is opposite to the order of the particle size. These results demonstrate that the volume ratio of DMF to water in the solvothermal reaction has a significant effect on the particle size and the functional groups of the CDs.
Furthermore, the atomic contents and functional groups of b-CDs, g-CDs and r-CDs were characterized by XPS. As shown in Fig. 4a-c, four diagnostic peaks located at 531 eV, 400 eV, 285 eV, and 163 eV correspond to O1s, N1s, C1s, and S2p, respectively. The O/C ratios were 75%, 25% and 24% for b-CDs, g-CDs and r-CDs. The C1s spectra are divided into five peaks, namely, C=C/C-C (284.5 eV), C-N/C-S (285.1 eV), C-OH (286.3 eV), C=O (288.3 eV), and O=C-OH (289.0 eV) [26] (Fig. 4d-f). The sequence of the content of the oxygen-containing groups is b-CDs > g-CDs > r-CDs, consistent with the FT-IR results. In addition, the high-resolution N1s XPS spectra of b-CDs, g-CDs, and r-CDs are fitted by three components centered at C≡N (397.4 eV), pyrrolic N (399.4 eV), and graphitic N (401.2 eV), respectively (see Additional file 1: Figure S2(a-c)). The high-resolution S2p spectra also clearly show peaks at 164.5 eV and 165.9 eV, corresponding to the S2p3/2 and S2p1/2 components of the C-S-C bond in a thiophene-type structure arising from spin-orbit splitting, in agreement with sulfone bridges (-C-SOx-C) [29] (Figure S2d-f). From the detailed analyses of the N1s and S2p spectra in Table S1, the contents of C-N/C-S and C≡N bonds increase with the increasing ratio of DMF to water, while the contents of oxygen-containing groups decrease.
Optical Properties of b-, g-and r-CDs
The UV-Vis absorption spectra of the CDs present a well-resolved n-π* transition at 320 nm [30], which originates from C=X (X = N, S, O) functional groups. Meanwhile, b-CDs, g-CDs and r-CDs exhibit energy absorption bands at about 360 nm, 420 nm and 560 nm, respectively, as shown in Fig. 5a-c. Such energy bands are classically associated with the narrowing of electronic bandgaps, which leads to the fluorescence red shift [31]. The position of the energy absorption bands indicates the wavelength region of fluorescence excitation. The optimal emission wavelengths of b-CDs and g-CDs are 440 nm and 530 nm, respectively. Unlike b-CDs and g-CDs, r-CDs show dual emission wavelengths located at 600 nm and 640 nm. With increasing excitation wavelength, the PL peaks exhibit only slight fluctuations, which elucidates the excitation-dependent properties of b-CDs, g-CDs and r-CDs and implies a possible carbogenic core-state emission [32]. Under solvothermal conditions, citric acid and thiourea decompose and react to form N,S-doped CDs with abundant -SCN and -NH2 groups on their surfaces. Obviously, the volume ratio of water to DMF can affect the extents of decomposition and carbonization in the reaction process [29]. It can be speculated that the decomposition of precursors and carbonization of solvents gradually increase with a higher volume ratio of DMF to water, resulting in red-shifted absorption and emission bands, which is well consistent with the increased particle sizes and functional groups of the CDs. In addition, the PL decay curves (Fig. 5d) of b-CDs, g-CDs and r-CDs are fitted by dual-exponential curves. The results show that the lifetimes of b-CDs, g-CDs and r-CDs are 2.75 ns, 4.67 ns and 4.88 ns, respectively.
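A minimal illustration of how such dual-exponential lifetimes can be extracted, together with the conversion of the absorption-band positions into photon energies, is sketched below (Python with synthetic decay data; the amplitudes, lifetimes and noise level are invented for the example, not taken from the measurements):

import numpy as np
from scipy.optimize import curve_fit

# photon energy E = hc/lambda for the reported absorption bands
hc_eV_nm = 1239.84
for name, lam in [("b-CDs", 360), ("g-CDs", 420), ("r-CDs", 560)]:
    print(f"{name}: {lam} nm -> {hc_eV_nm / lam:.2f} eV")

# dual-exponential PL decay model
def decay(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 30, 300)                       # ns
y = decay(t, 0.6, 1.5, 0.4, 6.0)                  # synthetic "measurement"
y += np.random.default_rng(0).normal(0, 0.005, t.size)

popt, _ = curve_fit(decay, t, y, p0=(0.5, 1.0, 0.5, 5.0))
a1, tau1, a2, tau2 = popt
# intensity-weighted average lifetime commonly quoted for CDs
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau1 = {tau1:.2f} ns, tau2 = {tau2:.2f} ns, <tau> = {tau_avg:.2f} ns")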
Effects of Reaction Conditions and Solvents on PL Properties
Furthermore, we investigated the effects of the reaction conditions (reaction time and temperature) on the preparation of multicolor CDs, taking g-CDs as an example. As shown in Fig. 6a, b, the maximum emission wavelengths red-shift dramatically as the reaction time is prolonged from 2 to 8 h and as the reaction temperature is increased from 140 to 180 ℃. When the heating time increases from 2 to 8 h at 160 ℃, the maximum emission peak shifts from blue (460 nm) to red (605 nm). Similarly, when the reaction temperature increases from 140 to 180 ℃ at 4 h, the maximum emission peak shifts from blue (450 nm) to red (610 nm). This phenomenon indicates that increasing the reaction time and temperature leads to a PL red shift of the CDs, which is ascribed to carbonization in the materials [9]. It has been reported that longer thermal treatment and higher temperature promote the carbonization of the precursors [9]. Based on these results, we can speculate that the emission wavelength of the CDs strongly depends on the carbonization degree of the precursors, in line with the particle size and functional groups of the CDs.
To explore the effects of solvents on the CDs, we performed additional experiments with g-CDs dispersed in six different solvents, namely, water, DMF, ethanol, acetic acid, acetone, and tetrahydrofuran (THF), listed in order of polarity from strong to weak. In the more polar solvents, the PL emission wavelength of g-CDs depends on the excitation wavelength (see Additional file 1: Figure S3(a-d)). On the contrary, the PL emission wavelength is independent of the excitation wavelength in the weaker polar solvents (see Additional file 1: Fig. S3(e-f)). Figure 6c demonstrates that the PL emission wavelength of g-CDs red-shifts in weaker polar solvents compared with stronger polar solvents. This is because weak polar solvents affect the electronic structure and thus reduce the energy gap of g-CDs [33]. The UV-vis absorption spectra (Fig. 6d) show that the absorption wavelength of g-CDs red-shifts in weak polar solvents, which further confirms that weak polar solvents play an important role in affecting the n-π* transition, leading to the red shift of the emission spectra [34,35].
Fig. 6 Normalized PL emission spectra of g-CDs with (a) different heating times and (b) different heating temperatures. (c) Normalized PL spectra (λex = 420 nm) and (d) UV-vis absorption spectra of g-CDs dispersed in six solvents.
Conclusions
In summary, we have developed a facile and feasible way to synthesize multicolor CDs whose fluorescence covers a majority of the visible spectrum. By adjusting the volume ratio of water to DMF, the obtained CDs are color tunable, with the emission wavelength ranging from blue to red. We find that the solvent (DMF) plays an important role in preparing multicolor CDs, because DMF is decomposed in the carbonization process. With an increasing ratio of DMF to water, the particle sizes of the CDs gradually become larger and more functional groups are formed on the surface of the CDs, which lead to the PL red shift of the CDs. Our method can enlarge the visible spectral coverage of CDs, and the prepared multicolor CDs may have application prospects in optical and biological fields such as light-emitting devices and bioimaging systems.
| 3,521 | 2021-08-06T00:00:00.000 | [ "Materials Science" ] |
Targeting ferroptosis as a promising therapeutic strategy to treat cardiomyopathy
Cardiomyopathies are a clinically heterogeneous group of cardiac diseases characterized by heart muscle damage, resulting in myocardium disorders, diminished cardiac function, heart failure, and even sudden cardiac death. The molecular mechanisms underlying the damage to cardiomyocytes remain unclear. Emerging studies have demonstrated that ferroptosis, an iron-dependent non-apoptotic regulated form of cell death characterized by iron dyshomeostasis and lipid peroxidation, contributes to the development of ischemic cardiomyopathy, diabetic cardiomyopathy, doxorubicin-induced cardiomyopathy, and septic cardiomyopathy. Numerous compounds have exerted potential therapeutic effects on cardiomyopathies by inhibiting ferroptosis. In this review, we summarize the core mechanism by which ferroptosis leads to the development of these cardiomyopathies. We emphasize the emerging types of therapeutic compounds that can inhibit ferroptosis and delineate their beneficial effects in treating cardiomyopathies. This review suggests that inhibiting ferroptosis pharmacologically may be a potential therapeutic strategy for cardiomyopathy treatment.
Introduction
Cardiomyopathies are a clinically heterogeneous group of cardiac diseases characterized by heart muscle damage, resulting in myocardial disorders, diminished cardiac function, heart failure, and even sudden cardiac death (Franz et al., 2001;Schultheiss et al., 2019;Li D. et al., 2022). Cardiomyopathies are often related to electrical or mechanical dysfunction, frequently with a genetic origin or etiology (Maron et al., 2006). The 2006 American Heart Association classification groups cardiomyopathies into primary and secondary categories (Maron et al., 2006). In the primary categories (genetic, mixed, or acquired), the disease process is solely or predominantly confined to the heart, whereas in secondary cardiomyopathies cardiac involvement occurs as part of a systemic condition (Brieler et al., 2017;Li T. et al., 2022). Researchers have divided the secondary causes of cardiomyopathy into various categories, including infectious, toxic, ischemic, metabolic, autoimmunogenic, and neuromuscular categories. The burden of ischemic cardiomyopathy (ICM), diabetic cardiomyopathy (DCM), doxorubicin-induced cardiomyopathy (DICM), and septic cardiomyopathy (SCM) is increasing in nearly all countries. A basic pathological mechanism shared by these cardiomyopathies (ICM, DCM, DICM and SCM) is cardiomyocyte cell death. The pathogenesis and molecular mechanisms underlying these cardiomyopathies are poorly understood, warranting further investigation (Gilgenkrantz et al., 2021). Therefore, it is important to acquire insights into their pathogenesis to achieve the appropriate management and treatment of these disorders, thus providing support for protecting cardiac function.
In the past decades, ferroptosis, a non-apoptotic iron-dependent and peroxidation-driven regulated cell death (RCD) mechanism, has been rapidly acquiring attention in cardiomyopathies. Novel studies have explored the role of ferroptosis in DICM and ICM in murine models of cardiomyopathy (Conrad and Proneth, 2019;Fang et al., 2019), which demonstrated an association between ferroptosis and cardiac cell death induced by iron overload in vivo. Thereafter, several studies have revealed that ferroptosis plays a vital role in the pathogenesis of cardiomyopathy (Li D. et al., 2022). Meanwhile, certain compounds exert their therapeutic effects on experimental cardiomyopathy models by inhibiting ferroptosis.
In this review, we summarize the core mechanism by which ferroptosis leads to the genesis of cardiomyopathies. We focus on the emerging variety of therapeutic compounds that can inhibit ferroptosis and delineate their beneficial effects for treating cardiomyopathies. This review indicates that inhibiting ferroptosis pharmacologically may be a promising therapeutic strategy for treating cardiomyopathies.
Core molecular mechanisms underlying ferroptosis
Ferroptosis is an iron-dependent, oxidative form of non-apoptotic RCD, characterized by the iron-dependent oxidative modification of phospholipid membranes (Dixon et al., 2012). A delicate imbalance between ferroptosis inducers and inhibitors dictates its induction and execution. Inhibition of the solute carrier family 7 member 11/glutathione peroxidase 4 (SLC7A11/GPX4) antioxidant system and free iron accumulation are two key signals for inducing ferroptosis (Chen H. Y. et al., 2021). When the levels of iron-dependent ROS and lethal lipid peroxide (LPO), the two promoting factors of ferroptosis, substantially surpass the capacity of the ferroptosis defense systems, peroxidated phospholipid polyunsaturated fatty acids (PUFA-PL-OOH) accumulate on cellular membranes and induce their rupture, eventually resulting in ferroptosis (Lei et al., 2022). Phospholipid polyunsaturated fatty acids (PUFA-PLs) have an intrinsic susceptibility to peroxidation chemistry, which makes them the primary substrates for LPO (Hadian and Stockwell, 2020). Acyl-coenzyme A synthetase long chain family member 4 (ACSL4) catalyzes the addition of coenzyme A (CoA) to the long-chain polyunsaturated bonds of arachidonic acid (AA), causing PUFA esterification to form phospholipids. Following the activation of ACSL4, lysophosphatidylcholine acyltransferase 3 (LPCAT3) inserts acyl groups into lysophospholipids and incorporates free PUFAs into phospholipids (PLs), participating in ferroptotic lipid signaling. Under the catalysis of oxidases and bioactive iron, PUFA-PLs in the membrane can be converted to phospholipid peroxides by both non-enzymatic Fenton reactions and enzymatic LPO reactions (Liang et al., 2022). Labile iron drives the non-enzymatic Fenton reaction, and iron also serves as an essential cofactor for arachidonate lipoxygenases (ALOXs) and cytochrome P450 oxidoreductase (POR), which promote enzymatic lipid peroxidation. In enzymatic LPO, ACSL4 catalyzes the ligation of free PUFAs [such as AA and adrenic acid (AdA)] with CoA to generate PUFA-CoAs, including AA-CoA and AdA-CoA (Dixon et al., 2015;Doll et al., 2017). Subsequently, LPCAT3 incorporates PUFA-CoAs into PLs to generate PUFA-PLs, including AA-phosphatidylethanolamine and AdA-phosphatidylethanolamine (Dixon et al., 2015;Kagan et al., 2017). Once the PUFA-PLs are incorporated into lipid bilayers, iron-dependent enzymes (such as POR and ALOXs) and labile iron use O2 to perform a peroxidation reaction, generating peroxidated PUFA-PLs, i.e., polyunsaturated-fatty-acid-containing phospholipid hydroperoxides (PUFA-PL-OOH) (Hadian and Stockwell, 2020;Zou et al., 2020). Other membrane electron-transfer proteins, particularly the NADPH oxidases, are also involved in ferroptosis by contributing to ROS production for LPO (Xie et al., 2017). LPO and its secondary products, namely malondialdehyde (MDA) and 4-hydroxynonenal (4-HNE), cause pore formation in the lipid bilayers, eventually resulting in cell death and ferroptosis (Tang and Kroemer, 2020). Ferroptosis has acquired substantial attention in cardiomyopathy research and plays a vital role in the pathogenesis of cardiomyopathies such as ICM, DCM, DICM, and SCM; therapeutic strategies targeting ferroptosis may facilitate their treatment. Myocardial ischemia/reperfusion injury (MIRI) leads to oxidative stress and energy metabolism disturbances, among other issues (Li D. et al., 2021).
Therefore, understanding the mechanisms of MIRI is essential for attenuating the triggers of cardiomyocyte cell death and preventing left ventricular remodeling and HF.
A novel study reported on the role of ferroptosis in ischemia/reperfusion (I/R)-induced cardiomyopathy in murine models (Fang et al., 2019), which established an in vivo correlation between ferroptosis and cardiac cell death (Conrad and Proneth, 2019). Thereafter, emerging studies delved into the pathophysiological role of ferroptosis in the development of MIRI and ICM (Figure 1). Numerous molecular mechanisms and pathways are related to the genesis of MIRI, including iron homeostasis imbalance, lipid peroxidation, and redox homeostasis imbalance. Since the introduction of ferroptosis in 2012, researchers have revisited the role of iron homeostasis imbalance, lipid peroxidation, or glutathione metabolism disorder in MIRI, thus proposing that ferroptosis participates in MIRI pathogenesis. Among all types of organ ischemia/reperfusion injury (IRI), the role of ferroptosis in the pathogenesis of MIRI has been the most extensively studied.
Role of dysregulation of iron metabolism in MIRI
The accumulation of iron, a core characteristic of ferroptosis, plays a pathogenic role in acute myocardial infarction (AMI) and MIRI. Excessive iron is transported into the cardiomyocytes, thus predisposing them to undergo ferroptosis via the Fenton reaction and ROS generation after I/R (Li J. Y. et al., 2021). Ferroptosis predominantly occurs in the reperfusion phase in cardiac tissues, characterized by a gradual increase in ACSL4, Fe2+, and MDA levels, along with decreased levels of GPX4 (Tang et al., 2021a). Cardiomyocytes are vulnerable to the dysregulation of iron homeostasis, which is central to MIRI; several pathways increase the intracellular iron content. The heart utilizes several iron uptake transport systems, including L-type (LTCC) or T-type (TTCC) voltage-dependent Ca2+ channels, the transferrin (TF) receptor (TfR1), and the divalent metal transporter (DMT1) (Lillo-Moya et al., 2021).
Iron enters cardiomyocytes principally through TfR1 as TF-bound iron, or as non-TF-bound iron through LTCC, TTCC, and DMT1. During MIRI, the intracellular iron-storage protein ferritin is degraded to release iron, which drives the iron-mediated Fenton reaction, resulting in oxidative damage to cardiomyocytes and loss of cardiac function. Studies have demonstrated excessive iron accumulation in the myocardial scar in mouse MIRI models (Baba et al., 2018;Fang et al., 2019), thereby suggesting iron overload as a primary characteristic of ferroptosis. The ferroptosis inhibitor ferrostatin-1 (Fer-1) or the iron chelator dexrazoxane (DXZ) inhibits cardiac remodeling and fibrosis induced by IRI (Fang et al., 2019). Increased cellular iron content is found in IRI mice, along with decreased activities of GPX4 and ferritin heavy chain-1 (FTH1) and decreased glutathione (GSH) levels in cardiac tissue after MIRI (Chen et al., 2021c). Moreover, the ubiquitin-specific protease 7 (USP7)/p53 pathway activates TfR1 to exacerbate cardiomyocyte ferroptosis during subsequent I/R (Tang et al., 2021a). Pharmacological inhibition of USP7 results in increased p53 activity and decreased TfR1, thus leading to decreased ferroptosis and MIRI. Therefore, pharmacological inhibition of TfR1 activity may inhibit ferroptosis in MIRI.
Role of LPO in MIRI
Deferoxamine therapy decreases myocardial injury by inhibiting ferroptosis in I/R-induced rat hearts. Specific redox reactions of PUFA-PLs in ischemic cardiomyocytes initiate oxidative damage in the reperfusion phase. ALOX15 induction by ischemia/hypoxia initiates the oxidation of PUFA-PLs (particularly PUFA-PE) and results in cardiomyocyte ferroptosis; accordingly, ALOX15 ablation in mice confers resistance to PUFA-dependent, ischemia-induced cardiac injury (Ma X. et al., 2022). The overexpression of activating transcription factor 3 (ATF3) inhibits ferroptosis induced in cardiomyocytes by the classical ferroptosis activators RSL3 (ras-selective lethal small molecule 3) and erastin. ATF3 expression increases in the early phase of reperfusion, whereas its ablation significantly aggravates IRI. The binding of ATF3 to the transcriptional start site of FA complementation group D2 can enhance its promoter activity, thereby exerting cardioprotective effects against H/R injury through an anti-ferroptosis mechanism (Liu M. Z. et al., 2022). Bai and colleagues demonstrated that SENP1 expression is upregulated by hypoxia, which protects cardiomyocytes against ferroptosis through deSUMOylation of hypoxia-inducible factor-1α and ACSL4.
Role of SLC7A11/GPX4 axis inhibition in MIRI
Increased levels of ACSL4, Fe2+, and MDA, along with decreased GPX4 levels, are observed in the myocardium after MIRI (Tang et al., 2021a). Either iron chelation or inhibition of glutaminolysis could alleviate IRI by blocking ferroptosis (Gao et al., 2015). A specific ferroptosis inhibitor suitable for animal models, liproxstatin-1, can protect the mouse myocardium against IRI by decreasing voltage-dependent anion-selective channel protein 1 levels and upregulating GPX4 levels (Feng et al., 2019). The expression of USP22, SIRT1, and SLC7A11 is inhibited after IRI, whereas p53 is highly expressed in the myocardial tissues. Conversely, the overexpression of USP22, SIRT1, or SLC7A11 reduces the degree of IRI by inhibiting ferroptosis and improves the viability of cardiomyocytes (Ma et al., 2020).
Ferroptosis in diabetic cardiomyopathy
DCM, a specific form of cardiomyopathy independent of hypertension and coronary artery disease (Tan et al., 2020), is caused by diabetes mellitus (DM)-associated dysregulated glucose and lipid metabolism (Tan et al., 2020). DM increases oxidative stress and activates multiple inflammatory pathways, leading to cellular injury, cardiac remodeling, and systolic and diastolic dysfunction (Tan et al., 2020;Khan et al., 2021). The eventual outcome is cardiomyocyte cell death. The clinical features and pathogenesis of DCM have been well characterized in the past four decades; however, effective therapeutic regimens are still limited, suggesting the need to explore novel mechanisms underlying DCM development. Ferroptosis may be associated with the pathological progression of DCM (Wei LY. et al., 2022;Wei Z. et al., 2022). Ferroptosis also plays a role in DM (Behring et al., 2014;Bruni et al., 2018;Lutchmansingh et al., 2018;Shu et al., 2019;Krümmel et al., 2021) (Figure 1). A novel study reported on the role of ferroptosis in the heart of diabetic mice in 2022, demonstrating that Nrf2 activation attenuates ferroptosis by upregulating SLC7A11 and ferritin levels (Wang D. et al., 2022). GPX4 can inhibit DCM in GPX4 transgenic mouse models (Baseler et al., 2013).
Ablation of cluster of differentiation 74 (CD74; a receptor for the regulatory cytokine macrophage migration inhibitory factor, MIF) prevents DM-evoked oxidative stress. Ferroptosis inhibitors preserve cardiomyocyte function and inhibit LPO induced by a high-glucose/high-fat (HGHF) challenge in vitro. Recombinant MIF mimics HGHF-induced LPO, GSH depletion, and ferroptosis, whereas MIF inhibitors reverse these effects. Taken together, CD74 ablation rescues DCM by inhibiting ferroptosis, indicating CD74 as a promoter of ferroptosis (Chen H. et al., 2022). FUN14 domain-containing 1 (FUNDC1) insufficiency sensitizes the heart to DCM through ACSL4-mediated ferroptosis, indicating FUNDC1 as an inhibitor of ferroptosis (Pei et al., 2021). Further, long non-coding RNAs (lncRNAs) regulate ferroptosis in DCM. The lncRNA zinc finger antisense 1 works as a competing endogenous RNA that sponges miR-150-5p and downregulates cyclin D2 (CCND2), promoting ferroptosis and DCM development (Ni J. et al., 2021). In summary, ferroptosis plays a significant role in the development of DCM. However, the molecular mechanism warrants further investigation.
Ferroptosis in DOX-induced cardiomyopathy
Anthracyclines are the most widely used anticancer chemotherapeutic agents. However, doxorubicin (DOX) causes cardiotoxicity, resulting in DICM and thereby limiting its clinical efficacy (Herrmann, 2020;Fang et al., 2023). Ferroptosis plays an essential role in the pathogenesis of DICM (Fang et al., 2023) (Figure 1). Wang et al. demonstrated that DOX induces heart injury and increases cardiac iron levels, lipid-derived ROS, and biomarkers of ferroptosis (Fang et al., 2019). They presented novel evidence that ferroptosis contributes to DICM in DOX-treated mice and that its inhibition exerts cardioprotection (Fang et al., 2019). Their findings were corroborated by other studies, which revealed that ferroptosis is a crucial mechanism in DICM and that acyl-CoA thioesterase 1 (ACOT1) plays a critical role in the process; ACOT1 thus acts as a ferroptosis inhibitor, and targeting ferroptosis is a strategy for DICM treatment. Tadokoro and colleagues revealed that DOX inhibits GPX4 and induces LPO, leading to mitochondria-dependent ferroptosis in a DICM mouse model (Tadokoro et al., 2020). Further, the ferroptosis inhibitor ferrostatin-1 (Fer-1) can protect cardiomyocytes against DOX-induced cell injury (Tadokoro et al., 2020). Other researchers have indicated that DOX upregulates high mobility group box 1 expression, which promotes ferroptosis-associated cardiotoxicity in DOX-treated rats, and that Fer-1 or DXZ reverses DOX-induced ferroptosis and DICM. In summary, ferroptosis inhibition is a therapeutic target for DICM.
Ferroptosis in septic cardiomyopathy
Sepsis is a life-threatening organ dysfunction resulting from a dysregulated immune response to an infection. Seventy percent of patients with sepsis develop septic cardiomyopathy (SCM), which is a leading cause of sepsis-related morbidity and mortality (Nabzdyk et al., 2019;Hollenberg and Singer, 2021). Ferroptosis is involved in SCM (Figure 1). GSH depletion and downregulation of GPX4 expression, as well as increased iron content and LPO levels, are found in cecal ligation and puncture-induced sepsis animal models, implying the involvement of ferroptosis in the pathogenesis of SCM. Dexmedetomidine exerts cardioprotective effects through ferroptosis inhibition by decreasing iron accumulation, downregulating the protein levels of HO-1, and inducing GPX4. The ferroptosis inhibitors deferoxamine and Fer-1 can improve cardiac function and decrease mortality in septic mice by decreasing the level of ferroptosis in cardiomyocytes. These results support the hypothesis that ferroptosis is involved in the pathogenesis of sepsis-induced myocardial injury. Ferritinophagy-mediated ferroptosis also plays a pathogenic role in sepsis-induced myocardial injury. Li et al. (2020) demonstrated that ferroptosis plays a crucial role in sepsis-induced cardiomyopathy in sepsis-related models, including a lipopolysaccharide (LPS)-induced model of septic cardiomyopathy.
Specific regulators modulate ferroptosis in SCM. Transmembrane protein 43 (TMEM43), a membrane protein related to cardiomyopathy, protects against SCM by inhibiting ferroptosis in LPS-induced mice (Chen L. et al., 2022). Knockdown of TMEM43 in the heart aggravates LPS-induced cardiomyopathy, accompanied by increased cardiac ferroptosis, whereas TMEM43 overexpression decreases LPS-induced ferroptosis and cardiac injury by inhibiting LPO. TMEM43 silencing promotes ferroptosis and cell injury in LPS-induced rat H9c2 cardiomyocytes. TMEM43 downregulates the expression of p53 and ferritin but upregulates the levels of GPX4 and SLC7A11, thereby inhibiting LPS-induced ferroptosis, and Fer-1 can ameliorate the deteriorating effects of TMEM43 knockdown on LPS-induced cardiac injury. Taken together, TMEM43 protects against SCM by inhibiting ferroptosis (Chen Z. et al., 2022). Islet cell autoantigen 69, which can regulate inflammation and immune responses, induces ferroptosis and thereby causes septic cardiac dysfunction through stimulator of interferon genes (STING) trafficking (Kong et al., 2022). Neutrophil-derived lipocalin-2 induces ferroptosis by increasing the labile iron pool in cardiomyocytes of an LPS-induced mouse SCM model (Huang Q. et al., 2022).
Pharmacological inhibition of ferroptosis for treating cardiomyopathy
Ferroptosis was first described in 2012, and studies on its role in cardiomyopathy are still in their infancy. However, existing evidence suggests a strong correlation between ferroptosis and cardiomyopathy, so the inhibition of ferroptosis may be a promising target for cardiomyopathy treatment. Because ferroptosis reportedly plays a pathogenic role in cardiomyopathy, scientists have begun identifying targeted anti-ferroptosis approaches for cardiomyopathy treatment. Numerous drugs have been recognized to exert a therapeutic impact on cardiomyopathy by inhibiting ferroptosis, and several experimental compounds and clinical drugs inhibit ferroptosis to achieve therapeutic effects in cardiomyopathies. The pharmacological inhibition of ferroptosis is becoming a cardioprotective strategy for cardiomyopathy prevention in vitro and in vivo. We attempt to sort these ferroptosis-inhibiting small molecules by mode of action. These categories include activators of system Xc−, ferroptosis-inhibiting Nrf2 activators, direct or indirect GPX4 activators, ferroptosis inhibitors acting through combined mechanisms, and ferroptosis inhibitors acting through unknown mechanisms. However, it is often difficult to assign a given ferroptosis-inhibiting small molecule to a single category.
Icariin
Icariin (1), a natural flavonoid compound, is the main component of the Chinese herb Epimedium (called YinYangHuo in Traditional Chinese Medicine) and has anti-aging, anti-inflammatory, antioxidant, anti-osteoporotic and anti-fibrotic activities (Su et al., 2023). Compound 1 is a potent inducer of Nrf2 (Moratilla-Rivera et al., 2023). It inhibits hypoxia/reoxygenation (H/R)-induced ferroptosis in cardiomyocytes by increasing GPX4 and decreasing ACSL4 and Fe2+ content through activation of the Nrf2/HO-1 signaling pathway. Owing to its outstanding medicinal properties in preventing and treating many common health issues, 1 and its derivatives, icariside II (ICS) and icaritin (ICT), have garnered great interest in drug development. 1 possesses a variety of beneficial effects in regulating cardiovascular inflammation and other biological activities. In China, YinYangHuo and its compounds have been used in the treatment of numerous diseases, such as Alzheimer's disease, stroke, and depression. Icariin (ICA) and its metabolites, which contain flavonoids, polysaccharides, vitamin C, and other active compounds, have been shown to have cardio-cerebrovascular protective benefits. 1 can act as a prodrug and has been subjected to preclinical studies. However, the oral bioavailability of 1 is only 12.02%. Studies have shown that the addition of cyclodextrins (CDs) to ICA vastly increases its water solubility, consequently achieving considerably better bioavailability (Cui et al., 2013;Jin et al., 2013). The degradation of ICA into ICS in vivo promotes ICA absorption (Cheng et al., 2015).
4, an anesthetic agent, mitigated IR-induced ICM by inhibiting ferroptosis, upregulating GPX4 expression and decreasing the levels of MDA, iron, and ACSL4. The Nrf2 inhibitor ML385 abolished the inhibitory effect of 4 on IR-induced ferroptosis, suggesting that 4 attenuates myocardial injury by inhibiting IR-induced ferroptosis via Nrf2. Gossypol acetic acid (GAA, 5), a natural product obtained from the seeds of cotton plants, attenuates ICM by inhibiting ferroptosis: it chelates iron and downregulates Ptgs2 mRNA levels in RSL3- and Fe-SP-induced H9c2 cells, and inhibits LPO in oxygen-glucose deprivation/reperfusion (OGD/R)-induced H9c2 cells. 5 attenuates IR-induced ICM by inhibiting ferroptosis, decreasing LPO production, increasing Nrf2 and GPX4 protein levels, and decreasing the mRNA levels of Ptgs2 and Acsl4 as well as ACSL4 protein levels. Dexmedetomidine (6), a highly selective alpha2-adrenoceptor agonist with sedative, analgesic, sympatholytic, and hemodynamic-stabilizing properties, possesses protective effects against I/R- (Xiao Z. et al., 2021;Deng et al., 2022;Yang et al., 2022;Hu et al., 2023) and H/R-induced (Wu W. et al., 2022) cardiomyocyte injury. 6 attenuates ICM by inhibiting ferroptosis through activation of AMPK/GSK-3β-dependent Nrf2/SLC7A11/GPX4 signaling (Wang et al., 2022d). Sulforaphane (7) is a naturally occurring dietary phytochemical extracted from cruciferous vegetables. 7 is a potent Nrf2 activator and inhibits cardiomyopathy (Su et al., 2021;Wang et al., 2022e). 7 is an important member of the isothiocyanates, abundant in cruciferous plants, with excellent anti-cancer effects (Wei LY. et al., 2022). 7 attenuates ICM in diabetic rats by inhibiting ferroptosis through activation of the Nrf2/FPN1 pathway (Tian H. et al., 2021). As a well-known activator of Nrf2, 7 can upregulate multiple antioxidants and protect against various forms of oxidative damage. 7 prevents rat cardiomyocytes from H/R injury in vitro via activating SIRT1 and subsequently inhibiting ER stress, protects against myocardial ischemia-reperfusion damage through activating Nrf2 (Silva-Palacios et al., 2019), and inhibits intermittent hypoxia-induced cardiomyopathy in mice through activating Nrf2. Several clinical studies with 7 for the (supportive) treatment of non-alcoholic fatty liver disease (NCT04364360), chronic kidney disease (NCT05153174, NCT04608903) and anthracycline-related cardiotoxicity in breast cancer (NCT03934905) are ongoing. A multi-center, randomized, placebo-controlled clinical trial should be conducted to investigate 7 in adult patients with ICM.
Conclusions and perspectives
The pathophysiology of cardiomyopathies is complex and still under extensive investigation. In this review, we appraised articles that emphasized research progress in the pathological roles of ferroptosis in ICM, DCM, DICM, and SCM and in ferroptosis inhibitors that mitigate cardiomyopathies. Meanwhile, researchers have identified novel targeted treatments for these cardiomyopathies through the pharmacological inhibition of ferroptosis, which represents a potential therapeutic avenue with novel drug targets and strategies for these diseases. However, current research on the role of ferroptosis in cardiomyopathies is still in its infancy and remains poorly understood, and more studies are required to clarify its role and functional mechanisms. Furthermore, most data reported in the literature are derived from experimental studies that do not directly report clinical applications and implications. A phase III clinical trial is underway to determine whether resveratrol exerts potential cardiac benefits in patients with non-ischemic cardiomyopathy (phase III, n = 40, NCT01914081). In addition, a multi-center, randomized, placebo-controlled phase II clinical trial is being conducted to investigate LCZ696 in adult patients with non-obstructive hypertrophic cardiomyopathy (nHCM) (phase II, n = 45, NCT04164732). However, studies directly targeting ferroptosis with bioactive compounds to treat ICM, DCM, DICM, and SCM are still lacking. Therefore, more clinical studies need to be conducted to inform practical treatment and management strategies. Despite these considerations, the current evidence strongly indicates that inhibiting ferroptosis marks a significant new direction for treating cardiomyopathies.
Author contributions
Conception and design: HS and XH; administrative support: All authors; collection and assembly of data: All authors; data analysis and interpretation: All authors; manuscript writing: HS; final approval of manuscript: All authors.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
| 5,441.8 | 2023-04-13T00:00:00.000 | [ "Biology", "Medicine", "Chemistry" ] |
Dynamic Watermark Injection in NoSQL Databases
Further, dynamic analysis of such huge data requires Hadoop-like map-reduce stacks [3]. For example, Netflix, an online video-viewing subscription-based platform, relies on customers' viewing preferences to recommend more relevant titles [4]. Facebook tracks user interactions with news feed items, friends and pages, photos, text posts, etc., to provide better features on its social networking platform.
Introduction
With the advent of high internet accessibility and electronic devices like smartphones and tablets, more and more data is collected and stored in databases. Cloud and web services by Amazon, Microsoft and the like enable developers to scale up by making use of distributed computing and clusters to gather much more real-time information, such as sensor data from Internet of Things (IoT) devices, online game data, and telemetry of user events on applications and websites. Since traditional database systems were not designed or optimized for such an enormous amount of data with complex data types and structures, the need for Not Only SQL (NoSQL) databases has increased [1]. These databases may forgo features of traditional relational databases in favor of better horizontal scaling, which is a problem for an RDBMS, or in favor of schema-less, document-based objects, which allow capturing complex structures, such as data coming from several different sensors, and also allow faster access in some cases [2].
Not only has the quantity of data increased, but its value has also skyrocketed. Big firms use this data to perform scientific analysis with machine learning and artificial intelligence concepts to improve their products. On this basis, we identify the characteristics of watermarking techniques for NoSQL databases.
The rest of the paper is organized as follows. First, prior work in the domain of watermarking relational databases is discussed. Second, the proposed database watermarking technique for tamper detection in NoSQL databases is presented, followed by the experimental results and integrity analysis. Lastly, the paper is concluded.
Prior Work
Agrawal and Kiernan [6] introduced the concept of robust watermarking of relational databases. Their technique embeds a single-bit watermark into secretly selected positions. Several other researchers have contributed to the domain of robust watermarking [7][8][9][10]. Watermarking schemes often require manipulation of a snapshot of the entire database: either some attribute values are modified [7][8][9], or the order of tuples in the database is modified to embed a watermark.
Fragile watermarking techniques are classified as distortion-based [11][12] or distortion-free [13][14][15][16] techniques. Guo et al. [11] proposed a technique to embed and verify the watermark group by group, independently, according to some secure parameters. In this scheme, two sets of watermarks are embedded into the LSBs of the attributes of a tuple within a group to localize modifications made to the database. Recently, Khan et al. proposed another fragile technique [13]. The core idea of their technique is to generate a watermark based on local characteristics of the database relation, such as the frequency distributions of digits, lengths and ranges. The watermark thus produced is not embedded into the database; instead, it is secured with a trusted third party for future reference. The techniques proposed in [14][15][16] embed a fragile watermark by changing the position of tuples within the database. Such techniques use a watermark string and create partitions using the primary key and the binary watermark string. The tuples in the partitions are then rearranged by comparing their monotonicity, and the order of the tuples in the database is relied upon to verify its status [17].
The literature also reports several studies on XML documents [18][19]. In [18], the authors extended the work of Agrawal et al. [6] to XML data by defining locators in XML. Another approach suggested by them compresses the data before watermarking, which is claimed to achieve better data security. Clearly, these approaches cannot be applied to databases where: (i) data arrives every minute or so, making a snapshot nearly impossible to take; and (ii) data is saved in document stores which do not maintain the order of documents.
No prior work addresses watermark-based protection of such schema-less databases. To fill this gap, a new perspective on embedding watermarks into such databases is proposed. Our proposal deals with the above-mentioned issues by leveraging the flexible schema features provided by NoSQL databases. Since tuples need not have a uniform schema in document-based databases, one can inject attribute(s) into tuples right before a database operation in order to embed the watermark. This gives the required dynamic nature, allowing the scheme to work on a live, constantly changing database.
The Proposed Watermarking Technique
In this paper, a tamper detection technique is proposed. The watermark is created as a function of owner-decided parameters and a tuple signature built from the sensitive attributes whose tampering one wishes to track. Figure 1 shows the flow diagram of the proposed watermarking technique.
For every incoming tuple, the watermark is generated first; this process is discussed in detail later in the paper. The generated watermark is saved in a new attribute, i.e. a new attribute is dynamically injected into the tuple before it is inserted into or updated in the database. Figure 2 depicts this process. Watermark injection is followed by completion of the tuple addition operation on the database. Finally, verification is done by recalculating the watermark and comparing it with the value stored in the "injected attribute"; if the recalculated watermark does not match the extracted value, the tuple has been tampered with.
Since the embedding process for one tuple is completely independent of any other tuple, this approach can be applied in practice to databases where the order of documents is not maintained. Further, the isolation of each document allows verification to be performed concurrently, resulting in near-real-time detection of tampering.
Implementation
Object Relational Mapping (ORM) layers in software stacks allow users to add programmatic hooks to operations such as save and delete, and this is where the watermarking scheme should be implemented. Essentially, one transforms a tuple by adding a new attribute with some value. Pseudocode 1, Add_Attribute, shows the process of inserting a tuple into the database while adding an extra attribute containing the watermark. In order to use this proposal, the owner has to define the following functions.
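As a rough illustration of such a hook, the Python sketch below injects the watermark attribute just before a document insert, with pymongo's insert_one standing in for the database operation. It is an assumption-laden restatement of Pseudocode 1, not the paper's code; the inline name selection and hashing are simplified versions of the functions described in the following subsections, and a numeric id field is assumed.

import hashlib

def add_attribute_and_insert(collection, doc, secret):
    # Build a watermark from the tuple content (refined as getWatermark / Pseudocode 3).
    signature = str(secret) + '|'.join(str(doc[k]) for k in sorted(doc))
    watermark = hashlib.sha256(signature.encode('utf-8')).hexdigest()[:16]
    # Choose an inconspicuous attribute name (refined as getInjectedAttributeName / Pseudocode 2).
    names = ['tweet_media_id', 'user_relation_id', 'tweet_location_id']
    doc[names[(int(doc['id']) + secret) % len(names)]] = watermark
    collection.insert_one(doc)  # the normal insert proceeds with the extra attribute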
getInjectedAttributeName (tuple)
The function returns the name of the attribute that is to be injected into the tuple. Care should be taken in choosing the attribute name so as to avoid obvious patterns that could lead an attacker to suspect that the attribute has a non-application-related purpose. The more closely the names match the application's domain, the easier it is to deceive an attacker.
An implementation is elucidated in Pseudocode 2, assuming the watermark has an integral data type. Here names is an array holding the possible attribute names, and count.names gives the number of elements in names. Using the secret key in the selection of the new attribute adds a level of security to the process [20].
Pseudocode 2: getInjectedAttributeName(.)
function getInjectedAttributeName(tuple) {
  const names = ['tweet_media_id', 'user_relation_id', 'tweet_location_id'];
  return names[(tuple.id + secret) % count.names];
}
One key point to keep in mind while implementing the function is that the logic for choosing a name must be reversible from tamper-free tuple data, so that modifications can later be detected.
getWatermark (tuple, secret)
This function is arguably the most important part of the entire proposal. The owner must implement it carefully, in such a way that an attacker cannot reverse the watermark-creation process by observing patterns. The function returns a watermark of the decided data type, taking the dynamically injected attribute name into account. A basic strategy is shown in Pseudocode 3: the watermark is prepared by concatenating the secret key with the attribute values of the tuple and then taking a hash of the result. Attributes in the database are of varied data types, so each attribute of a tuple is processed and converted to a real number before being concatenated into the signature.
A secret key is used to maintain the security of the algorithm, and a substring of the hash is taken as the value of the newly added attribute. Many cryptographic hash algorithms exist in the literature, e.g. MD5, RIPE-MD, SHA-2, SHA-3, and SNEFRU. We implement SHA-2 as the cryptographic hash function, yielding a 256-bit hash value, owing to its improved resilience against attacks [21][22]. The hash possesses a strong avalanche effect: even a single-bit change in the input changes a large number of bits in the hashed output. Hence, it is difficult to guess the input given the output of the secure hash function.
The signature is prepared using all attributes of the tuple; alternatively, only the crucial attributes may be concatenated, in which case tampering is detected only in the participating attributes. This variant may be used for large databases with many attributes, where the information in a few attributes is crucial and requires protection.
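A minimal sketch of getWatermark along the lines described above is given below, assuming SHA-256 from Python's hashlib and a canonical string form for each attribute value (instead of the real-number conversion mentioned above). The injected_name parameter and the exclusion of store-generated ids are illustrative assumptions.

import hashlib

def get_watermark(tuple_doc, secret, injected_name=None, length=16):
    parts = [str(secret)]
    for key in sorted(tuple_doc):
        if key == injected_name or key == '_id':
            continue  # the injected attribute and store-generated ids must not feed the signature
        parts.append(str(tuple_doc[key]))  # numbers, strings, dates -> canonical text
    digest = hashlib.sha256('|'.join(parts).encode('utf-8')).hexdigest()
    return digest[:length]  # a substring of the hash becomes the injected attribute's value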
Tamper Detection
For fragile watermarking, a signature using the crucial or all attributes of the tuple has to be generated; the key idea is to make the process fast, reversible, and randomized. A fairly straightforward implementation is given in Pseudocode 4. The function returns a Boolean: it simply recalculates the watermark for a tuple existing in the database and compares it with the one concealed in the injected attribute. If they match, the database was preserved, i.e. no tampering occurred. If they do not match, or if the attribute itself is missing from the tuple, the database was tampered with in that particular tuple. Thus, localization down to the tuple level is attained.
The proposed technique employs two security levels to complete the watermarking process. First, the watermark is prepared securely using a secret key, making it difficult for an attacker to crack. Second, the new attribute in which the watermark is embedded is chosen using a secret parameter.
Experimental Results and Analysis
Experiments are performed on a dataset containing the preprocessed and filtered session data of the DePaul CTI web server [23]. The data is based on a random sample of users visiting the site during a two-week period in April 2002. The database contains 20,509 tuples with a varying number of attributes, up to a maximum of 10.
The proposed technique is applied to [23], and the performance loss due to the overhead of watermarking is measured. The results recorded in Table 1 show that the change is nominal.
Integrity Analysis
Next, an experiment to check the sensitivity of the proposed technique to tampering is performed. The attribute values are altered and the change between the regenerated watermark and the extracted one is recorded. The graph in Figure 3 shows the change in the extracted watermark when attributes that do not contain the watermark are altered. Since a hash of the concatenated attribute values of a tuple is used, even a single-bit change in the input changes a large number of bits, which can be verified from the graph. With a 5% change in the tuple, the change in the extracted watermark is 20%, and it increases further as the alterations in the tuple increase.
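The measurement behind Figure 3 can be approximated with a short sketch such as the one below, which reuses the get_watermark sketch given earlier; the example document and the percentage computed over hex characters are illustrative assumptions rather than the paper's exact procedure.

def watermark_change(original, perturbed, secret):
    w_old = get_watermark(original, secret)   # get_watermark as sketched earlier
    w_new = get_watermark(perturbed, secret)
    diff = sum(a != b for a, b in zip(w_old, w_new))
    return 100.0 * diff / len(w_old)          # percentage change in the watermark

doc = {'id': 7, 'page': '/news.html', 'duration': 42}
tampered = dict(doc, duration=43)             # a single small perturbation
print(watermark_change(doc, tampered, secret=1234))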
Let us consider the following cases that may arise. We take W_s as the original signature watermark that was embedded, W_g as the watermark re-generated from the suspected tuple, and W_x as the watermark extracted from the suspected tuple. (a) There were no integrity attacks. If neither the attributes nor the watermark were changed, then W_g == W_x. Thus, the two watermarks will match, indicating no perturbation of the tuple.
(b) The content of any of the tuple attributes was changed, but not the embedded watermark. In this case, the re-generated watermark W_g will not be the same as the original watermark W_s. Thus, the re-generated watermark will not match the extracted watermark, i.e. W_x ≠ W_g, and the tampering event will surely be detected.
(c) The positions where the watermark is embedded were tampered with, while the other attribute values were not changed. In this case, the extracted watermark W_x will not be the same as the original watermark W_s. Thus, the re-generated watermark will not match the extracted watermark, i.e. W_x ≠ W_g, and the tampering event will surely be detected.
(d) Both the watermark bit positions and the attribute values were changed. There is only a very remote chance that the watermark re-generated from the changed attribute values and the altered inserted watermark turn out to be exactly the same. Hence W_x ≠ W_g and tampering is detected.
From the above, it is clear that the watermark is highly fragile. Any change made to the dataset that affects the tuple attributes, the embedded watermark, or both can be immediately detected.
Conclusion and Future scope
The proposal exploits the schema-less nature of NoSQL databases to inject an attribute into each tuple and save the watermark in it. The name of the attribute and its value can be modified dynamically to increase randomness and thereby security. Because the watermark of a tuple does not depend on other tuples, the scheme can be used in distributed/sharded systems. Further, the absence of shared data between tuples leaves room for concurrent operations and quick runtimes of the watermark embedding phase. Experimental results and analysis prove the fragility of our watermark against perturbations.
The same framework can be extended to embed a watermark that helps prove ownership of a particular database. The idea is to use several owner secrets that cannot be reproduced probabilistically by a data thief. Thus, by tweaking only the getWatermark function, the same framework can serve as a technique for ownership verification.
Figure 1: Flow diagram of the proposed watermarking technique
Figure 2: Addition of a new attribute within a selected tuple in the attribute injection process
Pseudocode 4: Tamper Detection
function verifyWatermark(tuple, secret) {
  return tuple[getInjectedAttributeName(tuple)] === getWatermark(tuple, secret);
}
Table 1: Performance overhead due to watermarking
Figure 3: Change in extracted watermark by altering the different attribute values | 3,120.6 | 2017-01-02T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Peptides reproducing the phosphoacceptor sites of pp60c-src as substrates for TPK-IIB, a splenic tyrosine kinase devoid of autophosphorylation activity.
TPK-IIB, a spleen tyrosine protein kinase devoid of autophosphorylation activity (Brunati, A. M., and Pinna, L. A. (1988) Eur. J. Biochem. 172, 451-457), has been purified to near homogeneity and assayed for its ability to phosphorylate the synthetic peptides EDNEYTA and EPQYQPA reproducing the two conserved phosphoacceptor sites of pp60c-src (Tyr-416 and Tyr-527). While EPQYQPA was phosphorylated with low efficiency (Km = 16.7 mM, Kcat = 14.4), EDNEYTA is an excellent substrate displaying a Km value of 58 microM and a Kcat value of 31.2. The single substitution, in the latter peptide, of the glutamic acid adjacent to the tyrosine by alanine to give EDNAYTA caused a 6-fold increase in the Km. The positive influence on the phosphorylation of the acidic residues at -3 and -4 relative to the tyrosine is indicated by comparison of the kinetic constants for peptides EDAAYAA (Kcat = 4.6, Km 0.325 mM) and QNAAYAA (Kcat 2.4, Km 1.7 mM). Furthermore, when residues in the peptide NEYTA were replaced by alanine, the phosphorylation of the peptides NAYTA and AAYAA, was almost negligible (in terms of Kcat/Km ratio). However, AEYTA, NEYAA and AEYAA were still phosphorylated, albeit less efficiently than NEYTA. The probability that these peptides will adopt a beta-turn is EDNAYTA = EDNEYTA, NAYTA greater than NEYTA, and no predicted beta-turn for AEYTA, NEYAA, and AEYAA. Therefore these results support the concept that an amino-terminal acidic residue(s) is strictly required by TPK-IIB, irrespective of peptide conformation, although a beta-turn may enhance the phosphorylation of those peptides that satisfy this requirement. Two other spleen tyrosine kinases, TPK-I/lyn and TPK-III, both related to the src family, also have a far greater preference for the peptide EDNEYTA over EPQYQPA. However, they can be distinguished from TPK-IIB by their lower affinity for the peptides EDNEYTA and NEYTA and by their different specificity towards the substituted derivatives of NEYTA. TPK-I/lyn, accepts most of the substitutions that are detrimental to TPK-IIB, the triply substituted peptide AAYAA being actually preferred over the parent peptide NEYTA. The substitution of glutamic acid by alanine is also tolerated by TPK-III, although, in contrast to TPK-IIB, the phosphorylation efficiency is drastically decreased by the substitution of the asparagine at position -2.(ABSTRACT TRUNCATED AT 250 WORDS)
Taken together, these data indicate that the highly conserved phosphoacceptor site, homologous to pp60c-src Tyr-416, is optimally configured for the specificity requirements of TPK-IIB, suggesting that TPK-IIB, or tyrosine protein kinases with similar specificity, might be involved in the phosphorylation of the members of the src family.
All the tyrosine protein kinases encoded by cellular genes of the src family contain two major phosphoacceptor sites which are homologous to Tyr-416 and Tyr-527 of pp60c-src (reviewed in (1)). The former represents the main autophosphorylation site in vitro, and its phosphorylation correlates with increased kinase activity (2). The latter, which is close to the carboxyl terminus and is absent in the oncogenic forms (v-src), appears to be responsible for the down-regulation of the kinase itself (1, 3). Recently a brain TPK incapable of autophosphorylation has been reported to specifically phosphorylate in vitro the carboxyl-terminal phosphoacceptor site of pp60c-src (4). Such an unusual lack of autophosphorylation activity is also shared by TPK-IIB (5), a spleen enzyme which can be resolved by chromatographic procedures from three other forms of tyrosine kinase, two of which (TPK-I and TPK-IIA) are closely related to each other. While TPK-I and -IIA are immunologically indistinguishable from the product of the lyn oncogene (6), which is a member of the src family (7), TPK-IIB does not cross-react with any of the monospecific antibodies against TPKs of the src family tested so far. Such behavior and its inability to undergo autophosphorylation argue against any close relationship of TPK-IIB to the src family, whose members invariably include a highly conserved autophosphorylation site (see Ref. 1). These findings prompted us to undertake a study aimed at assessing whether TPK-IIB, like the brain "c-src-kinase" (4), might be involved in the phosphorylation and regulation of the TPKs belonging to the src family. Here we present data concerning the phosphorylation of peptides reproducing the two phosphoacceptor sites of the src proteins, supporting the concept that Tyr-416 but not Tyr-527 is an excellent target for TPK-IIB.
Peptides were synthesized by the solid-phase technique from Fmoc amino acids using a manual synthesizer (Model Biolinx 4175, LKB). The side-chain functional groups of glutamic acid, aspartic acid, threonine, and tyrosine were blocked using acid-labile t-butyl groups. Syntheses were performed in continuous flow on a 0.1-nmol scale using the Fmoc amino acid active esters derived from pentafluorophenol or 3,4-dihydro-3-hydroxy-4-oxo-1,2,3-benzotriazine. The synthesis with pentafluorophenol esters was carried out in the presence of N-hydroxybenzotriazole as a catalyst. Synthetic peptides were cleaved from the resin and the side-chain protecting groups removed using 95% aqueous trifluoroacetic acid for 1 or 2 h. Reagents were evaporated under low vacuum. The residue was dissolved in water and lyophilized. Crude peptides were purified by HPLC on a Waters Delta-Pak C18 300 Å column (7.8 × 300 mm) using a Perkin-Elmer 410 LC B10 HPLC apparatus. Elution was performed with a linear gradient from 0.1% aqueous trifluoroacetic acid to 30% acetonitrile containing 0.08% trifluoroacetic acid in 25 min at 3 ml/min, with the eluent monitored at 215 nm. The purity of the peptides (95% or more) was checked by analytical HPLC on a reversed-phase Waters Delta-Pak C18 300 Å, 15-μm (3.9 mm × 30 cm) column and by amino acid analysis. Peptides AEYAA and AAYAA (8), prepared by the traditional method in solution, were kindly provided by Professor F. Marchiori, Department of Organic Chemistry, University of Padova. Monoclonal anti-src antibodies (mAb 327) were kindly provided by Dr. J. S. Brugge (State University, New York).
Anti-cst-1 antibodies recognizing pp60c-src, p 5 P, and pp62c-yes (12) and anti-SEEP antibodies raised against the conserved 330-345 segment of c-src were kindly provided by Dr. R. Kypta (EMBL, Heidelberg, Federal Republic of Germany).
The tyrosine protein kinases conventionally termed TPK-I, TPK-IIB, and TPK-III were isolated and partially purified from the particulate fraction of beef spleen essentially by the three-step procedure previously applied to rat spleen (5) and routinely assayed using poly(Glu,Tyr)4:1 as substrate (5). TPK-I is immunologically indistinguishable from the tyrosine protein kinase expressed by the lyn oncogene (6) and will hereafter be termed TPK-I/lyn. TPK-III is also related to the src family, as it cross-reacts with anti-cst-1 and anti-SEEP antibodies. Its precise identity, however, is still undetermined.
TPK-IIB has been further purified by gel filtration through a 2- × 80-cm Sephacryl S-200 column equilibrated and eluted with buffer A (5) including 0.5 M NaCl, followed by HPLC on a Mono Q HR 5/5 column connected to a Perkin-Elmer 410 LC B10 HPLC apparatus. The column, equilibrated with 20 mM Tris-HCl (pH 7.5), 10% glycerol, and 15 mM 2-mercaptoethanol, was washed with 15 ml of the same buffer and eluted at a flow rate of 0.8 ml/min with a linear NaCl gradient (0-0.5 M). For analytical purposes the fraction with the highest tyrosine kinase activity was resubmitted to Mono Q HPLC applying a discontinuous gradient (see Fig. 1). A single sharp peak of absorbance at 280 nm, overlapped by tyrosine kinase activity, was eluted at 0.18 M NaCl.
Phosphorylation of peptides was performed by incubation at 30 °C in 50 μl of a medium containing 50 mM Tris-HCl, pH 7.5, 10 mM MnCl2, 10 μM sodium vanadate, 20 μM [γ-32P]ATP (specific activity, 1000 cpm/pmol), and 2-10 units of tyrosine protein kinase; in the case of TPK-I/lyn, 2 M NaCl was also added to the incubation medium as an activator (6). Unless otherwise indicated the peptide concentration was 2 mM and the incubation time 10 min. The reaction was stopped by adding acetic acid (final concentration, 30%), and the 32P incorporated into the peptide was evaluated by combining ion-exchange (9) and isobutanol-benzene extraction (10) as detailed previously (11).
Autophosphorylation was performed by 10-min incubation in the same medium described above except for the absence of NaCl and of any peptide substrate. The reaction was started by adding [32P]ATP after a 10-min preincubation and was stopped by boiling in 2% SDS. The 32P incorporated was evaluated by 11% SDS-PAGE and autoradiography as described previously (5).
RESULTS
The spleen tyrosine protein kinase termed TPK-IIB was first characterized by its unique lack of autophosphorylation and its remarkable inhibition by heparin (5). Recently TPK-IIB has been shown not to cross-react with various monospecific antibodies raised against tyrosine kinases of the src family, namely those expressed by lck, hck, lyn, and fyn (6).
TPK-IIB also fails to cross-react with anti-c-src antibodies (mAb 327), with anti-cst-1 antibodies which recognize pp62c-yes, pp60c-src, and p 5 P, and with anti-SEEP antibodies raised against the 330-345 segment of pp60c-src, which is highly conserved in all the tyrosine kinases of the src family (not shown).
After the heparin-Sepharose chromatography to separate TPK-IIB from TPK-IIA (5), TPK-IIB underwent three additional purification steps as shown in Table I. On Mono Q HPLC the final preparation exhibits a single sharp protein peak coinciding with tyrosine kinase activity (Fig. 1). This peak gives rise to a single prominent protein band of the expected Mr (52,000) upon SDS-PAGE (Fig. 1, inset). The identification of this band as the tyrosine kinase itself is also consistent with the high specific activity of the final preparation of TPK-IIB, which is comparable with the activity exhibited by other highly purified preparations of tyrosine kinases (13, 14 and references cited therein). Such a high activity would hardly be compatible with the single prominent band merely representing a contaminating component.
As pointed out previously, TPK-IIB does not undergo any detectable autophosphorylation (5). Its lack of autophosphorylation sites has now been corroborated by its failure to cross-react with antiphosphotyrosine monoclonal antibodies (Fig. 2): no phosphotyrosine signal could be detected with TPK-IIB, with or without preincubation in the autophosphorylation medium. On similar blots, lysates from Jurkat cells expressing high levels of p56lck, a tyrosine kinase of the src family (1), gave a strong positive signal of the expected molecular weight. The same signal is evident with spleen TPK-I/lyn, another src-related tyrosine kinase capable of autophosphorylation (5, 6). The specificity of the reaction was established by preincubation of the antibody with phosphotyrosine, which totally eliminated the signal.
This finding clearly confirms that TPK-IIB is devoid of phosphotyrosyl sites, a quite unusual feature distinguishing it from all the known tyrosine kinases of the src family. TPK-IIB nevertheless phosphorylates synthetic peptides reproducing the two main tyrosyl phosphoacceptor sites that are conserved in all the cellular members of the src family. As shown in Table II the peptide EDNEYTA, reproducing the sequence around pp60c-src Tyr-416, is an especially good substrate, particularly by virtue of its Km value (58 μM), which is one of the lowest ever reported for peptides of comparable size used as substrates for tyrosine kinases (e.g. see Ref. 15). The phosphorylation efficiency of EDNEYTA by TPK-IIB is more than 30 times higher than that of angiotensin II, a widely employed substrate of tyrosine protein kinases, while the peptide EPQYQPA, reproducing the down-regulation site corresponding to pp60c-src Tyr-527, is a much poorer substrate: its Km value is more than two orders of magnitude higher and its Kcat is about two-fold lower than those of EDNEYTA (Table II).
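As a back-of-the-envelope check of this comparison, using only the constants quoted above and leaving Kcat in the arbitrary units of Table II: Kcat/Km for EDNEYTA is roughly 31.2 / 0.058 mM ≈ 540 mM^-1, whereas for EPQYQPA it is roughly 14.4 / 16.7 mM ≈ 0.86 mM^-1, i.e. the efficiency toward the Tyr-416 peptide exceeds that toward the Tyr-527 peptide by a factor on the order of 600.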
In order to obtain a better insight into the particular susceptibility of the peptide EDNEYTA to phosphorylation by TPK-IIB, a number of derivatives have been synthesized and analyzed for their kinetic parameters (Table II). It is evident that, except for the conservative replacement of Glu for Asp at position -3 relative to tyrosine, all the other substitutions tested are unfavorable, giving rise to peptides that invariably display higher Km and sometimes also lower Kcat values. The replacement of the glutamic acid adjacent to tyrosine is especially unfavorable. It is more detrimental, however, in the pentapeptide NAYTA than in the heptapeptide EDNAYTA, suggesting that the additional acidic residues at positions -3 and -4, present only in the heptapeptide, may exert a positive influence, reinforcing the effect of glutamic acid at position -1. Consistent with this hypothesis, EDNEYTA, EDNAYTA and EDAAYAA are much better substrates than NEYTA, NAYTA, and AAYAA, respectively, and EDAAYAA is phosphorylated more efficiently than QNAAYAA. The striking superiority of EDNEYTA over its triply substituted EDAAYAA derivative, however, highlights the crucial relevance of the amino acids nearer to the tyrosyl residue. The comparative analysis of the individual monosubstituted derivatives of NEYTA, namely AEYTA, NAYTA, and NEYAA, clearly indicates the most harmful substitution to be that of the glutamic acid adjacent to the amino-terminal side of tyrosine, NAYTA exhibiting a 10-fold higher Km and a more than 8-fold lower Kcat than the parent peptide NEYTA. It should be noted that the propensity to adopt a β-turn conformation is similar for NAYTA and NEYTA and for EDNAYTA and EDNEYTA, despite their sharply different susceptibility to phosphorylation. Apparently, therefore, an acidic side chain adjacent to the amino-terminal side of tyrosine acts as a powerful specificity determinant independent of its ability to confer a β-turn conformation.
Table I (legend). Purification of TPK-IIB from bovine spleen. The isolation of the particulate fraction from 3.5 kg of spleen from freshly slaughtered beef and its extraction with 1% Nonidet P-40 were performed essentially as previously described (5). The crude extract was resolved into four fractions (TPK-I, TPK-IIA, TPK-IIB, and TPK-III) by combining DEAE-Sepharose and heparin-Sepharose chromatographies, as in a previous study (5). TPK-IIB was further purified by phosphocellulose chromatography (5) and by Sephacryl S-200 gel filtration and Mono Q HPLC as described under "Materials and Methods." The activity in the crude extract and after the first purification step is underestimated due to the presence of tyrosine protein phosphatase activity not completely inhibited by the vanadate present in the kinase assay. This may also account for the higher than 100% apparent recovery of activity after the DEAE-Sepharose and heparin-Sepharose steps if the activities of the other tyrosine kinase fractions, TPK-I/TPK-III and TPK-IIA, resolved by DEAE-Sepharose and heparin-Sepharose, respectively (5), are considered.
Figure 1 (legend fragment). ... (Table I) was subjected to a second Mono Q HPLC by applying a discontinuous gradient (dotted line). Proteins were automatically recorded at 280 nm (solid line). 0.4-ml fractions were collected and 10-μl aliquots were assayed for tyrosine kinase activity.
On the other hand, the unfavorable effect of substituting the neutral residues at positions -2 and/or +1 with alanine could be interpreted in terms of conformational alterations, since the singly and doubly substituted derivatives AEYTA, NEYAA and AEYAA have lost the predicted β-turn conformation of the parent pentapeptide NEYTA (see Table II). The same effect could account for the decreased efficiency of EDAAYAA compared with EDNAYTA.
The excellent phosphorylation efficiency of EDNEYTA by TPK-IIB, by virtue of its low Km, is especially remarkable if compared with tyrosine kinases of the src family, whose Km values for src-derived peptides very similar to EDNEYTA have been reported to lie in the millimolar range (reviewed in Ref. 15). Also consistent with this observation is the fact that the src-related spleen tyrosine kinase termed TPK-I/lyn exhibits a 3.8-mM Km for EDNEYTA (Table III), a value which compares quite well with the values of 6.25 and 5.0 mM reported for the phosphorylation of the peptides EDNEYTARQG and IEDNEYTARQ by pp60c-src (16) and p56lck (17), respectively. Moreover, those substitutions that decrease the phosphorylation of the parent pentapeptide by TPK-IIB are favorable to TPK-I/lyn, whose phosphorylation efficiency is actually higher with the triply substituted derivative AAYAA than it is with NEYTA (Table III).
TPK-III, another spleen tyrosine kinase (5) immunologically related to the src family, being recognized by anti-cst-1 and anti-SEEP antibodies,2 displays a peptide substrate specificity distinct from those of either TPK-IIB or TPK-I/lyn. It is reminiscent of TPK-IIB in that the parent pentapeptide is preferred over the substituted derivatives (Table III). The replacement of the asparagine at position -2, however, is much more detrimental than that of the glutamic acid adjacent to the tyrosyl residue (Table III).
Table II (legend fragment). ... Chou and Fasman (20) for all the quartets of amino acids including tyrosine. Only the highest value for each peptide is reported, and the corresponding predicted β-turn is shown if the pt value is higher than the cut-off value of pt = 0.75 × 10^-4 (20).
Table II (column headings). Peptides, Kcat, apparent Km, Kcat/Km, pt.
Table III (legend). Kinetic constants of src-derived peptides for partially purified tyrosine kinases TPK-I/lyn and TPK-III. Vmax (expressed as nmol·min^-1·mg^-1) and apparent Km values were determined as detailed in Table II for TPK-IIB. In the case of TPK-I/lyn and TPK-III, however, Kcat values could not be calculated since the enzyme preparations were only partially purified.
Footnote 2: A. M. Brunati, R. Kypta, A. Donella-Deana, and L. A. Pinna, unpublished data.
Any more direct and exhaustive comparison of the catalytic efficiencies of the three spleen tyrosine kinases considered here was hampered by the fact that TPK-I/lyn and TPK-III are still only partly purified, so that their Kcat values could not be calculated or compared with those of TPK-IIB.
The possibility that the lower Km values of TPK-IIB might merely reflect its higher degree of purification was ruled out by the fact that identical values were obtained when the kinetic experiments were performed with TPK-IIB after heparin-Sepharose, the degree of purification of which is comparable to that of TPK-I/lyn and TPK-III (not shown).
In order to assess the actual ability of TPK-IIB to phosphorylate tyrosine kinases of the src family, TPK-I/lyn, whose autophosphorylation site is exactly reproduced by the heptapeptide EDNEYTA, was incubated with TPK-IIB in the absence and presence of heparin, which is a powerful inhibitor of TPK-IIB (5) while it stimulates TPK-I/lyn activity (5, 6). As shown in Fig. 3A, TPK-IIB promotes an increased radiolabeling of TPK-I/lyn which is suppressed by heparin. The 32P-peptide maps obtained from TPK-I/lyn either autophosphorylated or phosphorylated in the presence of TPK-IIB are identical (Fig. 3B), suggesting that the same phosphoacceptor site(s) are involved. This would indicate that the sequence EDNEYTA is preferentially affected by TPK-IIB even when it is included in the parent protein.
DISCUSSION
A somewhat paradoxical outcome of this work is that peptides reproducing the highly conserved autophosphorylation site shared by all TPKs of the src family (Tyr-416 of pp60c-src) are more efficiently phosphorylated by a spleen TPK presumably unrelated to the src family (TPK-IIB) than they are by members of the src family such as pp60c-src (16) and p56lck (18), and by two src-related spleen TPKs, namely TPK-I/lyn and TPK-III. These peptides are excellent substrates for TPK-IIB, displaying Km values in the micromolar range. The fact that substitution of any of the amino acids surrounding tyrosine decreases their ability to serve as substrates for TPK-IIB suggests that all the features of the main phosphoacceptor site of the src family contribute to the specificity for this kinase. The membership of this tyrosine kinase in the src family, though not completely ruled out, is rendered extremely unlikely by its lack of autophosphorylation activity (an activity shared by all the src-related TPKs known so far) and by its markedly different peptide substrate specificity. Furthermore, the modifications that decrease the phosphorylation of synthetic peptides by TPK-IIB either do not affect or even improve the phosphorylation of the same peptides by TPK-I/lyn. Such a behavior, albeit somewhat paradoxical, was not totally unexpected considering that amino-terminal acidic residues (which are invariably found at the src autophosphorylation sites) are not required for the phosphorylation of synthetic peptides by TPKs related to pp60c-src (16, 19), and may even exert an unfavorable effect (8).
In this connection it would be interesting to establish the identity of TPK-III, which is also a member of the src family according to its reactivity with anti-cst-1 and anti-SEEP antibodies raised against conserved segments of src protein kinases. Its substrate specificity, however, is significantly distinct from that of TPK-I/lyn. The finding that TPK-III is not recognized by a variety of monospecific antibodies raised against the products of the c-src, lck, hck, lyn, fyn (6), and yes oncogenes increases the probability that it might be identical or very closely related to the last member of the src family, namely fgr. Interestingly, the fgr tyrosine kinase is distinguished by having the least conserved autophosphorylation site among the members of the src family (see Ref. 1). This might correlate with the distinct site specificity of TPK-III.
It is possible that the great susceptibility of EDNEYTA and NEYTA to phosphorylation by TPK-IIB might partially derive from their adopting a β-turn conformation, which is predicted by the method of Chou and Fasman (20). However, the probability of adopting a β-turn conformation is either the same or even higher for EDNAYTA and NAYTA, which are much worse substrates than EDNEYTA and NEYTA, respectively. It is reasonable therefore to conclude that the favorable effect of glutamic acid at position -1 is accounted for predominantly by the acidic nature of its side chain, rather than by any effect of conformation. The intrinsic importance of the acidic nature of the residue adjacent to the amino-terminal side of tyrosine is also corroborated by the very poor susceptibility to phosphorylation by TPK-IIB of EPQYQPA, despite its high probability of adopting a β-turn conformation (see Table II). The additional finding that the triply substituted heptapeptide EDAAYAA, although a much poorer substrate than the parent peptide, is nevertheless phosphorylated more efficiently than its neutral derivative QNAAYAA discloses the favorable role of the acidic residue(s) at positions -3 and/or -4 as well.
Nevertheless, it is possible that a β-turn conformation might improve the suitability of phosphoacceptor sites that already fulfill the minimum structural requirements. This would account for the decreased phosphorylation efficiency of the peptides AEYTA, NEYAA, and AEYAA, which have lost the predicted β-turn conformation of NEYTA.
Taken together these observations would suggest that TPK-IIB or other TPK(s) with similar site specificity are involved in the phosphorylation of the src products in vivo. It should be recalled in this connection that although pp60c-src Tyr-416 (and the homologous tyrosines of the other src-TPKs) undergo in vitro autophosphorylation, no incontrovertible evidence is available that their in vivo phosphorylation invariably occurs by the same mechanism. Rather, the possibility is still open that heterologous phosphorylation of Tyr-416 by a distinct kinase could contribute to the activation of pp60c-src (see Ref. 1). The inter-, rather than intramolecular, mechanism of in vitro autophosphorylation (21) would be consistent with this hypothesis, assuming that a kinase with higher affinity for the phosphorylation site (like TPK-IIB) is present. On the other hand, it is also possible that the autophosphorylation efficiency of the src-related tyrosine kinases is increased either by conformational features inherent in the tertiary structure of the kinase itself or by so far unidentified endogenous effectors. In such a case Tyr-416 phosphorylation/activation by a heterologous kinase may still become relevant whenever the cellular src-TPKs have an intrinsically low activity, because of down-regulation by concomitant Tyr-527 phosphorylation. In any event the hypothesis that the autophosphorylation site(s) of src tyrosine kinases might be targeted by TPK-IIB has been corroborated, at least in vitro, by showing that TPK-IIB promotes an increased phosphorylation of TPK-I/lyn, giving rise to the same 32P-peptide map obtained with autophosphorylated TPK-I/lyn.
In this respect TPK-IIB seems to be different from the brain TPK reported to specifically phosphorylate pp60c-src at its carboxyl-terminal site (Tyr-527) (4), despite sharing the same inability to perform autophosphorylation, a most unusual property among tyrosine kinases. TPK-IIB actually displays a very low phosphorylation efficiency toward the synthetic peptide EPQYQPA reproducing the sequence around Tyr-527. It should be noted however that such a peptide is an extremely poor substrate for TPK-I/lyn too (Table II), consistent with the concept that the carboxyl-terminal tyrosine is not an autophosphorylation site but, rather, a target for another TPK(s) involved in the down-regulation of src-TPKs.
Although TPK-IIB poorly phosphorylates the peptide EPQYQPA, it is still conceivable that its efficiency might increase once this sequence is included in a more extended protein domain. Considering, however, the opposite regulatory functions of Tyr-416 and Tyr-527, it seems unlikely that they might be targets for the same kinase. All the data obtained with synthetic peptides support the concept that TPK-IIB displays a remarkable affinity for the site including Tyr-416 but not for the one including Tyr-527. A similar preference for EDNEYTA over EPQYQPA is shared by the other spleen tyrosine protein kinases characterized so far. In this respect the synthetic peptide EPQYQPA could prove a suitable substrate for monitoring and detecting the tyrosine protein kinase(s) which are able to down-regulate the TPKs of the src family by phosphorylating their carboxyl-terminal site. | 6,133.4 | 1991-09-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
A METHOD FOR EXTRACTING SUBSTATION EQUIPMENT BASED ON UAV LASER SCANNING POINT CLOUDS
Smart grid construction puts higher demands on the construction of 3D models of substations. However, due to the complex and diverse structures of substation facilities, it is still a challenge to extract the fine three-dimensional structure of the substation facilities from massive laser point clouds. To solve this problem, this paper proposes a method for extracting substation equipment from laser scanning point clouds. Firstly, in order to improve processing efficiency and reduce noise, a regular voxel grid sampling method is used to down-sample the input point cloud. Furthermore, a multi-scale morphological filtering algorithm is used to segment the point cloud into ground points and non-ground points. Based on the non-ground point cloud data, the substation region is extracted using plane detection in point clouds. Then, for the filtered substation point cloud data, a three-dimensional polygon prism segmentation algorithm based on point dimension features is proposed to extract the substation equipment. Finally, substation LiDAR point cloud data collected by a UAV laser scanning system is used to verify the algorithm, and a qualitative and quantitative comparison between the detected results and the manually extracted results is carried out. The experimental results show that the proposed method can accurately extract the substation equipment structure from the laser point cloud data. The results are consistent with the manually extracted results, which demonstrates the great potential of the proposed method in substation extraction and power system 3D modelling applications.
INTRODUCTION
In the digital era, smart grid construction puts forward higher demands on substation engineering construction (Guo et al., 2013; Zhang et al., 2013). Through the innovation of technical means, the construction and management level of substation engineering must be further improved. At present, the main direction of China's power grid construction and development is to ensure construction quality, improve operational efficiency and reduce construction cost.
The digital power grid is the data source for intelligent analysis and management of the power grid, and it is the cornerstone of building the smart grid. The key to the digital power grid lies in the digital modelling and storage of pivotal components such as power grid facilities. The results of digital modelling become the basic data of the whole-life-cycle digital system of a project. Through the development of corresponding digital modelling and design standards, and with the help of 3D digital design means, the digital achievements of new power grid projects can be transferred. However, the existing power grid faces many problems, such as outdated or incomplete design data and earlier non-standard construction, which lead to inconsistency between the design and the final construction results. Therefore, it is necessary to use efficient three-dimensional model acquisition means to collect the actual construction scene data for accurate as-built model building, thus providing the digital data basis for the construction of the smart grid (Tang et al., 2009; Lv et al., 2016).
RELATED WORKS
Laser scanning equipment can achieve 3D high-precision point cloud scanning of electrical equipment and directly obtain high-precision 3D data of substations without affecting their safe operation. Laser point cloud data with a precise geometric description can also be fused with RGB or multispectral camera data, so that in addition to 3D geometric information, the material and texture information of the scanned equipment can be obtained. Compared with traditional modelling methods, 3D fine modelling using laser point cloud data fused with multi-source spectral image data has the advantages of higher accuracy and efficiency, and lower labour cost (Guan et al., 2016; Liu et al., 2010). 3D laser scanning measurement technology has important applications in the extraction of power grid equipment (Cheng et al., 2014).
The substation is an important part of the power grid, but compared with other ground objects (such as buildings, roads, etc.), the structures of substation facilities are more complex and their types more diverse. How to extract the fine three-dimensional structure of substation facilities from massive laser point cloud data is still an important open problem for the construction of the digital power grid. At present, target extraction for power equipment based on laser point cloud data mainly focuses on power grid components with simple geometry, such as towers and power lines. For example, Chen et al. (2015) and Lin et al. (2016) used 3D laser point cloud data obtained from UAV and airborne platforms to extract power lines and conducted diagnostic analysis of the safe distance of transmission channels, and also extracted tower locations from UAV point cloud data and established three-dimensional tower models. However, research on extracting substation equipment from laser point cloud data is relatively rare. Li et al. (2016) proposed a 3D modelling method for substations based on massive point cloud data. In this method, the filtered point cloud is divided into point cloud sets of different priorities through clustering, local surface fitting is then performed for the different point cloud sets, and finally the different fitted surfaces are integrated to generate the final three-dimensional model. This method can deal with massive point clouds quickly after clustering segmentation, but it requires complex geometric and topological relationship construction to ensure the integrity of the local fitted surfaces and the merging rules between different surfaces. A division method based on spatial region complexity was also proposed for fine measurement of substations, followed by automatic and manual point cloud de-noising, filtering and 3D visualization. Tian et al. (2018) used a Streetview panoramic camera and the IMS3D mobile mapping system to collect 3D laser point clouds of substations, and then used the 3D Max modelling software to build 3D models of substations. Based on manually labelled training samples, Fang et al. (2015) used random forests to segment substation point clouds and extract the geometric structure of the substation point cloud.
Due to the large number of internal components and the complexity of the scene, there are many special-shaped structures and power components in a substation. The shapes of the internal parts of the substation are irregular, and the structures of the feature points, feature lines and feature planes are complex and diverse. At present, 3D modelling of substations still requires substantial manual interaction, and automatically extracting the three-dimensional structure of the substation from massive point clouds remains difficult (Fang et al., 2015; Peng et al., 2018). Therefore, this paper proposes a new method to extract the equipment and facilities from massive point cloud data.
Figure 1. Overall workflow of the method
The whole algorithm can be divided into the following parts, and the workflow is shown in Figure 1.
1) Regularization of point cloud data. To address the large redundancy, long reconstruction time and low efficiency of 3D point cloud data, a point cloud simplification algorithm based on regular voxel grid down-sampling is adopted for subsequent processing.
2) Point cloud separation. The original point cloud data contains a large number of ground points, so the ground and non-ground data must be separated. In this paper, morphological filtering is used for the separation, and the non-ground data are used for further extraction.
3) Point cloud extraction of the substation equipment. This paper designs a method for extracting the substation area based on consistency estimation of point cloud normals, filtering out the irrelevant non-substation point data outside the substation area. Based on the linear characteristics of substation equipment, the point cloud dimension features are calculated to extract the fine equipment structure.
Regularization of Substation Point Clouds
In order to improve processing efficiency, it is necessary to simplify the point cloud and eliminate redundancy in the point cloud data. This paper adopts a point cloud simplification method based on regular voxels. The point cloud data is compressed by regular voxel down-sampling while preserving the spatial shape and structural features of the point cloud, which improves the efficiency of subsequent point cloud filtering and feature calculation.
By calculating the side length L of the cubic grid, the original point cloud is decomposed into m × n × l small grids. The extreme coordinate values are obtained by traversing all points, and the lengths of the three sides of the 3D voxel grid are calculated according to formula (1). In order to ensure that all laser points lie inside the three-dimensional bounding box, the bounding box is extended outward by a distance correction, which is usually set to the side length of a single voxel.
The minimum 3D bounding box Cube(L_x, L_y, L_z) is calculated from the data statistics to prepare for regularizing the 3D point cloud data. According to the set edge length V_length, the minimum 3D bounding box space is divided into m × n × l voxels. Firstly, each laser point is assigned to the corresponding voxel; secondly, the original point cloud is normalized by calculating the centre of mass P of each voxel and replacing the points in that voxel with P. After voxel regularization, a large number of redundant points in the point cloud are eliminated.
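The following is a minimal numpy sketch of this regular-voxel down-sampling step; it is an illustrative re-implementation under the stated assumptions, not the authors' C++/PCL code.

import numpy as np

def voxel_downsample(points, v_length):
    # points: (N, 3) array of x, y, z; v_length: voxel edge length in metres
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / v_length).astype(np.int64)  # voxel index of each point
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)  # group points by voxel
    sums = np.zeros((len(keys), 3))
    np.add.at(sums, inverse, points)                             # accumulate per-voxel sums
    counts = np.bincount(inverse, minlength=len(keys)).reshape(-1, 1)
    return sums / counts                                         # centroid replaces each voxel's points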
Separation of the point clouds
The original substation point clouds contain a large number of ground points. Thus, it is necessary to separate the point clouds into ground points and non-ground points. In this paper, the morphological method (Zhang et al., 2003; Balado et al., 2020) is adopted to filter the point clouds. The two basic morphological operations of corrosion (erosion) and expansion (dilation) are defined in equations (2) and (3):

Corrosion: Z(i, j) = min_{(s, t) ∈ w} Z_0(s, t)    (2)
Expansion: Z(i, j) = max_{(s, t) ∈ w} Z_0(s, t)    (3)

where f is the image to be processed, g is the morphological structure element, Z(i, j) is the value at pixel position (i, j) after the morphological operation, w is the structure element window, and Z_0(s, t) is the value at pixel position (s, t) in the area of the original image covered by the structure element window.
When morphological theory is applied to LiDAR point cloud data filtering, the size of the structural element is set larger than the size of the ground objects, and an opening operation is carried out to filter out the above-ground objects: first, the structural element g is used to corrode f, and then the corrosion result is expanded. A multi-scale morphological filter is designed to segment the point clouds; specifically, the filtering window adopts a linearly increasing structure element according to the characteristics of different features. The detailed process is as follows: (1) Corrosion. Set the structure element window size to w × w, and use this window to traverse the laser point clouds.
For each point within the structural window, the elevation distribution is calculated, and the minimum elevation within the structure window is taken as the elevation value after corrosion.
(2) Expansion. The structure element window is set to the same size as in step 1 (w × w). After traversing the laser point clouds, the data obtained in step 1 is processed by a dilation operation with a w × w structure window. Firstly, the original point cloud elevation value is replaced with the output elevation value from step 1; then, comparing the elevations of the points within the structure window, the maximum elevation in the w × w window is taken as the expanded elevation value.
(3) Ground point extraction. Whether a point is a ground point can be determined from the elevation difference generated by the corrosion and expansion process. Let Z_P be the original elevation of point P. After steps 1-2, the absolute value of the difference between the expanded elevation value Z_P1 of point P and the original elevation value Z_P is calculated. If this absolute difference is less than or equal to a hard threshold t, point P is classified as a ground point; otherwise, it is classified as a non-ground point.
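A rasterised, single-scale sketch of steps (1)-(3) is given below for illustration, using scipy's minimum and maximum filters as the erosion and dilation; the authors' multi-scale filter with a linearly increasing window, and the exact parameter values, are not reproduced here, so the cell size, window size and threshold are assumptions.

import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def ground_filter(points, cell=1.0, window=5, t=0.3):
    # points: (N, 3); cell: raster cell size; window: structure element size; t: elevation threshold
    mins = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - mins) / cell).astype(int)
    grid = np.full(tuple(ij.max(axis=0) + 1), np.inf)
    np.minimum.at(grid, (ij[:, 0], ij[:, 1]), points[:, 2])      # lowest elevation per cell
    grid[np.isinf(grid)] = points[:, 2].max()                    # fill empty cells
    opened = maximum_filter(minimum_filter(grid, size=window), size=window)  # erosion then dilation
    return np.abs(points[:, 2] - opened[ij[:, 0], ij[:, 1]]) <= t  # True = ground point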
Substation Equipment Extraction
After the morphological filter segmentation, the substation equipment point clouds are contained within the non-ground point clouds. The next step is to localize the coverage of the substation, that is, to calculate the convex polygon of the substation, which can be obtained from the levelling property of the substation. Specifically, the localization of the substation is based on the assumption that the ground plane of the substation is horizontal. By estimating the normal distribution of the ground point clouds, the plane containing the largest number of points is extracted. The convex polygon construction method, alpha-shape (Santos et al., 2019), is then used to calculate the polygon boundary of this levelling area. Apparently, the substation equipment's boundary coincides with the levelling area's boundary; however, this boundary only provides a rough range of the substation. In order to extract the fine structure of the substation, a method based on point dimension features is designed to refine the substation equipment point clouds.
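As an illustration only, the sketch below approximates this localisation step: the dominant horizontal level is found from an elevation histogram (a simplification of the normal-consistency estimation described above), and scipy's ConvexHull stands in for the alpha-shape boundary; the bin count and tolerance value are assumptions.

import numpy as np
from scipy.spatial import ConvexHull

def substation_boundary(ground_points, z_tol=0.15):
    z = ground_points[:, 2]
    hist, edges = np.histogram(z, bins=100)
    z0 = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])  # dominant (levelling) elevation
    level = ground_points[np.abs(z - z0) < z_tol]                     # points on the horizontal plane
    hull = ConvexHull(level[:, :2])                                   # 2D boundary of the levelling area
    return level[hull.vertices, :2]                                   # polygon vertices (x, y)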
Point cloud dimension features (Hackel et al., 2016; Weinmann et al., 2017) describe the shape distribution of a point cloud and have been widely used in point cloud segmentation and classification (Yang et al., 2013). Substation equipment usually has linear and planar distribution characteristics: planar devices (e.g. transformers) are usually connected to linear devices (e.g. conductors, insulators). Thus, in this paper, dimension features and a region-growing algorithm are combined to extract substation equipment. The dimension features of a LiDAR point cloud are mathematically defined as

a_1D = (√λ_1 − √λ_2) / √λ_1,  a_2D = (√λ_2 − √λ_3) / √λ_1,  a_3D = √λ_3 / √λ_1

where a_1D + a_2D + a_3D = 1, and λ_1, λ_2, λ_3 (λ_1 ≥ λ_2 ≥ λ_3) are the eigenvalues of the covariance matrix constructed from the point's neighbourhood point set.
In order to determine the optimal neighbourhood radius and avoid an inaccurate description of the local geometric features of the point cloud, this paper adopts the method of minimizing the dimension feature entropy to adaptively select the optimal neighbourhood radius. The basic procedure is as follows. Firstly, the minimum and maximum neighbourhood search radii and the step size, r_min (initialized as 0.5 m), r_max (initialized as 2 m) and r_step (initialized as 0.1 m), are fixed empirically. Secondly, for each candidate radius, the covariance matrix of the neighbourhood is decomposed to calculate the eigenvalues λ_1, λ_2, λ_3 and the corresponding (a_1D, a_2D, a_3D) values. Finally, the entropy function is defined by equation (6):

E_f = −a_1D ln(a_1D) − a_2D ln(a_2D) − a_3D ln(a_3D)    (6)

where E_f is the entropy of the dimension features.
According to formula (6), the radius corresponding to the minimum value of E_f is the optimal neighbourhood radius, and the (a_1D, a_2D, a_3D) obtained at this radius describes the geometric distribution of the point cloud around the point.
Mathematically, for the point cloud of a linear target, the eigenvalue of the neighbourhood point set along its principal direction is much larger than those in the other two directions, i.e. λ_1 ≫ λ_2 ≅ λ_3. Thus, the point clouds of linear equipment are easily distinguished, and they are selected as the seed points for region growing, which generates point sets with the same linear distribution characteristics. Through the spatial connections between different devices, linear and planar equipment can then be extracted, realizing the extraction of structured substation facilities.
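A compact sketch of the per-point dimension features and the entropy-based radius selection is given below; it uses scipy's cKDTree for the radius search and is an illustrative approximation of the procedure described above, not the authors' implementation (the region-growing stage is omitted).

import numpy as np
from scipy.spatial import cKDTree

def dimensionality(neigh):
    # eigenvalues of the neighbourhood covariance, sorted so that lambda1 >= lambda2 >= lambda3
    lam = np.sort(np.linalg.eigvalsh(np.cov(neigh.T)))[::-1]
    s = np.sqrt(np.maximum(lam, 0.0))
    return np.array([(s[0] - s[1]) / s[0], (s[1] - s[2]) / s[0], s[2] / s[0]])  # (a1D, a2D, a3D)

def best_features(points, p_idx, radii=np.arange(0.5, 2.01, 0.1)):
    tree = cKDTree(points)
    best, best_entropy = None, np.inf
    for r in radii:
        idx = tree.query_ball_point(points[p_idx], r)
        if len(idx) < 4:
            continue                                   # too few neighbours for a stable covariance
        a = dimensionality(points[idx])
        entropy = -np.sum(a * np.log(a + 1e-12))       # E_f from equation (6)
        if entropy < best_entropy:
            best, best_entropy = a, entropy
    return best                                        # features at the radius minimising E_f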
EXPERIMENTAL
The whole approach proposed in this paper is implemented in C++ with the PCL open-source library under Windows 10. The point cloud of a typical substation collected by a UAV LiDAR system is used to verify the method. Data acquisition and sensor calibration are not the main topics studied in this paper, so the data quality is assumed to fulfil the requirements. The laser point cloud covers an area of 480 m × 370 m with an average point density of 17 points/m2 and a total of 4,574,697 points. The original LiDAR point cloud data is shown in Figure 2. The morphological filtering results are shown in Figure 4. Figure 4(a) shows the extracted substation ground point cloud, mainly including the ground points of the substation area (a flat area) and the surrounding undulating terrain. Figure 4(b) shows the extracted non-ground point cloud, including substation equipment points and non-ground points outside the substation, the latter mainly composed of vegetation and trees. Comparing Figures 4(a) and 4(b), it can be seen that after morphological filtering the ground points are well separated. However, due to interference from surrounding non-substation points, there are many non-equipment points in the extracted non-ground points. Therefore, this paper designs a method for extracting the substation area based on consistency estimation of point cloud normals, by which the irrelevant non-ground point data outside the substation area are filtered out.
(a) Extracted ground points. The ground points mainly contain the levelling area point clouds and the terrain points.
(b) Extracted non-ground points. The non-ground points mainly contain the substation equipment points and the vegetation points.
After the substation area is located, the point cloud data within the substation area are classified based on the dimension features. By calculating the dimension features point by point, the points with linear and planar characteristics are retained to form the substation equipment point cloud, as shown in Figure 6. Compared with Figure 5(b), the non-ground points outside the substation area and the non-equipment points inside the substation have been filtered out well, while the equipment point cloud inside the substation has been well preserved, laying a good data foundation for the subsequent modelling and analysis.
In the quantitative comparison, manually selected substation equipment point clouds are taken as the ground truth, and the results extracted by the proposed method are compared against them. The time cost of each step of the algorithm is also recorded. The experimental results are shown in Table 1. The three stages of point cloud regularization, point cloud filtering, and substation equipment extraction take 48 s, 92 s, and 50 s, respectively, and the substation levelling-area localization takes an additional 43 s. The precision/recall of point cloud filtering, substation levelling-area localization, and substation equipment extraction are 72%/90%, 91%/81%, and 83%/84%, respectively. The distance between the automatically and manually extracted point clouds is almost everywhere smaller than 0.1 m, which is accurate enough for the subsequent fine modelling.
At the same time, the efficiency of the algorithm is compared with that of manual extraction, using the extraction time per square metre as the efficiency metric: the smaller the time, the higher the efficiency. The proposed algorithm requires 7 s/m², whereas manual extraction requires 87 s/m², so the extraction efficiency of the proposed algorithm is about 12.4 times that of manual extraction, which greatly improves the efficiency of extracting equipment laser point clouds in substations.
Table 2. Efficiency comparison between automatic extraction and manual extraction
CONCLUSION
The extraction of equipment laser point clouds in substations is the key to, and the foundation of, analysing the pivotal dimensions of substation equipment and assisting in modelling the equipment entities. This paper proposes a method for extracting equipment point clouds from UAV LiDAR point clouds, and the detection results are evaluated qualitatively and quantitatively on point clouds collected by a UAV LiDAR system. The accuracy of the proposed method is approximately equal to that of traditional manual extraction, while the efficiency is much higher: the experimental results show that the proposed method improves the efficiency by about 12 times in this test area, demonstrating the degree of automation achieved. Future work will focus on the automatic construction of fine-scale entity models from the extracted substation facility data. | 4,932.2 | 2020-11-23T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
The Tumor Suppressor Protein Fhit
FHIT (fragile histidine triad) is a candidate human tumor suppressor gene located at chromosome 3p14.2, a location that encompasses the FRA3B chromosomal fragile site. Aberrant transcripts have been detected in a variety of primary tumors, and homozygous deletions in the FHIT locus have been detected in different tumor cell lines. The gene product Fhit in vitro possesses the ability to hydrolyze diadenosine 5′,5′′′-P1,P3-triphosphate (Ap3A). The mechanism of action of Fhit as a tumor suppressor is unknown. Because the tubulin-microtubule system plays an important role in cell division and cell proliferation, we investigated the interaction between wild-type Fhit or mutant Fhit (H96N) and tubulin in vitro. The mutant form of Fhit (H96N) lacks Ap3A hydrolase activity but retains tumor suppressor activity. We found that both wild-type and mutated forms of Fhit bind to tubulin strongly and specifically, with Kd values of 1.4 and 2.1 μM, respectively. Neither wild-type nor mutant Fhit causes nucleation or formation of microtubules, but in the presence of microtubule-associated proteins, both wild-type and mutant Fhit promote assembly to a greater extent than do microtubule-associated proteins alone, and the microtubules formed appear normal by electron microscopy. Our results suggest the possibility that Fhit may exert its tumor suppressor activity by interacting with microtubules and also indicate that the interaction between Fhit and tubulin is not related to the Ap3A hydrolase activity of Fhit.
Multiple deletions in the short arm of chromosome 3 have been frequently found in different human cancers (1)(2)(3)(4), implying the presence of tumor suppressor genes. Recently the FHIT 1 gene was mapped by positional cloning to chromosome region 3p14.2, a location encompassing FRA3B, the most active constitutive chromosomal fragile site known. Aberrant transcripts have been detected in a variety of primary tumors, and homozygous deletions in the FHIT locus have been detected in different tumor cell lines (2)(3)(4).
FHIT encodes a 147-amino acid protein that has a HIT (histidine triad) sequence motif (positions 94 -99) identified by the presence of a histidine triad, HXHXHX, where X is a hydrophobic residue (1). Fhit has Ap 3 A hydrolase activity in vitro (5). FhitH96N, generated by site-directed mutagenesis of FHIT, has negligible Ap 3 A hydrolase activity, which suggests that the central histidine of the triad is essential for hydrolase activity (5). Suppression of tumors in nude mice injected with tumor cells transfected with either FHIT or FHIT-H96N provides the strongest data that Fhit is a tumor suppressor (6).
The results also indicate that the Ap 3 A hydrolase activity of Fhit is not necessary for its tumor suppressor activity (6). The mode of action of Fhit as a tumor suppressor and the relationship of the Ap 3 A hydrolase activity to tumor suppression are not yet understood. The Fhit gene and protein have been reviewed recently (7,8).
Microtubules are ubiquitous cytoskeletal organelles found in most eukaryotic cells; they play critical roles in mitosis and other cellular processes (9). Since microtubules participate actively in mitosis, they are the prime target of taxol, vinblastine, and other anti-tumor drugs (10). These microtubules are composed of the protein tubulin, consisting of two subunits called α and β.
We decided to explore the relationship between microtubules and Fhit. Most tumor suppressor proteins are DNA-directed and function at the transcriptional level (11). APC is the only tumor suppressor protein known to interact directly with microtubules and to promote microtubule assembly. A domain of APC has sequence homology with the microtubule-associated protein Tau (12). Mutations in APC cause the dissociation of APC from the microtubule cytoskeleton (13). Since the mechanism of Fhit as a tumor suppressor is still unknown, we examined the interaction of Fhit and tubulin in vitro. In this paper, we demonstrate for the first time that both Fhit and FhitH96N interact specifically with unfractionated tubulin with Kd values of 1.4 and 2.1 μM, respectively. Fhit and FhitH96N do not promote microtubule assembly of tubulin by themselves, but in the presence of MAP2 and Tau, both forms of Fhit induce assembly to a greater extent than do either MAP2 or Tau alone, as determined by turbidimetry, electron microscopy, and sedimentation. Thus, the relationship of Fhit with the tubulin-microtubule system may help to understand the role of Fhit in the suppression of tumorigenesis.
Microtubule protein was prepared from bovine cerebra; it contains tubulin and MAPs and was used for further purification. Tubulin was purified from microtubule protein by phosphocellulose chromatography (15). Experiments were done in the following buffer: 0.1 M MES, pH 6.4, 1 mM EGTA, 0.5 mM MgCl2, and 0.1 mM EDTA (15). For assembly of tubulin into microtubules, 1 mM GTP was added to the above buffer.
Preparation of Microtubule-associated Proteins-Microtubules were purified from bovine cerebra as described above, and Tau and MAP2 were purified from microtubule protein by the procedure of Fellous et al. (15) in which tubulin and MAP1 are precipitated by boiling and Tau and MAP2 are separated from the supernatant by gel filtration.
Preparation of Fhit and FhitH96N-FHIT and FHIT-H96N cDNAs were kindly provided by Dr. Kay Huebner. The original cloning of FHIT (1) and the site-directed mutagenesis of FHIT (5) have been described in detail. Subcloning of FHIT, expression in Escherichia coli, and purification of Fhit to homogeneity were done as described previously (16). FhitH96N was purified using the same procedure as described for Fhit (16).
Labeling of Fhit and FhitH96N with 6-IAF—Fhit contains one cysteine residue per monomer, and this cysteine residue was targeted for covalent modification with 6-IAF. Purified Fhit and FhitH96N, each at 30 μM, were incubated separately in the presence of 200 μM 6-IAF at 37°C for 30 min. After incubation, samples were centrifuged at 4000 rpm for 10 min at 4°C using a Centricon-10 unit to remove residual 6-IAF. The centrifugation was repeated 10-12 times to ensure that the filtrate lacked any unincorporated 6-IAF as detected spectroscopically. Fluorescently labeled samples were stored at −80°C.
Fluorescence—For fluorometric measurement of the binding of tubulin to Fhit, aliquots of 6-IAF-labeled Fhit (1 μM) were incubated with different concentrations of tubulin (0-6 μM) at 37°C for 30 min. Samples were then excited at 492 nm in the Hitachi F-2000 spectrophotometer, and emission was measured at 515 nm. The fluorescence of the fluorescently labeled Fhit was quenched with increasing concentrations of tubulin. The observed fluorescence-quenching values were fitted to a rectangular hyperbolic curve using MINSQ (Scientific Software, Salt Lake City, UT; version 3.2) nonlinear curve-fitting software for either a one- or two-site binding model equation as follows. For the one-site model, F = Fm × L/(Kd + L), where F is the fluorescence value at any ligand concentration, Fm is the maximum fluorescence, L is the ligand concentration, and Kd is the apparent dissociation constant for the tubulin-Fhit complex.
For the two-site model, F = F1 + F2 = Fm1 × L/(Kd1 + L) + Fm2 × L/(Kd2 + L), where F1 and F2 are the observed corrected fluorescence values at any ligand concentration (L) for the high and low affinity sites, respectively, Fm1 and Fm2 are the maximum fluorescence values for the high and low affinity sites, respectively, Kd1 and Kd2 are the apparent dissociation constants for the high and low affinity sites, respectively, and F is the total fluorescence value at any given ligand concentration.
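As an illustration of these two fitting equations, the sketch below fits synthetic quenching data with SciPy in place of the MINSQ software; the tubulin concentrations and fluorescence values are invented placeholders, not data from this study.

```python
# Hedged sketch: one-site and two-site hyperbolic binding fits on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Fm, Kd):
    return Fm * L / (Kd + L)

def two_site(L, Fm1, Kd1, Fm2, Kd2):
    return Fm1 * L / (Kd1 + L) + Fm2 * L / (Kd2 + L)

rng = np.random.default_rng(0)
tubulin = np.linspace(0.25, 6.0, 12)                       # ligand, uM (placeholder)
quench = one_site(tubulin, 0.90, 1.4) + rng.normal(0, 0.01, tubulin.size)

p1, _ = curve_fit(one_site, tubulin, quench, p0=[1.0, 1.0], bounds=(0.0, np.inf))
p2, _ = curve_fit(two_site, tubulin, quench, p0=[0.5, 1.0, 0.5, 5.0],
                  bounds=(0.0, np.inf), max_nfev=20000)
print("one-site: Fm=%.2f Kd=%.2f uM" % tuple(p1))
print("two-site: Fm1=%.2f Kd1=%.2f Fm2=%.2f Kd2=%.2f" % tuple(p2))
```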
Microtubule Assembly—Tubulin in assembly buffer containing 1 mM GTP was mixed with either MAP2 or Tau and incubated at 37°C in a cuvette in a Beckman DU 7400 spectrophotometer. Microtubule assembly was monitored by the increase in turbidity at 350 nm (17). Cold-sensitivity of assembled microtubules was determined by measuring the decrease in turbidity at 350 nm after incubating microtubule samples on ice for 30 min.
Sedimentation—Samples of microtubule protein or tubulin containing MAP2 or Tau were incubated with or without Fhit at 37°C for 45 min. The samples were then centrifuged at room temperature for 4 min in the Beckman Airfuge at 175,000 × g. To determine the polymer concentrations before centrifugation, the pellets were resuspended in 100 μl of 10 mM Tris, pH 9.2, and the concentration of protein was determined; from this figure, the polymer concentration was then calculated.
Electron Microscopy—Microtubule structures were examined on negatively stained grids using a JEOL 100 CX electron microscope at an accelerating voltage of 60 kV as described previously (18). Samples were mixed with an equal volume of glutaraldehyde (1%) followed by mounting on carbon-coated grids for 30 s. The grids were then washed sequentially with cytochrome c, water, and finally with uranyl acetate (1%). The grids were air-dried before examination under the microscope.
(Figure legend: Fluorescence data were fitted to a one-site binding model using the nonlinear curve-fitting MINSQ software as described under "Experimental Procedures.")
Other Methods-Microtubules were subjected to electrophoresis on 10% polyacrylamide gels in the presence of 0.5% sodium dodecyl sulfate (19). Protein concentrations were determined by a modification of the method of Lowry et al. (20) using bovine serum albumin as a standard (21).
RESULTS
To study the interaction of Fhit with tubulin in vitro, we covalently labeled Fhit and FhitH96N with 6-IAF and studied the interaction between Fhit and tubulin fluorometrically. The fluorescence at 515 nm of both labeled forms of Fhit was quenched by increasing concentrations of tubulin when samples were excited at 492 nm. Unlabeled Fhit was able to reverse about 80% of the tubulin-induced quenching of fluorescence (Fig. 1). Since Fig. 1 shows hyperbolic behavior with a limiting non-zero plateau at about 20% of control, half-maximal reversal of fluorescence quenching was calculated at 60% to yield an unlabeled Fhit concentration of 0.9 μM (inset, Fig. 1). Furthermore, we also found that GTP and GDP at different concentrations (1-20 μM), unlabeled Fhit, and the tubulin-associated proteins MAP2 and Tau do not quench or change the fluorescence of labeled Fhit under the same conditions (data not shown).
The fluorescence quenching of 6-IAF-labeled Fhit and FhitH96N in the presence of tubulin was analyzed using a nonlinear curve-fitting program applied to the one- and two-site models. The results were most consistent with a model with one class of binding site. The one-site model that best fits the data is shown in Figs. 2, A and B. Because tubulin forms cylindrical microtubular structures in the presence of its associated proteins in vitro, we studied the effect of Fhit and FhitH96N on microtubule protein assembly. We found that with both forms of Fhit, microtubule assembly was enhanced significantly (Figs. 3, A and B), and the extent of microtubule assembly was a function of the concentration of Fhit (Fig. 3A). Microtubules formed in the presence of Fhit and FhitH96N appear to have typical microtubular structures as determined by electron microscopy (Figs. 3, C and D). The microtubules formed in the presence of Fhit and FhitH96N are also cold-sensitive (data not shown), which is a property of normal microtubules formed in the absence of Fhit (22).
Neither Fhit nor FhitH96N initiates microtubule assembly of pure tubulin in the absence of MAPs as detected by turbidimetry (Fig. 4), electron microscopy (Fig. 5, A and F), or sedimentation assay (Table I). However, in the presence of either Tau or MAP2, both Fhit and FhitH96N promote the assembly of tubulin more than do either Tau or MAP2 alone (Fig. 4). In all cases with either Fhit or FhitH96N, the assembled microtubules appear to be normal in structure as determined by electron microscopy (Fig. 5). Sedimentation analysis (Table I) shows that when Fhit and tubulin are incubated with either MAP2 or Tau, the presence of Fhit increases the polymer mass by 34% and 106%, respectively. These data are in good agreement with the data shown in Figs. 4, A and B, demonstrating that the effect of Fhit on Tau-induced assembly is greater than on MAP2-induced assembly. In an analogous experiment using unfractionated microtubule protein, Fhit increases polymer mass by only 7%. 2 The pelleted samples were analyzed by polyacrylamide gel electrophoresis (Fig. 6). The results demonstrated that some Fhit copolymerized with tubulin.
DISCUSSION
We found that both Fhit and FhitH96N bind to tubulin with similar apparent affinities (Kd values for wild-type and mutant Fhit = 1.4 and 2.2 μM, respectively). This observation suggests that the mutation at histidine 96, which is the central histidine of the histidine triad, has no significant influence on the interaction between Fhit and tubulin. This mutation causes loss of the Ap3A hydrolase activity of Fhit (5) but not loss of the tumor suppressor capacity of Fhit (6). Binding of both wild-type Fhit and FhitH96N to tubulin is compatible with the tumor suppressor function of Fhit being independent of Ap3A hydrolysis. This finding also predicts that the Ap3A catalytic domain and the tubulin binding domain of Fhit are distinctly different and that they do not significantly influence each other.
Since tubulin forms microtubules in the presence of GTP and associated proteins, we studied the effect of the Fhit protein on the assembly process. We found that both Fhit and FhitH96N promote assembly in a concentration-dependent manner and that the assembled microtubules had normal structures as revealed by electron microscopy. These results are consistent with our observation that mutation at histidine 96 does not influence the binding of Fhit to tubulin. Our sedimentation assay also shows clearly that Fhit-mediated assembly of tubulin increases the microtubule mass and that Fhit is physically associated with microtubules. The data obtained using different methodologies to quantitate the increment of Fhit-induced microtubule assembly vary, but the phenomenon that Fhit promotes assembly by increasing microtubule mass is highly consistent. The data obtained by electrophoretic analysis indicate that Fhit probably behaves sub-stoichiometrically in promoting assembly of tubulin. Reversal of fluorescence quenching by Fhit and the cold-sensitivity of Fhit-induced microtubules also strongly suggest that the interaction between Fhit and tubulin is specific and that the increase in light scattering and polymer mass caused by Fhit is due to the formation of normal microtubules.
FIG. 6. Electrophoretic analysis of microtubules polymerized in the presence or absence of Fhit. Samples (200 μl) of microtubule protein (2 mg/ml) (A) or phosphocellulose-purified tubulin (1 mg/ml) (B) containing MAP2 (0.3 mg/ml) or Tau (0.15 mg/ml) were incubated with or without Fhit (12 μM) at 37°C for 45 min followed by centrifugation at room temperature for 4 min at 175,000 × g. The supernatants of all the samples were then carefully decanted, and the pellets were washed thoroughly with 100 mM MES buffer, dissolved in 100 μl of 10 mM Tris, pH 9.2, and kept at room temperature for 1 h with occasional stirring. Aliquots (15 μl) of each sample were subjected to electrophoresis on 10% polyacrylamide gels containing SDS (19). The gels were stained with Coomassie Brilliant Blue R-250. A, samples were Fhit standard (lane 1), pellets (lanes 2 and 3), and supernatants (lanes 4 and 5) of microtubule protein assembled in the absence (lanes 2 and 4) and presence (lanes 3 and 5) of Fhit. B, samples were pellets (lanes 1-5) and supernatants (lanes 6-10) of tubulin assembled in the presence of Fhit (lanes 1 and 6), MAP2 (lanes 2 and 7), MAP2 + Fhit (lanes 3 and 8), Tau (lanes 4 and 9), and Tau + Fhit (lanes 5 and 10).
TABLE I. Effect of Fhit on MAP2- and Tau-induced polymerization of tubulin. Aliquots (200 μl) of tubulin (1 mg/ml) containing MAP2 (0.3 mg/ml) or Tau (0.15 mg/ml) were incubated with or without Fhit (12 μM) at 37°C for 45 min. Tubulin alone and tubulin with Fhit served as controls. After the incubation, the samples were centrifuged in a Beckman Airfuge at room temperature for 4 min. The pellets were dissolved in 100 μl of 10 mM Tris, pH 9.2, and the total protein mass was measured. Each reaction was done in duplicate. ND, not detectable.
Our results suggest that Fhit has a unique binding site on tubulin and that the site probably does not overlap with any of the MAP binding sites, as Fhit is copelleted along with other microtubule-associated proteins. Sequence comparisons using the FASTA3 program (23) of Fhit and proteins known to interact with tubulin also support this hypothesis, as the analyses failed to find any significant sequence similarities among Fhit and such proteins. In contrast, the tumor suppressor protein APC, which is known to bind to tubulin, has sequence similarity with a particular domain of Tau protein (12). There are distinct differences between APC and Fhit in terms of their interactions with tubulin. APC promotes the assembly of tubulin by itself (12) without requiring the presence of any MAPs. In contrast, neither Fhit by itself nor FhitH96N by itself induces the assembly of tubulin, but in the presence of Tau or MAP2, both Fhit and FhitH96N promote the assembly of microtubules more than do Tau or MAP2 alone. Since the C-terminal domains of both α and β subunits are flexible and exposed even on the surface of microtubules (24,25), these regions can be selectively removed by limited proteolysis with subtilisin (26,27). The C terminus-cleaved tubulin exhibits MAP-independent microtubule assembly due to reduction of electrostatic repulsion and forms microtubule-like sheets (26,27). Interestingly, Fhit alone can enhance the assembly of subtilisin-cleaved tubulin. 2 This observation is also consistent with the hypothesis that Fhit interacts at a unique site other than the C-terminal regions, where MAPs are thought to interact (26,28). Because the electrostatic repulsion among the C-terminal regions of tubulin hinders microtubule assembly, Fhit cannot induce microtubule assembly by itself. It needs other proteins such as MAP2 or Tau that bind at the C-terminal regions to exhibit its microtubule-inducing property.
In view of the critical roles microtubules play in mitosis and other cellular processes, it is not unexpected that they would also interact with tumor suppressor gene products such as APC and Fhit. The precise nature of the physiological connection between our results and tumor suppression is, however, not clear. Both APC and Fhit enhance microtubule assembly. It is conceivable, therefore, that they may either diminish microtubule dynamic behavior or else interfere with microtubule disassembly, either of which effects could inhibit a process such as mitosis, which is a highly coordinated and complex interplay of microtubule growth, shrinkage, and dynamics. The tubulin binding domain of APC resembles part of the microtubuleassociated protein Tau, a part that is not known to bind to tubulin directly but which regulates the Tau-tubulin interaction (12). One may speculate that this domain acts like a MAP to enhance assembly and inhibit dynamics; this could also explain why APC can induce microtubule assembly in the absence of MAPs. In contrast, Fhit can only enhance microtubule assembly in the presence of a MAP; for Fhit, the correlation between structure and function is still unclear. The fact that the H96N mutation, which markedly decreases the Ap 3 A hydrolase activity (5,29), alters neither the tumor suppression nor the effect on microtubule assembly indicates that the hydrolase activity is not required for either of these properties. However, the connection between the tumor suppression and the binding to tubulin remains the subject for future investigation. | 4,407.2 | 1999-08-20T00:00:00.000 | [
"Biology"
] |
Expression and localization of aquaporin 1b during oocyte development in the Japanese eel (Anguilla japonica)
To elucidate the molecular mechanisms underlying hydration during oocyte maturation, we characterized the structure of a novel water-selective aquaporin (AQP1b) of the Japanese eel (Anguilla japonica) that is thought to be involved in oocyte hydration. The aqp1b cDNA encodes a 263 amino acid protein that includes the six potential transmembrane domains and two Asn-Pro-Ala motifs. Reverse transcription-polymerase chain reaction showed transcription of Japanese eel aqp1b in ovary and testis but not in the other tissues. In situ hybridization studies with the eel aqp1b cRNA probe revealed an intense eel aqp1b signal in the oocytes at the perinucleolus stage, and the signals became faint during the process of oocyte development. Light microscopic immunocytochemical analysis of the ovary revealed that Japanese eel AQP1b was expressed in the cytoplasm around the yolk globules located in the peripheral region of oocytes during the primary yolk globule stage; thereafter, the immunoreactivity was observed throughout the cytoplasm of the oocyte as vitellogenesis progressed. The immunoreactivity became localized around the large membrane-limited yolk masses formed by the fusion of yolk globules during the oocyte maturation phase. These results together indicate that AQP1b, which is synthesized in the oocyte during the process of oocyte growth, is essential for mediating water uptake into eel oocytes.
Background
Teleost oocytes are arrested at the prophase of the first meiotic division during their long period of growth (vitellogenic phase) [1]. After completion of vitellogenesis, oocytes undergo maturation (meiosis resumption) which is accompanied by several important processes, such as germinal vesicle breakdown (GVBD), hydration of oocytes, lipid coalescence and clearing of ooplasm [2]. In particular, in marine teleosts spawning buoyant eggs in seawater, oocytes undergo a significant increase in size because of rapid water uptake during meiosis resumption [3][4][5]. During these processes, the oocytes become buoyant, which is essential for their oceanic survival and dispersal as well as for the initiation of early embryogenesis [4,6,7].
Previous studies in marine teleosts producing buoyant (pelagic) eggs in seawater [8,9] and our recent study in the Japanese eel, Anguilla japonica [5], indicated that free amino acids and small peptides produced by yolk protein hydrolysis [6,7,10] and the accumulation of ions such as K+ and Cl− [10,11] during oocyte maturation provide an osmotic driving force for water influx into the oocyte. Moreover, aquaporin, an open molecular channel transporting water and other solutes along an osmotic gradient [12], was found to contribute to the rapid water influx into the oocyte during oocyte maturation in the gilthead seabream (Sparus auratus) [4,9]. In the Japanese eel [5], although a possible contribution of aquaporin to oocyte hydration has been suggested, no direct evidence has been available to date. Recent phylogenetic and genomic analyses showed that teleosts have two closely linked AQP1 paralogous genes, termed aqp1a and aqp1b (formerly AQP1o). In the gilthead seabream, the aqp1b gene was highly expressed in the ovary containing previtellogenic and early vitellogenic oocytes [4]. Immunocytochemical analysis [4,7] revealed that AQP1b protein appeared to be located within a thin layer just below the oocyte plasma membrane. These observations therefore indicate that AQP1b is synthesized de novo by the oocyte at the initiation of vitellogenesis from an already existing mRNA pool. The paralogous gene closely linked to aqp1b, termed aqp1a, was also identified in European eel kidney and is ubiquitously expressed [13]. Although there have been several other reports on AQPs contributing to osmoregulation in the intestine [14] and gills [15], there is no detailed information concerning the AQP-related mechanisms of oocyte hydration in the eel.
Freshwater eels of the genus Anguilla are distributed worldwide and have unique characteristics such as a catadromous life history. The Japanese eel A. japonica is believed to migrate from the rivers into the ocean and spawn eggs in a particular area in the western North Pacific Ocean (west of the Mariana Islands; [16]). Japanese eels found in rivers and the coastal region of Japan are sexually immature and never mature under commercial rearing conditions [17,18]. Repeated injections of salmon pituitary extracts (SPE) induced vitellogenesis, and subsequent injection of 17α,20β-dihydroxy-4-pregnen-3-one (DHP, an eel maturation-inducing steroid) successfully induced maturation and ovulation of oocytes [18][19][20][21]. In vitro addition of DHP to the incubation medium also induced GVBD and ovulation of oocytes [5,22,23]. During in vivo and in vitro maturation, Japanese eel oocytes undergo a significant increase in size because of rapid water uptake, and the eggs become buoyant [5]. Recent in vitro experiments from our laboratory showed that addition of HgCl2, an inhibitor of AQP water permeability, inhibited hCG- or DHP-induced water influx into oocytes and, moreover, the inhibition was reversed by the addition of β-mercaptoethanol, suggesting that AQP facilitates water uptake by acting as a water channel in the oocyte of the Japanese eel [5]. However, there are no studies on AQP gene expression and its protein localization in the oocytes of the Japanese eel or other primitive teleosts.
Therefore, in order to clarify AQP mediated mechanisms of oocyte hydration in the Japanese eel, the present paper reports the cloning, expression and sub-cellular localization of aqp1b using in situ hybridization and immunocytochemistry during oocyte growth and maturation.
Fish and ovarian samples
Cultured female Japanese eels weighing approximately 300 to 500 g were obtained from a fish farm and the Shibushi Station, National Center for Stock Enhancement, Fisheries Research Agency, Japan. After acclimation to seawater, they were kept without feeding in 400-L indoor circulating tanks under a natural photoperiod at a water temperature of 20°C. Cultured female eels are sexually immature and the oocytes never develop exceeding the early vitellogenic stage under the rearing condition [18,24]. To induce sexual maturation, they were intraperitoneally injected with SPE (20-30 mg/kg body weight) once a week. SPE was prepared by homogenizing salmon (Oncorhynchus keta) pituitary powder with a 0.9% NaCl solution, followed by centrifugation at 3000 rpm [19,24,25]. Oocytes at the previtellogenic and vitellogenic stages were taken from maturing female eels during the process of artificially induced sexual maturation. After 10-13 injections, full-grown oocytes and oocytes at the migratory nucleus stage were taken from the genital pore of fully matured female eels with a polyethylene cannula. Matured oocytes were obtained from female eels that were processed according to the method described previously [18,19,25]. Briefly, females that possessed oocytes over 750 μm in diameter at the migratory nucleus stage were injected with SPE (30 mg/kg body weight) as a priming dose, followed 24 h later by an intraperitoneal injection of DHP (2 μg/g body weight). All animal experiments were conducted in accordance with the University of Miyazaki guidelines and every effort was made to minimize the number of animals used and their suffering.
Isolation of Japanese eel aqp1b
The ovary was collected from a maturing female Japanese eel, frozen in liquid nitrogen, and stored at −80°C until used. Total RNA was extracted from the ovary with TRIzol Reagent (Invitrogen, Carlsbad, CA, USA). The RNA was quantified on the basis of the absorbance at 260 nm and appeared to be non-degraded on a 1% (w/v) denaturing agarose gel containing formaldehyde and ethidium bromide. Complementary DNA was synthesized from 1 μg of total RNA using random hexamers and the Omniscript Reverse Transcriptase kit (Qiagen, Hilden, Germany). After reverse transcription (RT), one-tenth of the RT reaction product was mixed with 0.5 μM of each primer, 0.5 mM dNTPs, and 1.25 U PrimeSTAR HS DNA polymerase (Takara Biomedicals, Tokyo, Japan) in a 50-μl reaction volume and amplified by a three-step PCR protocol as follows: 94°C for 2 min followed by 30 cycles of 94°C for 15 s, 52°C for 30 s, and 72°C for 1 min. A pair of degenerate primers based on the nucleotide sequences of other vertebrates was designed to clone the Japanese eel aqp1b cDNA (sense: 5'-TGGAGGRCNGTBCTDGCYGAGCT-3'; antisense: 5'-GCTGGDCCRAAWGWYCGAGCAGGGTT-3'; IUB group codes were used: R = A/G, Y = C/T, W = A/T, B = C/G/T, D = A/G/T, N = A/C/G/T). Sequence information from the partial cDNA clone was used to design gene-specific primers for 5'-RACE and 3'-RACE to clone the full-length eel aqp1b.
5'-RACE and 3'-RACE were carried out using the GeneRacer Kit (Invitrogen) according to the manufacturer's instructions. Briefly, 1 μg total RNA was dephosphorylated with calf intestinal alkaline phosphatase and decapped using tobacco acid pyrophosphatase (TAP). The GeneRacer RNA oligo was ligated to the TAP-treated mRNA with T4 RNA ligase, and a cDNA template was generated by reverse transcription using SuperScript II and the GeneRacer oligo dT primer. The 5' end of the eel aqp1b was amplified using the antisense primers 5'GSP-1 (5'-AAGCCTTGAGCCACGGCAACACC-3') and 5'GSP-2 (5'-AGCTCGGAACACGCTTATCTGACAGC-3') in combination with the GeneRacer 5' Primer and GeneRacer 5' Nested Primer. For 3'-RACE, sense primers 3'GSP-1 (5'-AGCTGTCAGATAAGCGTGTTCCGAGC-3') and 3'GSP-2 (5'-AAGCTAAATGGTGTTGCCGTGGCTC-3') were used in combination with the GeneRacer 3' Primer and GeneRacer 3' Nested Primer to amplify the 3' end of the eel AQP1b cDNA. The resultant DNA fragments were ligated, subcloned using TOPO-XL (Invitrogen), and sequenced. The cDNA clones were sequenced using an ABI PRISM 3130xl DNA sequencer (Applied Biosystems, Foster City, CA, USA). The nucleotide sequence was determined by analyzing more than four clones from distinct amplifications to avoid PCR errors. The nucleotide and amino acid sequences were analyzed using the SeqMan software (DNAStar, Madison, WI, USA) and the BLAST network service of the National Center for Biotechnology Information. The sequence reported in this paper has been deposited in DDBJ/EMBL/GenBank under accession number AB586029.
Tissue distribution of messenger RNA
One microgram of total RNA was reverse-transcribed to first-strand cDNA using a random hexamer primer and the Omniscript RT kit (Qiagen, Chatsworth, CA, USA). After reverse transcription (RT), one-tenth of the RT reaction product was mixed with 0.5 μM of each primer, 0.5 mM dNTPs, and 0.2 U PrimeSTAR HS DNA polymerase (Takara Biomedicals, Tokyo, Japan) in a 20-μl reaction volume and amplified by a three-step PCR protocol as follows: 94°C for 2 min followed by 30 cycles of 94°C for 30 s, 60°C for 30 s, and 72°C for 1 min. Primers specific to the Japanese eel aqp1b were designed: aqp1b sense (5'-GATTACCCTGGCTACGCTCATT-3') and aqp1b antisense (5'-CTTGAGCCACAGCAACACCA-3'). These primers flank an intron in the aqp1b sequence, so any amplicons arising from possible genomic contamination can be identified and eliminated. A positive control reaction was performed using primers for β-actin (GenBank accession number AB074846). Additionally, negative control reactions were performed using RNA (without RT) as a template in our amplification protocol. Ten microlitres of product from each reaction was analyzed on a 2% agarose gel in 1× TAE stained with 0.5 μg/ml ethidium bromide. The amplified fragments of aqp1b and β-actin, 173 and 226 bp, respectively, were sequenced to verify their specificity.
In situ hybridization
To generate digoxigenin (DIG)-labeled antisense and sense RNA probes, cDNA was amplified by PCR using primers for eel aqp1b. The gene-specific downstream primer contained an artificially introduced T7 RNA polymerase recognition sequence (5'-TAATACGACTCACTATA-3') and a 6 bp transcription initiation sequence at its 5' end to enable synthesis of transcripts. Primer sequences are shown below, where the underline indicates the promoter sequence. For generation of the antisense probe: aqp1b forward: 5'-CATATTCGTTGGTATTTCAGCTGCAGTCGG-3'; aqp1b T7-reverse: 5'-TAATACGACTCACTATAGGGAGGGGAGGTAGTCATAGACAAGGGCAGCTACCAG-3'. For production of the sense probe: aqp1b T7-forward: 5'-TAATACGACTCACTATAGGGAGGCATATTCGTTGGTATTTCAGCTGCAGTCGG-3'; aqp1b reverse: 5'-GGAGGTAGTCATAGACAAGGGCAGCTACCAG-3'. DIG-labelled RNA probes were generated from RT-PCR-derived templates by in vitro transcription performed using the DIG RNA Labeling kit (Roche Diagnostics, Mannheim, Germany). The RNA probes synthesized were quantified, and an equal amount of each probe was used for hybridization. In situ hybridizations were performed using DIG-labeled antisense and sense cRNA probes according to a slightly modified method of [29]. Briefly, freshly dissected ovarian fragments were fixed in Bouin's solution at 4°C overnight. Fixed ovarian fragments were embedded in paraffin, sectioned (5 μm), and mounted onto silane-coated slides (Matsunami Glass, Osaka, Japan). The sections were deparaffinized, hydrated, and then hybridized with cRNA probes. Coverslips were placed over the sections, and the slides were incubated in humidified chambers at 58°C for 18 h. Hybridized probes were detected using the DIG Nucleic Acid Detection Kit (Roche Diagnostics).
Antibody
A polyclonal antibody was raised in a rabbit against a synthetic peptide corresponding to part of the C-terminal region of the eel AQP1b molecule (APAQEPLLEGCSAAQWTKG) (Figure 1). The antigen conjugated with keyhole limpet hemocyanin (KLH) was emulsified with complete Freund's adjuvant, and immunization was performed in a New Zealand White rabbit (Uniqtech Co. Ltd., Chiba, Japan). The antisera were affinity-purified on thiopropyl Sepharose 6B coupled to the synthetic peptide. The anti-eel AQP1b antibody was used at 1:1000-2000 for immunocytochemistry and Western blot analysis.
Western blot analysis
Ovarian fragments containing vitellogenic oocytes at various developmental stages were used in the experiments. Western blot analysis was performed according to the method of [30]. Prior to Western blotting, the ovarian fragments and oocytes were homogenized in SDS-PAGE sample buffer (10% SDS 4.5 ml: glycerin 3 ml: 2-mercaptoethanol 1.5 ml: bromophenol blue 0.75 ml: distilled water 0.25 ml), and the homogenates were centrifuged at 10,000 g for 40 min at 4°C. The supernatant was used for the following sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE). SDS-PAGE was performed on precast polyacrylamide gels with a gradient of total acrylamide concentrations from 5% to 20% (ATTO, Tokyo, Japan). Protein bands on SDS-PAGE gels were transblotted onto polyvinylidene difluoride (PVDF) membranes (Immobilon-P, Millipore, Bedford, MA) using a semidry transfer apparatus (Trans-Blot SD; Bio-Rad, Hercules, CA). The transferred proteins were stained with Coomassie brilliant blue (CBB), destained in 50% methanol and 5% acetic acid, washed with Milli-Q water (Millipore Corp.), and dried. Immunological detection of eel AQP1b was carried out using the antibody to Japanese eel AQP1b described above. Immunoreactive proteins were visualized using a Histofine streptavidin-biotin peroxidase complex kit (Nichirei Co. Ltd., Japan).
Immunocytochemistry
Ovarian fragments obtained from female eels were fixed in Bouin's solution at 4°C overnight. The immunocytochemical staining was performed as per the method described previously [31]. Briefly, deparaffinized sections were immersed in 3% hydrogen peroxide for 10 min to remove endogenous peroxidase activity. Immunoreactive proteins were visualized using a Histofine streptavidin-biotin peroxidase complex kit (Nichirei Co. Ltd., Japan). After incubation with 10% normal goat serum for 10 min, the sections were incubated with the primary antisera for 30 min. The sections were then rinsed in PBS and incubated with biotinylated anti-rabbit IgG for 10 min. After rinsing with PBS, the sections were incubated with streptavidin-linked horseradish peroxidase (HRP) for 5 min. After a final wash with PBS, the HRP complex was developed with a solution of 0.001% hydrogen peroxide-0.05% 3,3'-diaminobenzidine tetrahydrochloride (Sigma, CA) in 0.05 M Tris-HCl buffer (pH 7.6). After immunostaining, the sections were counterstained with Mayer's hematoxylin. All procedures were conducted at room temperature. The specificity of the immunostaining was confirmed by the following controls: (1) primary antisera were substituted with normal rabbit serum (NRS) or PBS; (2) primary antisera were absorbed with the purified synthetic peptide corresponding to part of the C-terminal region of the eel AQP1b molecule. Replacement of primary antiserum with PBS or NRS abolished the immunostaining. Preabsorption of antiserum with an excess amount of the synthetic peptide also greatly reduced the immunostaining.
Results
Cloning of aquaporin 1b cDNA and deduced amino acid sequence (Figure 1)
The deduced protein sequence has 263 amino acids with a calculated molecular mass of 28.0 kDa. According to Kyte-Doolittle hydropathic analysis, AQP1b contains six hydrophobic putative transmembrane domains connected by five loops (A-E), with extracellular N-terminal and cytoplasmic C-terminal domains. Further analysis indicated that Japanese eel AQP1b has an Asn-Pro-Ala (NPA) motif in each of loops B and E, which is the hallmark of the membrane intrinsic protein family. Figure 1 shows the deduced amino acid sequence of the Japanese eel aqp1b cDNA aligned with counterparts from other species. Japanese eel AQP1b shared high homology with European eel (99%), zebrafish (65%), Senegalese sole (62%), and gilthead seabream (61%) AQP1b, with lower identity to AQP1 from human (54%). The deduced amino acid sequence identity of AQP1b with other AQP1s was as follows: Japanese eel AQP1a 64%; European eel AQP1a 64%; gilthead seabream AQP1a 66%; zebrafish AQP1a 61%; Xenopus AQP1 55%; chicken AQP1 55%; rat AQP1 53%; human AQP1 54%. To illustrate the phylogenetic relatedness of the Japanese eel AQP1b with other reported AQP proteins, an evolutionary tree was constructed using the neighbor-joining method of CLUSTAL W. As expected from the sequence alignment, Japanese eel AQP1b fell into the cluster with the AQP1b subfamily from other species (Figure 2).
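As a small illustration of the Kyte-Doolittle analysis referred to above, the sketch below computes a sliding-window hydropathy profile; the peptide string is an arbitrary placeholder, not the eel AQP1b sequence, and a window size of 19 is the conventional choice for transmembrane prediction.

```python
# Hedged sketch of a Kyte-Doolittle sliding-window hydropathy scan.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydropathy(seq, window=19):
    """Mean Kyte-Doolittle score per window; sustained peaks suggest TM helices."""
    scores = [KD[a] for a in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

example = "MGAILAGVLLLVFISFWAKKRQDSGELSSA"   # placeholder peptide, not eel AQP1b
profile = hydropathy(example, window=19)
print(["%.2f" % v for v in profile])
```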
Eel AQP1b expression and tissue distribution (Figure 3)
The expression of ovary-derived aqp1b in different Japanese eel tissues was assessed by RT-PCR analysis. For RT-PCR analysis, selected regions of AQP1b cDNA were amplified from the RT mixture with sequence-specific primers. As shown in Figure 3, an abundant amplification product of the aqp1b transcript was detected in Japanese eel ovary and a relatively lower amount in the testis. In contrast, the expression was barely detectable in the other tissues tested (brain, eye, gill, oesophagus, heart, liver, kidney, small intestine, and muscle). We repeated the analysis using different primer sets and found similar results (data not shown).
In situ hybridization
In situ hybridization was carried out to characterize the cellular expression of eel aqp1b in oocytes at various developmental stages. Although aqp1b signals could not be observed in small oocytes at the perinucleolus stage (Figure 4A), intense signals were first found in larger oocytes at the same stage (Figure 4B). Relatively intense signals were localized to regions within the cytoplasm (Figure 4B) and later became distributed throughout the cytoplasm (Figure 4C). Similarly intense signals were found in oocytes at the oil droplet stage (Figure 4D). Thereafter, the eel aqp1b signals became weak in association with oocyte growth from the primary yolk globule stage (Figure 4E) to the secondary yolk globule stage (Figure 4F), and sometimes became undetectable.
Immunocytochemistry
Immunoblotting analysis using the eel AQP1b antisera on extracts from ovarian fragments showed a single protein band with a molecular mass of approximately 28 kDa, corresponding to the calculated molecular mass of Japanese eel AQP1b (Figure 5). Immunocytochemical observation of eel AQP1b was performed in ovarian fragments containing oocytes at various developmental stages. Previtellogenic ovarian follicles at the perinucleolus and oil droplet stages were devoid of an eel AQP1b-positive reaction (Figure 6). Intense immunoreaction was first observed in oocytes at the primary yolk globule stage, exclusively in the vesicles (presumed to be yolk globules) located in the peripheral ooplasm (Figure 7). Full-grown oocytes at the tertiary yolk globule stage showed the germinal vesicle in a central position and small yolk globules distributed in the cytoplasm. Intense immunoreactions were located around the yolk globules occupying the oocyte cytoplasm (Figure 8A, D). During the oocyte maturation phase, yolk globules fused together and increased in size in oocytes at the migratory nucleus stage (Figure 8B), and the yolk globules became larger yolk masses but did not fuse into a single yolk mass in oocytes at the mature stage (Figure 8C). Immunoreactions were observed around the membrane-bound yolk masses in oocytes at the migratory nucleus stage (Figure 8E) and at the mature stage (Figure 8F).
Discussion
We have isolated and characterized a Japanese eel aqp1b cDNA derived from the ovary. The predicted amino acid sequence of the cloned Japanese eel ovary-derived aqp1b shared 99% overall sequence identity with that of the AQP1 previously reported in the European eel, Anguilla anguilla [13], termed AQP1dup. Tingaud-Sequeira et al. [32] indicated, from the results of phylogenetic and genomic analyses, that teleosts, unlike tetrapods, have two closely linked AQP1 paralogous genes, termed aqp1a and aqp1b. European eel AQP1dup is identical to the protein recently isolated from the same species and named AQP1b. Therefore, it is certain that the Japanese eel AQP that we cloned from the ovary is a homologue of aqp1b. The Japanese eel AQP1b contains three functional domains: an N-terminal extracellular domain, a large transmembrane domain, and a C-terminal cytoplasmic domain. In particular, the six potential transmembrane domains and two NPA motifs are conserved. Moreover, amino acids known to be essential for the pore-forming region in human AQP1 (i.e., Phe56, His180, and Arg195 [33]) were present in analogous positions in Japanese eel AQP1b. Therefore, these amino acids in Japanese eel AQP1b may be involved in water-selective pore formation. Also, a Cys residue located N-terminal to the second NPA motif, which may be involved in the inhibition of water permeability by mercurial compounds [34], is conserved in Japanese eel AQP1b. In silico analysis of the AQP1b C-terminal amino acid sequence revealed a consensus residue (Ser) that had a high phosphorylation score and fulfilled the criteria for a Pro-directed kinase phosphorylation site, also found in the C-terminus of fugu, sole, and seabream AQP1b, but not European eel [32]. These consensus sites may be involved in the control of AQP1b intracellular trafficking through phosphorylation-independent and -dependent mechanisms [32].
Figure 3. Tissue distribution of aqp1b in Japanese eel. RT-PCR was performed using 1 μg of total RNA prepared from Japanese eel tissues using the Omniscript kit (Qiagen). Amplification products were analyzed on a 2.0% agarose gel and stained with ethidium bromide. Control RT-PCR of β-actin using the same amount of cDNA is also indicated in the lower panel. Lanes from the left are brain (1), eye (2), gill (3), esophagus (4), heart (5), liver (6), kidney (7), intestine (8), ovary (9), testis (10), and no template (11), respectively.
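As a toy illustration of the in silico C-terminal analysis mentioned above, the snippet below scans a sequence for a serine immediately followed by a proline, the minimal consensus of a Pro-directed kinase site. The tail sequence is a made-up placeholder, not the actual eel AQP1b C-terminus, and real phosphorylation-site prediction relies on scoring tools rather than a bare motif match.

```python
# Hedged sketch: minimal "S followed by P" motif scan for Pro-directed kinase sites.
import re

def pro_directed_sites(seq):
    """Return 1-based positions of Ser residues immediately followed by Pro."""
    return [m.start() + 1 for m in re.finditer(r"S(?=P)", seq)]

c_terminal_tail = "RQSVELHSPQSLPRGSKA"        # placeholder C-terminal tail
print(pro_directed_sites(c_terminal_tail))     # -> [8]
```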
Present RT-PCR analysis revealed that Japanese eel aqp1b transcripts were highly abundant in the ovary. The present result observed in the ovary is in accordance with the previous studies in teleosts spawning pelagic eggs into seawater, such as the European eel, gilthead seabream and Senegalese sole [32]. Our data also showed faint, but significant, aqp1b transcript expression in the testis of Japanese eel, however expression was not detected in other tissues. Although relatively lower levels of aqp1b transcripts were found in gut, kidney and gills of gilthead seabream and the European eel [32], specific aqp1b transcripts were not observed in testis. These differences in the AQP1b distribution, especially that of testis, may reflect differences between species and/or sexual maturation stages, although the exact reasons are unclear, as detailed physiological information about the stage of fish was not provided in the previous report [32]. In mammals, AQPs are involved in the regulation of fluid resorption in the efferent duct [35,36] and also in the volume reduction of spermatids [37]. Therefore, in teleosts, further investigation of the physiological significance of AQP in testis is warranted.
In situ hybridization analysis showed that eel aqp1b mRNA was expressed in oocytes at the perinucleolus stage. This is the first information showing sub-cellular localization of aqp1b gene transcripts in any teleost oocytes and the result confirms the RT-PCR data indicating that seabream aqp1b is expressed in ovaries containing oocytes at the previtellogenic and early vitellogenic stages. Moreover, our data showed the dispersion of intense hybridization signals found in oocytes at the perinucleolus stage become faint in association with oocyte growth and maturation. The previous data obtained by RT-PCR showed that the levels of seabream aqp1b mRNA did not change significantly during oocyte growth and maturation [4]. Therefore, in Japanese eel, as indicated in seabream [4], mRNA which is synthesized in oocytes during the early growth phase might be dispersed in oocyte cytoplasm but total amounts of aqp1b mRNA might maintain constant levels during subsequent oocyte growth and maturational stages. RT-PCR analysis is needed to obtain direct evidence on changes of aqp1b mRNA levels during oocyte maturation of Japanese eel in future.
Immunocytochemical analysis showed that eel AQP1b protein is synthesized in early vitellogenic oocytes. These results are identical to the data obtained in oocytes of the gilthead seabream [4]. During oocyte growth, the gonadotropin follicle-stimulating hormone (FSH) [38] directly acts on ovarian follicles to produce estradiol-17β (E2) through the collaborative actions of the theca and granulosa cell layers [39]. Vitellogenin is synthesized by hepatocytes following induction by E2 [40] and is then incorporated into growing oocytes by receptor-mediated endocytosis [41,42] under the stimulation of FSH [43,44]. Therefore, FSH, E2, or other factors found in the follicles, such as IGF-I [31], may regulate eel AQP1b synthesis at the post-translational level, as suggested in previous studies of mammals [45,46].
During oocyte maturation (meiosis resumption) of the Japanese eel, the yolk granules fuse and increase in size to become large yolk masses but do not form a single yolk mass [5]. These morphological changes observed during oocyte maturation are different from those observed in the gilthead seabream [4], in which the yolk granules fuse into a single yolk mass. In the gilthead seabream, during oocyte maturation, AQP1b translocates towards the oocyte periphery and becomes concentrated within a thin layer just below the oocyte plasma membrane, suggesting that AQP1b located at the plasma membrane is essential for water influx into oocytes. However, in the Japanese eel, immunocytochemical analysis showed that immunoreactions of eel AQP1b were mainly observed around the fused yolk masses in oocytes at the migratory nucleus and mature stages. Localization of AQP1b just below the oocyte plasma membrane, as found in the gilthead seabream, is not apparent in the present study. These differences in the morphology of the yolk masses and in the localization of AQP1b during oocyte maturation between the gilthead seabream and the Japanese eel may reflect different mechanisms of oocyte hydration. In the Japanese eel, oocyte hydration may be regulated by a two-step mechanism. At the start of hydration, water may pass into the oocyte through the plasma membrane and then into the yolk mass through AQP1b localized in the yolk membrane, resulting in swelling of the yolk mass. As suggested for the gilthead seabream [4], it may also be possible that water influx from blood and ovarian fluid into the oocyte occurs by simple diffusion (a comparatively slow influx) through the follicular (somatic cell) and oocyte membranes, since a longer period of time (5 days in vivo [5]) is required to accomplish hydration during oocyte maturation of the Japanese eel. Further studies are necessary to obtain conclusive evidence of AQP1b localization on the plasma membrane of Japanese eel oocytes.
Conclusions
Based on the present study and the previous in vitro data [5], mechanisms of oocyte hydration during meiotic maturation can be explained in the Japanese eel. During the previtellogenic stage (at the perinucleolus stage), mRNA of eel aqp1b are synthesized in the oocytes, perhaps by maternal gene expression and/or synthesized from ovarian follicle, since the darkly staining aqp1 transcript masses are found around oocytes. Synthesis of AQP1b protein is stimulated when oocytes begin vitellogenesis. The gonadotropin, FSH and/or E 2 may be involved in the initiation of AQP1b synthesis in oocytes. During oocyte maturation, accumulated AQP1b in ooplasm is translocated around the yolk mass, which forms by the fusion of yolk globules, and by the proteolytic cleavage of vitellogenin-derived yolk proteins. Water transport is mediated by these AQP1bs which are located at different sites on the oocytes along with the osmotic driving force created by the accumulated yolk protein-derived free amino acids and inorganic ions [5]. Further studies on other evolutionally primitive species, such as conger eel Conger myriaster and Pike eel Muraenesox cinereus, may provide more insight to confirm this contention. | 6,311.2 | 2011-05-27T00:00:00.000 | [
"Biology"
] |
Analysis of Electromagnetic Characteristics of Copper-Steel Composite Quadrupole Rail
The ablation and wear of the four-rail electromagnetic launcher during the working process will aggravate the damage of the armature and rail, and greatly affect the service life of the launcher. To effectively alleviate rail damage, this paper applies the copper-steel composite rail to the four-rail electromagnetic launcher and proposes a new four-rail electromagnetic launcher. Based on the quadrupole magnetic field theory, the physical model of the new four-rail electromagnetic launcher is established, and the electromagnetic characteristics of the ordinary and new launchers are compared and analyzed using the finite element method. On this basis, the influence of composite layer parameters on the electromagnetic characteristics of copper-steel composite quadrupole rail is explored. The study found that the new four-rail electromagnetic launcher can provide a better launch magnetic field environment for smart loads, and the current distribution of the armature and the rail contact surface is more uniform, which can effectively improve the contact condition between the armature and the rail. The composite layer parameters of copper-based composite rail will have a certain impact on electromagnetic characteristics, and copper-steel composite rail of appropriate proportions can be selected according to different needs. The model proposed in this paper has a certain degree of scientificity and rationality.
Introduction
The electromagnetic launch is a new weapon launch technology that uses electromagnetic thrust to accelerate the load to ultra-high speed [1], with the advantages of great power, strong concealment, and controllable thrust [2,3], and has attracted great attention from all over the world [4,5]. As the technology continues to evolve, the need to launch new smart carriers such as missiles, aircraft, and satellites becomes increasingly urgent [6][7][8], but these loads contain a large number of precision electronics that are extremely sensitive to the magnetic field environment of the launch. The four-rail electromagnetic launcher (FREL) can effectively achieve magnetic field shielding and better meet the requirements of launching new smart carriers, and has broad development prospects [9][10][11].
However, the proposal of the FREL has not yet solved the problem of ablation and wear of the rail [12], which greatly restricts the development of the FREL. As one of the important components of the device, the inner side of the rail suffers from ablation, wear, and other damage, and the requirements on its conductivity, stiffness, and other properties are high. The comprehensive performance of the rail largely determines the practical applicability of the launcher, so the design of the rail and the choice of materials have become the key to solving the problem. Composite materials can balance all aspects of performance, and the proposal and application of composite rails provide new ideas for solving the above problems [13]. A composite rail exploits the designability of composite materials: according to the performance requirements and intended use, the rail is designed through the selection of material composition and configuration, and because of the good conductivity of copper, composite rails are mostly copper-based [14,15].
To prolong rail life, improve launch accuracy, and alleviate damage during launch, a new type of copper-based composite rail can be used, and such rails have been widely studied in recent years [16,17]. Cao [18] experimentally studied the thermal ablation characteristics of copper-diamond electromagnetic rails in the initial stage of launch and found that they are closely related to current and preload. Barbara [19] investigated the degree of damage of rails of different materials at different launch energies and found that Cr/Cu composite rails suffered less damage at low energies. An [20] analyzed the contact stresses on the copper-based composite rail and armature contact surfaces and found that the stress caused by the temperature rise of the armature had a greater impact on the rail. The above research is based on ordinary rail electromagnetic launchers; the application of copper-based composite rails to the FREL has not yet been explored.
In this paper, the electromagnetic characteristics of the copper-steel composite rail in the new four-rail electromagnetic launcher (NFREL) are studied. A model of the NFREL is established, and the electromagnetic characteristics of the new and ordinary FREL are compared and analyzed using the finite element method. The influence of the geometric parameters of the composite layer on the electromagnetic characteristics is discussed, the rationality and superiority of the copper-steel composite quadrupole electromagnetic rail are verified, and a theoretical reference is provided for the design and application of composite four-rail electromagnetic launchers.
Quadrupole Magnetic Field Theory
As shown in Figure 1, a certain launch cross-section is selected to analyze the magnetic field strength generated by the rail current in the launch region.
The rails are numbered m = 1, 2, 3, 4 sequentially, with launcher aperture d, composite rail width a, rail height b, and current density J. According to the Biot-Savart law, the magnetic field strength generated by the cross-sectional current source $J\,dx\,dy\,\mathbf{k}$ at the point $P(x', y')$ in rail 1 is

$$d\mathbf{B} = \frac{\mu_0 J\,dx\,dy}{2\pi |\mathbf{R}|^{2}}\,\mathbf{k}\times\mathbf{R},$$

where $\mu_0$ is the permeability of vacuum, $\mathbf{k}$ is the unit vector along the direction of the rail current, and $\mathbf{R}$ is the distance vector from the centre of the current source $(x, y)$ to $P(x', y')$, i.e., $\mathbf{R} = (x'-x)\mathbf{i} + (y'-y)\mathbf{j}$. Integrating over the cross-section of rail 1 gives the field produced by that rail,

$$\mathbf{B}^{T}_{1}(P) = \int_{S_1} \frac{\mu_0 J}{2\pi |\mathbf{R}|^{2}}\,\mathbf{k}\times\mathbf{R}\;dx\,dy,$$

where $S_1$ is the cross-sectional area of the copper rail of the first composite rail, and $\mathbf{B}^{T}_{m}(P)$ denotes, analogously, the field excited by the current in the section of the m-th composite rail at $P(x', y')$. Extending these results to three-dimensional space gives the field of the m-th composite rail section current in the spatial region; when the armature has moved along the rail to z(t), only the energized segment of each rail up to z(t) contributes. According to the vector superposition principle of the magnetic field, the field generated by the four composite rails at $P(x', y', z')$ is $\sum_{m=1}^{4}\mathbf{B}^{T}_{m}(P)$. The current distribution of the armature is shown in Figure 2; the current is mainly concentrated at the four drainage arcs [21]. Denoting by $\mathbf{B}^{A}_{n}(P)$ the field generated by the n-th drainage arc (of length $l_n$) at $P(x', y', z')$, the field contributed by the armature is $\sum_{n=1}^{4}\mathbf{B}^{A}_{n}(P)$. The magnetic field strength at any point in space is therefore the sum of the contributions of the rails and the armature:

$$\mathbf{B}(P) = \sum_{m=1}^{4}\mathbf{B}^{T}_{m}(P) + \sum_{n=1}^{4}\mathbf{B}^{A}_{n}(P).$$
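As a rough numerical illustration of the superposition above, the sketch below integrates the two-dimensional (long-rail) Biot-Savart contribution of each rail cross-section over a grid of current filaments and evaluates the field at the bore centre. The rail geometry follows the dimensions quoted later (80 mm bore, 18 mm x 40 mm rails), but the current density value, the filament grid, and the sign pattern of the quadrupole connection are illustrative assumptions, not the paper's computation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def field_from_rail(px, py, x0, x1, y0, y1, J, n=40):
    """2D Biot-Savart field at (px, py) from a rectangular rail cross-section
    carrying uniform current density J (A/m^2) out of the plane (long-rail limit)."""
    xs = np.linspace(x0, x1, n)
    ys = np.linspace(y0, y1, n)
    dA = (x1 - x0) * (y1 - y0) / n**2
    bx = by = 0.0
    for x in xs:
        for y in ys:
            rx, ry = px - x, py - y
            r2 = rx**2 + ry**2
            # dB = mu0 * J dA / (2 pi r^2) * (k x R), with k the out-of-plane unit vector
            bx += -MU0 * J * dA * ry / (2 * np.pi * r2)
            by += MU0 * J * dA * rx / (2 * np.pi * r2)
    return bx, by

# Placeholder quadrupole connection: opposite rails carry current in the same
# direction and adjacent rails in opposite directions, so the field cancels at
# the bore centre.  |J| is arbitrary.
J = 1e9
rails = [((-0.058, -0.040), (-0.020, 0.020), +J),   # left rail
         ((0.040, 0.058),  (-0.020, 0.020), +J),    # right rail
         ((-0.020, 0.020), (0.040, 0.058),  -J),    # top rail
         ((-0.020, 0.020), (-0.058, -0.040), -J)]   # bottom rail

bx = by = 0.0
for (x0, x1), (y0, y1), j in rails:
    dbx, dby = field_from_rail(0.0, 0.0, x0, x1, y0, y1, j)
    bx, by = bx + dbx, by + dby
print(f"B at bore centre: ({bx:.3e}, {by:.3e}) T  -> nearly zero by symmetry")
```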
Physical Model of FREL
The models of the FREL and the NFREL are shown in Figure 3. In the NFREL, the rail uses copper, with its good electrical and thermal conductivity, as the base material, which ensures the current-carrying capacity and the magnetic field environment required for launch; steel, with good stiffness and ablation resistance, serves as the reinforcing material and improves the wear resistance of the rail. The current flows in through the copper end face of the composite rail, passes through the armature, and returns through the adjacent rail. The rail currents create a quadrupole magnetic field in the launch region, which is orthogonal to the current flowing through the armature and pushes the armature in the +z direction. The copper-based rails generate and conduct the currents and provide the magnetic field, while the steel rails carry the armature.
The caliber of the NFREL is 80 mm × 80 mm; the copper rail is 1000 mm long, 40 mm high, and 18 mm wide; the length l and height h of the steel rail are the same as those of the copper rail; the armature is 40 mm long with a throat thickness of 15 mm. The caliber, rail length, rail height, and rail width of the FREL are the same as those of the new four-rail electromagnetic launcher. The rail and armature material properties are listed in Table 1.
Simulation Conditions and Method
After the two models are established, the relevant excitation functions must be defined to load the current. Electromagnetic launch is a complex transient process and should be simulated with a transient current. Waveform studies show that directly applying a strong pulsed current to the rail generates a relatively strong electromagnetic force, and a trapezoidal current waveform gives the best result. A trapezoidal excitation is therefore selected for the transient simulation in this section, as shown in Figure 4: the total conduction time is 6 ms, the peak value is 150 kA, and the peak is held for 2 ms. To account for the current skin effect, the vacuum solution region is set to 300% of the model size.
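A minimal sketch of the trapezoidal excitation described above (6 ms total conduction, 150 kA peak held for 2 ms); the rise and fall times are assumed equal at 2 ms each, since the text does not state them explicitly:

```python
import numpy as np

def trapezoidal_current(t, i_peak=150e3, t_rise=2e-3, t_flat=2e-3, t_total=6e-3):
    """Trapezoidal drive current: linear rise, flat top, linear fall (assumed symmetric)."""
    t_fall_start = t_rise + t_flat
    t = np.asarray(t, dtype=float)
    i = np.zeros_like(t)
    rising = (t >= 0) & (t < t_rise)
    flat = (t >= t_rise) & (t < t_fall_start)
    falling = (t >= t_fall_start) & (t <= t_total)
    i[rising] = i_peak * t[rising] / t_rise
    i[flat] = i_peak
    i[falling] = i_peak * (t_total - t[falling]) / (t_total - t_fall_start)
    return i

print(trapezoidal_current(np.array([0.001, 0.003, 0.005])))  # -> [ 75000. 150000.  75000.]
```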
When meshes of different sizes are used to divide the model, the required computation time and computer resources differ, and so do the results; the accuracy of the results under different mesh sizes therefore needs to be verified. The maximum mesh size of the armature is set to 0.5 mm, 1 mm, and 2 mm in turn, the maximum current density of the rail is calculated, and the results are shown in Table 2. Comparing the maximum current densities obtained with the different mesh sizes, the maximum current density of the armature is 7.28 × 10⁹ A/m² with a 1 mm mesh and 7.48 × 10⁹ A/m² with a 0.5 mm mesh; the difference between the two results does not exceed 2.7%, indicating that the mesh division is reasonable. To improve computational efficiency, a maximum mesh size of 1 mm is used, and the armature-rail contact region is refined so that its maximum mesh size does not exceed 0.5 mm.
In this paper, the finite element method is used to simulate the FREL and NFREL models. The finite element method, also known as the matrix approximation method, is based on the variational principle and the weighted residual method. Its basic idea is to divide the solution domain of a complex problem into a large number of small, interconnected, non-overlapping subdomains; the solution is obtained on each subdomain, and the variational principle or the weighted residual method is then used to assemble an approximate solution for the whole system. This approach provides a high-accuracy approximation for the simulation model of this paper.
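The paper's solver is a three-dimensional transient electromagnetic finite element model; purely to illustrate the subdomain/weighted-residual idea in the paragraph above, the following is a one-dimensional Galerkin sketch for a Poisson problem with linear elements (the element count, source term, and boundary conditions are arbitrary):

```python
import numpy as np

def fem_poisson_1d(n_elems=8, length=1.0, source=1.0):
    """Galerkin finite elements for -u'' = f on (0, L), u(0)=u(L)=0, linear elements.
    Illustrates the idea: assemble element (subdomain) matrices into a global
    system and solve it."""
    n_nodes = n_elems + 1
    h = length / n_elems
    K = np.zeros((n_nodes, n_nodes))
    F = np.zeros(n_nodes)
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # element stiffness matrix
    fe = source * h / 2.0 * np.array([1.0, 1.0])    # element load vector for constant source
    for e in range(n_elems):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke
        F[idx] += fe
    # Dirichlet boundary conditions u(0) = u(L) = 0: solve on interior nodes only
    interior = np.arange(1, n_nodes - 1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[np.ix_(interior, interior)], F[interior])
    return u

print(fem_poisson_1d())  # approximates the exact solution u(x) = x(1 - x)/2
```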
Analysis of the Current Density Distribution
The distribution of the current density has an important influence on the magnetic field distribution and on where heat is produced. Sites where the current density accumulates have high heat production, which easily causes thermal damage to the rail and further shortens the life of the launcher; the analysis of the current density distribution is therefore extremely important. For convenience, the FREL is referred to as the ordinary type and the NFREL as the new type. Figures 5 and 6 show the current density distributions of the ordinary quadrupole rail and the copper-steel composite quadrupole rail at 4 ms, respectively.
From Figures 5 and 6, it can be seen that the current density distributions of the ordinary and copper-steel composite quadrupole rails differ. The maximum current density of the ordinary quadrupole rail reaches 4.04 × 10⁹ A/m², while that of the copper-steel composite quadrupole rail is only 2.91 × 10⁹ A/m². The locations of the current are also clearly different: in the ordinary quadrupole rail, very little current is distributed in the middle region; it is mainly confined to a thin surface layer of the rail and to the four edges, which is caused by the skin effect and the proximity effect of the current. The current directions of the two adjacent rails are opposite, which satisfies the condition for the proximity effect, so the currents attract each other and the distribution on the inner edges is more concentrated, indicating that the proximity effect must also be considered in electromagnetic analysis. The composite rail current is mainly distributed in the copper rail, with no obvious current concentration at the inner edges and corners of the copper-based rail. The current converges at the junction of the armature and the rail, so the current density is large at this location.
Although the geometry of the two rails is the same, the conducting cross-section of the copper-steel composite quadrupole rail is smaller than that of the ordinary rail; yet, after loading the same current, the maximum current density of the ordinary quadrupole rail is greater than that of the copper-steel composite quadrupole rail. This may be because the skin effect and proximity effect are more pronounced in the ordinary quadrupole rail, concentrating the current at the edges, whereas the copper-steel composite quadrupole rail, owing to its material arrangement, has a large current density only at the contact. The copper-steel composite rail can thus reduce the maximum current density of the rail and alleviate thermal damage to a certain extent.
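The skin-effect argument can be made quantitative with the standard skin-depth formula (not given in the paper); treating the 2 ms current rise as an effective frequency of roughly 250 Hz is only an order-of-magnitude assumption:

$$\delta = \sqrt{\frac{2}{\omega\mu\sigma}} \approx \sqrt{\frac{2}{2\pi\cdot 250\ \mathrm{Hz}\cdot 4\pi\times10^{-7}\ \mathrm{H/m}\cdot 5.8\times10^{7}\ \mathrm{S/m}}} \approx 4\ \mathrm{mm},$$

which is small compared with the 18 mm rail width and is consistent with the current crowding into a thin surface layer of the copper.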
To more clearly explore the distribution of current in the rail, the cross-section shown in Figure 7 is selected for analysis. As can be seen from Figure 7, the current of the ordinary type is concentrated on both sides of the rail, and there is almost no current distribution in the middle area of the rail. The current is mainly conducted from the side of the armature. When the current flows to the contact of the rail and armature, part of the current flows in from the tail of the armature due to the short-circuit effect of the current.
Because the resistivity of the copper rail is the smallest, followed by the aluminum armature, and the resistivity of the steel rail is the largest, part of the current will accumulate at the front of the armature contact interface and flow from the armature head. The current in the copper-steel composite quadrupole rail is more distributed in the copper rail, and the current flows into the armature through the steel rail at the contact. It can be seen that the current of the copper-steel composite rail flows into the armature more evenly, which is quite different from the ordinary rail. It shows that the structure of the new type can alleviate the unevenness of the current distribution of the contact surface.
To reflect the current distribution more intuitively, the armature and the contact surface are simulated, and the results are shown in Figures 8 and 9. Comparing Figures 8a and 9a, it can be seen that, owing to the special structure of the armature, the current is mainly distributed on the four drainage arcs, indicating that the drainage arc design is reasonable and concentrates the conduction of current. There is also a large current density along the inner edge of the armature arm, and the concentration at the armature throat is severe, which is determined by the shortest current path. Analysis of Figures 8b and 9b shows that, since the rail is wider than the armature, current also flows into the armature from both sides of the armature arm. Affected by the proximity effect, the current around the rim of the contact surface is relatively concentrated, and there is almost no current in the middle of the contact surface. However, the zero-current region of the contact surface of the new type is smaller than that of the ordinary type, and the maximum current density is mainly concentrated at the head and tail of the contact surface.
In terms of current size, the composite rail can reduce the maximum current density on the contact surface, and the zero area of the contact surface is larger, which can improve the current distribution on the contact surface.
To quantify the current distribution of the contact surface, paths 1-3 shown in Figure 10 are selected, and the current density along each path is extracted, as shown in Figure 11. The abscissa in the figure is the distance from the head of the armature, and the ordinate is the current density.
As can be seen from Figure 11, the current distribution varies consistently across the paths. The difference in current density between the head and tail on path 1 is 1.7 × 10⁹ A/m² for the ordinary type and 0.533 × 10⁹ A/m² for the new type, so the current density over the contact surface of the new type is more uniform, consistent with the previous analysis, and the thermal damage of the contact surface can be alleviated more effectively. The distributions on paths 2 and 3 are consistent with that on path 1, except that the current density in the middle of paths 2 and 3 is almost zero, i.e., there is no current there. This shows that the current on the contact surface is mainly distributed in a limited region on both sides of the contact surface.
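For reference, the quoted head-to-tail differences imply that the nonuniformity along path 1 drops by roughly a factor of three with the composite rail:

$$\frac{1.7\times10^{9}\ \mathrm{A/m^2}}{0.533\times10^{9}\ \mathrm{A/m^2}} \approx 3.2.$$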
Analysis of the Magnetic Field Strength Distribution
The magnetic field strength distribution is affected by the current distribution and can affect the operating performance of the electronic devices in the load. On the basis of the analysis in the previous section, the magnetic field strengths of the ordinary and new types are analyzed; the results are shown in Figures 12 and 13.
Due to the distribution of current, the magnetic field strength is also mainly concentrated on the surface of the rail, most obviously at the edges of the ordinary quadrupole rail. The magnetic field strength distribution of the copper-steel composite quadrupole rail is similar to that of the ordinary type, but with one obvious feature: the magnetic field strength is concentrated at the copper-steel interface.
We select the cross-section shown in Figure 14 to analyze the distribution of the magnetic field strength axially inside the launcher.
Due to the structural characteristics of the launcher, the magnetic fields generated by the rail currents cancel each other out in the middle of the launch area, forming a hollow (weak-field) region that can meet the requirements of an intelligent load on the magnetic field environment. A strong magnetic field appears at the bottom of both armatures, which is related to the current distribution. Unlike the ordinary type, the new type has a strong magnetic field in the steel rail layer. Comparing the weak-field regions, the hollow area of the new type is larger, indicating that the new type can better meet the electromagnetic shielding requirements.
To analyze the armature magnetic field more intuitively, the magnetic field strength on the front and rear surfaces of the armature is simulated. Figure 15 shows the corresponding distribution maps. The field distribution on the end surfaces is symmetrical, and a zero-field region is clearly visible in the middle of the armature. The magnetic field strength on the rear surface is greater than that on the front surface. The distribution at the armature end of the new type is similar to that of the ordinary type, except that the rear-surface field of the new type reaches up to 9 T and the front-surface field up to 6 T, i.e., 2.5 and 2.1 times those of the ordinary type, respectively. A strong magnetic field is also present in the steel rail.
To analyze the magnetic field strength at different locations of the cross-section, the four paths marked by red lines in Figure 16 are set.
The magnetic field strength distributions of the two types along path 4 are similar. Affected by the skin effect, the current is mainly distributed on the outside of the rail; the larger the current density, the stronger the magnetic field it excites in space, so the magnetic field strength on the outside of the rail is larger, while there is almost no current in the middle of the rail and the field there is very low. At 20 mm the magnetic field strength increases noticeably, because the proximity effect also places some current on the inside of the rail, which excites a corresponding field. The fields excited by the currents cancel each other out in the launch area, forming a weak-field region and realizing electromagnetic self-shielding.
It is worth noting that the magnetic field strength of the copper-steel composite quadrupole rail shows a sharp jump at 18 mm (the junction of the copper rail and the steel rail), where an extreme value of up to 3.5 T appears. This is because the permeability of steel is close to 200 times that of air and much larger than that of copper, in line with the earlier analysis. Comparing the field in the middle of the launch area, the new type is smaller. The magnetic field strength distributions along paths 5 and 6 of the new and ordinary types are consistent: the field is strong on the outside of the rail, decreases toward the middle of the rail, drops to 0 T in the middle of the copper rail, jumps sharply at the copper-steel junction, and then gradually decreases toward the middle of the launch area.
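The field jump at the copper-steel interface follows from the standard magnetostatic boundary condition (an idealization that neglects saturation and eddy currents): with no surface current, the tangential component of H is continuous, so

$$B_t^{\mathrm{steel}} = \mu_{r,\mathrm{steel}}\,\mu_0 H_t,\qquad B_t^{\mathrm{copper}} \approx \mu_0 H_t \;\Rightarrow\; \frac{B_t^{\mathrm{steel}}}{B_t^{\mathrm{copper}}} \approx \mu_{r,\mathrm{steel}} \sim 200,$$

which is consistent with the 3.5 T extreme appearing on the steel side of the junction.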
At the front end of the armature, the magnetic field strength of the new type is greater than that of the ordinary type, while the weak-field region of the new type is also larger, which better guarantees the magnetic field requirements of the intelligent load. As can be seen from Figure 20, the magnetic field strength along path 7 of the ordinary type is very weak, with a maximum of only about 450 mT. The new type has a maximum of 3 T, confined to the steel rail, and is below 0.5 T in most areas.
Effects of Composite Layer Parameters on Electromagnetic Properties
The above results show that, compared with the ordinary type, the electromagnetic characteristics of the launcher change to a certain extent after the steel layer is added to the rail. To explore the influence of the copper-steel thickness ratio on the electromagnetic characteristics of the launcher, the total rail thickness is kept at 20 mm while launchers with copper-steel thickness ratios of 1:1, 3:1, 4:1, and 9:1 are simulated. The effect of the thickness ratio on the armature and rail current density and on the magnetic field strength is analyzed, and the relationship between the electromagnetic characteristics of the copper-steel composite quadrupole rail and the composite layer parameters is studied. Figure 21 shows the current density distribution on the contact surface of the steel rail and the armature under different copper-steel thickness ratios.
The Influence of the Composite Layer Parameters on the Current Density
From Figure 21a-d, it can be seen that the copper-steel thickness ratio mainly affects the magnitude of the current density on the contact surface and has little impact on where the current is distributed. The current is mainly located at the edges, bottom, and head of the contact surface, with almost none in the middle region. When the copper-steel thickness ratio is 1:1, 3:1, 4:1, and 9:1, the maximum current density on the steel rail is 4.01 × 10⁹ A/m², 3.52 × 10⁹ A/m², 3.36 × 10⁹ A/m², and 2.74 × 10⁹ A/m², respectively; the 9:1 ratio reduces the maximum contact-surface current density by 31.67% compared with the 1:1 ratio. Thus, as the thickness ratio increases, the maximum current density of the contact surface decreases, which can alleviate heat concentration on the contact surface to a certain extent. Table 3 lists the maximum current density on the contact surface. Consistent with the above analysis, this maximum current density also decreases as the thickness ratio increases: for ratios of 1:1, 3:1, 4:1, and 9:1 it is 3.83 × 10⁹ A/m², 3.62 × 10⁹ A/m², 3.06 × 10⁹ A/m², and 2.25 × 10⁹ A/m², respectively, so the 9:1 ratio lowers the maximum current density of the armature by 41.2% compared with the 1:1 ratio.
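The percentage reductions quoted above follow directly from the tabulated maxima; a minimal check (the values are copied from the text, only the percentage formula is added):

```python
# Reduction in maximum current density with increasing copper-steel thickness ratio
rail_j = {"1:1": 4.01e9, "3:1": 3.52e9, "4:1": 3.36e9, "9:1": 2.74e9}      # steel rail, A/m^2
armature_j = {"1:1": 3.83e9, "3:1": 3.62e9, "4:1": 3.06e9, "9:1": 2.25e9}  # armature contact, A/m^2

def reduction(values, ref="1:1", target="9:1"):
    """Percentage decrease of the target ratio relative to the reference ratio."""
    return (values[ref] - values[target]) / values[ref] * 100.0

print(f"rail:     {reduction(rail_j):.2f} %")      # ~31.7 %
print(f"armature: {reduction(armature_j):.1f} %")  # ~41 % (quoted as 41.2 % in the text)
```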
The Influence of the Composite Layer Parameters on the Magnetic Field Strength
To better explore the magnetic field strength distribution characteristics, the magnetic field strength at different locations of the launcher is analyzed, again using the four paths shown in Figure 16. Figure 22 and Table 4 show the maximum magnetic field strength on the four paths under different thickness ratios.
The magnetic field strength distribution at different thickness ratios is consistent and highly symmetrical. It is closely related to the current distribution: the current is concentrated on the outside of the copper-based rail, so the field there is strong, while there is almost no current in the middle of the copper-based rail, where the field is weaker and decreases.
Due to the high permeability of the steel, the magnetic field strength at the copper-steel interface increases sharply until it peaks on the inside of the rail. Because the magnetic fields generated by the four composite rails cancel each other out in the launch space, the field in the middle of the launch area is weak, and it decreases further as the thickness of the steel layer increases. Table 4 shows that the maximum magnetic field strength on each path is negatively correlated with the copper-steel thickness ratio: the launcher with a 1:1 ratio has the largest maximum field strength, and the launcher with a 9:1 ratio the smallest.
Conclusions
In this paper, the copper-based composite rail is introduced into the FREL, the NFREL is proposed, and electromagnetic characteristics such as current density and magnetic field strength are compared and analyzed. Introducing composite materials into the electromagnetic launch rail addresses the rail-life problem of the FREL and verifies the scientific soundness and rationality of the approach, which helps move electromagnetic launch technology from the laboratory toward engineering applications. The analysis shows that: (1) The current density on the contact surface of the NFREL is significantly reduced, the zero-current area of the contact surface is larger, and heat production decreases, indicating that the composite rail can effectively alleviate thermal damage to the rail and improve the current distribution on the contact surface; the introduction of the steel rail layer also increases the wear resistance of the rail and prolongs its life.
(2) The hollow area of the magnetic field of the NFREL is larger, which can provide a good magnetic field launch environment and better meet the electromagnetic shielding requirements.
(3) The copper-steel thickness ratio will have a certain impact on the armature and rail current density and magnetic field strength, and attention should be paid to choosing the appropriate copper-steel thickness ratio.
In this paper, only the electromagnetic characteristics of the NFREL are simulated and analyzed; other characteristics of the NFREL rail, such as its vibration characteristics, have not been analyzed in depth and will be the focus of subsequent research.
"Engineering"
] |
Dynamic Behaviors Analysis of a Novel Fractional-Order Chua’s Memristive Circuit
This paper proposes a novel fractional-order Chua's memristive circuit. Firstly, a fractional-order mathematical model of a diode bridge generalized memristor with an RLC filter cascade is established, and simulations verify that the fractional-order generalized memristor satisfies the basic characteristics of a memristor. Secondly, the capacitor and inductor in Chua's chaotic circuit are extended to fractional order, and the fractional-order generalized memristor is used instead of Chua's diode to establish the fractional-order mathematical model of a chaotic circuit based on an RLC generalized memristor. By studying the stability of the equilibrium points and the influence of the circuit parameters on the system dynamics, the dynamic characteristics of the proposed chaotic circuit are theoretically analyzed and numerically simulated. The results show that the proposed fractional-order memristive chaotic circuit passes through three states, period, bifurcation, and chaos, and that a narrow periodic window appears in the chaotic region. Finally, the equivalent circuit method is adopted in PSpice to realize the fractional-order capacitance and inductance, and the simulation of the fractional-order memristive chaotic circuit is completed. The results further verify the correctness of the theoretical analysis.
Introduction
A memristor is a nonlinear circuit element that describes the relationship between charge and magnetic flux. Its theoretical concept was first proposed by Professor Chua in 1971 [1]. It has electrical characteristics that cannot be achieved by any combination of the three basic elements: resistance, capacitance, and inductance. In 2008, HP Labs proved the physical existence of the memristor for the first time, confirming Professor Chua's prediction [2]. Since then, theoretical research on memristors has entered a practical stage and attracted strong attention from researchers [3][4][5][6].
The memristor is a nonlinear device, and it readily produces abundant chaotic phenomena when combined with other oscillating circuits. As early as 2008, Itoh and Chua replaced Chua's diode with a memristor in the well-known Chua's chaotic circuit and obtained a memristive chaotic system [7]. In 2010, Bao Bocheng and others replaced the diode in the Chua's oscillator with a memristor and a negative conductance and showed different dynamic behaviors under different parameters and initial conditions [8]. Later, Pham proposed a memristive hyperchaotic system and found that the system has hidden attractors [9]. This memristor-based hyperchaotic system has no equilibrium point but can still exhibit chaotic phenomena. Wang et al. [10] proposed a new type of memristor-based Wien-bridge oscillator and studied its parameter-independent dynamical behaviors. A hyperchaotic memristive circuit based on the Wien-bridge oscillator structure was proposed in [11]. Chang et al. [12] proposed and analyzed a memristor with a coexistent pinched hysteresis loop and twin local-activity domains and applied it to the classic Chua's circuit in place of the diode; the complex dynamics of the system were analyzed using compound coexistence bifurcation diagrams, Lyapunov exponent spectra, and phase diagrams. In recent years, various novel memristive chaotic circuits have been proposed and analyzed, such as a third-order Wien-bridge chaotic circuit without an inductor [13], a memristive chaotic oscillator with controllable amplitude and frequency [14], and a physical SBT-memristor-based chaotic circuit [15]. In addition, researchers have conducted in-depth studies on generalized memristors, such as diode-bridge-cascaded RLC memristors and a generalized memristive simulator composed of a first-order RC circuit [16,17].
As we all know, almost all electronic devices are not ideal devices. Therefore, there will be deviations when an ideal model is used to analyze actual electronic components. With the development of the theory of fractional calculus, Carlson and others proposed the concept of fractional-order capacitance [18]. Nowadays, a large number of studies have shown that fractional-order modeling of a system describes its electrical characteristics more accurately. According to the theory of fractional calculus, a more accurate model can be obtained by extending the memristive chaotic circuit to fractional order.
This makes the frequency and electrical characteristics of fractional-order memristive chaotic systems more valuable for research [19]. Ding et al. [20] proposed a fractional-order memristive Chua's circuit with time delay, composed of a passive flux-controlled memristor in parallel with a negative conductance, and analyzed the coexisting multistability of the system. Yan et al. [21] conducted a numerical study of the dynamics, coexisting attractors, complexity, and synchronization of a fractional-order memristor-based hyperchaotic system. Petras [22] studied the application of fractional calculus to nonlinear circuits and analyzed the fractional-order equations and numerical solution of a memristor-based Chua's oscillator. Wu et al. [23] proposed an active fractional-order memristor model and analyzed its electrical characteristics via the power-off plot and the dynamic road map, finding that the fractional-order memristor has continually stable states and is therefore nonvolatile. It is worth mentioning that both fractional calculus theory and the memristor are current research hotspots; therefore, a large number of scholars have concentrated on chaotic oscillator circuits based on fractional-order memristors [24][25][26][27][28]. The rest of the paper is organized as follows. Section 2 briefly introduces the fundamentals of fractional calculus, constructs a fractional-order generalized memristor composed of a diode bridge and a second-order RLC filter, and then proposes a fractional-order memristive chaotic circuit. In Section 3, by studying the stability of the equilibrium points and the influence of system parameters on bifurcation, the dynamic behavior of the proposed fractional-order memristive chaotic circuit is theoretically analyzed and numerically simulated. Section 4 realizes the equivalent circuits of the fractional-order inductor and capacitor and then completes the circuit simulation of the fractional-order memristive chaotic circuit in PSpice. The last section concludes the paper.
Fractional Calculus Theory.
The theory of fractional calculus has been studied for more than 300 years. The fractional calculus operator ${}_{a}D^{\alpha}_{t}$ is defined as

$${}_{a}D^{\alpha}_{t} = \begin{cases} \dfrac{d^{\alpha}}{dt^{\alpha}}, & \alpha > 0,\\ 1, & \alpha = 0,\\ \displaystyle\int_{a}^{t}(d\tau)^{-\alpha}, & \alpha < 0, \end{cases}$$

where α is limited to real numbers, t is the independent variable, and a is the lower boundary of the variable [29]. The definition proposed by Caputo is more convenient for initial conditions and closer to practical applications, so this paper uses the Caputo definition. The Caputo fractional derivative of the function f(t) is defined as

$${}_{a}D^{\alpha}_{t}f(t) = \frac{1}{\Gamma(n-\alpha)}\int_{a}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\,d\tau, \qquad n-1 < \alpha < n, \tag{2}$$

where Γ(·) is the Gamma function and n ∈ N is an integer.
Under natural conditions on the function f(t), for α → n the Caputo definition becomes the ordinary n-th order derivative of f(t). The Laplace transform of the Caputo fractional derivative is calculated directly by

$$\mathcal{L}\left\{{}_{0}D^{\alpha}_{t}f(t)\right\} = s^{\alpha}F(s) - \sum_{k=0}^{n-1} s^{\alpha-1-k} f^{(k)}(0), \qquad n-1 < \alpha \le n.$$
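For readers who want to reproduce the fractional dynamics numerically, a common discretization is the Grünwald-Letnikov sum, which coincides with the Caputo derivative for zero initial conditions; the sketch below is a generic illustration (step size, order, and test function are arbitrary), not the solver used in this paper:

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative of the samples f
    (equivalent to the Caputo derivative when the initial conditions are zero)."""
    n = len(f)
    # binomial-type weights w_k = (-1)^k * C(alpha, k), built recursively
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(n)])
    return d / dt**alpha

# check against the analytic result D^alpha t = t^(1-alpha) / Gamma(2 - alpha)
alpha, dt = 0.5, 1e-3
t = np.arange(0, 2001) * dt
numeric = gl_fractional_derivative(t, alpha, dt)
analytic = t**(1 - alpha) / gamma(2 - alpha)
print(abs(numeric[-1] - analytic[-1]))  # small truncation error
```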
The Fractional-Order Generalized Memristor.
Corinto and Ascoli [30] proposed a full-wave rectifier with a second-order RLC filter that has memristive characteristics. On this basis, this paper proposes a new second-order generalized memristor composed of a diode bridge and a parallel RLC filter; its circuit model is shown in Figure 1. In the mathematical model of the second-order RLC generalized memristor, ρ = 1/(2nV_T), and I_s, n, and V_T are the diode reverse saturation current, emission coefficient, and thermal voltage, respectively; in addition, v_in is the excitation voltage of the generalized memristor, i_in is the input current, and x = [v_c, i_L]^T, with a corresponding internal state function of the generalized memristor. According to the theory of fractional calculus, this paper extends the integer-order capacitor and inductor in the generalized memristor to fractional order to describe its electrical characteristics; the fractional-order circuit model is shown in Figure 2, where q is the order, and the unified mathematical model of the fractional-order generalized memristor follows accordingly. The component parameters of the novel fractional-order RLC generalized memristor are C = 4 nF, L = 230 mH, and R = 800 Ω; the diode parameters are I_s = 2.682 nA, n = 1.836, and V_T = 25 mV. The Oustaloup method is used to obtain the approximate transfer function of the fractional-order integral operator [31]. In the frequency domain, the transfer function of a fractional integral operator of order q can be expressed as F(s) = 1/s^q. According to the Oustaloup approximation technique, transfer functions of different orders at the operating frequency can be obtained; Table 1 gives the approximate transfer functions of the fractional-order integrator F(s) for orders 0.87, 0.9, 0.95, and 0.97. The fractional-order capacitor C_q is realized by the equivalent unit circuit in Figure 3, and the fractional-order inductor L_q is realized by the equivalent circuit in Figure 4.
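A sketch of how an Oustaloup rational approximation of 1/s^q can be generated and checked against the ideal fractional integrator; the band edges, the number of pole/zero pairs, and the test frequency are illustrative assumptions and do not reproduce the specific transfer functions of Table 1:

```python
import numpy as np

def oustaloup_frequency_response(q, w, wb=1e2, wh=1e6, N=2):
    """Frequency response of the Oustaloup rational approximation of the fractional
    integrator 1/s^q over the band [wb, wh], using 2N+1 pole/zero pairs."""
    alpha = -q                      # integrator corresponds to s^(-q)
    k = np.arange(-N, N + 1)
    r = wh / wb
    w_z = wb * r ** ((k + N + 0.5 - 0.5 * alpha) / (2 * N + 1))   # zeros
    w_p = wb * r ** ((k + N + 0.5 + 0.5 * alpha) / (2 * N + 1))   # poles
    K = wh ** alpha
    s = 1j * w
    H = K * np.ones_like(s)
    for wz, wp in zip(w_z, w_p):
        H *= (s + wz) / (s + wp)
    return H

# Compare |H(jw)| with the ideal |1/(jw)^q| at a mid-band frequency
q = 0.9
w = np.array([2 * np.pi * 1e3])
approx = np.abs(oustaloup_frequency_response(q, w))
ideal = np.abs(1.0 / (1j * w) ** q)
print(approx, ideal)   # the two magnitudes should be close inside the fitted band
```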
In the complex frequency domain, the equivalent-circuit expression realizing the fractional-order capacitor C_q is equation (7). Tables 2 and 3 give the calculated resistance and capacitance values of the equivalent unit circuit for the different orders. The resistance and inductance values of the equivalent circuit of the fractional-order inductor L_q at the different orders are calculated from formula (8) and given in Tables 4 and 5, respectively.
On the basis of the above theory, in order to verify that the proposed novel fractional-order RLC generalized memristor possesses the three essential characteristics of a memristor [32], numerical simulation is performed using MATLAB/Simulink. The input excitation source is u_in = V_m sin(2πft) V, where f is the frequency and V_m the amplitude of the input voltage. For V_m = 4 V and orders q of 0.87, 0.9, 0.95, and 0.97, the i_in-u_in curves at different frequencies are shown in Figure 5. It can be seen that the trajectory in the i_in-u_in plane is pinched at the origin, and the area of the pinched hysteresis loop decreases monotonically as the excitation frequency increases. In addition, when the frequency of the input voltage tends to infinity, the pinched hysteresis loop shrinks to a nonlinear single-valued function, which proves that the proposed fractional-order generalized memristor satisfies the basic characteristics of a memristor.
In addition, as the input voltage amplitude increases, the area of the pinched hysteresis loop increases while its shape remains unchanged, as shown in Figure 6(a). To analyze the influence of the order on the characteristics of the memristor model, V_m = 4 V and f = 1 kHz are fixed and q is set to 0.87, 0.9, 0.95, and 0.97 in turn, giving the volt-ampere characteristic curves for different orders shown in Figure 6(b). It is found that, for the same amplitude and frequency, the area of the pinched hysteresis loop of the fractional-order generalized memristor decreases monotonically as the order increases.
2.3. The Fractional-Order Memristive Chaotic Circuit. Yang et al. [33] proposed a chaotic circuit composed of capacitance, inductance, resistance, a negative conductance, and a nonlinear passive memristor. After realizing the fractional-order generalized memristor, a fractional-order memristive chaotic circuit is established by using the proposed novel fractional-order RLC generalized memristor M_q in place of Chua's diode, as shown in Figure 7.
v_c1 is the voltage across the fractional-order capacitor C_q1, and v_c2 is the voltage across the fractional-order capacitor C_q2.
The current flowing through the fractional-order inductor L_q1 is defined as i_1. The voltage across the memristor is v_M and the current through it is i_M. R, L_q, and C_q are the internal resistance, fractional-order inductor, and fractional-order capacitor of the generalized memristor, respectively. Therefore, the mathematical model of the fractional-order memristive chaotic system can be written as follows: The novel fractional-order RLC generalized memristive chaotic circuit model is built in MATLAB/Simulink, and numerical simulation is carried out on it. The system parameter values are set as C
Stability Analysis of the Equilibrium Point.
The dynamic behavior of the proposed fractional-order memristive chaotic circuit is analyzed by calculating the equilibrium points and the eigenvalues of the Jacobian matrix. The equilibrium points of the chaotic circuit can be obtained by solving the following equation: Obviously, Q_1(0, 0, 0, 0, 0) is an equilibrium point of the fractional-order RLC memristive chaotic circuit. This paper uses MATLAB-based graphical analysis methods to obtain the other equilibrium points [34]. Simplifying formula (10) and using MATLAB to plot the functions of v_c2 and v_c, the coordinates of the intersection of the two functions are obtained, as shown in Figure 9. Therefore, the other two equilibrium points are Q_2,3 = (0, ±1.07478, ±0.000597, 0.09556, 0.000119).
(Figure 3: the equivalent unit circuit of the fractional-order capacitor C_q. Figure 4: the equivalent unit circuit of the fractional-order inductor L_q.)
The Jacobian matrix of formula (9) is as follows: where a = 1/C, y = 1/L, and z = R. The eigenvalues of the chaotic system can be obtained by substituting each equilibrium point into the Jacobian matrix: From the above eigenvalues, it can be seen that all three equilibrium points have a positive real root, so the three equilibrium points are unstable saddle points of the system.
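A minimal sketch of this kind of stability check is given below. The right-hand side f is a placeholder, since equations (9) are not reproduced here; the test applies the standard fractional-order (Matignon) condition that an equilibrium is unstable when some eigenvalue satisfies |arg(λ)| ≤ qπ/2, which in particular covers a positive real eigenvalue.

```python
# Sketch: numerically form the Jacobian of an assumed right-hand side f(x)
# at an equilibrium and test the eigenvalues against the fractional-order
# stability condition |arg(lambda)| > q*pi/2.
import numpy as np

def numerical_jacobian(f, x0, eps=1e-7):
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    J = np.zeros((n, n))
    f0 = np.asarray(f(x0))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (np.asarray(f(x0 + dx)) - f0) / eps
    return J

def classify_equilibrium(f, x0, q):
    lam = np.linalg.eigvals(numerical_jacobian(f, x0))
    unstable = np.abs(np.angle(lam)) <= q * np.pi / 2.0
    return lam, ("unstable" if unstable.any() else "stable")

# Hypothetical placeholder system, for demonstration only
f_demo = lambda x: np.array([x[1], -x[0] + 0.5 * x[1] * (1.0 - x[0] ** 2)])
print(classify_equilibrium(f_demo, [0.0, 0.0], q=0.95))
```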
Bifurcation Behaviors of the Fractional-Order Memristive Chaotic System. In order to study the dynamic influence of the capacitance value C_q on the fractional-order memristive chaotic system, the initial value of the chaotic system is set to (0 V, 0.01 V, 0 A, 0 V, 0 A), and the bifurcation diagram of the fractional-order memristive chaotic system can be obtained, as shown in Figure 10. It can be seen from Figure 10 that the dynamic behaviors of the fractional-order memristive chaotic system fall into three regimes: periodic, bifurcation, and chaos, and a narrow periodic window appears within the chaotic regime. When C_q < 0.8 nF, the fractional-order system is in a period-1 state. As the value of C_q increases, the system enters complex nonlinear dynamic behaviors, such as bifurcation and chaos. When 0.8 nF < C_q < 0.98 nF, the fractional-order system is in a period-2 state. When 1.55 nF < C_q < 1.65 nF, the system is in a narrow periodic window in the chaotic region. Figure 11 shows the phase diagrams of the fractional-order memristive chaotic circuit for different values of the capacitor C_q. The phase diagrams are consistent with the dynamic behaviors described by the bifurcation diagram. As shown in Figure 11(a), the fractional-order memristive chaotic system is in a periodic state when C_q = 0.76 nF. As can be seen in Figure 11(b), the system has bifurcated and the period-2 state appears when C_q = 0.9 nF. When C_q = 1.6 nF, the system is in a narrow periodic window within the chaotic region, as shown in Figure 11(c). When C_q = 4 nF, the system is in a chaotic state, as shown in Figure 11(d).
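The bookkeeping behind such a bifurcation diagram (sweep a parameter, discard a transient, record local maxima of one state variable) can be sketched as follows. The integrator and the swept system are placeholders: a standard integer-order Rössler system is used here instead of the paper's fractional-order memristive equations, purely to show the procedure.

```python
# Sketch of bifurcation-diagram bookkeeping: parameter sweep, transient
# removal, and sampling of local maxima.  The Rossler system stands in for
# the fractional-order memristive equations, which are not reproduced here.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelmax

def rossler(t, s, a, b, c):
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

def peaks_for_parameter(c, t_end=500.0, t_transient=300.0):
    sol = solve_ivp(rossler, (0.0, t_end), [1.0, 1.0, 1.0], args=(0.2, 0.2, c),
                    max_step=0.05, rtol=1e-8, atol=1e-10)
    keep = sol.t > t_transient
    x = sol.y[0][keep]
    return x[argrelmax(x)[0]]            # local maxima after the transient

for c in np.linspace(2.5, 6.0, 8):
    peaks = peaks_for_parameter(c)
    # roughly: one distinct peak value -> period-1, two -> period-2, many -> chaos
    print(f"c = {c:4.2f}   distinct peaks ~ {len(np.unique(np.round(peaks, 2)))}")
```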
Circuit Simulation Experiment of the Fractional-Order Memristive Chaotic System
The same equivalent circuits are used to realize the capacitors C_q1 and C_q2 and the inductor L_q in the fractional-order memristive chaotic circuit. In the whole system, the value of the order q is 0.95. Tables 6 and 7 show the equivalent resistance, capacitance, and inductance parameters of the fractional-order capacitors C_q1 and C_q2 and the fractional-order inductor L_q. The simulation diagram of the fractional-order Chua's memristive circuit is shown in the accompanying schematic. (Figure 7: the fractional-order Chua's memristive chaotic circuit.) Figure 13 shows the phase diagrams of the fractional-order memristive chaotic circuit in the chaotic state when C_q = 4 nF. The results verify the correctness of the numerical simulations and the practicality of the fractional-order capacitor and inductor equivalent circuits.
Conclusions
This paper proposes a fractional-order generalized memristor composed of a fractional-order capacitor, a fractional-order inductor, a resistor, and diodes, and analyzes and verifies its memristive characteristics. Then Chua's chaotic circuit is extended to fractional order and combined with the fractional-order generalized memristor to form a fractional-order Chua's memristive chaotic circuit. Numerical and circuit simulations of the fractional-order Chua's memristive chaotic circuit show that the three equilibrium points of the proposed fractional-order chaotic system are unstable saddle points and that the value of the capacitance C_q significantly affects the dynamic behavior of the fractional-order chaotic system. As the value of the capacitance C_q increases, the fractional-order Chua's memristive chaotic circuit moves from a periodic state through bifurcations and finally enters a chaotic state. A narrow periodic window was found within the chaotic regime. The phase diagrams in the corresponding states are given in the paper and are consistent with the dynamic characteristics described by the bifurcation diagram. Finally, the equivalent circuit of the fractional-order generalized memristive chaotic circuit is built in PSpice, and the results obtained from circuit simulation are consistent with the theoretical analysis and numerical simulations.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request. | 3,936.8 | 2021-07-12T00:00:00.000 | [
"Computer Science",
"Engineering",
"Physics"
] |
A New Indexing Technique for Information Retrieval Systems Using Rhetorical Structure Theory (RST)
Effective information retrieval requires an efficient indexing technique. With the availability of huge volumes of information, it has become necessary to capture the semantics of documents, which is almost impossible with existing techniques. Moreover, in existing techniques the weights, once assigned, remain unchangeable throughout the cycle. In this paper, an indexing technique using Rhetorical Structure Theory with a dynamic weight assignment scheme is presented. The nodes of the rhetorical parsing tree contain relations and text spans that can be used for indexing by the indexer. The results are promising for different texts. Enhancing the underlying NLP techniques can extend the proposed algorithm to accommodate more relations and larger documents.
INTRODUCTION
Organizing text documents based on their contents is called indexing. Indexing is an important process in an information retrieval system and has three primary purposes in information retrieval [1]:
* to provide easy location of documents by topic;
* to relate one document to another by defining topic areas;
* to indicate the relevant documents for a specific information need.
Any index created must therefore be evaluated on how well it satisfies the above-mentioned purposes. In the past, indexing was done manually by trained persons who were considered to be familiar with the topic of the text. An uncontrolled indexing language was generally used, which permits the indexer more flexibility in document description. The main problems of manual indexing are lack of consistency [2][3][4][5][6], exhaustivity [1], specificity [1], indexer-user mismatch [7], etc.
With the increase in electronic texts online, the problems of manual indexing have grown: it is too slow and expensive. This motivated the need for automatic indexing. It was Luhn [8] who first suggested that certain words could be automatically extracted from texts to represent their content.
Automatic text indexing is much faster and its error rate is low; the retrieval effectiveness of automatic indexing is much better than that of manual indexing [9]. Many automatic-indexing techniques have been developed for retrieval systems on the web [1][2][3]. Despite all these efforts, it has been established that precision is only about 30% [10]. Early automated indexing techniques were keyword based: keywords believed to express the content of documents are extracted and usually assigned weights, typically using an IR model such as extended Boolean, vector-based, or probabilistic models [5][6][7]. This approach suffers from drawbacks such as returning only a small amount of the relevant information and lacking semantic information. The weight assignment is also static, which has its own limitations [11]. Thus, more semantic information must be captured to increase performance, and weight assignment should be dynamic.
This paper presents an indexing technique with dynamic weight assignment using Rhetorical Structure Theory (RST) [12], a theory from computational linguistics. The technique presented is both keyword and relation based, and the precision rate is improved with the help of RST.
RHETORICAL STRUCTURE THEORY
Mann and Thompson developed Rhetorical Structure Theory (RST) and indicated the existence of twenty-five relations. Table 1 lists some of the RST relations (further details can be found in [12]). The relations relate parts and sub-parts of a text, and the text semantics can be captured from these relations. RST is a linguistically useful method for describing text documents and characterizing their structure. It explains a range of structural possibilities by comparing various kinds of "building blocks" that can be observed in text documents. Using this theory, two spans of text (adjacent in most cases, although exceptions can be found) are related such that one of them has a specific role relative to the other. For example, evidence for a claim follows the claim: the claim spans a nucleus and the evidence spans a satellite. The order of these spans is not constrained, but there are more likely and less likely orders for all of the RST relations. The general format of an RST relation and its two spans is shown in Figure 1. The ability of RST to define coherence relations formally and elaborately motivates developing an algorithm to recognize these relations and use them for indexing. A system developed on the basis of these relations can capture the semantics of documents.
Previous work: Substantial evidence indicates that the first automatic indexing system was SMART [13]. SMART was initiated at Harvard University in 1961, and the first generation of the SMART system was developed in the early 1970s. The basic design of SMART was based on the use of various kinds of stored dictionaries, word suffix lists, phrase tables, and hierarchical term arrangements [3,4]. Relevance feedback methodologies [6] were introduced in SMART along with other retrieval methodologies.
SMART led to advances in other aspects of automated text manipulation, such as new retrieval models, the generation of new automatic indexing methods, term weighting, etc.
SMART, however, was unable to capture the semantic information of documents. From SMART until now, many attempts have been made to improve indexing techniques and the overall retrieval effectiveness of information retrieval systems by using statistics and probability theory, logic, computational linguistics, and various aspects of artificial intelligence. However, no reference has been found to RST-based indexing with a dynamic weight assignment technique.
THE PROPOSED INDEXING TECHNIQUE
As mentioned in the Introduction, existing indexing techniques suffer from several problems, such as retrieval of irrelevant information and failure to capture semantic information. The proposed indexing technique presents, for the first time, the concept of indexing using Rhetorical Structure Theory (RST), in which the data can be queried both by keywords and by rhetorical relations. Indexing with RST is complex and requires several additional steps.
It involves text segmentation, rhetorical relation finding, and rhetorical parse tree construction. All these steps, together with the proposed algorithms, have been presented in our previous papers [14][15][16].
The text is broken into small segments [14] on the basis of cue phrases and punctuation. These segments are passed to the relation finder [15]. The relation-finder algorithm uses natural language processing techniques, cue phrases, and punctuation to find the relations present in the discourse. The obtained relations are then used for the construction of the rhetorical parse tree [16]. The nodes of the tree contain the relations and the text spans. The concept of strong and weak nodes [15] was introduced to assess the initial weights at this stage. This initial weight assessment is what makes the assignment of weights dynamic, and its implementation is given in the proposed algorithm.
The initial weight lies between 0 and 1. We assign the root node a weight of 1, 0.9 to a nucleus, and 0.5 to a satellite of a parent node. These weight values are configurable. The weight assigned to a child node is calculated using the following formula.
Initial weight of child node = Weight of the child node * weight of the parent node
If a node participates in two relationships, the parent's value is assigned to it; otherwise, the weight of each child is calculated as above. This weight is attached to the index terms obtained from the text spans. The actual weight is assigned to each index term on the basis of the initial weight assessment and the term frequency, using the following formula: Actual weight of the index term = initial weight assessment * term frequency.
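A minimal sketch of this weight propagation, assuming a simple tree representation (class and function names are illustrative, and the special case of a node with two relationships is omitted), is:

```python
# Sketch of the dynamic weight assignment: role weights 1.0 / 0.9 / 0.5 follow
# the text; the class names and the tiny example tree are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RSTNode:
    role: str                      # "root", "nucleus", or "satellite"
    text_span: str = ""
    children: List["RSTNode"] = field(default_factory=list)
    weight: float = 0.0

ROLE_WEIGHT = {"root": 1.0, "nucleus": 0.9, "satellite": 0.5}

def assess_initial_weight(node: RSTNode, parent_weight: float = 1.0) -> None:
    """Initial weight of a node = its role weight * weight of its parent."""
    node.weight = ROLE_WEIGHT[node.role] * parent_weight
    for child in node.children:
        assess_initial_weight(child, node.weight)

def actual_weight(node: RSTNode, term: str) -> float:
    """Actual index-term weight = initial weight assessment * term frequency."""
    tf = node.text_span.lower().split().count(term.lower())
    return node.weight * tf

tree = RSTNode("root", children=[
    RSTNode("nucleus", "indexing improves retrieval precision"),
    RSTNode("satellite", "because indexing captures semantics"),
])
assess_initial_weight(tree)
for leaf in tree.children:
    print(leaf.role, round(leaf.weight, 2), actual_weight(leaf, "indexing"))
```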
The indexer takes a document id, a vocabulary id, and a weight, and maintains the knowledge base. The document id determines which word exists in which document; the vocabulary id is used because the knowledge base contains collections of words, and an id is assigned to each word so that redundancy is avoided and less space is consumed. The weight is semantics based and reflects the occurrence of the important index terms in the document. The knowledge base therefore contains the document vocabulary and the dynamic weight assessment in normalized form.
PROPOSED SOLUTION FOR KEYWORD-BASED INDEXING
The proposed algorithm works as follows. Procedure Indexer takes a collection of documents as input and uses several utility procedures: getTextSpans, which extracts the text spans from the collection; getRelations, which hypothesizes the rhetorical relations; and makeTree, which builds the rhetorical text tree for the respective document and in turn calls assessInitialWeight to obtain the initial-weight-assessed text spans. These text spans are handled by tokenHandler, which manipulates the knowledge base.
The proposed solution uses a basic database structure for the knowledge base representation. Step 1: Procedure Indexer takes a collection of documents as input and uses the utility procedures getTextSpans, getRelations, makeTree, and assessInitialWeight to obtain the initial-weight-assessed text spans. These text spans are handled by tokenHandler, which manipulates the knowledge base.
Step 2: Procedure assessInitialWeight takes the rhetorical tree as input, along with the nucleus ratio and satellite ratio, and assigns the initial weight assessment. It works recursively and uses in-order traversal.
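A possible shape of the overall pipeline is sketched below. The helper routines are stubs standing in for the segmentation, relation-finding, and tree-building algorithms of the earlier papers [14][15][16], the per-span weight is a placeholder for the initial weight assessment above, and the in-memory knowledge base is an assumed simplification of the database described later.

```python
# Pipeline sketch of Procedure Indexer.  All helper routines are stubs and
# all names are illustrative assumptions, not the authors' implementation.
from collections import defaultdict

def get_text_spans(document):          # stub: cue-phrase/punctuation segmentation
    return [s.strip() for s in document.split(".") if s.strip()]

def get_relations(spans):              # stub: rhetorical-relation hypothesising
    return [("ELABORATION", i, i + 1) for i in range(len(spans) - 1)]

def make_tree(spans, relations):       # stub: rhetorical parse tree construction
    return {"spans": spans, "relations": relations}

def indexer(documents):
    kb = defaultdict(dict)             # vocab id -> {doc id: weight}
    vocabulary = {}
    for doc_id, doc in documents.items():
        spans = get_text_spans(doc)
        tree = make_tree(spans, get_relations(spans))
        # token handler: attach (doc id, vocab id, weight) triples to the KB
        for rank, span in enumerate(tree["spans"]):
            weight = 1.0 / (rank + 1)  # placeholder for the RST initial weight
            for token in span.lower().split():
                vocab_id = vocabulary.setdefault(token, len(vocabulary))
                kb[vocab_id][doc_id] = kb[vocab_id].get(doc_id, 0.0) + weight
    return kb, vocabulary

kb, vocab = indexer({"D1": "Indexing organises documents. Indexing helps retrieval."})
print(kb[vocab["indexing"]])
```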
PROPOSED CONCEPT FOR RELATION BASED INDEXING
To understand the semantics of the document and retrieve only the relevant information, a retrieval system concerned with semantics and discourse structure must work on the basis of relations. The proposed algorithm follows these steps. Segmentation: the techniques presented in [14] are used to segment the text and identify its elementary units.
Relation finder: from these small segments, the relations that exist between different parts of the text are identified. The technique is elaborated in [15].
Parser: a parse tree consisting of text spans and relations is built using the technique presented in [16]. The text spans are obtained from the segmentation step and the relations from the relation finder.
Database for the rhetorical relations and text spans:
The obtained text spans and relations are put into a database; the relational model can be used for this purpose. The tables are as follows.
Table 2: Document table
Document ID | Document URL
S1 | www.scipub.org
Table 3: Text spans table
Text span ID | Span
S1g1 | Text Span1
S1g2 | Text Span2
S1g3 | Text Span3
…. | …..
S2g3 | ……
The obtained relations are put into a further relations table. The relation tables can be manipulated using SQL: queries can be made to find the documents rhetorically relevant to the user query, and searching on the relations table results in high precision.
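One possible concrete realization of these tables and of a rhetorical query, using SQLite purely for illustration (table and column names are assumptions inferred from the tables above), is:

```python
# Sketch of the relational schema and a rhetorical SQL query.  Schema details
# are assumptions; only the S1 / www.scipub.org example comes from the text.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE document  (doc_id TEXT PRIMARY KEY, url TEXT);
CREATE TABLE text_span (span_id TEXT PRIMARY KEY, doc_id TEXT, span TEXT);
CREATE TABLE relation  (relation TEXT, nucleus_span TEXT, satellite_span TEXT);
""")
con.execute("INSERT INTO document VALUES ('S1', 'www.scipub.org')")
con.executemany("INSERT INTO text_span VALUES (?,?,?)", [
    ("S1g1", "S1", "effective retrieval needs good indexing"),
    ("S1g2", "S1", "because manual indexing is slow and inconsistent"),
])
con.execute("INSERT INTO relation VALUES ('EVIDENCE', 'S1g1', 'S1g2')")

# Rhetorical query: documents containing an EVIDENCE relation about indexing
rows = con.execute("""
    SELECT DISTINCT d.doc_id, d.url
    FROM relation r
    JOIN text_span n ON n.span_id = r.nucleus_span
    JOIN document  d ON d.doc_id  = n.doc_id
    WHERE r.relation = 'EVIDENCE' AND n.span LIKE '%indexing%'
""").fetchall()
print(rows)
```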
CONCLUSION AND FUTURE WORK
A new indexing technique using RST with dynamic weight assignment has been presented in this paper and successfully implemented, together with the concept of relation-based indexing. It is concluded that the system achieves a higher degree of precision than systems that use traditional indexing techniques. The algorithm can be enhanced to accommodate other kinds of documents, such as multimedia and images, as well.
Noise-word (stop-word) removal and stemming can further improve the efficiency of the proposed algorithm. The relation-based concept can also be implemented and is expected to yield high precision. Only the extended Boolean model has been considered; research can be carried out to add the flexibility to accommodate other models in the proposed algorithm. | 2,655.8 | 2006-03-31T00:00:00.000 | [
"Computer Science"
] |
Variability of Precipitation along Cold Fronts in Idealized Baroclinic Waves
Precipitation patterns along cold fronts can exhibit a variety of morphologies including narrow cold-frontal rainbands and core-and-gap structures. A three-dimensional primitive equation model is used to investigate alongfront variability of precipitation in an idealized baroclinic wave. Along the poleward part of the cold front, a narrow line of precipitation develops. Along the equatorward part of the cold front, precipitation cores and gaps form. The difference between the two evolutions is due to differences in the orientation of vertical shear near the front in the lower troposphere: at the poleward end the along-frontal shear is dominant and the front is in near-thermal wind balance, while at the equatorward end the cross-frontal shear is almost as large. At the poleward end, the thermal structure remains erect with the front well defined up to the mid-troposphere, hence updrafts remain erect and precipitation falls in a continuous line along the front. At the equatorward end, the cores form as undulations appear in both the prefrontal and postfrontal lighter precipitation, associated with vorticity maxima moving along the front on either side. Cross-frontal winds aloft tilt updrafts, so that some precipitation falls ahead of the surface cold front, forming the cores. Sensitivity simulations are also presented in which SST and roughness length are varied between simulations. Larger SST reduces cross-frontal winds aloft and leads to a more continuous rainband. Larger roughness length destroys the surface wind shift and thermal gradient, allowing mesovortices to dominate the precipitation distribution, leading to distinctive and irregularly shaped, quasi-regularly spaced precipitation maxima.
Introduction
Precipitation patterns along cold fronts as observed by radar can exhibit a variety of morphologies.Sometimes the precipitation can fall within a single narrow line called a narrow cold-frontal rainband (e.g., Browning and Harrold 1970;Hobbs 1978;Houze and Hobbs 1982; Knight and Hobbs 1988).Other times, the precipitation can break up into regularly spaced cores of maximum precipitation rate separated by gaps of lighter or no precipitation in between, called core-and-gap structures (Hobbs and Biswas 1979;James and Browning 1979;Hobbs and Persson 1982).Often, the cores are rotated anticyclonically from the orientation of the surface cold front.These features are clearly observed over land by precipitation radar (e.g., over the British Isles as in Fig. 1) and have also been observed over open ocean (e.g., Hobbs and Biswas 1979;Hobbs and Persson 1982;Wakimoto and Bosart 2000;Jorgensen et al. 2003).The precipitation associated with these fronts is often shallow, with hydrometeors only reaching heights of 2-4 km (James and Browning 1979;Hobbs and Biswas 1979;Hagen 1992;Browning and Reynolds 1994;Brown et al. 1999;Wakimoto and Bosart 2000;Jorgensen et al. 2003).As well as introducing pronounced local variability in rainfall accumulation, the circulations associated with precipitation cores can spawn tornadoes (e.g., Mulder and Schultz 2015;Apsley et al. 2016).
A considerable variety of morphologies of precipitation cores exists from case to case.Cores may be long and spaced far apart (Fig. 1a) or short and spaced much more closely together (Fig. 1b).Cores may also be more curved in some cases than in others (Figs. 1c,d; in both these images the cores point into the prefrontal air mass, likely associated with mesovortices along the front).Furthermore, cores may vary within a single case, with their length, width, spacing, curvature, and cold-frontrelative angle varying along the cold front (particularly exhibited in the boxes drawn in Figs.1a and 1c).Cores may even be embedded within a wide band along the cold front (Fig. 1d).In contrast, some cold fronts maintain an almost continuous line of maximum precipitation (Fig. 1e).Given this wide range of possible morphologies, the natural questions are what are the factors that cause these different morphologies of precipitation core-andgap regions along cold fronts, and why do some fronts not develop the core-and-gap morphology?
Cores and gaps have commonly been proposed to result from horizontal shear instability, for which many studies have gained observational evidence (e.g., Matejka 1980;Carbone 1982;Hobbs and Persson 1982;Browning and Roberts 1996).Also, some studies have performed realdata simulations of cores and gaps, in which the simulated wind field has properties that agree well with horizontalshear-instability theory (e.g., Brown et al. 1999;Jorgensen et al. 2003;Smart and Browning 2009).This paper seeks to build on these previous studies by addressing the question of why some fronts develop cores and gaps and some do not, in terms of the synoptic environment.
When the cloud-layer wind is oriented along a front, precipitation tends to fall parallel to the front, whereas fronts with a large component of cloud-layer wind normal to the front are more likely to generate precipitation maxima pointing away from the front (Dial et al. 2010).Jorgensen et al. (2003) made detailed in situ measurements of the flow environment along and across a cold front with cores and gaps over the eastern Pacific.They found that the vertical shear of the low-level cross-frontal wind was closely linked to the resulting precipitation structures.At the poleward ends of cores, where the precipitation maxima were farthest displaced from the general orientation of the cold front, the cross-frontal vertical shear was greatest and the updraft erect or downshear tilted (i.e., eastward).At the equatorward ends of cores and in gaps, where the precipitation maxima were closest to the cold front, the cross-frontal shear was weaker with the updraft broader and upshear tilted (up the cold front, see Fig. 25 in Jorgensen et al. for a schematic).
The above studies suggest that the development of vertical shear across a cold front applies a perturbation to the wind field above the surface cold front and results in cyclonically oriented precipitation cores.In contrast, a cold front with more alongfront-oriented vertical shear should lead to a more uniform wind field along the front and be host to more uniform rainfall.These two situations may be expected to occur in the event of kata-type and ana-type cold fronts, respectively.An ana-type cold front is characterized by rearward sloping ascent from the warm conveyor belt up the cold front, so that the precipitation is aligned along the cold front (e.g., Browning and Pardoe 1973).A katatype cold front is crossed by an intrusion of dry air from upper levels, so that an upper-level front forms above and ahead of the surface cold front and some precipitation falls ahead of the cold front (e.g., Browning and Monk 1982).
To isolate the physics associated with the variation in the morphology of precipitation cores, idealized simulations have been performed in previous studies. Bluestein and Weisman (2000) showed that vertical shear 45° from the frontal orientation is most likely to generate regularly spaced and shaped cells along the front (such as in Fig. 1b), whereas vertical shear normal to the front is more likely to generate larger isolated cells that are irregularly spaced and shaped (such as in Fig. 1c). However, these structures are sensitive to more than just the shear environment. Kawashima (2007) simulated precipitation structures resembling observed cores and gaps from a wavelike disturbance that gained its energy from both vertical shear and buoyancy along the cold front, demonstrating that more than one mechanism may produce such structures. However, Kawashima (2011) noted that the simulations of Kawashima (2007) were initialized with weak vertical shear and only static stability was varied between simulations, hence were restricted in their generality and applicability to fronts in the real atmosphere. To achieve greater generality, Kawashima (2011) simulated precipitation structures along a cold front resembling observed precipitation cores, varying the vertical shear along/across the front, the magnitude of the wind shift across the front, and the prefrontal static stability between simulations. The cores and gaps he simulated were sensitive to all of these factors, suggesting that cores and gaps in the real atmosphere result from the nonlinear interaction of these factors.
These idealized-modeling studies demonstrate how precipitation cores are sensitive to the flow and stratification in the vicinity of the cold front along which they form.However, in the real atmosphere, the flow and stratification along a cold front may vary considerably, leading to a range of morphologies of cores along the front.To this effect, Browning and Roberts (1996) documented a cold front over the British Isles whose precipitation morphology was markedly different between the two ends of the cold front.At the equatorward end, an ana-type cold front led to relatively uniform precipitation along the cold front, with regularly spaced cores and small spaces between cores.At the poleward end, a kata-type cold front led to more poorly organized cores with greater spaces between.
In this paper we investigate how the synoptic environment determines which of the many possible morphologies the cores and gaps can adopt.The study builds on that of Norris et al. (2014) who compared the distribution and evolution of precipitation bands in idealized baroclinic wave simulations where roughness length, latent-heat release, and surface fluxes of sensible and latent heat were varied.Norris et al. (2014) simulated, at 20-km grid spacing, precipitation bands resembling those observed in the real atmosphere that were sensitive to all of these influences.The variation of the bands between simulations occurred via variations in the synoptic-and mesoscale structure of the flow environment.The current study aims to do the same, but for finer-scale precipitation cores.Therefore, a simulation from Norris et al. (2014) with all these diabatic factors appropriate to an extratropical cyclone over the open ocean, after 132 h when the surface cold front has formed, is reinitialized with a 4-km nested domain and the different diabatic factors are varied between simulations, subsequent to this time.Thus, this paper demonstrates how precipitation cores along a mature cold front of maritime origin vary, depending on differences in the synoptic and mesoscale flow environment.
Method
Moist idealized baroclinic-wave simulations were performed with version 3.7.1 of the Advanced Research Weather Research and Forecasting (WRF) Model (Skamarock et al. 2008). As in Norris et al. (2014), WRF was initialized with the baroclinic-wave test case, which consists of a zonal jet on an f plane (f = 10^-4 s^-1) in thermal wind balance with a horizontal temperature gradient at the surface of roughly 20 K (1000 km)^-1. The jet is obtained by inverting a baroclinically unstable potential vorticity distribution in the y-z plane, as in Rotunno et al. (1994). The computational domain is 8000 km in the north-south direction. In the east-west direction, the domain is 4000 km long, which is equal to the wavelength of the most unstable normal mode of the initial jet (Plougonven and Snyder 2007). The domain has 20-km grid spacing, with 80 vertical levels from the surface up to 16 km. The lower boundary condition is ocean with a roughness length z_0 = 0.2 mm, and the sea surface temperature (SST) is fixed to the initial temperature of the lowest atmospheric model level. The simulations use the following parameterizations: Thompson et al. (2008) microphysics, Kain-Fritsch convection (Kain and Fritsch 1990; Kain 2004), MM5 surface layer (Monin and Obukhov 1954; Skamarock et al. 2008), and Yonsei University boundary layer (Hong et al. 2006).
After 132 h into this simulation, a wide band of precipitation lies along and ahead of a well-defined surface cold front and wind shift (Fig. 2), similar to composite real cyclones (e.g., Fig. 8 in Field and Wood 2007) and idealized modeled cyclones (e.g., Fig. 4 in Zhang et al. 2007). In the present paper, we focus on the precipitation at the leading edge of the cold front. A nested domain with 4-km horizontal grid spacing is inserted at the location indicated by the black box in Fig. 2, capturing the evolution of the cold front in the 36 h that it takes for the cold front to cross the nested domain. After the front begins to pass into the nested domain, several hours are needed for spinup in terms of simulating the front at higher resolution, during which an adjustment is evident in the resolution of the front. Therefore, the output from the inner domain is only documented from 21 h after the inner domain is initialized onward, which is also the time it takes for the front and its associated precipitation to be fully inside the inner domain. In the remainder of this article, the hour of the simulation refers to the number of hours since the 4-km domain was initialized.
FIG. 2. The initial condition for all simulations in this paper, showing an idealized baroclinic wave at 20-km grid spacing, 132 h after an initial perturbation is made to a uniform midlatitude jet, as described in Norris et al. (2014), but using WRF 3.7.1 [Norris et al. (2014) used WRF 3.4.1]. The area shown is the full domain in the zonal direction (i.e., periodic), but only part of the meridional span of the domain. In the 132 h that the baroclinic wave has evolved to this state, the model has included the parameterizations and surface-layer specification detailed in section 2. Plotted are surface precipitation rate (colored, mm h^-1), 2-m temperature (gray contours every 3 K), 10-m wind vectors (m s^-1), the location of the nested domain (which is initialized at this time) indicated by the black box, and the location of the coastline (which is created at this time in all simulations other than CNTL) indicated by the bold red line.
As the purpose of this article was to simulate the precipitation morphologies along the cold front, 4-km grid spacing is sufficient.Several of the figures in Jorgensen et al. (2003) of core-and-gap structures along a cold front were plotted from a simulation with a 4-km horizontal grid spacing, demonstrating that a grid spacing of 4 km is sufficient to capture the core-and-gap structures.Moreover, their simulations were nested down to 1.3-km grid spacing in order to calculate air-parcel trajectories on this scale.Although the structures on the 1.3-km grid possessed more detail, the overall structures were not substantially different between the two grids.As we are not interested in this level of computational detail, 4-km grid spacing is sufficient for this article.
In the control simulation (CNTL), the simulation is kept the same as the outer domain over the 132 h before the nest is initialized (i.e., this is an all-ocean simulation), except that convective parameterization is switched off in the 4-km nest.In the other simulations, the lower boundary is converted to half-ocean-halfland, with ocean and land occupying the left-hand and right-hand sides of the outer domain, respectively (the bold red line in Fig. 2 indicates the location of the coastline).Therefore, the nested domain is almost all land in the sensitivity simulations and the coverage of these simulations in this paper is of the movement of the surface cold front over the land.However, the nested domain also contains 100 grid boxes in the x direction over the ocean, west of the coastline, in order that the land-sea contrast is resolved in the nest.
In all simulations, the left-hand ocean side of the domain has the same constant roughness length of 0.2 mm and an SST distribution equal to the initial temperature of the lowest model level, T_0 (the initial temperature when the baroclinic wave is initialized, as opposed to at the initialization time in this paper). The variability in the sensitivity simulations is all in the thermal and frictional characteristics of the right-hand land side of the domain, as summarized in Table 1. The LANDFRIC1, LANDFRIC2, and LANDFRIC3 simulations contain a frictional contrast only between land and sea, with land roughness lengths of 5, 250, and 2000 mm prescribed, respectively (appropriate for featureless land, high crops, and a city center, respectively; World Meteorological Organization 2008). These roughness lengths span the full range of possible roughness of a land surface. In reality, the roughness length is constantly changing as a front passes over a land surface, but these simulations allow the effect on a cold front of three distinct land-use categories to be isolated. The MINUS2K, MINUS1K, PLUS1K, and PLUS2K simulations contain a thermal contrast only, with land surface temperatures of T_0 - 2, T_0 - 1, T_0 + 1, and T_0 + 2 K, respectively. These are fairly arbitrary choices and were determined partly because particularly large or small land surface temperatures led to increasingly unphysical-looking simulations. As will be shown, this 4-K difference in land surface temperature between simulations leads to significant differences. The term "land" is used loosely, and in no simulations is a land surface scheme used. The purpose of this paper is not to simulate a full-physics land-sea contrast. Instead, in LANDFRIC1, LANDFRIC2, LANDFRIC3, MINUS2K, MINUS1K, PLUS1K, and PLUS2K, the lower boundary is treated as an ocean surface with a roughness-length or SST discontinuity, in order to isolate the sensitivity of precipitation cores to each of these factors. "Coastline" is used to refer to the discontinuity, whether frictional or thermal. The variations between simulations detailed in Table 1 are effective only after the simulations are reinitialized at 132 h.
Evolution of precipitation cores and surface wind field in control simulation
After 21 h, the cold front is fully inside the inner domain and a near-continuous line of maximum precipitation lies along it (Fig. 3). At the poleward end, the precipitation undergoes relatively little evolution, maintaining a continuous convective line (Fig. 4) and resembling the narrow cold-frontal rainband shown on radar in Fig. 1e. At the equatorward end, however, the initial narrow cold-frontal rainband starts to break up into core-and-gap regions from the south to the north (Fig. 5). By 34 h, the precipitation cores are well defined and have formed appendages pointing into the warm air at their poleward ends, resembling the precipitation cores observed on radar in Figs. 1a-d.
TABLE 1. A summary of the sensitivity simulations. Gives roughness length z_0 and the SST distribution (T_0 is the initial temperature of the lowest model level when the baroclinic wave is initialized). Boldface entries are where the given factor is different to CNTL. The given factors are prescribed on the right-hand side of the domain only (to the right of the red line in Fig. 2) and, in all simulations, the left-hand side of the domain is as in CNTL.
A strip of maximum vorticity forms along the front, which remains intact throughout the simulation (Figs. 4, 5).
Unlike horizontal shear instability along the cold front, where the maximum vorticity rolls up, vorticity structures appear in the simulation ahead of the front.Initially, these are chaotic, but they become more organized and move poleward along the front as the simulation progresses, eventually creating bands of vorticity rotated clockwise with respect to the front, similar to the precipitation cores.These structures generate undulations in the lighter prefrontal precipitation at the both the poleward (Fig. 4) and equatorward (Fig. 5) ends.Although the strip of maximum vorticity along the front does not roll up, the mesovortices ahead of the front originate on the line of maximum vorticity.
Also from 22 h onward, separate vorticity maxima begin to form behind the cold front, becoming well-defined and periodically spaced at the equatorward end by 26 h (Fig. 5).These vorticity structures behind the front originate as a strip of vorticity in the low center, as shown by an animation of the outer domain, and develop equatorward as the cyclone occludes (not shown).Likely to be associated with horizontal shear instability, these postfrontal vorticity maxima propagate equatorward, the opposite direction to the prefrontal maxima, and point in the opposite direction to the prefrontal maxima.These postfrontal maxima never become connected to the poleward part of the front (Fig. 4), but they become connected to the strip of maximum vorticity along the equatorward part of the front after 28 h and generate undulations in the postfrontal precipitation (Fig. 5).
Thus, when the baroclinic wave is simulated at 4-km grid spacing, mesovortices either side of the front generate undulations in the lighter warm-frontal and postfrontal precipitation.At the equatorward end, there are both prefrontal and postfrontal vorticity structures and hence undulations in precipitation on either side of the cold front, from the interaction of which the precipitation cores appear to form.At the poleward end, the postfrontal vorticity structures are much farther behind the cold front, so that there are only undulations in the warm-frontal precipitation.
Diagnostics of precipitation cores
The continuity of the strip of maximum vorticity in the 10-m winds along the cold front (Figs. 4, 5) gives a useful reference point throughout the simulation. Alongfront variability may therefore be investigated relative to this well-defined frontal location by deriving diagnostics to describe the evolution of the precipitation cores and of the flow environment in which they form. Therefore, every hour, at every y coordinate in the inner domain, the corresponding x coordinate was found at which the 10-m vorticity gradient in the x direction (roughly cross-frontal) was greatest. For every hour of the simulation, this method generated a set of grid points from south to north, giving the western edge of the cold front.
FIG. 3. Surface precipitation rate (colors, mm h^-1), 2-m temperature (red contours every 3 K), and surface pressure (labeled gray contours every 4 hPa) in the inner domain 21 h after the inner domain is initialized (21 h after the outer domain is shown in Fig. 2). "Poleward" and "equatorward" boxes show locations of finer-scale plots presented in subsequent figures as indicated. In all panels in this paper referred to as the poleward and equatorward boxes, the range of x coordinates varies from panel to panel, depending on the location of the cold front at the given time, but the range of y coordinates is always as indicated in this figure (i.e., the same part of the cold front). Also shown is the location of the front, identified from the algorithm detailed in section 4 (rearmost bold line), and the line obtained by moving 6 grid cells (24 km) forward for each y coordinate. Various diagnostics are calculated between these two lines throughout the paper.
The location of the front is shown at 21 h in Fig. 3 by the leftmost bold line, demonstrating the effectiveness of this method in identifying the front. The line of maximum precipitation is just ahead of the identified front, with a near-constant distance between the identified front and the maximum precipitation. The poleward and equatorward boxes in Fig. 3 each stretch 90 grid points (360 km) from north to south along the front (the parts of the front shown in Figs. 4 and 5, respectively), and the state of the atmosphere above the front may be compared between these two sections.
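The front-identification step just described amounts to a per-row argmax of the cross-frontal vorticity gradient. A small sketch of this bookkeeping (with a synthetic vorticity field as an assumed stand-in for the 10-m model output) is:

```python
# Sketch: for each y row of a (ny, nx) vorticity field, find the x index of
# maximum d(zeta)/dx.  The synthetic field below is an illustrative assumption.
import numpy as np

def locate_front(vort_10m, dx=4.0e3):
    """Return, per y row, the x index where d(zeta)/dx is largest."""
    dzeta_dx = np.gradient(vort_10m, dx, axis=1)
    return np.argmax(dzeta_dx, axis=1)

# Synthetic example: a strip of maximum vorticity whose x position tilts with y
ny, nx = 90, 200
x = np.arange(nx)[None, :]
x_front = 80 + 0.3 * np.arange(ny)[:, None]
zeta = 1e-3 * np.exp(-((x - x_front) / 3.0) ** 2)
front_index = locate_front(zeta)
print(front_index[:5], front_index[-5:])   # follows the tilt of the strip in y
```

The identified index sits on the western flank of the vorticity strip, consistent with the statement above that the method returns the western edge of the cold front.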
The vertical wind profile above this identified front is markedly different between the poleward and equatorward ends (Fig. 6). The hodographs are calculated from the mean wind along the front at the equatorward and poleward ends. The hodographs shown are at 21 h, before the cores and gaps form in the rainband at the equatorward end, but the essential patterns remain the same throughout the period shown in Figs. 4 and 5. At both the equatorward and poleward ends, winds along the surface front are northwesterly, becoming southwesterly just above the surface, with positive vertical shear of zonal and meridional wind up to 400 hPa. This shear is greater at the poleward end, particularly in the y component, where shear is almost purely meridional (roughly alongfrontal) from about 950 to 850 hPa, with a magnitude of about 30 m s^-1 difference in this layer. Contrastingly, at the equatorward end, this layer of along-frontal shear is just between about 950 and 900 hPa, with a magnitude of 15 m s^-1 difference in this layer.
As time progresses and the cores and gaps of precipitation begin to form at the equatorward end, du/dz (cross-frontal vertical shear) becomes more similar between the poleward and equatorward ends of the front (Fig. 7a). Meanwhile, dv/dz (along-frontal vertical shear) at the poleward end remains nearly double that at the equatorward end (40-45 m s^-1 from the surface to 600 hPa at the poleward end vs 20-25 m s^-1 at the equatorward end, Fig. 7b). Therefore, above the surface cold front, the along-frontal shear dominates throughout the simulation at the poleward end, indicating near-thermal wind balance, whereas cross-frontal shear becomes almost as great at the equatorward end.
To relate this contrast in the vertical wind profile to the precipitation distribution, the method of identifying the cold front through time described above also allows for an analysis of alongfront variability of precipitation and other variables in the area just ahead of the front (where maximum precipitation falls, Fig. 3). The eastern bold line in Fig. 3 is obtained by moving 6 grid points (24 km) east of the identified front for each y coordinate. The space between the two lines is just wide enough to contain the cold-frontal precipitation, including later in the simulation when the precipitation cores form. Thus, analyzing along-frontal variability of precipitation and other atmospheric variables within these 24 × 360 km^2 boxes for the poleward and equatorward ends of the front separately describes the variability of the precipitation cores and associated flow fields, both along the front and through time.
FIG. 6. Hodographs at 21 h of the mean vertical wind profile along the front (the rear bold line in Fig. 3) at the poleward and equatorward ends separately. The marked values are the pressure levels (hPa).
FIG. 7. Time series of 600-hPa wind speed minus 10-m wind speed, showing the (left) u and (right) v components. Each value is calculated by taking the mean difference between 10-m and 600-hPa wind speed along the front (the rear bold line in Fig. 3) at the poleward and equatorward ends separately.
The time series of the variance of precipitation along the front captures when the precipitation cores start to become pronounced at the equatorward end after about 30 h (Fig. 8a).By contrast, the variance at the poleward end is near-constant through the simulation, reflecting the persistence of the continuous convective line of precipitation.The time series of 10-m vorticity variance along the front does not capture this rapid increase after 30 h (Fig. 8b).Furthermore, the 10-m vorticity variance at the poleward end is greater than at the equatorward end, despite the fact that the precipitation cores form only at the equatorward end.However, just above, rather than at the surface, the along-frontal vorticity variance at the equatorward end closely resembles the precipitation variance (Fig. 8c), with the precipitation and vorticity variances rapidly increasing at the same time and by the same proportional amount.
The greater dependence of equatorward precipitation on vorticity just above than at the surface is further illustrated by the fact that, after 30 h when the precipitation variance rapidly increases, precipitation is better correlated with both 950-hPa and even 900-hPa vorticity than with 10-m vorticity (Fig. 9b).Meanwhile, at the poleward end, the correlation of vorticity with precipitation steadily decreases with height (Fig. 9a).Therefore, precipitation at the equatorward end is aligned along the 950-900-hPa wind field, while precipitation at the poleward end is aligned along the surface wind field.The remainder of this paper investigates why the equatorward precipitation is less driven by surface winds and how the synoptic environment creates different vertical structures between these two parts of the front.
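These along-front diagnostics (front-relative averaging over the 24-km strip ahead of the identified front, along-front variance, and level-by-level correlation with vorticity) could be computed along the lines of the following sketch; the array shapes and the synthetic toy fields are assumptions used only to make the example self-contained, not WRF output.

```python
# Sketch of the along-front variance and precipitation-vorticity correlation
# diagnostics.  Field names, shapes, and the toy data are assumptions.
import numpy as np

def front_relative_mean(field, front_index, width=6):
    """Mean of `field` over `width` grid points east of the front, per y row."""
    return np.array([row[i:i + width].mean()
                     for row, i in zip(field, front_index)])

def alongfront_diagnostics(precip, vort_by_level, front_index):
    p = front_relative_mean(precip, front_index)
    diags = {"precip_alongfront_variance": float(p.var())}
    for level, vort in vort_by_level.items():
        v = front_relative_mean(vort, front_index)
        diags[f"corr_precip_vort_{level}"] = float(np.corrcoef(p, v)[0, 1])
    return diags

# Tiny synthetic demo: the 950-hPa field shares the along-front modulation
# with precipitation, while the 10-m field does not.
rng = np.random.default_rng(0)
ny, nx = 90, 200
front_index = (80 + 0.3 * np.arange(ny)).astype(int)
cores = 1.0 + np.sin(np.arange(ny) / 6.0)          # along-front "core" pattern
precip = rng.random((ny, nx)) * cores[:, None]
vort = {"10m": rng.random((ny, nx)),
        "950hPa": rng.random((ny, nx)) * cores[:, None]}
print(alongfront_diagnostics(precip, vort, front_index))
```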
Synoptic differences between poleward and equatorward ends in lower-tropospheric winds and resulting differences in the vertical structure of a cold front
This section investigates what features of the flow environment, other than at the surface, determine the morphology of precipitation. As discussed in the introduction, the morphology of precipitation cores is commonly thought to depend largely on the magnitude and orientation of near-surface vertical shear. Naturally, a front that is in near-thermal wind balance, with vertical shear oriented along it, will be host to updrafts and hence precipitation remaining along the front. A front that has cross-front-oriented vertical shear will be host to updrafts that become tilted ahead of the front with height, so that some precipitation falls ahead of the front.
FIG. 8. Along-frontal variances of the given variables calculated every hour from 21 h onward between the two bold lines in Fig. 3. Variances are calculated along each of the six front-parallel lines between the two bold lines separately, and then the mean of these six is plotted. Time series are calculated for the poleward and equatorward ends of the front separately.
The wind and thermal fields between 950 and 850 hPa are markedly different between the poleward and equatorward ends of the front (Fig. 10). At the poleward end, there is a pronounced frontal structure and wind shift up to 850 hPa. At the equatorward end, there is no frontal structure or wind shift above 950 hPa. Consequently, the winds aloft are much more oriented across the surface cold front at the equatorward end. As a result, updrafts remain erect at the poleward end and precipitation keeps falling uniformly along the front (Fig. 11, top panels). The layer of upright ascent (Fig. 11b) corresponds to the layer of almost purely meridional vertical shear at the poleward end (Fig. 6). Contrastingly, at the equatorward end, the updrafts become forward tilted by the cross-frontal winds aloft and the precipitation cores magnify (Fig. 11, bottom panels). By 35 h, when the precipitation cores are most pronounced, the hydrometeors are falling increasingly ahead of the surface cold front with height, so that maximum precipitation reaches the surface ahead of the front (Fig. 11d). The shallow depth of these circulations is consistent with previous studies of core-and-gap structures along cold fronts (e.g., James and Browning 1979; Hobbs and Biswas 1979; Hagen 1992; Browning and Reynolds 1994; Brown et al. 1999; Wakimoto and Bosart 2000; Jorgensen et al. 2003) and studies of continuous narrow cold-frontal rainbands (e.g., Browning and Pardoe 1973). However, the minimum contour of hydrometeors is 0.2 g kg^-1, chosen to highlight those associated with the precipitation core, so precipitation particles are in fact falling throughout the lower troposphere.
Of course, this vertical structure at the equatorward end of the cold front only holds where the precipitation becomes displaced from the surface cold front.A strong contrast in the vertical frontal structure exists along the length of individual precipitation cores.There is no great contrast in the 2-m frontal temperature gradient along the length of precipitation cores (Fig. 12a) (i.e., whether frontal precipitation is heavy or light, the surface temperature gradient is roughly the same) and the surface frontal structure remains continuous along the length of precipitation cores and gaps (Fig. 13a).
At 900 hPa, on the other hand, the θ gradient is considerably greater where the surface precipitation rate is greater (Fig. 12b), roughly 3 times greater where precipitation is >5 mm h^-1 (cores) than where precipitation is <2 mm h^-1 (gaps). This contrast in the thermal structure between cores and gaps is illustrated in Fig. 13 in the area indicated in Fig. 11c. At the poleward end of a typical precipitation core, the front and updraft are tilted eastward up to 800 hPa (Fig. 13b), as shown in Fig. 11d, so that the precipitation is farthest ahead of the surface cold front along the length of the core (Fig. 13a). Farther equatorward, where the precipitation is falling closer to the surface cold front, the frontal structure and updraft extend almost as high above the surface, but are erect (Fig. 13c), as with the continuous convective line along the poleward part of the front (Fig. 11b). In gap regions (i.e., where there is no heavy precipitation at or ahead of the surface cold front), the frontal structure and updraft are erect, but only extend to 900 hPa (Fig. 13d; note that hydrometeors are present, but below the minimum contour interval).
This contrast in the vertical structure of temperature and vertical velocity along the length of individual cores indicates that the precipitation cores form in conjunction with a perturbation along the front, whereby surface frontogenesis is relatively uniform along the front, but occurs up to about 850 hPa where there are cores, and only slightly above the surface where there are gaps of precipitation. This perturbation does not occur at the poleward end of the front, due to the much deeper ana-type cold-frontal structure with strong along-frontal wind shear keeping the updraft erect above the surface. Despite the results of the simulations in the current study, the greater tendency for precipitation cores at the equatorward than poleward end is not universal. As stated in the introduction, Browning and Roberts (1996) documented a cold front with precipitation cores at the poleward but not equatorward end (their Fig. 19). Similarly, Kawashima (2016) performed real-data WRF simulations of a case where precipitation cores formed more distinctively and for longer at the poleward than equatorward end (his Fig. 12). Therefore, the equatorward part of the front is not necessarily where the most distinctive structures form. The particular poleward/equatorward contrast exhibited in this study is due to the particular baroclinic wave simulated and the particular synoptic environment to which this baroclinic wave leads. Crucially though, as with Browning and Roberts (1996), the precipitation cores in this study form along the part of the front where temperature and pressure contours are more perpendicular (Fig. 3), hence weaker thermal wind balance and a more kata-type front. As discussed previously, precipitation is more continuous along the front (i.e., less tendency for cores and gaps) where the front is closer to thermal wind balance, which may be at the poleward or equatorward end, depending on the synoptic setup as the cyclone develops.
Sensitivity of precipitation cores to thermal and frictional properties of the lower boundary
Although precipitation cores form at the equatorward end in CNTL, the surface temperature gradient does not vary in the along-frontal direction and precipitation remains continuous along the front (Fig. 11c), unlike in some of the radar images in Fig. 1, where large gaps of absent precipitation form between cores. This section shows the effect of altering the heat and momentum fluxes from the lower boundary on the frontal flow environment and hence how some of the more distinctive precipitation structures may form along the front. As described in section 2, in the sensitivity simulations the front is forced to pass over a coastline, east of which either SST or roughness length is altered. Section 6a investigates the effect of altering SST and hence surface sensible heat fluxes, whereas section 6b investigates the effect of altering roughness length and hence surface momentum fluxes. The analysis is performed at the equatorward end of the front only, where the core-and-gap morphology is most pronounced in all simulations.
FIG. 11. A comparison of the poleward and equatorward boxes after 35 h. (left) Precipitation rate (mm h^-1) and 10-m winds (m s^-1). (right) Cross sections across the cold front at the locations indicated in left panels: total hydrometeor mixing ratio (rain, snow, and graupel) (blues, g kg^-1), θ (red contours every 2 K), and w (black contours every 0.1 m s^-1 with a minimum contour of 0.3 m s^-1). The box drawn in (c) shows the location of the smaller-scale plot in Fig. 13a.
a. Sensitivity to SST
In PLUS2K, PLUS1K, MINUS1K, and MINUS2K, the SST of the land surface east of the coastline is increased or decreased by 1 or 2 K (as described by the name of each simulation) at each grid point. A few hours after the front passes over the coastline, the differences in the wind profile above the surface cold front are shown in Fig. 14a. Relative to CNTL, PLUS2K and MINUS2K slightly reduce and increase du/dz, respectively, up to about 700 hPa. The 1000-800-hPa u shear in MINUS2K is about double that in PLUS2K. However, along-frontal winds are almost completely unaffected by SST (i.e., very little difference in dv/dz between simulations), so that in PLUS2K the along-frontal shear is relatively dominant. Consequently, for most of the simulations, greater SST implies maximum precipitation falling closer to the front (Fig. 15a). Toward the end of the simulations, those with lower SST show a decrease in the distance of maximum precipitation from the front on account of the fact that the precipitation rate along the front decays (not shown). Along-frontal variance of precipitation does not show that cores are more pronounced with lower SST (Fig. 15b), but variance is also affected by the fact that the precipitation rate is greater when SST is greater.
The contrasts between PLUS2K, CNTL, and MINUS2K are illustrated in Fig. 16. Although the surface front is no different between the simulations, the less stable stratification with greater SST is shown by the θ contours (right panels), resulting in the greater ascent above the surface front, hence greater precipitation rate along the front (left panels). However, the lack of vertical shear of the zonal wind in PLUS2K (Fig. 14a) keeps precipitation in a continuous convective line, whereas cores and gaps form in the simulations with lower SST (Fig. 16, left panels). Therefore, enhanced sensible heat fluxes suppress the core-and-gap morphology.
b. Sensitivity to roughness length
In LANDFRIC1, LANDFRIC2, and LANDFRIC3, the roughness length of the land surface is progressively increased. The effect of greater friction on the vertical wind profile above the surface front is more striking than for SST variability (cf. Figs. 14a,b). The reduction of surface meridional wind speed with greater friction implies less meridional shear from the surface to the midtroposphere, whereas the reduction of surface zonal wind speed implies greater zonal shear. Consequently, the shallow layer of meridional shear up to 900 hPa in CNTL is almost nonexistent in LANDFRIC2, in which zonal shear dominates the vertical wind profile from the surface to the midtroposphere. The effect in both along-frontal variance of precipitation and distance of maximum precipitation from the front is striking (Figs. 15c,d). The time series for LANDFRIC2 is noisy because the line of maximum vorticity from which the location of the front is identified is not so well-defined in this simulation.

FIG. 12. Time series of the mean cold-frontal θ gradient at the equatorward end of the front for different intervals of maximum precipitation along or ahead of the front at (a) the surface and (b) 900 hPa. Specifically, for every y coordinate, the maximum θ gradient and precipitation rate are found (not necessarily at the same x coordinate) and the mean θ gradient is calculated for all y coordinates where the maximum precipitation is within the given interval. Missing values in some time series indicate that there were no y coordinates at the given time with a precipitation maximum within the given interval.
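The Fig. 12 caption above amounts to a small conditional-averaging recipe. A minimal NumPy sketch of one way to implement it is given below; the field names, grid size, units, and precipitation intervals are illustrative placeholders rather than values from the study.

```python
import numpy as np

def mean_frontal_gradient_by_precip(theta_grad, precip, intervals):
    """For each y row: take the along-x maximum of the theta gradient and of the
    precipitation rate (not necessarily at the same x), then average the gradient
    maxima over all rows whose precipitation maximum falls inside each interval."""
    grad_max = theta_grad.max(axis=1)
    prec_max = precip.max(axis=1)
    result = {}
    for lo, hi in intervals:
        rows = (prec_max >= lo) & (prec_max < hi)
        result[(lo, hi)] = grad_max[rows].mean() if rows.any() else np.nan  # NaN = missing value
    return result

# Placeholder fields standing in for one output time of the high-resolution nest.
rng = np.random.default_rng(0)
theta_grad = rng.uniform(0.0, 5.0, size=(200, 150))  # hypothetical units, e.g. K per 100 km
precip = rng.gamma(2.0, 1.5, size=(200, 150))        # hypothetical precipitation rate, mm per hour
print(mean_frontal_gradient_by_precip(theta_grad, precip, [(0, 2), (2, 5), (5, 50)]))
```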
The greater along-front variability of precipitation with increasing roughness length is illustrated in Fig. 17. As roughness length increases between simulations, the surface wind shift (left panels) and surface cold front (right panels) are increasingly poorly defined, as in the idealized simulations of Hines and Mechoso (1993), Rotunno et al. (1998), and Norris et al. (2014). In LANDFRIC3 there is hardly a surface wind shift at all and no discernible surface cold front (LANDFRIC3 was omitted from previous figures because the line of maximum vorticity from which the front is identified does not exist in this simulation). Therefore, a transition is evident between simulations in the cross sections: in CNTL the ascent is relatively erect above the surface cold front with hydrometeors falling relatively near the front; in LANDFRIC1, a separate maximum of vertical velocity has formed ahead of the front; in LANDFRIC2, this maximum ahead of the front has separated from the weak ascent above the front; in LANDFRIC3 almost all the ascent and hydrometeors are ahead of the front (the front is not even visible in the θ contours). The more poorly defined cold front with increasing roughness length has also decreased static stability, enhancing updrafts, so that the precipitation rate increases from CNTL to LANDFRIC3.
LANDFRIC3 shows what can happen to the precipitation distribution along the cold front when the surface wind shift breaks down and mesovortices are allowed to dominate the surface wind field (Fig. 18).
After 22 h, the strip of maximum 10-m vorticity representing the surface wind shift is still relatively intact and cold-frontal precipitation is continuous along it. Thereafter, the vorticity strip breaks up into the mesovortices and the core-and-gap morphology becomes increasingly distinctive. Unlike in CNTL, where the vorticity strip remains intact and a line of precipitation remains along the front, despite the formation of the cores (Fig. 5), large gaps of absent precipitation form between the cores in LANDFRIC3 (Fig. 18). The large precipitation cores in LANDFRIC3 resemble the long curved precipitation cores observed on radar in Figs. 1a and 1c, suggesting that these structures may have formed due to the cold front traveling a long distance over a rough land surface, with the surface wind shift and cold front breaking up into mesovortices.
These sensitivity simulations are consistent with the sensitivity simulations of Kawashima (2011). In simple idealized experiments, he found that greater vertical shear of the cross-frontal wind (du/dz), relative to the vertical shear of the along-frontal wind (dυ/dz), enhances the growth rate and amplitude associated with cores and gaps. The current study has shown that the same holds in a more realistic primitive equation model and how in the real atmosphere these differences in vertical shear may arise. In particular, lower surface heat fluxes may increase du/dz (Fig. 14a), whereas greater surface friction both increases du/dz and reduces dυ/dz (Fig. 14b), so that the greatest differences between precipitation cores occur due to differences in roughness length.

FIG. 15. Time series comparing the given simulations along the equatorward part of the front. (left) The mean distance of maximum precipitation along the front. Specifically, for each y coordinate, the distance is found at which maximum precipitation ahead of the front lies, only searching 24 km ahead of the front at each y coordinate to avoid identifying warm-frontal precipitation. (right) Along-frontal precipitation variance (as described in Fig. 8).
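The Fig. 15 caption spells out two per-time-step diagnostics. The NumPy sketch below implements them under stated assumptions: a 4-km grid, a known front position per row, and the along-frontal variance taken over the per-row precipitation maxima (the exact variance definition is described with Fig. 8 and is not reproduced here); all inputs are placeholders.

```python
import numpy as np

def frontal_precip_diagnostics(precip, front_x, dx_km=4.0, search_km=24.0):
    """precip: 2-D precipitation rate indexed (y, x); front_x: x index of the front at each y.
    Returns (mean distance in km of the precipitation maximum ahead of the front,
    along-frontal variance of those per-row maxima). Only the first `search_km` ahead
    of the front are searched, to avoid picking up warm-frontal precipitation."""
    n_ahead = int(search_km / dx_km)
    distances, maxima = [], []
    for j, xf in enumerate(front_x):
        window = precip[j, xf:xf + n_ahead + 1]
        k = int(np.argmax(window))
        distances.append(k * dx_km)
        maxima.append(window[k])
    return float(np.mean(distances)), float(np.var(maxima))

rng = np.random.default_rng(1)
precip = rng.gamma(2.0, 1.5, size=(100, 200))   # hypothetical field, mm per hour
front_x = np.full(100, 80, dtype=int)           # hypothetical front position per row
print(frontal_precip_diagnostics(precip, front_x))
```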
Summary
Precipitation cores of anticyclonic orientation are frequently observed on radar along surface cold fronts of varying width, length, curvature, wavelength, and angle made with the surface cold front. Previous studies have related variability in the morphology of precipitation cores to variability in the wind and thermal profiles above the surface cold front. These studies have either been observational studies, in which the sensitivity of the cores to the atmospheric conditions is not investigated, or simple idealized-model studies, in which the horizontal homogeneity of the large-scale deformation prevents an analysis of how the synoptic environment leads to differences in precipitation morphology between different parts of the front.

FIG. 16. As in Fig. 11, but comparing simulations of various SSTs and with different contour intervals for hydrometeors. MINUS2K is shown after 29 h (as opposed to 35 h when the other simulations are shown) because thereafter in MINUS2K the precipitation along the front (and hence the cores and gaps) decays, as illustrated in the time series in Figs. 15a and 15b.
In this study, a more realistic primitive equation model (WRF) was run at high resolution to investigate the sensitivity of cores in an idealized framework.
Moist idealized baroclinic-wave simulations were performed, in which a mature cold front at 20-km grid spacing, subject to heat and momentum fluxes from the lower boundary, was reinitialized with a nested domain of 4-km grid spacing inserted to simulate clockwise-oriented precipitation cores along the cold front.

FIG. 17. As in Fig. 11, but comparing simulations of various roughness lengths.
In the control simulation, where the lower boundary has roughness length appropriate for the ocean and an SST distribution equal to the initial temperature at the lowest model level, a continuous narrow cold-frontal rainband persists along the poleward part of the front, whereas at the equatorward end of the front periodic precipitation maxima appear within the narrow rainband, resembling those observed on radar.
The precipitation cores bear resemblance to counterpropagating mesovortices on either side of the front. The postfrontal mesovortices form on a line of vorticity behind the cold front, whereas the prefrontal mesovortices are attached to the line of maximum vorticity along the cold front. These mesovortices generate undulations in the lighter prefrontal and postfrontal precipitation and eventually interact across the cold front to generate the precipitation cores along the front. However, the precipitation cores are aligned along the winds just above rather than at the surface, indicating that ascent at the surface is unaffected, whereas ascent slightly aloft is rotated by the cross-frontal winds, tilting updrafts. These cross-frontal winds are associated with cross-frontal vertical shear, which is of similar magnitude at both the poleward and equatorward ends of the front. However, at the equatorward end, there is a weaker cold front and vertical shear of the along-frontal winds is hardly any larger than that of the cross-frontal winds. Contrastingly, at the poleward end, although the cross-frontal shear is similar to that at the equatorward end, the along-frontal shear is more than double the magnitude of the cross-frontal shear, associated with a more ana-type front that persists through the simulation and keeps updrafts erect.
At the surface, the line of maximum vorticity remains intact along both the poleward and equatorward parts of the surface cold front throughout the control simulation and, despite the formation of the precipitation cores at the equatorward end, precipitation remains continuous along the front. However, sensitivity simulations reveal that, when surface friction is greater and the surface wind shift across the front breaks down, the mesovortices dominate the surface wind field (as well as aloft), so that large gaps of absent precipitation form between cores. In simulations with the highest friction, the cold front eventually becomes poorly defined, so that there is no coherent structure to the vorticity or precipitation maxima. The sensitivity simulations also reveal that greater SST and hence reduced static stability, although not affecting the surface front, leads to more erect updrafts above the surface front and hence a more continuous rainband with no distinctive precipitation cores.
All the simulations in this paper illustrate that mesovortices may form, both ahead of and behind a cold front. When a cold front has a well-defined wind shift and temperature gradient extending well above the surface, the magnitude of vorticity in these mesovortices is much smaller than that along the cold front, so that the strip of vorticity along the cold front remains intact, updrafts remain erect, and precipitation keeps falling along the cold front. When the wind shift and temperature gradient rapidly decay with height and there is little vertical structure to the cold front, mesovortices may dominate the wind field and hence vertical velocity just above the surface. However, as long as the surface cold front remains intact, the rainband remains continuous along the surface cold front, despite the formation of maxima within it, and the surface cold front provides a medium along which perturbations can propagate. In some cases, the wind shift and temperature gradient may weaken dramatically, including at the surface, allowing the mesovortices to dominate the full three-dimensional flow field, so that large gaps of precipitation appear along the front. In this case, there is no surface cold front for the cores to propagate along and the precipitation field becomes increasingly poorly defined. This paper has shown that this case may occur in the event of high friction, greatly reducing surface winds and leading to a poorly defined cold front.
These sensitivity simulations may explain much of the observed variability in the core-and-gap morphology along cold fronts. As shown on radar and by these simulations, the morphology of precipitation cores may vary along the cold front at a given time, over time during a cold front's evolution, and between different cold fronts. These simulations demonstrate how the precipitation cores depend on variations in the synoptic environment, and the resulting variations in the wind and temperature profiles above the surface cold front.
FIG. 1. Met Office precipitation-radar composites at 1-km grid spacing, expressed as precipitation rate in mm h⁻¹, during the passage of cold fronts over the British Isles at (a) 1600 UTC 29 Nov 2011, (b) 1350 UTC 2 Nov 2013, (c) 1215 UTC 11 Nov 2010, (d) 2000 UTC 18 Dec 2013, and (e) 1800 UTC 22 Nov 2012. Boxes are drawn in (a)-(d) to highlight finescale precipitation cores, but the larger-scale precipitation cores in (a) over southern England should be self-evident.
FIG. 9. Time series of the correlation between precipitation rate and relative vorticity between the two bold lines in Fig. 3, calculated at the poleward and equatorward ends separately. Separate time series are shown at various vertical levels, indicating how well aligned surface precipitation is to the relative vorticity at each level.
FIG. 13. (a) Precipitation rate (mm h⁻¹) and 2-m temperature (black contours every 1 K) in the smaller area indicated in Fig. 11c at 35 h. (b)-(d) Cross sections at locations indicated in (a), which are as in the right-hand panels of Fig. 11, but only up to 700 hPa to emphasize near-surface vertical structures and with different contour intervals.
"Environmental Science",
"Physics"
] |
Interleukin 32 Promotes Foxp3+ Treg Cell Development and CD8+ T Cell Function in Human Esophageal Squamous Cell Carcinoma Microenvironment
Proinflammatory cytokine interleukin 32 (IL-32) is involved in infectious diseases and cancer, but which subtypes of immune cells express IL-32 and what roles it plays in the tumor microenvironment (TME) have not been well characterized. In this study, we applied bioinformatics to analyze single-cell RNA sequencing data on tumor-infiltrating immune cells from the esophageal squamous cell carcinoma (ESCC) TME and analyzed IL-32 expression in different immune cell types. We found that CD4+ regulatory T cells (Treg cells) express the highest level of IL-32, while proliferating T and natural killer cells expressed relatively lower levels. Knockdown of IL-32 reduced Foxp3 and interferon gamma (IFNγ) expression in CD4+ and CD8+ T cells, respectively. IL-32 was positively correlated with Foxp3, IFNG, and GZMB expression but was negatively correlated with proliferation score. IL-32 may have a dual role in the TME: it promotes IFNγ expression in CD8+ T cells, which enhances antitumor activity, but at the same time induces Foxp3 expression in CD4+ T cells, which suppresses the tumor immune response. Our results demonstrate different roles of IL-32 in Treg cells and CD8+ T cells and suggest that it can potentially be a target for ESCC cancer immunosuppressive therapy.
INTRODUCTION
Interleukin 32 (IL-32) is a novel cytokine related to cancer and immune diseases. It is also one of the essential members of the inflammatory cytokine network and is expressed in both immune and non-immune cells. Inflammatory cytokines, including IL-2, IL-1β, and IFNγ, induce the secretion of IL-32 (Rosenow and Menzler, 2013). In early studies, IL-32 was upregulated in several inflammatory diseases, such as rheumatoid arthritis, inflammatory bowel disease, allergic rhinitis, and multiple sclerosis (Cagnard et al., 2005; Shioya et al., 2007; Jeong et al., 2011; Morsaljahan et al., 2017). However, several recent studies have found that IL-32 negatively regulates immune function in some immune diseases such as asthma (Xin et al., 2018), HIV infection (Palstra et al., 2018), Alzheimer disease (Yun et al., 2015), and non-alcoholic fatty liver disease (Dali-Youcef et al., 2019). IL-32 has further been reported to be associated with the occurrence and development of various malignant tumors such as gastric cancer, lung cancer, and cutaneous T-cell lymphoma (Sorrentino and Di Carlo, 2009; Kang et al., 2012; Yousif et al., 2013; Tsai et al., 2014; Gruber et al., 2020), suggesting it has a critical role in tumor development. However, which types of tumor-infiltrating lymphocytes express IL-32 and the exact role of IL-32 in these cells are still unclear and need further study.
Regulatory T cells (Treg cells) are an integral part of the immune system that maintains immunological tolerance. At the same time, they suppress the antitumor immune response, thereby triggering tumor immune escape (Togashi et al., 2019). Previous studies have found that IL-32 can be detected in esophageal squamous cell carcinoma (ESCC) immortalized cell lines. Moreover, the combination of IL-32 expression on tumor cells and Treg infiltration was identified as an independent prognostic factor in ESCC (Nabeki et al., 2015). However, the role and the significance of IL-32 in infiltrating Treg cells in the tumor microenvironment (TME) still need to be explored.
Our study utilized the published single-cell RNA sequencing (scRNA-seq) data to analyze the expression and functional characteristics of IL-32 in immune cells in the microenvironment of ESCC. IL-32 was primarily expressed in T and natural killer (NK) cells, whereas B cells and monocytes/macrophages expressed a lower level. Interestingly, we found that Treg cells express the highest level of IL-32 among the T cell subsets. IL-32 was positively correlated with Foxp3, IFNγ, and GZMB expression but was negatively associated with proliferation score. Furthermore, knockdown of IL-32 decreased Foxp3 expression in the Treg cell-inducing system; additionally, inhibiting IL-32 expression in CD8+ T cells diminished IFNγ production. According to these results, we speculate that T cells that express IL-32 may have a contradictory role: IL-32 promotes IFNγ expression in CD8+ T cells, which enhances antitumor activity, and induces Foxp3 expression in CD4+ T cells, which suppresses the tumor immune response.
ESCC scRNA-Seq Data Acquisition
The raw data of ESCC in this study were downloaded from the Gene Expression Omnibus database (GSE145370), including seven ESCC tumor and paired adjacent tissues (Zheng et al., 2020).
scRNA-Seq Data Analysis
The data analysis pipeline, including conversion of raw files to FASTQ, barcode identification, UMI extraction, filtering, and read mapping, was the same as the method described in our published article (Zheng et al., 2020). Briefly, the 10x Genomics Cell Ranger (version 3.0.1) pipeline was used to demultiplex raw files into FASTQ files, extract barcodes and UMIs, filter, and map reads to the GRCh38 reference genome and generate a matrix containing normalized gene counts versus cells per sample. This output was then imported into the Seurat (v3) R toolkit for quality control and downstream analysis. All functions were run with default parameters unless otherwise specified. Low-quality cells (<400 genes/cell and >10% mitochondrial genes) were excluded. As a result, 80,787 cells with a median of 1,170 detected genes per cell were included in downstream analyses. To remove the batch effect, the datasets collected from different samples were integrated using Seurat v3 with default parameters.
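The study itself performed this quality control in R with Seurat v3. As a rough Python analogue of the filtering described above (≥400 detected genes per cell, ≤10% mitochondrial counts), a scanpy sketch is shown below; the input path is hypothetical and the Seurat-style batch integration step is omitted.

```python
import scanpy as sc

# Read a (hypothetical) Cell Ranger output directory for one sample.
adata = sc.read_10x_mtx("sample1/filtered_feature_bc_matrix/")
adata.var["mt"] = adata.var_names.str.startswith("MT-")          # flag mitochondrial genes
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)

# Exclude low-quality cells: <400 detected genes per cell or >10% mitochondrial counts.
adata = adata[(adata.obs["n_genes_by_counts"] >= 400) &
              (adata.obs["pct_counts_mt"] <= 10.0)].copy()

sc.pp.normalize_total(adata, target_sum=1e4)   # normalised gene counts per cell
sc.pp.log1p(adata)
print(adata)
```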
Dimensionality Reduction, Clustering, and Annotation
We then identified a subset of genes that exhibit high cell-to-cell variation in the dataset, which helped to represent the biological signal in downstream analyses. The Seurat function "FindVariableFeatures" was applied to identify the highly variable genes (HVGs). The top 2,000 HVGs were used for data integration. The data were scaled using "ScaleData", and the first 20 principal components were adopted for unsupervised clustering analyses using the "FindNeighbors" and "FindClusters" functions. For all 80,787 cells, we identified clusters setting the resolution parameter to 1.5, and the clustering results were visualized with the UMAP scatter plot. The marker genes of each cell cluster were identified using the receiver operating characteristic analysis provided by the Seurat "FindAllMarkers" function, taking the top genes with the largest AUC (area under the curve). The whole dataset was then categorized into NK cells, T cells, myeloid cells, mast cells, and other cells (including fibroblast cells and basal cells) according to the known markers: KLRC1, KLRD1 (NK cells), CD3G, CD3D, CD3E, CD2 (T cells), FCGR2A, CSF1R, FCER1A (myeloid cells), CD19, CD79A (B cells), TPSB2, CPA3 (mast cells), KRT19, IGFBP4, and CTSB (basal cells/fibroblasts). Clusters were also confirmed by identifying significantly highly expressed marker genes in each cluster and then comparing them with the known cell type-specific marker genes. For the 44,634 NK and T cells, we identified clusters setting the resolution parameter to 1.
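The corresponding clustering steps (2,000 HVGs, 20 principal components, clustering resolution 1.5, UMAP, per-cluster markers) can be mirrored in Python with scanpy, as sketched below. This uses a small public dataset as a stand-in, not the ESCC cells, and scanpy's Wilcoxon marker test rather than Seurat's ROC/AUC test.

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()                            # public stand-in dataset
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)    # top 2,000 HVGs, as in the text
sc.pp.scale(adata)                                      # analogue of Seurat ScaleData
sc.tl.pca(adata, n_comps=20)                            # first 20 principal components
sc.pp.neighbors(adata, n_pcs=20)                        # analogue of FindNeighbors
sc.tl.leiden(adata, resolution=1.5)                     # analogue of FindClusters(resolution = 1.5)
sc.tl.umap(adata)                                       # UMAP embedding for visualisation
sc.tl.rank_genes_groups(adata, "leiden", method="wilcoxon")  # per-cluster marker genes
print(adata.obs["leiden"].value_counts())
```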
Correlation Analysis
For the correlation between IL-32 and other genes, we selected cells in which both genes were detected (expression values > 0). The mean value of gene expression was used as the signature score, and cells whose IL-32 expression and score were both equal to 0 were eliminated. The signature gene lists for the cell cycle score were taken from published information (Navarro-Barriuso et al., 2018). Pearson correlation analysis was used for the statistical test.
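A minimal sketch of this correlation procedure is given below, assuming a cells-by-genes table of normalised expression; the gene columns and toy values are placeholders, and the zero-filtering follows the description above.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def gene_vs_signature_correlation(expr, gene, signature_genes):
    """expr: cells x genes table of normalised expression.
    The signature score is the mean expression of `signature_genes`; cells in which
    either the gene of interest or the score is zero are discarded before the test."""
    x = expr[gene].to_numpy()
    score = expr[signature_genes].mean(axis=1).to_numpy()
    keep = (x > 0) & (score > 0)
    return pearsonr(x[keep], score[keep])   # (Pearson r, p value)

# Toy expression table, only to show the call pattern.
rng = np.random.default_rng(2)
expr = pd.DataFrame(rng.gamma(1.0, 1.0, size=(500, 3)), columns=["IL32", "FOXP3", "IKZF2"])
print(gene_vs_signature_correlation(expr, "IL32", ["FOXP3", "IKZF2"]))
```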
IL-32 shRNA Lentivirus Transfection in vitro
Fresh blood was obtained from healthy volunteers after written informed consent. Studies were performed in accordance with the Declaration of Helsinki and were approved by the Research Ethics Board of Xinhua Hospital, Shanghai Jiao Tong University School of Medicine. Peripheral blood mononuclear cells (PBMCs) were isolated from fresh heparinized blood by standard density gradient centrifugation with Ficoll-Paque Plus (GE Healthcare). CD4+ and CD8+ T cells were obtained by negative selection using a human CD4+ or CD8+ T cell isolation kit (Miltenyi) and seeded at 5 × 10⁵ cells per well in 96-well plates. The complete medium (RPMI 1640 with 10% fetal bovine serum) was supplemented with 50 ng/mL recombinant human IL-2 (Peprotech), and cells were cultured for 48 h for activation.
IL-32 shRNA lentivirus (Genechem Company) was mixed with activated CD4+ and CD8+ T cells at MOI = 10 together with the infection enhancer B-1 (Genechem Company). The mixture was centrifuged at 1,200 revolutions/min for 30 min at room temperature. After 24 h, half of the culture medium was exchanged. The transfection efficiency was assessed by green fluorescent protein fluorescence under the microscope, and the mRNA level was measured using real-time polymerase chain reaction (PCR) at 72 h.
RNA Isolation and Real-Time PCR
We isolated total RNA from cell pellets using the RNeasy Mini Kit (Qiagen) and obtained first-strand cDNA using the Sensiscript Reverse Transcription Kit (Qiagen) according to the manufacturer's instructions. We determined the mRNA expression of IL-32 and GAPDH (internal control) by real-time PCR using SYBR Green master mix (Applied Biosystems). The primer sequences for IL-32 were as follows: forward 5′-CAG CTC TGA CCT GGT GCT GT-3′, reverse 5′-CCC AGT CTC AGG CAT TCT TTA T-3′, and those for GAPDH were forward 5′-GTG AAG GTC GGA GTC AAC G-3′ and reverse 5′-TGA GGT CAA TGA AGG GGT C-3′. Thermocycler conditions comprised an initial holding at 50 °C for 2 min and then at 95 °C for 10 min, followed by a two-step PCR program consisting of 95 °C for 15 s and 60 °C for 60 s for 40 cycles. We collected and analyzed data using an ABI Prism 7500 sequence detection system (Applied Biosystems). We expressed all data as a fold increase or decrease relative to the expression of GAPDH. The expression of IL-32 in a given sample was presented as 2^(−ΔCt), where ΔCt = Ct(IL-32) − Ct(GAPDH).
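For reference, the 2^(−ΔCt) normalisation described above reduces to a one-line calculation; the Ct values in the example below are hypothetical.

```python
def relative_expression(ct_target, ct_reference):
    """Relative mRNA level by the 2^(-dCt) method: dCt = Ct(target) - Ct(reference)."""
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical Ct values for IL-32 and the GAPDH internal control in one sample.
print(relative_expression(ct_target=26.4, ct_reference=18.1))
```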
Flow Cytometry
The CD4+ and CD8+ T cells were collected after cell stimulation and Treg induction for 72 h. Surface markers were stained with the appropriate antibodies CD4-PerCP-Cy5.5 and CD8-PE-Cy7 at room temperature for 30 min, and the cells were washed twice with phosphate-buffered saline (PBS). For intracellular protein staining, cells were stimulated with the cell stimulation cocktail plus protein transport inhibitors (eBioscience) for 5 h. Then, the cells were fixed and permeabilized with Cytofix/Cytoperm buffer and were stained with the antibodies IFNγ-APC-Cy7 and Foxp3-eFluor450, and isotype controls, according to the manufacturer's instructions. Flow cytometry was performed with a FACS Canto II instrument (BD Bioscience), and the analysis was done with FlowJo software (TreeStar). All the flow antibodies were from eBioscience.
Survival Analysis
The relationship between IL-32 expression and survival of ESCC patients was analyzed using an online tool (Gyorffy et al., 2010). Survival was analyzed with the Kaplan-Meier method, using the log-rank test to determine the significance of differences.
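The same Kaplan-Meier plus log-rank comparison can be reproduced locally, for example with the Python lifelines package as sketched below. The per-patient table and the median split into IL-32 high/low groups are illustrative assumptions, not the grouping used by the online tool.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up time (months), event indicator, IL-32 expression.
df = pd.DataFrame({
    "months": [12, 30, 7, 55, 41, 23, 60, 18],
    "event":  [1, 0, 1, 0, 1, 1, 0, 1],
    "il32":   [5.1, 2.3, 6.0, 1.8, 4.4, 5.6, 2.0, 3.9],
})
high = df["il32"] > df["il32"].median()    # illustrative split into IL-32 high vs low groups

km = KaplanMeierFitter()
km.fit(df.loc[high, "months"], df.loc[high, "event"], label="IL-32 high")
ax = km.plot_survival_function()
km.fit(df.loc[~high, "months"], df.loc[~high, "event"], label="IL-32 low")
km.plot_survival_function(ax=ax)

res = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print("log-rank p =", res.p_value)
```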
Statistical Analysis
Statistical significance was determined by using the GraphPad Prism 5.0 (GraphPad, Inc.) and R (v4.0.4). Measured data were presented as the mean ± SEM; two-tailed Student t test was applied to compare quantitative data, whereas other statistical methods are described in the above "Methods" sections and in the figure legends.
IL-32 Is Overexpressed in T and NK Cells in the TME
We used published scRNA-seq data to analyze IL-32 expression in ESCC CD45+ tumor-infiltrating immune cells (Zheng et al., 2020). According to the scRNA-seq data annotation and canonical markers, we classified several dominant cell subsets among the immune cells, such as T cells, B cells, NK cells, myeloid cells, mast cells, and "other cells" that stand for the non-immune cells (Figures 1A,B). First, we analyzed IL-32 expression in the different immune cell subsets. The Seurat function "FindMarkers" was applied, and the p value adjustment was performed using Bonferroni correction based on the total number of genes in the dataset by default. Consistent with previous reports (Dahl et al., 1992; Cheon et al., 2011), IL-32 expression was higher in T and NK cells (Figures 1C,D). However, B cells, myeloid cells, mast cells, and other cells barely expressed IL-32 (Figures 1C,D). We further analyzed IL-32 expression between the tumor and adjacent tissues. The data showed that IL-32 expression in T and NK cells in tumor tissues was slightly higher in comparison to the adjacent tissues (Figure 1E). A recent report noted that IL-32 acted as an essential growth factor for human cutaneous T-cell lymphoma cells (Suga et al., 2014). IL-32 also augmented the cytotoxic effect of NK-92 cells on cancer cells through activation of DR3 and caspase-3 cell signaling. Our data suggest that IL-32 is potentially involved in the regulative function of T and NK cells and plays an important role in tumor surveillance.
IL-32 Is Dominant in CD4 + Treg Cells
T and NK cells are the primary cell types for antitumor activity in the TME; thus, we next determined IL-32 expression in different T and NK cell subsets. First, we performed unsupervised reclustering of the CD4+ T, CD8+ T, and NK cells from ESCC (Figures 2A,B). According to the annotation (Zheng et al., 2020), we grouped the CD4+ T cells into naive, Th, proliferation, and Treg cells, and the CD8+ T cells into cytotoxic, proliferation, and exhausted subsets. NK cells were divided into cytotoxic, tolerogenic, and proliferation subsets (Figures 2C,D). We then compared IL-32 expression among groups. The Seurat function "FindMarkers" was applied, and the p value adjustment was performed using Bonferroni correction based on the total number of genes in the dataset by default. IL-32 expression in CD4+ T cells was significantly higher than that in CD8+ T and NK cells (Figure 2E). Interestingly, in CD4+ T cells, IL-32 expression was much higher in Treg cells, while in CD8+ T and NK cells, IL-32 expression was much higher in cytotoxic cells. Notably, the proliferating CD4, CD8, and NK cell subsets expressed relatively lower levels of IL-32 (Figures 2F-H). We further evaluated IL-32 expression in the T cell subsets of tumor and adjacent tissues. The data showed that in CD4+ T cells, naive CD4 and Treg cells in tumor expressed much more IL-32 than in adjacent tissue (Figure 2I), whereas in CD8+ T cell subsets, IL-32 expression was much higher in adjacent than in tumor tissues (Figure 2J). In NK cells, IL-32 expression in the cytotoxic and exhausted subsets was much higher in tumor tissue (Figure 2K).

(Figure legend fragment: data were measured as log10 of the percentage of positive cells; p values were calculated by a two-tailed Student t test; **p < 10⁻¹⁰, *p < 0.001.)
These data suggest that the ESCC TME may induce or inhibit IL-32 expression in different T cell subsets.
IL-32 Negatively Correlates With Cell Cycle Score While It Positively Correlates With Foxp3 and Cytotoxic Molecules IFNG and GZMB
Our previous data showed that IL-32 expression was far higher in Treg cells than in proliferating T cells, and IL-32 has been described as an inflammatory cytokine. Next, we examined the correlations between IL-32 and Treg cell transcription factors such as Foxp3 and IKZF2 (Barbi et al., 2014; Ng et al., 2019), cell cycle scores (G1S and G2M), and the cytotoxic molecules IFNG and GZMB, respectively. As expected, IL-32 expression was positively correlated with Foxp3 and IKZF2 in CD4+ T cells (Figure 3A), and with GZMB and IFNG in CD8+ T cells (Figure 3B) in ESCC patients. However, IL-32 expression was negatively correlated with cell cycle scores in CD4+ (Figure 3C) and CD8+ T cells, respectively (Figure 3D). Furthermore, we used the ESCA bulk RNA sequencing data from TCGA and analyzed the correlation between IL-32 and Foxp3 in CD4+ T cells, and GZMB and IFNG in CD8+ T cells. Consistent with the scRNA-seq data, IL-32 expression was positively correlated with Foxp3, GZMB, and IFNG (Figure 3E). These data suggest that IL-32 might be involved in Treg cell function and cytotoxic CD8+ T cell function.

(Figure legend fragment: correlations between IL-32 and Foxp3 expression (left panel), with gene expression normalized to CD4; correlations between IL-32 and IFNG (middle panel) and GZMB (right panel) expression, with gene expression normalized to CD8A, from the ESCA bulk RNA sequencing data. The R value represents the correlation between the x- and y-axis values; R > 0 means a positive correlation, R < 0 a negative correlation, and p < 0.01 indicates that the correlation was statistically significant.)
Knockdown of IL-32 Gene Inhibits the Development of Treg Cells and IFNγ Production in CD8 + T Cells
To demonstrate the relationship between IL-32 and Treg cells, we used shRNA to knock down IL-32 expression in CD4+ T cells and detected Foxp3 expression in the in vitro Treg cell induction system. The data showed that when IL-32 was knocked down in CD4+ T cells, IL-32 mRNA expression was significantly decreased (Figure 4A). Foxp3 expression was significantly lower in the knockdown group than in the control group following stimulation or Treg cell induction (Figure 4B). Additionally, when IL-32 was knocked down in CD8+ T cells, IFNγ production was decreased relative to the control group (Figure 4C). These results demonstrate that IL-32 might have a dual role, contributing differently to Treg and cytotoxic CD8+ T cell development; the underlying mechanisms need to be elaborated in a future study.
We further used Kaplan-Meier plotter to analyze patient survival; 81 ESCC patients from the dataset were included. The group of patients with a high level of IL-32 expression was compared to the low-level group. Increasing expression of IL-32 was not positively or negatively associated with the overall survival or disease-free survival of ESCC patients (Figure 4D).

(Figure 4 legend fragment: the stimulation conditions were anti-CD3 and anti-CD28 antibody-activated CD4+ T cells (top) and IL-2 plus transforming growth factor β to induce Treg production (bottom); one of three similar experiments is presented; data are presented as the mean ± SEM; p values were calculated by a two-tailed Student t test. (C) Flow cytometry measurement of IFNγ expression in CD8+ T cells transfected with IL-32 shRNA or the vector control; one of three similar experiments is presented; data are mean ± SEM; p value by two-tailed Student t test. (D) Kaplan-Meier plots of overall survival (left) and disease-free survival (right) by IL-32 expression for the ESCC patients (n = 81) from the TCGA data.)
One possible reason is that only 81 ESCC patients were included in the analysis, so the lack of significance is most likely due to the small sample size; another possible reason is that the different roles of IL-32 in Treg and cytotoxic CD8+ T cells might be responsible for these effects. In a subsequent study, more patients will be needed to reach more definitive conclusions.
DISCUSSION
IL-32 has been reported to regulate cell growth, metabolism, and immune responses. Therefore, it participates in the pathological regulation and protection of inflammatory diseases and cancer. Kim and colleagues recently demonstrated that IL-32γ functions through a cytoplasmic event, not a paracrine or autocrine pathway, suggesting that IL-32γ functions as a non-cytokine-like molecule in hepatitis B virus (HBV) suppression (Kim et al., 2018). Previous studies established that IL-32 was upregulated in patients with several inflammatory diseases and was induced by inflammatory responses. However, several reports suggested that IL-32 was downregulated in several inflammatory diseases, including asthma, HIV infection, neuronal diseases, metabolic disorders, and experimental colitis (Hong et al., 2017). Furthermore, some recent data indicated that IL-32 induced anti-inflammatory cytokines, such as IL-10 (Kang et al., 2009), and immunosuppressive molecules such as IDO in macrophages through the STAT3 and nuclear factor κB pathways, and promoted multiple myeloma development (Smith et al., 2011; Yan et al., 2019). These data suggest that IL-32 may play different roles in different immune cells and perform different activities in inflammatory disease. Nevertheless, exactly which T and NK cell subsets express IL-32, and the significance of this expression, have not been well addressed.
Using the published ESCC scRNA-seq data, we found that IL-32 expression was dominant in T and NK cells, consistent with previous studies (Kim et al., 2005; Cheon et al., 2011; Park et al., 2012). Our study further analyzed IL-32 expression in the T cell and NK cell subsets and showed that Treg cells express a much higher level of IL-32 than other T cell subsets. In contrast, proliferating and exhausted T and NK cells expressed a much lower level of IL-32 in ESCC. IL-32 was negatively correlated with the cell cycle but was positively correlated with the expression of Foxp3 and the cytotoxic molecules IFNG and GZMB. In human melanoma, colon cancer, breast cancer, and other cancer types, IL-32 can be induced by tumor necrosis factor (TNFα) and IFNγ to inhibit cancer development, and its high expression may be related to the therapeutic effect of PD1 blockade (Bhat et al., 2017; Paz et al., 2019). The upregulation of IL-32 in colon cancer and prostate cancer can enhance the killing function of NK cells. In addition, IL-32 can activate the expression of several cytokines, such as IL-6, TNFα, and IFNγ, in immune cells, and inhibits HIV-1 (Nold et al., 2008). In our experiment, we found that when IL-32 was knocked down in CD8+ T cells, IFNγ expression decreased, suggesting that IL-32 may be involved in the development of cytotoxic CD8+ T cells. This is consistent with previous studies showing that IL-32 may be involved in the secretion of IFNγ, Th1 responses, and the maintenance of killer T cells in HIV (Santinelli et al., 2019).
The high expression of IL-32 in ESCC tumor cells combined with a high proportion of Treg cell infiltration has been associated with a poor prognosis, which suggests that IL-32 may indeed have a specific relationship with the differentiation of T cells, the secretion of cytokines, and even the development of Tregs in the TME. IL-32 has nine alternatively spliced isoforms; the IL-32α and IL-32β isoforms are thought to be the major isoforms predominantly expressed in various cells (Hong et al., 2017). To date, how the IL-32 isoforms are secreted remains unclear and somewhat controversial. The IL-32γ isoform is thought to be a secreted cytokine that possesses a hydrophobic signal peptide at its N-terminus. However, IL-32β is detected in the intracellular fraction, IL-32α is not secreted by anti-CD3 antibody-activated human T cells, and the IL-32β found in the supernatant is derived from the cytoplasm of apoptotic T cells (Kim et al., 2005; Goda et al., 2006). In our experiment, we found that when IL-32 was knocked down in CD8+ T cells, IFNγ expression decreased; similarly, Foxp3 expression was reduced when IL-32 was knocked down by shRNA in CD4+ T cells, suggesting that IL-32 may be involved in the development of cytotoxic CD8+ T cells and Treg cells. However, whether it works in an autocrine or a cell-intrinsic fashion is not clear and needs to be addressed in future work. In summary, our data show that IL-32 may have both antitumor and immunosuppressive roles in the ESCC TME. IL-32 may promote IFNγ expression in CD8+ T cells, which enhances antitumor activity, but at the same time induce Foxp3 expression in CD4+ T cells, which could suppress the tumor immune response. Our study suggests that blocking IL-32 may reduce Treg cell development, or that increasing IL-32 expression may enhance cytotoxic CD8+ T cell function, in ESCC tumor immunotherapy.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by The Ethical Committee of Xin Hua Hospital, Shanghai Jiao Tong University School of Medicine. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
LS and YZ designed the experiments. LH, SC, ZC, and BZ performed and analyzed the experimental data. LS, YZ, and LH wrote the manuscript, with all authors contributing to writing and providing feedback.
"Biology"
] |
Measurement of the top quark mass with the template method in the tt → lepton + jets channel using ATLAS data
The top quark mass has been measured using the template method in the tt → lepton + jets channel based on data recorded in 2011 with the ATLAS detector at the LHC. The data were taken at a proton-proton centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 1.04 fb^(−1). The analyses in the e + jets and μ + jets decay channels yield consistent results. The top quark mass is measured to be m_top = 174.5 ± 0.6 (stat) ± 2.3 (syst) GeV.
Introduction
The top quark mass (m top) is a fundamental parameter of the Standard Model (SM) of particle physics. Due to its large mass, the top quark gives large contributions to electroweak radiative corrections. Together with precision electroweak measurements, the top quark mass can be used to derive constraints on the masses of the as yet unobserved Higgs boson [1,2], and of heavy particles predicted by extensions of the SM. After the discovery of the top quark in 1995, much work has been devoted to the precise measurement of its mass. The present average value of m top = 173.2 ± 0.6 (stat) ± 0.8 (syst) GeV [3] is obtained from measurements at the Tevatron performed by CDF and DØ with Run I and Run II data corresponding to integrated luminosities of up to 5.8 fb −1. At the LHC, m top has been measured by CMS in tt events in which both W bosons from the top quark decays themselves decay into a charged lepton and a neutrino [4]. The main methodology used to determine m top at hadron colliders consists of measuring the invariant mass of the decay products of the top quark candidates and deducing m top using sophisticated analysis methods. The most precise measurements of this type use the tt → lepton+jets channel, i.e. the decay tt → ℓν b_ℓ q1 q2 b_had with ℓ = e, µ, where one of the W bosons from the tt decay decays into a charged lepton and a neutrino and the other into a pair of quarks, and where b_ℓ (b_had) denotes the b-quark associated with the leptonic (hadronic) W boson decay. In this paper these tt decay channels are referred to as e+jets and µ+jets channels.
In the template method, simulated distributions are constructed for a chosen quantity sensitive to the physics observable under study, using a number of discrete values of that observable. These templates are fitted to functions that interpolate between different input values of the physics observable, fixing all other parameters of the functions. In the final step a likelihood fit to the observed data distribution is used to obtain the value for the physics observable that best describes the data. In this procedure, the experimental distributions are constructed such that they are unbiased estimators of the physics observable used as an input parameter in the signal Monte Carlo samples. Consequently, the top quark mass determined this way from data corresponds to the mass definition used in the Monte Carlo. It is expected [5] that the difference between this mass definition and the pole mass is of order 1 GeV.
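As a purely illustrative sketch of the template idea described above (not the analysis code or the actual templates), the toy example below parameterises the estimator's distribution at a few assumed m_top values, interpolates the parameters between them, and takes the value that maximises the likelihood of pseudo-data; all numbers are invented.

```python
import numpy as np

mtop_points = np.array([160.0, 170.0, 172.5, 175.0, 180.0, 190.0])   # input masses, GeV
template_means = 0.9 * mtop_points + 10.0          # hypothetical calibration of the estimator's mean
template_widths = np.full_like(mtop_points, 12.0)  # hypothetical constant resolution, GeV

def template_pdf(x, mtop):
    """Gaussian template whose parameters are interpolated between the discrete input masses."""
    mu = np.interp(mtop, mtop_points, template_means)
    sigma = np.interp(mtop, mtop_points, template_widths)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(3)
data = rng.normal(0.9 * 174.0 + 10.0, 12.0, size=2000)    # pseudo-data generated at m_top = 174 GeV

scan = np.linspace(165.0, 185.0, 401)
nll = np.array([-np.log(template_pdf(data, m)).sum() for m in scan])
print("fitted m_top = %.2f GeV" % scan[np.argmin(nll)])
```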
The precision of the measurement of m top is limited mainly by the systematic uncertainty from a few sources. In this paper two different estimators for m top are developed, which have only a small statistical correlation and use different strategies to reduce the impact of these sources on the final uncertainty. This choice translates into different sensitivities to the uncertainty sources for the two estimators. The first implementation of the template method is a one-dimensional template analysis (1d-analysis), which is based on the observable R 32 , defined as the per event ratio of the reconstructed invariant masses of the top quark and the W boson reconstructed from three and two jets respectively. For each event, an event likelihood is used to select the jet triplet assigned to the hadronic decays of the top quark and the W boson amongst the jets present in the event. The second implementation is a two-dimensional template analysis (2d-analysis), which simultaneously determines m top and a global jet energy scale factor (JSF) from the reconstructed invariant masses of the top quark and the W boson. This method utilises a χ 2 fit that constrains the reconstructed invariant mass of the W boson candidate to the world-average W boson mass measurement [6].
The paper is organised as follows: details of the ATLAS detector are given in Section 2, the data and Monte Carlo simulation samples are described in Section 3. The common part of the event selections is given in Section 4, followed by analysis-specific requirements detailed in Section 5. The specific details of the two analyses are explained in Section 6 and Section 7. The measurement of m top is given in Section 8, where the evaluation of the systematic uncertainties is discussed in Section 8.1, and the individual results and their combination are reported in Section 8.2. Finally, the summary and conclusions are given in Section 9.
The ATLAS detector
The ATLAS detector [7] at the LHC covers nearly the entire solid angle around the collision point.¹ It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and an external muon spectrometer incorporating three large superconducting toroid magnet assemblies.
The inner-detector system is immersed in a 2 T axial magnetic field and provides charged particle tracking in the range |η| < 2.5. The high-granularity silicon pixel detector covers the vertex region and provides typically three measurements per track, followed by the silicon microstrip tracker which provides four measurements from eight strip layers. These silicon detectors are complemented by the transition radiation tracker, which enables extended track reconstruction up to |η| = 2.0. In giving typically more than 30 straw-tube measurements per track, the transition radiation tracker improves the inner detector momentum resolution, and also provides electron identification information.

¹ ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Transverse momentum and energy are defined as p_T = p sin θ and E_T = E sin θ, respectively.
The calorimeter system covers the pseudorapidity range |η| < 4.9. Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and end cap lead/liquid argon (LAr) electromagnetic calorimeters, with an additional thin LAr presampler covering |η| < 1.8 to correct for energy loss in material upstream of the calorimeters. Hadronic calorimetry is provided by the steel/scintillating-tile calorimeter, segmented into three barrel structures within |η| < 1.7, and two copper/LAr hadronic endcap calorimeters. The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules optimised for electromagnetic and hadronic measurements respectively.
The muon spectrometer comprises separate trigger and high-precision tracking chambers measuring the deflection of muons in a magnetic field with a bending integral up to 8 Tm in the central region, generated by three superconducting air-core toroids. The precision chamber system covers the region |η| < 2.7 with three layers of monitored drift tubes, complemented by cathode strip chambers in the forward region. The muon trigger system covers the range |η| < 2.4 with resistive plate chambers in the barrel, and thin gap chambers in the endcap regions.
A three-level trigger system is used. The first level trigger is implemented in hardware and uses a subset of detector information to reduce the event rate to a design value of at most 75 kHz. This is followed by two software-based trigger levels, which together reduce the event rate to about 300 Hz.
Data and Monte Carlo samples
In this paper, data from LHC proton-proton collisions are used, collected at a centre-of-mass energy of √ s = 7 TeV with the ATLAS detector during March-June 2011. An integrated luminosity of 1.04 fb −1 is included.
Simulated tt events and single top quark production are both generated using the Next-to-Leading Order (NLO) Monte Carlo program MC@NLO [8,9] with the NLO parton density function set CTEQ6.6 [10]. Parton showering and underlying event (i.e. additional interactions of the partons within the protons that underwent the hard interaction) are modelled using the Herwig [11] and Jimmy [12] programs. For the construction of signal templates, the tt and single top quark production samples are generated for different assumptions on m top using six values (in GeV) namely (160, 170, 172.5, 175, 180, 190), and with the largest samples at m top = 172.5 GeV. All tt samples are normalised to the corresponding cross-sections, obtained with the latest theoretical computation approximating the NNLO prediction and implemented in the HATHOR package [13]. The predicted tt cross-section for a top quark mass of m top = 172.5 GeV is 164.6 pb, with an uncertainty of about 8%.
The production of W bosons or Z bosons in association with jets is simulated using the Alpgen generator [14] interfaced to the Herwig and Jimmy packages. Diboson production processes (W W , W Z and ZZ) are produced using the Herwig generator. All Monte Carlo samples are generated with additional multiple soft proton-proton interactions. These simulated events are re-weighted such that the distribution of the number of interactions per bunch crossing (pileup) in the simulated samples matches that in the data. The mean number of primary vertices per bunch crossing for the data of this analysis is about four. The samples are then processed through the GEANT4 [15] simulation [16] and the reconstruction software of the ATLAS detector.
Event selection
In the signal events the main reconstructed objects in the detector are electron and muon candidates as well as jets and missing transverse momentum (E_T^miss). An electron candidate is defined as an energy deposit in the electromagnetic calorimeter with an associated well-reconstructed track. Electron candidates are required to have transverse energy E_T > 25 GeV and |η_cluster| < 2.47, where η_cluster is the pseudorapidity of the electromagnetic cluster associated with the electron. Candidates in the transition region between the barrel and end-cap calorimeter, i.e. candidates fulfilling 1.37 < |η_cluster| < 1.52, are excluded. Muon candidates are reconstructed from track segments in different layers of the muon chambers. These segments are combined starting from the outermost layer, with a procedure that takes material effects into account, and matched with tracks found in the inner detector. The final candidates are refitted using the complete track information, and are required to satisfy p_T > 20 GeV and |η| < 2.5. Isolation criteria, which restrict the amount of energy deposits near the candidates, are applied to both electron and muon candidates to reduce the background from hadrons mimicking lepton signatures and backgrounds from heavy flavour decays inside jets. For electrons, the energy not associated to the electron cluster and contained in a cone of ∆R = √((∆φ)² + (∆η)²) = 0.2 must not exceed 3.5 GeV, after correcting for energy deposits from pileup, which are of the order of 0.5 GeV. For muons, the sum of track transverse momenta and the total energy deposited in a cone of ∆R = 0.3 around the muon are both required to be less than 4 GeV.
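A minimal sketch of the cone-based electron isolation just described is given below; the deposit list, coordinates, and energies are invented purely to show the ∆R calculation and the 3.5 GeV requirement.

```python
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """dR = sqrt((dphi)^2 + (deta)^2), with dphi wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi
    return np.sqrt(dphi ** 2 + (eta1 - eta2) ** 2)

def electron_is_isolated(ele_eta, ele_phi, deposits, cone=0.2, max_et=3.5):
    """deposits: iterable of (E_T in GeV, eta, phi) for pileup-corrected calorimeter deposits
    not belonging to the electron cluster. True if their summed E_T inside the cone is below max_et."""
    et_in_cone = sum(et for et, eta, phi in deposits
                     if delta_r(ele_eta, ele_phi, eta, phi) < cone)
    return et_in_cone < max_et

# Hypothetical deposits around an electron candidate at (eta, phi) = (0.4, 1.2).
deposits = [(1.0, 0.45, 1.25), (2.0, 0.90, 2.40), (1.8, 0.35, 1.10)]
print(electron_is_isolated(0.4, 1.2, deposits))
```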
Jets are reconstructed with the anti-kt algorithm [17] with R = 0.4, starting from energy clusters of adjacent calorimeter cells called topological clusters [18]. These jets are calibrated first by correcting the jet energy using the scale established for electromagnetic objects (EM scale) and then performing a further correction to the hadronic energy scale using correction factors that depend on energy and η, obtained from simulation and validated with data [19]. Jet quality criteria [20] are applied to identify and reject jets reconstructed from energies not associated to energy deposits in the calorimeters originating from particles emerging from the bunch crossing under study. The jets failing the quality criteria, which may have been reconstructed from various sources such as calorimeter noise, non-collision beam-related background, and cosmic-ray induced showers, can efficiently be identified [20].
The reconstruction of E_T^miss is based upon the vector sum of calorimeter energy deposits projected onto the transverse plane. It is reconstructed from topological clusters, calibrated at the EM scale and corrected according to the energy scale of the associated physics object. Contributions from muons are included by using their momentum measured from the track and muon spectrometer systems in the E_T^miss reconstruction. Muons reconstructed within a ∆R = 0.4 cone of a jet satisfying p_T > 20 GeV are removed to reduce the contamination caused by muons from hadron decays within jets. Subsequently, jets within ∆R = 0.2 of an electron candidate are removed to avoid double counting, which can occur because electron clusters are usually also reconstructed as jets.
Reconstruction of top quark pair events is facilitated by the ability to tag jets originating from the hadronisation of b-quarks. For this purpose, a neural-net-based algorithm [21], relying on vertex properties such as the decay length significance, is applied. The chosen working point of the algorithm corresponds to a b-tagging efficiency of 70% for jets originating from b-quarks in simulated tt events and a light quark jet rejection factor of about 100. Irrespective of their origin, jets tagged by this algorithm are called b-jets in the following, whereas those not tagged are called light jets.
The signal is characterised by an isolated lepton with relatively high p_T, E_T^miss arising from the neutrino from the leptonic W boson decay, two b-quark jets, and two light quark jets from the hadronic W boson decay. The selection of events consists of a series of requirements on general event quality and the reconstructed objects designed to select the event topology described above. The following event selections are applied: it is required that the appropriate single electron or single muon trigger has fired (with thresholds at 20 GeV and 18 GeV, respectively); the event must contain one and only one reconstructed lepton with E_T > 25 GeV for electrons and p_T > 20 GeV for muons which, for the e+jets channel, should also match the corresponding trigger object; in the µ+jets channel, E_T^miss > 20 GeV and in addition E_T^miss + m_T^W > 60 GeV is required²; in the e+jets channel more stringent cuts on E_T^miss and m_T^W are required because of the higher level of QCD multijet background, these being E_T^miss > 35 GeV and m_T^W > 25 GeV; the event is required to have ≥ 4 jets with p_T > 25 GeV and |η| < 2.5. It is required that at least one of these jets is a b-jet.

Table 1. The observed numbers of events in the data in the e+jets and µ+jets channels, for the two analyses after the common event selection and additional analysis-specific requirements. In addition, the expected numbers of signal and background events corresponding to the integrated luminosity of the data are given, where the single top quark production events are treated as signal for the 1d-analysis, and as background for the 2d-analysis. The Monte Carlo estimates assume SM cross-sections. The W+jets and QCD multijet background contributions are estimated from ATLAS data. The uncertainties for the estimates include different components detailed in the text. All predicted event numbers are quoted using one significant digit for the uncertainties, i.e. the trailing zeros are insignificant.
This common event selection is augmented by additional analysis-specific event requirements described next.
Specific event requirements
To optimise the expected total uncertainty on m top , some specific requirements are used in addition to the common event selection.
² Here m_T^W is the W-boson transverse mass, defined as √(2 p_T,ℓ p_T,ν [1 − cos(φ_ℓ − φ_ν)]), where the measured E_T^miss vector provides the neutrino (ν) information.
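The transverse-mass definition in the footnote translates directly into code; the lepton and E_T^miss values below are hypothetical and only illustrate the µ+jets selection arithmetic.

```python
import math

def w_transverse_mass(pt_lep, phi_lep, met, phi_met):
    """m_T^W = sqrt(2 * pT(lepton) * pT(neutrino) * (1 - cos(phi_lep - phi_nu))),
    with the measured missing transverse momentum standing in for the neutrino."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(phi_lep - phi_met)))

# Hypothetical muon and E_T^miss values (GeV, radians); the mu+jets cut would then be
# E_T^miss + m_T^W > 60 GeV.
mtw = w_transverse_mass(pt_lep=35.0, phi_lep=0.3, met=30.0, phi_met=2.9)
print(mtw, 30.0 + mtw > 60.0)
```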
For the 1d-analysis, three additional requirements are applied. Firstly, only events with a converging likelihood fit (see Section 6) with a logarithm of the likelihood value ln L > −50 are retained. Secondly, all jets in the jet triplet assigned to the hadronic decay of the top quark are required to fulfill p T > 40 GeV, and thirdly the reconstructed W boson mass must lie within the range 60 GeV -100 GeV.
For the 2d-analysis the additional requirement is that only light jet pairs (see Section 7) with an invariant mass in the range 50-110 GeV are considered for the χ² fit.
The numbers of events observed and expected, with the above selection and these additional analysis-specific requirements, are given in Table 1 for both channels and both analyses. For all Monte Carlo estimates, the uncertainties are the quadratic sum of the statistical uncertainty, the uncertainty on the b-tagging efficiencies, and a 3.7% uncertainty on the luminosity [22,23]. For the QCD multijet and the W +jets backgrounds, the systematic uncertainty estimated from data [24] dominates and is used instead.
For both analyses and channels, the observed distributions for the leptons, jets, and kinematic properties of the top quark candidates, such as their transverse momenta, are all well-described by the sum of the signal and background estimates. This is demonstrated for the properties of the selected jets, before applying the analysis-specific requirements, for both channels in data. The largest differences between the central values of the combined prediction and the data are observed for the rapidity distribution, with the data being higher, especially at central rapidities. Based on the selected events, the top quark mass is measured in two ways as described below.
The 1d-analysis
The 1d-analysis is a one-dimensional template analysis using the reconstructed mass ratio R32 = m_top^reco / m_W^reco. Here m_top^reco and m_W^reco are the per-event reconstructed invariant masses of the hadronically decaying top quark and W boson, respectively.
To select the jet triplet for determining the two masses, this analysis utilises a kinematic fit maximising an event likelihood. This likelihood relates the observed objects to the tt̄ decay products (quarks and leptons) predicted by the NLO signal Monte Carlo, albeit in a Leading Order (LO) kinematic approach, using tt̄ → ℓν b_ℓ q₁q̄₂ b_had. In this procedure, the measured jets are related to the quark decay products of the W boson, q₁ and q₂, and to the b-quarks, b_ℓ and b_had, produced in the top quark decays. The E_T^miss vector is identified with the transverse momentum components of the neutrino, p_{x,ν} and p_{y,ν}.
The likelihood is defined as a product of transfer functions (T), Breit-Wigner (B) distributions, and a weight W_btag accounting for the b-tagging information. The generator-predicted quantities are marked with a circumflex (e.g. Ê_{b_had} is the energy of the b-quark from the hadronic decay of the top quark). The quantities m_W and Γ_W are fixed to their measured values; Γ_W amounts to about one fifth of the Gaussian resolution of m_W^reco. The transfer functions are derived from simulation at an input mass of m_top = 172.5 GeV, based on reconstructed objects that are matched to their generator-predicted quarks and leptons. When using a maximum separation of ΔR = 0.4 between a quark and the corresponding jet, the fraction of events with four matched jets from all selected events amounts to 30% - 40%. The transfer functions are obtained in three bins of η for the energies of the b-quark jets, E_jet1 and E_jet2, the light-quark jets, E_jet3 and E_jet4, the energy, E_e (or transverse momentum, p_T,µ), of the charged lepton, and the two components of the E_T^miss, E_x^miss and E_y^miss. In addition, the likelihood exploits the values of m_W and Γ_W to constrain the reconstructed leptonic, m(ℓν), and hadronic, m(q₁q₂), W boson masses using Breit-Wigner distributions. Similarly, the reconstructed leptonic, m(ℓν b_ℓ), and hadronic, m(q₁q₂ b_had), top quark masses are constrained to be identical, where the width of the corresponding Breit-Wigner distribution is identified with the predicted Γ_top (using its top quark mass dependence) [6]. Including the b-tagging information in the likelihood as a weight W_btag, derived from the efficiency and mistag rate of the b-tagging algorithm and assigned per jet permutation according to the role of each jet in that permutation, improves the selection of the correct jet permutation. As an example, for a permutation with two b-jets assigned to the b-quark positions and two light jets to the light-quark positions, the weight W_btag amounts to 0.48, i.e. it corresponds to the square of the b-tagging efficiency times the square of one minus the fake rate, both given in Section 4.
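The explicit product is not reproduced above. Schematically, and with the exact arguments and normalisations being assumptions here rather than the paper's formula, it has the structure

L ∝ ∏_{i=1}^{4} T(E_{jet,i} | Ê_{parton,i}) · T(E_ℓ | Ê_ℓ) · T(E_x^miss | p̂_{x,ν}) · T(E_y^miss | p̂_{y,ν}) · B(m(q₁q₂) | m_W, Γ_W) · B(m(ℓν) | m_W, Γ_W) · B(m(q₁q₂ b_had) | m_top, Γ_top) · B(m(ℓν b_ℓ) | m_top, Γ_top) · W_btag ,

i.e. one transfer function per measured object, Breit-Wigner constraints on the two W boson and the two top quark masses, and the b-tagging weight.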
With this procedure, the correct jet triplet for the hadronic top quark is chosen in about 70% of simulated signal events with four matched jets. However, if R_32 from the likelihood fit, i.e. calculated from m_top^reco,like and m_W^reco,like, is taken, a large residual jet energy scale (JES) dependence of R_32 remains. This is because in the fit m_W^reco is constrained to m_W, while m_top^reco is only constrained to be equal for the leptonic and hadronic decays of the top quarks. This spoils the desired event-by-event reduction of the JES uncertainty in the ratio R_32 [25]. To make best use of the high selection efficiency for the correct jet permutation from the likelihood fit, and the stabilisation of R_32 against JES variations, the jet permutation derived in the fit is used, but m_W^reco, m_top^reco and therefore R_32 are constructed from the unconstrained four-vectors of the jet triplet as given by the jet reconstruction.
The performance of the algorithm, shown in Figure 2 for the e+jets channel, is similar for both channels. The likelihood values of wrong jet permutations for signal events from the large MC@NLO sample are frequently considerably lower than those for the correct jet permutations, as seen in Figure 2(a). For example, the distribution for the jet permutation in which the jet from the b-quark from the leptonically decaying top quark is exchanged with one light-quark jet from the hadronic W boson decay has a second peak about ten units lower than the one for the correct jet permutation. The actual distribution of ln L values observed in the data is well described by the signal plus background predictions, as seen in Figure 2(b). The kinematic distributions of the variables used in the transfer functions are also well described by the predictions, as shown in Figure 2(c) for the example of the resulting p_T of the b-jet associated with the hadronic decay of the top quark. The resulting R_32 distributions for both channels are shown in Figure 3. They are also well accounted for by the predictions.
Signal templates are derived for the R 32 distribution for all m top dependent samples, consisting of the tt signal events, together with single top quark production events. This procedure is adopted, firstly, because single top quark production, although formally a background process, still carries information about the top quark mass and, secondly, by doing so m top independent background templates can be used. The templates are constructed for the six m top choices using the specifically generated Monte Carlo samples, see Section 3.
The R 32 templates are parameterised with a functional form given by the sum of a ratio of two correlated Gaussians and a Landau function. The ratio of two Gaussians [26] is motivated as a representation of the ratio of two correlated measured masses. The Landau function is used to describe the tails of the distribution stemming mainly from wrong jet-triplet assignments. The correlation between the two Gaussian distributions is fixed to 50%. A simultaneous fit to all templates per decay channel is used to derive a continuous function of m top that interpolates the R 32 shape differences among all mass points with m top in the range described above. This approach rests on the assumption that each parameter has a linear dependence on the top quark mass, which has been verified for both channels. The fit minimises a χ 2 built from the R 32 distributions at all mass points simultaneously. The χ 2 is the sum over all bins of the difference squared between the template and the functional form, divided by the statistical uncertainty squared in the template. The combined fit adequately describes the R 32 distributions for both channels. In Figure 4(a) the sensitivity to m top is shown in the e+jets channel by the superposition of the signal templates and their fits for four of the six input top quark masses assumed in the simulation.
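As a rough illustration of the "all shape parameters linear in m_top" idea described above, the sketch below fits toy templates at several mass points with a single simultaneous χ². It is a minimal sketch only: a plain Gaussian stands in for the actual ratio-of-correlated-Gaussians plus Landau parameterisation, and the mass points, binning, sample sizes, and starting values are invented for illustration.

```python
# Sketch of a simultaneous template fit in which every shape parameter is a
# linear function of the top-quark mass.  A Gaussian stands in for the actual
# functional form used in the paper; all numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

mass_points = np.array([165.0, 170.0, 172.5, 175.0, 180.0])   # GeV (illustrative)
bins = np.linspace(1.5, 3.5, 41)                               # toy R32 binning
centres = 0.5 * (bins[:-1] + bins[1:])

rng = np.random.default_rng(1)
templates, errors = [], []
for m in mass_points:                       # build toy "Monte Carlo" templates
    sample = rng.normal(loc=m / 80.4, scale=0.25, size=20000)  # toy R32 ~ mtop/mW
    hist, _ = np.histogram(sample, bins=bins)
    templates.append(hist)
    errors.append(np.sqrt(np.maximum(hist, 1.0)))

def shape(x, mean, sigma, norm):
    return norm * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def chi2(params):
    # Each shape parameter p is modelled as p = p0 + p1 * (mtop - 172.5),
    # and one chi2 is summed over all mass points simultaneously.
    m0, m1, s0, s1, n0, n1 = params
    total = 0.0
    for m, hist, err in zip(mass_points, templates, errors):
        dm = m - 172.5
        pred = shape(centres, m0 + m1 * dm, s0 + s1 * dm, n0 + n1 * dm)
        total += np.sum(((hist - pred) / err) ** 2)
    return total

start = [172.5 / 80.4, 0.01, 0.25, 0.0, 1500.0, 0.0]
result = minimize(chi2, start, method="Nelder-Mead",
                  options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("fitted linear parameterisation:", result.x)
```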
For the background template, the m top independent parts, see Table 1, are treated together. Their individual distributions, taken either from Monte Carlo or data estimates as detailed above, are summed, and a Landau distribution is chosen to parameterise their R 32 distribution. For each channel this function adequately describes the background distribution as shown in Figure 4(b) for the e+jets channel, which has a larger background contribution than the µ+jets channel.
Signal and background probability density functions, P_sig(R_32|m_top) and P_bkg(R_32), respectively, are used in a binned likelihood fit to the data using a number of bins, N_bins. The likelihood is the product of a shape term, L_shape, and a background constraint term, L_bkg. The variable N_i denotes the number of events observed per bin, and n_sig and n_bkg denote the total numbers of signal and background events to be determined. The term L_shape accounts for the shape of the R_32 distribution and its dependence on the top quark mass m_top. The term L_bkg constrains the total number of background events, n_bkg, using its prediction, n_bkg^pred, and the background uncertainty, chosen to be 50%, see Table 1. In addition, the number of background events is restricted to be positive. The two free parameters of the fit are the total number of background events, n_bkg, and m_top. The performance of this algorithm is assessed with the pseudo-experiment technique. For each m_top value, distributions from pseudo-experiments are constructed by random sampling of the simulated signal and background events used to construct the corresponding templates. Using Poisson statistics, the numbers of signal events and total background events in each pseudo-experiment are fluctuated around the expectation values, either calculated assuming SM cross-sections and the integrated luminosity of the data, or taken from the data estimate. A good linearity is found between the input top quark mass used to perform the pseudo-experiments and the result of the fit. Within their statistical uncertainties, the mean values and widths of the pull distributions are consistent with the expectations of zero and one, respectively. The expected statistical uncertainties (mean ± RMS) obtained from pseudo-experiments with an input top quark mass of m_top = 172.5 GeV, and for a luminosity of 1 fb⁻¹, are 1.36 ± 0.16 GeV and 1.11 ± 0.06 GeV for the e+jets and µ+jets channels, respectively.
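The pseudo-experiment technique itself is generic, and the following minimal sketch illustrates it: the expected signal and background yields are Poisson-fluctuated, events are drawn from stand-in template shapes, and an estimator with its pull is evaluated on each toy data set. The shapes, yields, and the simple truncated-mean estimator are illustrative assumptions, not the analysis' actual templates or likelihood.

```python
# Minimal sketch of the pseudo-experiment (toy) technique: Poisson-fluctuate
# yields, sample from stand-in templates, evaluate an estimator and its pull.
import numpy as np

rng = np.random.default_rng(42)
N_SIG_EXP, N_BKG_EXP = 500, 150        # expected yields (illustrative)
TRUE_MTOP = 172.5                      # GeV, input mass of the toy templates

def draw_signal(n):                    # toy R32 signal template
    return rng.normal(loc=TRUE_MTOP / 80.4, scale=0.25, size=n)

def draw_background(n):                # toy, mtop-independent background template
    return rng.uniform(1.5, 3.5, size=n)

pulls, results = [], []
for _ in range(1000):                  # one loop iteration = one pseudo-experiment
    n_sig = rng.poisson(N_SIG_EXP)
    n_bkg = rng.poisson(N_BKG_EXP)
    data = np.concatenate([draw_signal(n_sig), draw_background(n_bkg)])

    # Stand-in estimator: convert a truncated mean of R32 back to a mass.
    core = data[(data > 1.7) & (data < 2.6)]
    mtop_hat = core.mean() * 80.4
    sigma_hat = core.std(ddof=1) / np.sqrt(len(core)) * 80.4
    results.append(mtop_hat)
    pulls.append((mtop_hat - TRUE_MTOP) / sigma_hat)

print("mean fitted mtop: %.2f GeV" % np.mean(results))
print("pull mean %.2f, pull width %.2f" % (np.mean(pulls), np.std(pulls)))
```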
The 2d-analysis
In the 2d-analysis, similarly to Ref. [27], m_top and a global jet energy scale factor (JSF) are determined simultaneously by using the m_top^reco and m_W^reco distributions³. Instead of stabilising the estimator of m_top against JES variations as done for the 1d-analysis, the emphasis here is on an in-situ jet scaling. A global JSF (averaged over η and p_T) is obtained, which is mainly based on the observed differences between the predicted m_W^reco distribution and the one observed in the data. This algorithm predicts which global JSF correction should be applied to all jets to best fit the data. Due to this procedure, the JSF is sensitive not only to the JES, but also to all possible differences between data and predictions arising from specific assumptions made in the simulation that can lead to differences in the observed jets. These comprise the fragmentation model, initial and final state QCD radiation (ISR and FSR), the underlying event, and also pileup. In this method, the systematic uncertainty on m_top stemming from the JES is reduced and partly transformed into an additional statistical uncertainty on m_top due to the two-dimensional fit. The precisely measured values of m_W and Γ_W [6] are used to improve on the experimental resolution of m_top^reco by relating the observed jet energies to the corresponding parton energies as predicted by the signal Monte Carlo (i.e. to the two quarks from the hadronic W boson decay, again using LO kinematics). Thereby, this method offers a determination of m_top complementary to the 1d-analysis method described in Section 6, with different sensitivity to systematic effects and data statistics.
³ Although for the two analyses m_top^reco and m_W^reco are calculated differently, the same symbols are used to indicate that these are estimates of the same quantities.
For the events fulfilling the common requirements listed in Section 4, the jet triplet assigned to the hadronic top quark decay is constructed from any b-jet, together with any light-jet pair with a reconstructed m_W^reco within 50 GeV - 110 GeV. Amongst those, the jet triplet with maximum p_T is chosen as the top quark candidate. For the light-jet pair, i.e. for the hadronic W boson decay candidates, a kinematic fit is then performed by minimising the following χ²: χ² = Σ_{i=1,2} [E_{jet,i}(1 − α_i)]² / σ²(E_{jet,i}) + [M_{jet,jet}(α₁, α₂) − m_W]² / Γ_W², with respect to parton scale factors (α_i) for the jet energies. The χ² comprises two components. The first is the sum of squares of the differences of the measured and fitted energies of the two reconstructed light jets, E_{jet,i}, individually divided by the squares of their p_T- and η-dependent resolutions obtained from Monte Carlo simulation, σ(E_{jet,i}). The second term is the difference of their two-jet invariant mass, M_{jet,jet}, and m_W, divided by the W boson width. From these jets the two observables m_W^reco and m_top^reco are constructed. The m_W^reco is calculated using the reconstructed light-jet four-vectors (i.e. jet energies are not corrected using α_i), retaining the full sensitivity of m_W^reco to the JSF. In contrast, m_top^reco is calculated from these light-jet four-vectors scaled to the parton level (i.e. jet energies are corrected using α_i) and the above determined b-jet. In this way the light jets in m_top^reco exhibit a much reduced JES sensitivity by construction, and only the b-jet is directly sensitive to the JES. The m_W^reco and m_top^reco distributions are shown in Figure 5 for both lepton channels, together with the predictions for signal and background, which in both cases describe the observed distributions well. The correlation of these two observables is found to be small for data and predictions, and amounts to about −0.06.
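A minimal numerical sketch of this two-parameter fit is given below, using illustrative jet four-vectors, resolutions, and world-average W parameters (m_W = 80.4 GeV, Γ_W = 2.1 GeV). The scale factors α_i multiply the jet four-vectors with fixed directions, which is one possible convention; all inputs are placeholders.

```python
# Sketch of the light-jet-pair kinematic fit: minimise a chi2 with two jet
# energy scale factors alpha_1, alpha_2, using illustrative inputs.
import numpy as np
from scipy.optimize import minimize

M_W, GAMMA_W = 80.4, 2.1                      # GeV (world-average values)

# Illustrative (approximately massless) light-jet four-vectors (E, px, py, pz) in GeV.
jet1 = np.array([60.0, 40.0, 30.0, 33.0])
jet2 = np.array([45.0, -20.0, 25.0, -30.0])
sigma = np.array([8.0, 7.0])                  # illustrative energy resolutions

def dijet_mass(alpha):
    p1, p2 = alpha[0] * jet1, alpha[1] * jet2  # scale the full four-vectors
    tot = p1 + p2
    m2 = tot[0] ** 2 - np.sum(tot[1:] ** 2)
    return np.sqrt(max(m2, 0.0))

def chi2(alpha):
    energy_terms = np.sum(((np.array([jet1[0], jet2[0]]) * (1.0 - alpha)) / sigma) ** 2)
    mass_term = ((dijet_mass(alpha) - M_W) / GAMMA_W) ** 2
    return energy_terms + mass_term

best = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")
print("alpha:", best.x, " chi2:", best.fun, " m_jj:", dijet_mass(best.x))
```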
Templates are constructed for m_top^reco as a function of an input top quark mass in the range 160 GeV - 190 GeV and of an input value of the JSF in the range 0.9 - 1.1, and, finally, for m_W^reco as a function of the assumed JSF in the same range. The signal templates for the m_W^reco and m_top^reco distributions, shown for the µ+jets channel and for JSF = 1 in Figures 6(a) and 6(b), are fitted to a sum of two Gaussian functions for m_W^reco, and to the sum of a Gaussian and a Landau function for m_top^reco. Since, for this analysis, the background templates are constructed including single top quark production events, the background fit for the m_top^reco distribution is assumed to be m_top dependent. For the background, the m_W^reco distribution, again shown for the µ+jets channel in Figure 6(c), is fitted to a Gaussian function and the m_top^reco distribution, Figure 6(d), to a Landau function. For all parameters of the functions that also depend on the JSF, a linear parameterisation is chosen. The quality of all fits is good for the signal and background contributions and for both channels. These parameterised templates are then used in an unbinned likelihood fit to the data over all events, i = 1, ..., N. The three parameters to be determined by the fit are m_top, the JSF and n_bkg. Using pseudo-experiments, a good linearity is found between the input top quark mass used to perform the pseudo-experiments and the result of the fits. The residual dependence of the reconstructed m_top is about 0.1 GeV for a JSF shift of 0.01 for both channels, which results in a residual systematic uncertainty due to the JES. Within their statistical uncertainties, the mean values and widths of the pull distributions are consistent with the expectations of zero and one, respectively. Finally, the expected statistical plus JSF uncertainties (mean ± RMS) obtained from pseudo-experiments at an input top quark mass of m_top = 172.5 GeV, and for a luminosity of 1 fb⁻¹, are 1.20 ± 0.08 GeV and 0.94 ± 0.04 GeV for the e+jets and µ+jets channels, respectively.
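The explicit likelihood is not reproduced above. A per-event mixture of signal and background densities in both observables, with a constraint term for n_bkg, is one plausible form consistent with this description; the exact factorisation and the constraint term L_bkg below are assumptions, not the paper's formula:

L(m_top, JSF, n_bkg) = ∏_{i=1}^{N} [ (N − n_bkg)/N · P_sig(m_top,i^reco | m_top, JSF) · P_sig(m_W,i^reco | JSF) + (n_bkg/N) · P_bkg(m_top,i^reco) · P_bkg(m_W,i^reco) ] × L_bkg(n_bkg) .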
Evaluation of systematic uncertainties
Each source of uncertainty considered is investigated, when possible, by varying the respective quantities by ±1σ with respect to the default value. Using the changed parameters, pseudo-experiments are either performed directly or templates are constructed and then used to generate pseudo-experiments, without altering the probability density function parameterisations. The difference of the results for m top compared to the standard analysis is used to determine the systematic uncertainties. For the 2d-analysis, in any of the evaluations of the systematic uncertainties, apart from the JES variations, the maximum deviation of the JSF from its nominal fitted value is ±2.5%.
All sources of systematic uncertainties investigated, together with the resulting uncertainties, are listed in Table 2. The statistical precision on m top obtained from the Monte Carlo samples is between 0.2 GeV and 0.5 GeV, depending on the available Monte Carlo statistics. For some sources, pairs of statistically independent samples are used. For other sources, the same sample is used, but with a changed parameter. In this case the observed m top values for the central and the changed sample are statistically highly correlated. In all cases, the actual observed difference is quoted as the systematic uncertainty on the corresponding source, even if it is smaller than the statistical precision of the difference. The total uncertainty is calculated as the quadratic sum of all individual contributions, i.e. neglecting possible correlations. The estimation of the uncertainties from the individual contributions is described in the following.
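Before the individual sources are discussed, note that the "quadratic sum neglecting possible correlations" above is simply the root-sum-square of the individual shifts. The trivial helper below illustrates this with invented numbers; the source names and values are placeholders, not the entries of Table 2.

```python
# Root-sum-square combination of individual systematic shifts (values invented).
import math

shifts_gev = {"b-JES": 1.6, "ISR/FSR": 1.0, "light JES": 0.8, "hadronisation": 0.6}
total = math.sqrt(sum(v * v for v in shifts_gev.values()))
print("total systematic uncertainty: %.2f GeV" % total)
```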
Jet energy scale factor: This uncertainty is needed to separate the quoted statistical uncertainty on the result of the 2d-analysis into a purely statistical component on m_top, analogous to the one obtained in a 1d-analysis, and the contribution stemming from the simultaneous determination of the JSF. It is evaluated for the 2d-analysis by additionally performing a one-dimensional (i.e. JSF-constrained) fit to the data, with the JSF fixed to the value obtained in the two-dimensional fit. The quoted statistical precision on m_top is the one from the one-dimensional fit. The contribution of the JSF is obtained by quadratically subtracting the statistical uncertainties on m_top for the one-dimensional and two-dimensional fits of the 2d-analysis.
Method calibration:
The limited statistics of the Monte Carlo samples leads to a systematic uncertainty in the template fits, which is reflected in the residual mass differences between the fitted and the input mass for a given Monte Carlo sample. The average difference observed in the six samples with different input masses is taken as the uncertainty from this source.
Signal Monte Carlo generator: The systematic uncertainty related to the choice of the generator program is accounted for by comparing the results of pseudoexperiments performed with either the MC@NLO or the Powheg samples [28] both generated with m top = 172.5 GeV.
Hadronisation: Signal samples for m top = 172.5 GeV from the Powheg event generator are produced with either the Pythia [29] or Herwig [11] program performing the hadronisation. One pseudoexperiment per sample is performed and the full difference of the two results is quoted as the systematic uncertainty.
Pileup: To investigate the uncertainty due to additional proton-proton interactions which may affect the jet energy measurement, on top of the component that is already included in the JES uncertainty discussed below, the fit is repeated in data and simulation as a function of the number of reconstructed vertices. Within statistics, the measured m top is independent of the number of reconstructed vertices. This is also observed when the data are instead divided into data periods according to the average numbers of reconstructed vertices. In this case, the subsets have varying contributions from pileup from preceding events.
However, the effect on m top due to any residual small difference between data and simulation in the number of reconstructed vertices was assessed by computing the weighted sum of a linear interpolation of the fitted masses as a function of the number of primary vertices. In this sum the weights are the relative frequency of observing a given number of vertices in the respective sample. The difference of the sums in data and simulation is taken as the uncertainty from this source.
Underlying event: This systematic uncertainty is obtained by comparing the AcerMC [30, 31] central value, defined as the average of the highest and the lowest masses measured on the ISR/FSR variation samples described below, with a dataset with a modified underlying event.
Colour reconnection: The systematic uncertainty due to colour reconnection is determined using AcerMC with Pythia, with two different simulations of the colour reconnection effects as described in Refs. [32][33][34]. In each case, the difference in the fitted mass between two assumptions on the size of colour reconnection is measured. The maximum difference is taken as the systematic uncertainty due to colour reconnection.
Initial and final state QCD radiation: Different amounts of initial and final state QCD radiation can alter the jet energies and the jet multiplicity of the events, with the consequence of introducing distortions into the measured m_top^reco and m_W^reco distributions. This effect is evaluated by performing pseudo-experiments for which signal templates are derived from seven dedicated AcerMC signal samples in which Pythia parameters that control the showering are varied in ranges that are compatible with those used in the Perugia Hard/Soft tune variations [32]. The systematic uncertainty is taken as half the maximum difference between any two samples. Using different observables, the additional jet activity accompanying the jets assigned to the top quark decays has been studied. For events in which one (both) W bosons from the top quark decays themselves decay into a charged lepton and a neutrino, the reconstructed jet multiplicities [35] (the fraction of events with no additional jet above a certain transverse momentum [36]) are measured. The analysis of the reconstructed jet multiplicities is not sufficiently precise to constrain the presently used variations of Monte Carlo parameters. In contrast, for the ratio analysis [36] the spread of the predictions caused by the presently performed ISR variations is significantly wider than the uncertainty of the data, indicating that the present ISR variations are generous.
Table 2. The measured values of m_top and the contributions of various sources to the uncertainty of m_top (in GeV), together with the assumed correlations ρ between analyses and lepton channels. Here '0' stands for uncorrelated, '1' for fully correlated between analyses and lepton channels, and '(1)' for fully correlated between analyses, but uncorrelated between lepton channels. The abbreviation 'na' stands for not applicable. The combined results described in Section 8.2 are also listed.
Proton PDF: The signal samples are generated using the CTEQ 6.6 [10] proton parton distribution functions, PDFs. These PDFs, obtained from experimental data, have an uncertainty that is reflected in 22 pairs of additional PDF sets provided by the CTEQ group. To evaluate the impact of the PDF uncertainty on the signal templates, the events are re-weighted with the corresponding ratio of PDFs, and 22 pairs of additional signal templates are constructed. Using these templates one pseudo-experiment per pair is performed. The uncertainty is calculated as half the quadratic sum of differences of the 22 pairs as suggested in Ref. [37].
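The prescription "half the quadratic sum of the differences of the 22 pairs" corresponds to the usual master formula for paired-eigenvector PDF uncertainties. The sketch below shows the arithmetic only; the fitted masses per eigenvector pair are placeholders.

```python
# PDF uncertainty from paired eigenvector variations:
# sigma = 0.5 * sqrt( sum_k (m_plus_k - m_minus_k)^2 ) over the 22 pairs.
import numpy as np

rng = np.random.default_rng(0)
m_plus = 172.5 + rng.normal(0.0, 0.05, size=22)    # placeholder fitted masses (GeV)
m_minus = 172.5 + rng.normal(0.0, 0.05, size=22)

sigma_pdf = 0.5 * np.sqrt(np.sum((m_plus - m_minus) ** 2))
print("PDF uncertainty on mtop: %.2f GeV" % sigma_pdf)
```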
W+jets background normalisation:
The uncertainty on the W +jets background determined from data is dominated by the uncertainty on the heavy flavour content of these events and amounts to ±70%. The difference in m top obtained by varying the normalisation by this amount is taken as the systematic uncertainty.
W+jets background shape: The impact of the variation of the shape of the W +jets background contribution is studied using a re-weighting algorithm [24] which is based on changes observed on stable particle jets when model parameters in the Alpgen Monte Carlo program are varied.
QCD multijet background normalisation: The estimate for the background from QCD multijet events determined from data is varied by ±100% to account for the current understanding of this background source [24] for the signal event topology.
QCD multijet background shape: The uncertainty due to the QCD background shape has been estimated by comparing the results from two data-driven methods for both channels, see Ref. For this uncertainty, pseudo-experiments are performed on QCD background samples with varied shapes.
Jet energy scale: The jet energy scale is derived using information from test-beam data, LHC collision data and simulation. Since the energy correction procedure involves a number of steps, the JES uncertainty has various components originating from the calibration method, the calorimeter response, the detector simulation, and the specific choice of parameters in the physics model employed in the Monte Carlo event generator. The JES uncertainty varies between ±2.5% and ±8% in the central region, depending on jet p T and η as given in Ref. [19]. These values include uncertainties in the flavour composition of the sample and mis-measurements from jets close by. Pileup gives an additional uncertainty of up to ±2.5% (±5%) in the central (forward) region. Due to the use of the observable R 32 for the 1d-analysis, and to the simultaneous fit of the JSF and m top for the 2d-analysis, which mitigate the impact of the JES on m top differently, the systematic uncertainty on the determined m top resulting from the uncertainty of the jet energy scale is less than 1%, i.e. much smaller than the JES uncertainty itself.
Relative b-jet energy scale: This uncertainty is uncorrelated with the jet energy scale uncertainty and accounts for the remaining differences between jets originating from light quarks and those from b-quarks after the global JES has been determined. For this, an extra uncertainty ranging from ±0.8% to ±2.5% and depending on jet p T and η is assigned to jets arising from the fragmentation of b-quarks, due to differences between light jets and gluon jets, and jets containing b-hadrons. This uncertainty decreases with p T , and the average uncertainty for the spectrum of jets selected in the analyses is below ±2%.
This additional systematic uncertainty has been obtained from Monte Carlo simulation and was also verified using b-jets in data. The validation of the b-jet energy scale uncertainty is based on the comparison of the jet transverse momentum as measured in the calorimeter to the total transverse momentum of charged particle tracks associated to the jet. These transverse momenta are evaluated in the data and in Monte Carlo simulated events for inclusive jet samples and for b-jet samples [19]. Moreover, the jet calorimeter response uncertainty has been evaluated from the single hadron response. Effects stemming from b-quark fragmentation, hadronisation and underlying soft radiation have been studied using different Monte Carlo event generation models [19].
b-tagging efficiency and mistag rate: The b-tagging efficiency and mistag rates in data and Monte Carlo simulation are not identical. To accommodate this, b-tagging scale factors, together with their uncertainties, are derived per jet [21,38]. They depend on the jet p T and η and the underlying quark-flavour. For the default result the central values of the scale factors are applied, and the systematic uncertainty is assessed by changing their values within their uncertainties.
Jet energy resolution: To assess the impact of this uncertainty, before performing the event selection, the energy of each reconstructed jet in the simulation is additionally smeared by a Gaussian function such that the width of the resulting Gaussian distribution corresponds to the one including the uncertainty on the jet energy resolution. The fit is performed using smeared jets and the difference to the default m top measurement is assigned as a systematic uncertainty.
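A minimal sketch of this extra smearing: if σ_nom is the nominal relative resolution and σ_up the resolution including its uncertainty, the additional Gaussian smearing width is their quadratic difference. The resolution values and jet energies below are placeholders.

```python
# Additional Gaussian smearing of jet energies so that the total resolution
# matches the (nominal + uncertainty) value; all numbers are placeholders.
import numpy as np

rng = np.random.default_rng(7)
jet_e = np.array([40.0, 65.0, 110.0, 32.0])   # GeV, illustrative jets
sigma_nom, sigma_up = 0.10, 0.12              # relative resolutions (placeholders)

extra = np.sqrt(sigma_up ** 2 - sigma_nom ** 2)
smeared = jet_e * rng.normal(1.0, extra, size=jet_e.size)
print(smeared)
```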
Jet reconstruction efficiency: The jet reconstruction efficiency for data and the Monte Carlo simulation are found to be in agreement with an accuracy of better than ±2% [19]. To account for this, jets are randomly removed from the events using that fraction. The event selection and the fit are repeated on the changed sample.
Missing transverse momentum: The E_T^miss is used in the event selection and also in the likelihood for the 1d-analysis, but is not used in the m_top estimator for either analysis. Consequently, the uncertainty due to any mis-calibration is expected to be small. The impact of a possible mis-calibration is assessed by changing the measured E_T^miss within its uncertainty. The resulting sizes of all uncertainties are given in Table 2. They are also used in the combination of results described below. The three most important sources of systematic uncertainty for both analyses are the relative b-jet to light-jet energy scale, the modelling of initial and final state QCD radiation, and the light-jet energy scale. Their impact on the precision on m_top is different, as expected from the difference in the estimators used by the two analyses. Figure 7 shows the results of the 1d-analysis when performed on data. For both channels, the fit function describes the data well, with a χ²/dof of 21/23 (39/23) for the e+jets (µ+jets) channel. The observed statistical uncertainties in the data are consistent with the expectations given in Section 6, with the e+jets channel uncertainty being slightly higher than the expected uncertainty of 1.36 ± 0.16 GeV. The results from both channels are statistically consistent and are: m_top = 172.9 ± 1.5 (stat) ± 2.5 (syst) GeV (1d e+jets), m_top = 175.5 ± 1.1 (stat) ± 2.6 (syst) GeV (1d µ+jets). For the 2d-analysis, within statistical uncertainties the results are consistent with each other, and the observed statistical uncertainties in the data are in accord with the expectations given in Section 7, in this case with the e+jets channel uncertainty being slightly lower than the expected uncertainty of 1.20 ± 0.08 GeV. The corresponding values for the JSF are 0.985 ± 0.008 and 0.986 ± 0.006 in the e+jets and µ+jets channels, respectively, where the uncertainties are statistical only. The JSF values fitted for the two channels are consistent within their statistical uncertainty. For both channels, the correlation of m_top and the JSF in the fits is about −0.57.
Results
When separating the statistical and JSF component of the result as explained in the discussion of the JSF uncertainty evaluation in Section 8.1, the result from the 2d-analysis yields: m top = 174.3 ± 0.8 stat ± 2.3 syst GeV (2d e+jets), m top = 175.0 ± 0.7 stat ± 2.6 syst GeV (2d µ+jets).
These values together with the breakdown of uncertainties are shown in Table 2 and are used in the combinations.
Due to the additional event selection requirements used in the 1d-analysis to optimise the expected uncertainty described in Section 5, for both channels the 2d-analysis has the smaller statistical uncertainty, despite the better top quark mass resolution of the 1d-analysis. Both analyses are limited by the systematic uncertainties, which have different relative contributions per source but are comparable in total size, i.e. the difference in total uncertainty between the most precise and the least precise of the four measurements is only 16%.
The four individual results are all based on data from the first part of the 2011 data taking period. The e+jets and µ+jets channel analyses exploit exclusive event selections and consequently are statistically uncorrelated within a given analysis. In contrast, for each lepton channel the data samples partly overlap, see Section 4. However, because the selection of the jet triplet and the construction of the estimator of m top are different, the two analyses are less correlated than the about 50% that would be expected from the overlap of events.
The statistical correlation of the two results for each of the lepton channels is evaluated using the Monte Carlo method suggested in Ref. [39], exploiting the large Monte Carlo signal samples. For all four measurements (two channels and two analyses), five hundred independent pseudo-experiments are performed, ensuring that for every single pseudo-experiment the identical events are input to all measurements. The precision of the determined statistical correlations depends purely on the number of pseudo-experiments performed, and in particular, it is independent of the uncertainty of the measured m top per pseudo-experiment. In this analysis, the precision amounts to approximately 4% absolute, i.e. this estimate is sufficiently precise that its impact on the uncertainty on m top , given the low sensitivity of the combined results of m top to the statistical correlation, is negligible. For the 1d-analysis, the signal is comprised of tt and single top quark production, whereas for the 2d-analysis the single top quark production process is included in the background, see Table 1. Consequently, the MC@NLO samples generated at m top = 172.5 GeV for both processes are used appropriately for each analysis in determining the statistical correlations. The statistical correlation between the results of the two analyses is 0.15 (0.16) in the e+jets (µ+jets) channels, respectively. Given these correlations, the two measurements for each lepton channel are statistically consistent for both lepton flavours.
The combinations of results are performed for the individual measurements and their uncertainties listed in Table 2, using the formalism described in Refs. [39,40]. The statistical correlations described above are used. The correlations of the systematic uncertainties assumed in the combinations fall into three classes: for the uncertainty in question the measurements are either considered uncorrelated (ρ = 0), fully correlated between analyses and lepton channels (ρ = 1), or fully correlated between analyses but uncorrelated between lepton channels, denoted by ρ = (1). A correlation of ρ = 0 is used for the sources method calibration and jet energy scale factor, which are of purely statistical nature. The sources with ρ = 1 are listed in Table 2. Finally, the sources with ρ = (1) are the QCD background normalisation and shape, which are based on independent lepton fake rates in each lepton channel.
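A BLUE-style weighted average is one standard way of combining correlated measurements of this kind. The compact sketch below combines two measurements with an assumed correlation; the input values, uncertainties and correlation are illustrative only (the actual combination uses all four measurements and the full covariance built from Table 2).

```python
# BLUE-style combination of two correlated measurements x with covariance C:
# weights w = C^-1 1 / (1^T C^-1 1), combined value w.x, variance 1/(1^T C^-1 1).
import numpy as np

x = np.array([174.4, 174.5])          # GeV, illustrative input measurements
sigma = np.array([2.7, 2.4])          # total uncertainties (illustrative)
rho = 0.7                             # assumed correlation

C = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
              [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
Cinv = np.linalg.inv(C)
ones = np.ones(2)
w = Cinv @ ones / (ones @ Cinv @ ones)
print("weights:", w)
print("combined mtop: %.2f +- %.2f GeV" % (w @ x, 1.0 / np.sqrt(ones @ Cinv @ ones)))
```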
Combining the results for the two lepton channels separately for each analysis gives the following results (note that these two analyses are correlated as described above): m top = 174.4 ± 0.9 stat ± 2.5 syst GeV (1d-analysis), m top = 174.5 ± 0.6 stat ± 2.3 syst GeV (2d-analysis).
For the 1d-analysis the µ+jets channel is more precise, and consequently carries a larger weight in the combination, whereas for the 2d-analysis this is reversed. However, for both analyses, the improvement on the more precise estimate by the combination is moderate, i.e. a few percent, see Table 2.
The pairwise correlations of the four individual results range from 0.63 to 0.77, with the smallest correlation between the results from the different lepton channels of the different analyses, and the largest correlation between the ones from the two lepton channels within an individual analysis. The combination of all four measurements of m_top yields statistical and systematic uncertainties on the top quark mass of 0.6 GeV and 2.3 GeV, respectively. Presently this combination does not improve the precision of the measured top quark mass from the 2d-analysis, which has the better expected total uncertainty. Therefore, the result from the 2d-analysis is presented as the final result. The two analyses will profit differently from progress on the individual systematic uncertainties, which can be fully exploited by the method used to estimate the statistical correlation of different estimators of m_top obtained in the same data sample, together with the outlined combination procedure. The results are summarised in Figure 10 and compared to selected measurements from the Tevatron experiments.
Summary and conclusion
The top quark mass has been measured directly via two implementations of the template method in the e+jets and µ+jets decay channels, based on proton-proton collision data from 2011 corresponding to an integrated luminosity of about 1.04 fb⁻¹. The two analyses mitigate the impact of the three largest systematic uncertainties on the measured m_top with different methods. The e+jets and µ+jets channels, and both analyses, lead to consistent results within their correlated uncertainties. A combined 1d-analysis and 2d-analysis result does not currently improve the precision of the measured top quark mass from the 2d-analysis, and hence the 2d-analysis result is presented as the final result: m_top = 174.5 ± 0.6 (stat) ± 2.3 (syst) GeV. This result is statistically as precise as the m_top measurement obtained in the Tevatron combination, but the total uncertainty, dominated by systematic effects, is still significantly larger. In this result, the three most important sources of systematic uncertainty are the relative b-jet to light-jet energy scale, the modelling of initial and final state QCD radiation, and the light-quark jet energy scale. These sources account for about 85% of the total systematic uncertainty.
Acknowledgements
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
How Elementary Pre-Service Teachers Use Scientific Knowledge to Justify Their Reasoning about the Electrification Phenomena by Friction
This article uses a qualitative research method to identify eighty elementary pre-service teachers' conceptual representations concerning static electricity. The analysis is based on a paper-and-pencil questionnaire. The study shows that the pre-service teachers hold understandings that differ from those commonly accepted by the scientific community. The inaccurate representations identified are relevant for developing teaching strategies focused on conceptual conflict.
Introduction
Elementary and secondary school curricula worldwide prescribe the study of electrical circuits and static electricity. A review of research on the alternative conceptions of students and their teachers reveals less research on electrostatic phenomena [1][2][3][4][5][6] than on electric circuits [7][8][9][10][11][12][13], despite the critical transition between these two areas of knowledge. On this subject, Bensghir and Closset [14], Métioui et al. [15], and Eylon and Ganiel [16] showed that several conceptual difficulties in the study of electrical circuits result from a misunderstanding of electrostatic concepts such as electric charge. In this regard, Eylon and Ganiel [16] (p. 76) point out the following about the conceptual difficulties encountered by students with the electrical circuit.
We find that even in very simple situations, most students do not tie concepts from electrostatics into their description of the phenomena. This leads to severe inconsistencies in student answers to questions about currents, charges, and their sources in an electric circuit. Formal definitions (even when quoted correctly) are not utilized operationally. Consequently, a consistent picture of the mechanisms is usually lacking. This may explain why students cannot conceptualize the electric circuit as a system and appreciate the functional relationships between its parts (p. 76).
Furthermore, students have difficulty interpreting the + and − terminals of a battery and the flow of electric current in a simple electrical circuit [11]. Likewise, many students believe that touching one of the battery terminals with a finger gives an electric shock [11]. In addition, many students think that a battery contains electrical charges in the way a reservoir contains water [17].
The following misconceptions are the consequence of the imprecise transition between dynamic electricity and static electricity, as highlighted by many researchers [14][15][16][17]: the voltage is related to the amount of current; the current creates the voltage, rather than the voltage being needed for the current to flow; the voltage cannot exist if no current is flowing; batteries become flat when all of the electricity stored in them is used up; the "negative current" goes back to the battery and the "positive current" comes from the battery.
Interestingly, these misconceptions are also the consequence of the imprecise use of everyday language (e.g., the terms current, voltage, and electrical power are used interchangeably, voltage is the force of the electric current) [18].
This qualitative research falls within this perspective and aims to identify the scientific knowledge used by eighty elementary pre-service teachers to justify their reasoning about the electrification phenomena by rubbing, contact, and induction (polarization) as well as the formation of lightning.
Methodology
We gave the participants a sixty-minute paper-and-pencil questionnaire to bring out their conceptual representations. The open questions concerned situations they encounter in their daily environment, and answering them requires an understanding of how electric charges are produced and displaced and of the law of attraction between oppositely charged objects. We also considered the concepts prescribed in the Québec Education Program [19], where teachers are required to teach activities describing the effect of electrostatic attraction (e.g., paper attracted by a charged object), electrical conductors and insulators, and the insulating properties of various substances.
Population
Within the framework of a university course on science didactics offered to students in primary school teacher training, we administered the questionnaire to eighty (80) students at the start of the course; their average age was 23 years. After six years of elementary school, they completed their high school diploma (five years) and their college diploma (two years) in the humanities. In high school, they took two science courses related to physics, chemistry, and the environment, and we asked them whether they had ever taken a course in electrostatics: only 25% indicated having studied it, and they only vaguely remembered the concepts covered. Before they filled in the questionnaire, it was explained to them that the questions aimed to probe their prior knowledge of phenomena related to static electricity, which they will have to teach their future students under the Ministry of Education training program. We explained that they should answer without worrying about the scientific accuracy of their explanations and that they would have the opportunity to compare them with the explanations studied later in the course on this topic.
Construction of the Questionnaire
We constructed six questions that the students had to answer by relying on their scientific knowledge (i.e., their explanatory model). The questionnaire could not be answered adequately by referring to notions learned "mechanically" by rote. Table 1 presents the formulated questions and the scientifically accepted answers. To ensure the scientific validity of our answers, we consulted the scientific literature [20][21][22]. Note that the answers conform to electrification by friction and the polarization phenomenon as developed in the context of static electricity, i.e. the study of electrical charges at rest. Furthermore, we validated the questionnaire with three university professors in science education, who noted that the wording of the questions was understandable for elementary pre-service teachers who are not scientists and that answering the questions does not require any quantitative reasoning. The answers were therefore clearly formulated, even if the concepts raised, such as the electric field, were not explored in depth. Note that the selected questions are related to the students' environment. They also cover the phenomenon of electrification by friction, the law of charges between rubbed objects, and the polarization of matter.
Scientific knowledge
This is a phenomenon of electrification by friction combined with repulsion between charges of the same sign (positive/positive or negative/negative). The hairs become charged through friction with the comb (e.g., the comb becomes positively charged by losing electrons and the hairs negatively charged by gaining them). Following the law of electric charges, the hairs stand on end because they all carry charge of the same sign.
Question 2
Why does the "anti-static" paper (fabric softener) in a dryer reduce static build-up? Explain your answer as well as you can.
Scientific knowledge: In the dryer, the pieces of laundry become positively and negatively charged due to their mutual friction. The anti-static paper is interposed between the items of laundry to reduce the transfer of charge from one item to another. The paper acts as a "sponge" that "absorbs" negative charges, somewhat like a capacitor. Furthermore, one can draw an analogy between the paper in the dryer and water vapor (humidity), which likewise reduces the formation of static electricity.
Question 3
Why does a balloon rubbed on hair sometimes stick to a wall? Explain your answer as well as you can.
Scientific knowledge: The balloon, charged (for example, negatively) by friction with the hair, creates an electric field that induces polarization in the wall. The charges opposite to those of the balloon, being closest to it, explain the attraction that makes the balloon stick.
Question 4
Why does a plastic ruler previously rubbed with wool attract some small pieces of paper without touching them? Explain your answer as well as you can.
Scientific knowledge: The scientific explanation of this phenomenon involves the polarization properties of dielectric materials, as illustrated in the answer to question 3. The plastic ruler becomes charged (e.g., negatively) by rubbing against the woolen fabric, which loses negative charges (electrons); the charged ruler creates an electric field that polarizes the paper by inducing "dummy" polarization charges. The charges opposite to those of the ruler, being closest to it, are responsible for the attraction. Note that the pieces of paper are attracted from a distance because of their lightness, which is not the case for the wall.
Question 5
How does lightning form during a thunderstorm? Explain your answer as well as you can.
Scientific knowledge: The phenomenon of lightning formation during a thunderstorm is complex and can be explained in three stages: 1. Friction between water droplets and hailstones in the cloud gives rise to a separation of charges within the cloud (e.g., its lower part carries a charge of opposite sign to its upper part); 2. When the cloud is near the ground, its negatively charged lower part polarizes atoms in the ground and creates a positively charged surface; 3. When this concentration of charges is intense, the air becomes ionized and therefore conductive; the attraction between the negative and positive charges then becomes sufficient for charges to cross the gap between the cloud and the ground, creating an emission of light (photons) that we call lightning.
Question 6
How can we explain that, after rubbing our feet on a carpet in dry weather, we can get a shock by touching a metal doorknob or another person?
Scientific knowledge: Friction between the feet and the carpet causes charges to transfer from the carpet to the feet, which become charged. As the body is a conductor, the charges in the feet can travel up to the hand. The negatively charged hand polarizes the atoms of the handle and creates a positively charged surface. The negative charges of the hand therefore attract the positive charges of the handle at a distance. The attraction becomes strong at short distance; if it is dark, we may see a small spark, and we receive an electric shock when we touch the door handle. The same thing sometimes happens when we shake hands with another person.
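Although the questionnaire deliberately avoids quantitative reasoning, the "law of charges" invoked in these answers is the qualitative side of Coulomb's law, which for two point charges reads (supplementary background, not part of the questionnaire):

F = k |q₁ q₂| / r², with k ≈ 8.99 × 10⁹ N·m²/C²,

where like charges repel and opposite charges attract, and the force decreases rapidly with the separation r.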
Data Analyses
We present below the representations identified for each question following the analysis of the answers, together with illustrative student comments. The scientific knowledge synthesized in Table 1 is essential because it constitutes our analysis grid. On this basis, we group responses with similar intended meanings into conceptual representations. Note that the number of representations depends on the question and on the students' answers.
Data Analysis: Question 1
Analysis of the responses allowed us to identify four conceptual representations explaining this phenomenon: 1. As a result of friction, the comb gives up electrons (negative charges) to the hair; the hairs, all carrying the same (negative) charge, repel each other; 2. Friction between the comb and the hair creates a transfer of static electricity (or electric charges) to the hair, hence their repulsion; 3. The hair stands on end due to the static electricity created by rubbing against the comb; and 4. The hair stands on end due to the electrical charges in the surrounding air.
The number of students associated with each representation and some of their responses are presented in Table 2, followed by our analyses.
Students' scientific knowledge | Analyses
"The friction between brush and the hair produces the phenomenon of electrification by friction, that is to say, that the effect of friction will cause a transfer of electrons from one material to another, either between the brush and the hair. Due to this electron transfer, the hair will charge the same sign. The repulsion of charges of the same sign, so the hair will grow back and stand on end." (E 28 ) "The hair stands on end because there is a transfer of loads between it and the comb. This transfer will create the presence of charges of the same signs in the hair, which will grow back from where it stands." (E 45 ) "So, the hair would all become of the same load and repel each other?" (E 50 ) The answers put forward as to why hair stands on end are correct. They implicitly applied the law of repulsion between charges of the same signs in the case of hair and the phenomenon of hair electrification by friction with the comb (transfer positive or negative charges from one object to another).
Representation 2 (14/80-18%): Friction between the comb and the hair creates a transfer of static electricity (or electric charges) to the hair, hence their repulsion.
"They lift themselves into the air with static electricity transmitted to them." (E 48 ) "It is about transferring the electrostatics from my brush into my Hair." (E 49 ) "Friction between the brush and the Hair causes a transfer of static electricity from the brush to the Hair." (E 70 ) "Since there is friction between the comb and the Hair, it creates a transfer of electrical charges from one object to another (comb to the Hair). Hair stands on end when combed." (E 75 ) Those students do not specify why the friction charged the hair and its meaning, and it is also not explained why the balloon sticks on the wall. Because of the friction, the hair and the comb become charged; that is, hair and the comb are not anymore electrical neutral.
Representation 3 (26/80-32%): The hair stands on end due to the static electricity created by rubbing against the comb.
"When combed, they stand up from the static. Friction forms Statics. Statics make our hair stand up." (E 20 ) "The Friction between the brush and the hair creates static. So the hair no longer stays in place on the head, but rises into the air like being rubbed against an inflatable balloon." (E 25 ) "Rubbing her hair against the brush creates static electricity. This electricity is what attracts the hair to the brush, making the hair stand up when combed." (E 52 ).
Incomplete explanation because they do not specify what the statics represent or the link between the friction of the hair and why it stands up. As stated above, electron transfer occurs when we rub one material on another, and thus, there is a modification of the two objects. It is about the phenomenon of friction electrification.
Representation 4 (16/80-20%): Hair stands on end due to the electrical charges in the surrounding air.
"This phenomenon is explained by a static effect. When we comb our hair, the comb comes in contact with air which contains a lot of electrons." (E 22 ) "The air must be charged with electricity, which produces the phenomenon of static electricity between the air, the brush and our hair." (E 43 ) "In this case, there is an electric charge in the air which attracts the hair and makes it stand up." (E 63 ) "I think it is because of the air around us. When it is colder, the hair stands on end and becomes drier." (E 66 ) Atmospheric conditions indeed affect the phenomenon of electrification by friction. For example, when the air is dry, the hair will stay repulsed longer than if the air is humid. These students have conceptual difficulties explaining the electrification of hair by friction and the atmospheric conditions that can disrupt this electrification.
The answers of other students (14/80-17%) could not be grouped into the above representations. By way of illustration, here are some of the answers given: "Because hair contains electrical charges too. Our brush or comb also contains electrical charges opposite to those of our hair. So it is like a magnet, opposing forces attract, and equal forces repel each other. It is electrostatics." (E 18 ) "There is an activation of neutrons and electrons that causes a magnetic charge, like a little with magnets, except that there is not necessarily the presence of ferromagnetic metals." (E 23 ) "When combing the hair, I create heat which agitates the atoms that compose it and creates electrostatic energy, which acts like a magnet." (E 30 )
Data Analysis: Question 2
This question aimed to find out how the students explain the effect of "anti-static" paper in a dryer to reduce the formation of "static" in clothes. As highlighted in Table 1, the focus of this question concerns the property of dryer sheets to reduce static formed during drying, and this phenomenon is familiar to them.
Two representations emerge from the analysis of the responses: 1. The fabric softener paper prevents static formation due to its chemical composition; and 2. The fabric softener prevents static formation because the components of the paper have the property of absorbing the static.
The number of students associated with each representation and some of their responses are presented in Table 3, followed by our analyses. Table 3. Students' scientific knowledge relative to the "anti-static" paper and our analyses.
Students' scientific knowledge | Analyses
"It should be made of a particular material that is resistant to static." (E 4 ) "It is because of the components of the anti-static paper.
There must be a chemical." (E 13 ) "It is because of the composition of the anti-static paper. This is probably something that will spread all over the clothes to prevent static on them." (E 53 ) These explanations are incomplete. It is fair to say that fabric softener paper has a chemical composition that causes laundry to be lightly or lightly loaded. However, there is no explanation for the laundry and the fabric softener without knowing its chemical composition. Fabric softener sheets are chemically composed to reduce the number of electrons transferred from one laundry to another due to their rubbing, much like a sponge or a capacitor.
Representation 2 (29/80-36%): The fabric softener prevents static formation because the components of the paper have the property of absorbing the static.
"Its composition made of material that attracts and absorbs the static created when we put our clothes in the dryer. So, there is none left on our clothes." (E 6 ) "Antistatic paper has a material that keeps the static in the paper." (E 20 ) We underline an essential property of paper: it attracts the static (rather negative charges) created (instead, which is transferred from one laundry to another due to their friction). Laundry in the dryer becomes electrified by friction due to electron transfer, and the fabric softener sheets get between the laundry and limit the transfer of electrons.
The answers of other students (10/80-13%) could not be grouped into the above representations: "It is that statics is a form of energy that we can remove. Indeed, it is like an electric current, if you remove the battery, there is no more current." (E 7 ) "I think this is to prevent an explosion or ignition in the dryer due to the high amount of friction? Perhaps this paper absorbs the electricity produced like a battery?" (E 17 ) "I think the Bounty gives off a scent, which negates the static potency. Adding another derivative through the heat or introducing another material into that heat changes the current. A bit like when we wet a garment: the change in temperature moves away from the static." (E 27 ) Eight students (10%) also admitted having no idea why this is so: "As the name suggests, the paper will prevent the build-up of static electricity; however, I do not see how that works." (E 1 ) "I do not have the faintest idea." (E 34 )
Data Analysis: Question 3
As with the second question, students were asked how they explain that a balloon previously rubbed on hair can stick to a wall. The goal is to see whether they refer to the polarization phenomenon (see Table 1). Five conceptual representations emerged from the data analysis: 1. The balloon sticks to the wall because the statics (or static electricity) created by rubbing makes it stick; 2. The hair's charges were transferred to the balloon because of friction, which is why it sticks to the wall; 3. The balloon sticks to the wall because it becomes negatively charged due to friction against the hair, and it sticks to the positively charged wall; 4. Static energy is produced by rubbing the balloon against the hair and is transferred to the wall, which is why the balloon sticks to the wall; and 5. The balloon is charged (positively or negatively) because of friction against the hair; the opposite charge in the wall interacts with the balloon's charges, which explains how it becomes stuck to the wall.
The number of students identified with each representation, and some of their responses, are presented in Table 4, followed by our analyses. These explanations say nothing about how the static charge (or "statics") is created, nor about the relationship between the "static" and the balloon sticking to the wall. In this representation, the hair-balloon interaction (friction) is evoked, but the interaction between the balloon and the wall is not. As indicated, the balloon sticks to the wall because a charged balloon creates an electric field that induces polarization in the electrically neutral wall: the charges opposite to those of the balloon, which are closest to it, explain the attraction under the law of attraction between charges of contrary signs. The hair has positive and negative charges in equal numbers before being rubbed against the balloon. As a result of friction, some of the hair's negative charges are transferred to the balloon; the hair thus becomes positively charged and the balloon negatively charged (electron gain). The wall contains positive and negative charges. The positive wall charges attract the negative balloon charges, and the negative wall charges move further away from the negative charges of the balloon according to the law of charges. Representation 4 (15/80-19%): Static energy is produced by rubbing the balloon against the hair and is transferred to the wall, which is why the balloon sticks to the wall.
"The static energy created by the friction causes the party balloon to stick to the wall. Electrons and protons must move because of friction." (E 5 ) "When you rub them together, it creates static energy, and that energy then goes to the wall, which is why we get to stick the ball there." (E 14 ) Protons do not move due to friction; it is electrons that move (E 5 ). The object that loses electrons and, by definition, is positively charged. As for electrostatic energy is not transmitted to the wall (E 14 ). Electrostatic energy represents the work to transport a charge (for example, there is a transport of charges in a capacitor between the armature). "The balloon is charged negatively, the wall being neutral, the negative charges of the wall were repelling, and therefore the positive charges of the wall attract the negative charges of the balloon." (E 13 ) "When the balloon rubbed on the head, it will receive or lose charges (transfer of charges from the head to the ball or vice versa). The opposing charges on the wall will attract the ball." (E 45 ) "These are the positive ions that will then attract elements of the opposite charge to them or, on the contrary, repel elements of the same charge." (E 65 ) Only one student implicitly referred to the attraction between charge and electrical neutral objects (E 13 ). E 45 and E 65 implicitly refer to the law of attraction or repulsion between charged objects according to their charges. Thus, one does not specify that the wall is electrically neutral.
Data Analysis: Question 4
For this question, students were asked to explain how a plastic ruler that had previously been rubbed on a piece of woolen cloth attracts small pieces of paper at a distance. The goal is to see if they refer to the polarization phenomenon illustrated above (Table 1). Note that this question is related to the same phenomenon studied in the preceding question. Four conceptual representations emerged from the data analysis: 1. When the charged ruler (e.g., negatively charged) is brought close to the electrically neutral pieces of paper, the positive charges of the pieces of paper move closest to the ruler, which explains the observed attraction; 2. The friction of the ruler creates an electric field that attracts the pieces of paper from a distance; 3. The static energy acquired by the ruler due to its friction is transmitted to the small pieces of paper and attracts them; and 4. The friction of the ruler creates a magnetic field that attracts the pieces of paper from a distance. The number of students identified with each representation, and some of their responses, are presented in Table 5, followed by our analyses. Table 5. Student scientific knowledge about the attraction of pieces of paper and their analyses. Question 4: Why does a plastic ruler previously rubbed with wool attract some small pieces of paper without touching them? Representation 1 (5/80-6%): When the charged ruler (e.g., negatively charged) is brought close to the electrically neutral pieces of paper, the positive charges of the pieces of paper move closest to the ruler, which explains the observed attraction.
"The rubbing of the plastic ruler on the fabric has caused electrons to transfer from the material to the plastic ruler because plastic tends to attract electrons more than cloth. As a result, the ruler became negatively charged. Then the negatively charged ruler placed near the bits of paper will attract them because positive and negative charges attract. It is called the phenomenon of electrification by induction. A neutral object (pieces of paper) near a negatively or positively charged object (plastic ruler) generates the reorganization of the charges in the neutral object. The faces of the pieces of paper exposed to the negatively charged ruler will accumulate the negative charges transmitted to them by the plastic material." (E 28 ) This explanation is appropriate. The student refers adequately to the polarization phenomenon to justify the charged ruler's attraction of the pieces of paper, as synthesized in Table 1.
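To make the polarization argument quantitative, a minimal estimate that we add here for illustration (it is not part of the questionnaire or of Table 1) treats a scrap of paper as a small polarizable object of polarizability α placed at a distance r from the charged end of the ruler, modeled as a point charge q:
\[
E(r)=\frac{q}{4\pi\varepsilon_0 r^{2}},\qquad
p_{\mathrm{ind}}=\alpha\,E(r),\qquad
F(r)=p_{\mathrm{ind}}\left|\frac{dE}{dr}\right|=\frac{2\alpha q^{2}}{(4\pi\varepsilon_0)^{2}\,r^{5}} .
\]
The force is attractive regardless of the sign of q, which is why a neutral piece of paper (or a neutral wall) is always pulled toward a rubbed ruler or balloon.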
Representation 2 (12/80-15%): The friction of the ruler creates an electric field that attracts the pieces of paper from a distance. "The ruler emits an electrostatic field which attracts objects capable of reacting to this force, much like the magnetic field of a magnet." (E 6 ) "The friction caused an electrostatic field between the ruler and the papers." (E 39 ) "The ruler attracts the pieces of paper because of the electric field created by friction and the positive electric charges that are now on the ruler. As with a magnet and ferromagnetic objects, the ruler (magnet) will attract the pieces of paper (ferromagnetic objects)." (E 42 ) "By rubbing the plastic ruler with a piece of cloth, electrons are torn from this ruler, which becomes positive. Since paper is a neutral element, an electric field was created between the ruler and the pieces, which attracts them to the ruler." (E 78 ) When the ruler becomes charged due to friction, it creates an electric field. This field can exert a force (of attraction or repulsion) on a charged body at a distance; the region of space in which this force acts is called the electric field. Negative and positive charges attract each other through their electric fields. The explanations put forward by these students show that they have difficulty explaining the attraction of the pieces of paper using the notion of the electric field, as illustrated in Table 1.
Representation 3 (30/80-38%): The static energy in the ruler acquired due to its friction is transmitted to small pieces of paper and attracts them.
"Rubbing the ruler against the wool allowed a transfer of static energy to it. So, since these static energy was intense, it could attract the pieces of paper towards it." (E 3 ) "There is static energy in the ruler because of its friction with the fabric. This static energy was transferred to each of the small pieces of paper, which then stick to the rule." (E 14 ) "Plastic transmits its electrostatic. Paper is easily attracted to plastic." (E 49 ) "There is a transfer of electrostatic charges towards the neutral substance, here the paper." (E 51 ) Based on these explanations, it would seem to some students that the plastic ruler stores "static energy" due to its friction. Then, this energy was transferred to the pieces of paper hence their attraction. These explanations showed their conceptual difficulties in associating the electric charge and electric field concepts to explain the distant attraction of pieces of paper.
Representation 4 (16/80-20%): The friction of the ruler creates a magnetic field that attracts the pieces of paper from a distance.
"I believe that when you rub the plastic ruler with a piece of cloth, the molecules stir and become a temporary magnet, which explains the effect of attracting objects around it. It is why small objects such as small pieces of paper are attracted to the ruler." (E 16 ) "By rubbing the plastic ruler with the piece of tissue, we magnetize the ruler by stimulating its atoms by friction with the tissue and, as a result, we create an magnetic field which exerts an attraction on the pieces of paper." (E 30 ) "I guess it is with the Law of Attraction. The ruler has previously loaded thanks to the friction between it and the piece of fabric, which created static electricity. A magnetic field was created (like with magnets), so the ruler could attract the paper within a certain radius without contact between it and the pieces of paper." (E 71 ) The explanations show that the students confuse the phenomenon of friction electrification with that of friction magnetization. In this last case, when we rub a magnet against a ferromagnetic material (e.g., iron, nickel) in the North-South direction, these materials turn into temporary magnets. This transformation results from the alignment of the atoms composing these materials. Note that these atoms are small magnets called micro-magnets.
The answers of other students (17/80, 21%) were too disparate to classify under the given representations. By way of illustration, here are some of the answers put forward: "I think the plastic and the piece of cloth together contain electrical charges which, when rubbed together, attract electrons from the pieces of paper." (E 11 ) "I think it is the static between the plastic of the ruler and the piece of cloth. This causes charges." (E 29 ) "This is because the ruler is in an environment where there is much electricity in the air. Also, the temperature of the environment is conducive to activating the latent electricity on the ruler." (E 80 )
Data Analysis: Question 5
The students' explanations of the formation of lightning during a thunderstorm allowed us to identify four conceptual representations, presented in Table 6 along with the number of students identified with each one and some of their responses. Table 6. Student scientific knowledge about lightning and their analyses. "It is the friction between hot and cold air rubbing together. This is why we see flashes of heat in the summer. Lightning does not need rain to form." (E 2 ) "Lightning occurs when a mass of cold air and a mass of warm air meet in the air." (E 71 ) The answers given do not explain the formation of a lightning bolt. The air movements within the cloud create shocks between the particles: drops of water, ice crystals and sleet collide and rub against each other, which modifies their electrical charge. The largest particles (grains of sleet) become negatively charged when the temperature is below −15 °C, and the lighter particles become positively charged. Thus, inside a cloud, lightning is triggered when the difference in charges between the top and the base of the cloud is intense.
Representation 2 (18/80-22%): Lightning results from a collision or friction between (a) two clouds; (b) air and clouds; or (c) atmosphere and drops of water.
"The collision of two clouds together forms lightning." (E 39 ) "The clouds meet, and under the effect of friction, there is an electric shock." (E 79 ) Lightning does not result from the collision between two clouds or because of their meeting. As outlined, the lightning strikes inside a cloud between two clouds or between a charged cloud and the ground. "I think it is the positive and negative charges of the cloud. The positives are at the top of the cloud, and the negatives are at the bottom. This creates the lightning." (E 66 ) "Negative charges accumulate in a cloud, and when this force is great enough, these charges are attracted to those in the ground that are positive. The negative will join the positive, and that makes a flash." (E 67 ) These explanations are correct but incomplete as how to explain, for example, the movement of charges between the lower base of a negatively charged cloud and the positively charged surface of the ground, knowing that air is a good insulator. A slight discharge occurs at the base of the cloud, and it ionizes the air in its path. Thus, an electrical conductor channel is created, called a tracer. This slight discharge is followed by another, which will extend the tracer by a few dozen meters. The tracer is formed in zigzag and successive leaps with several branches in random directions but directed towards the ground.
"It is because of the transfer of electric charges from the cloud to the ground." (E 17 ) "It is a transfer of charge between the clouds and the ground. A big visible shock." (E 54 ) There is indeed a charge transfer. On the other hand, as above, no explanation is given about the charges' formation or how the lightning manifests. Furthermore, the term "shock" is not specified by E 54 .
For 20% of students, lightning formed due to charge transfer between a cloud and the ground. This representation is fair. However, there is a lack of explanations for this phenomenon, and it is the same for the other representations.
Twenty student answers (20/80, 25%) were too disparate to classify under the given representations. By way of illustration, here are some answers showing mistaken or confused reasoning.
"It is electricity that begins with atoms which are electrons that move in a given space like lightning. It is then an electric current between the cloud and the ground that produces light." (E 38 ) "Lightning is formed in clouds where hot particles contact cold ones. This collision causes positive and negative charges. These are then attracted to the Earth's charge. The lightning thus formed then touches the ground directly or with the aid of an electrically conductive object." (E 64 ) "The creation of electricity comes from the friction between various substances. In the sky, during a thunderstorm, the clouds, therefore the fine water droplets in suspension, are very agitated by the high temperature causing the thunderstorm. This increase in temperature creates friction between the droplets, and at a certain point, this stored energy is released, which forms lightning." (E 70 ) "Ions come together from the clouds to the earth, the heat and light of their movement towards the ground as well as their large number makes them visible." (E 77 )
Data Analysis: Question 6
The students' explanations of the shock that one can receive by touching a metal handle after having rubbed one's feet on a carpet in dry weather allowed us to identify three conceptual representations, presented in Table 7. Table 7. Student scientific knowledge about electrical shock. Question 6: How to explain that, by rubbing our feet on a carpet in dry weather, we can receive a shock when touching a metal doorknob or another person?
Representation 1 (24/80-30%): The charges produced by friction pass (or are transferred) from our body to the metal handle (electrons are transferred, which is felt as a shock).
"As we walk, our feet rub on the ground, especially on a carpet, and this creates an electric charge in us. This charge passes through our body towards the metal handle, capable of receiving an electric charge in contact with the handle by the hand. Thus, we undergo a slight electric shock." (E 25 ) The explanations given are correct. These students explain the phenomenon in two stages: 1-the body charges resulting from friction and discharges by contact with the handle.
Representation 2 (12/80-15%): The human body is, therefore, a source of electricity. "Our body also contains electricity, and when it touches metal, there is an exchange of charges. It is natural electricity." (E 32 ) This explanation is flawed because the charges transferred from the body to the handle result from friction.
Representation 3 (18/80-22%): Touching the handle gives us a shock because the handle is charging with electricity.
"Because in our body, there is static due to certain clothes that we wear (for example the polar) and that if the handle and charges positively, it will create a little shock." (E 15 ) "The handle is electrically charged; as we are conductors, we may receive a shock." (E 36 ) "The handle is loaded, and it is our body that allows the load to be released." (E 37 ) These explanations are erroneous compared to a scientific explanation as synthesized in Table 1. The charges are not transferred from the handle to the body, rather the reverse. The body charges are due to the friction of the feet against the carpet. However, it is fair to say that the body is a conductor.
Twenty-six student answers (26/80, 33%) were too disparate to classify under the given representations. By way of illustration, here are some answers showing mistaken or confused reasoning: "I believe this is due to our body temperature. The contact of the handle with the difference in temperature of our body creates a shock. Maybe we have too much static in us." (E 27 ) "Metal is a conductor of electricity, so when we are in motion, the static in the air comes into contact with our body, and when we touch the metal handle, we feel the shock." (E 29 )
Discussion
The data analyses showed that students' knowledge about electrostatic phenomena appears to be in epistemological rupture with modern scientific knowledge. The students' representations appear irreconcilable with those generally accepted in the scientific community. Table 8 lists the essential student knowledge encountered in the present study. Table 8. Summary of students' knowledge and the corresponding scientific knowledge.
Student knowledge | Scientific knowledge
There will be no static if we do not comb the hair in the same direction. | If we comb the hair in the same direction or in both directions, we will have static.
The friction of the comb against the hair causes a transfer of protons and neutrons. | Rubbing the comb against the hair transfers electrons, not protons and neutrons.
The friction with the comb and the hair causes a transfer of protons and neutrons. | The friction between the comb and the hair causes electron transfer (negative charges).
When we rub a plastic ruler, it becomes electrified. The electrons bound to the nuclei of the atoms which constitute it loosen from the nuclei and become free. | When we rub a plastic ruler, it becomes charged (positively or negatively). The electrons bound to the nuclei of the atoms which constitute it do not become free; they remain attached to their nuclei because the plastic is an insulator.
When there is friction between two objects, there is an exchange of electrons, and the two objects acquire opposite electric charges. | When there is friction between two objects, there is a transfer of electrons. Whichever object donates electrons becomes positively charged and whichever receives electrons becomes negatively charged.
When there is friction between two bodies, there is an exchange of electrons. | When there is friction between two bodies, electrons transfer from one body to the other.
When we rub hair with a woolen cloth, we charge electrons positively. | When we rub the hair with a woolen cloth, the wool charges positively and the hair negatively, and the electrons are negatively charged particles.
The friction between two objects (e.g., a piece of woolen fabric and a plastic ruler) creates an exchange of particles, electrons, and neutrons. | The friction between two objects (e.g., a piece of woolen fabric and a plastic ruler) creates a transfer of negative charges (electrons) from one to the other.
Thus, the scientific knowledge that students used to explain the electrification phenomena produced by friction is inadequate compared to the modern scientific knowledge related to these phenomena. Students used scientific terms such as electric field, magnetic field, static energy, static electricity, static charge, positive charge, negative charge, and charged object, relying on their logical thinking and everyday interactions with electrostatic phenomena. Further research is needed to obtain more detail on some of their answers.
Interestingly, these findings are consistent with many studies, including Suma et al. [6], which showed that high school students' scientific knowledge about static electricity concepts remains at odds with scientific knowledge despite teaching. According to these authors, for many students, "a balloon rubbed by silk will have the static electric charge that it can attract paper torn pieces. The term static electricity is identical to a static charge." Furthermore, "plastic rubbed by cloth will get additional electrons from the cloth, that the plastic charge becomes positive; the cloth will have the negative charge so that the cloth and the plastic will attract each other." The student knowledge identified here is highly relevant and appropriate for developing a two-tier test to rapidly diagnose pre-service elementary teachers' conceptions during their university education. Furthermore, one can design experiments that confront this student knowledge.
Conclusions and Didactical Impact
While being clear about the limits inherent in qualitative research, the results indicate that teachers in training hold misconceptions about static electricity. The data analyses indicate that they interpret the various phenomena related to static electricity by referring mainly to three representations. The first involves opposing charges whose interaction seems to justify the observed phenomenon. The second refers to an equilibrium mechanism that redistributes charges between objects. The third relies on an accumulation of charges and their possible flow under certain conditions. Other researchers have also identified these representations, as presented in the international literature review. So that these future teachers do not transmit their erroneous conceptions to their students, they should first discover the law of attraction and repulsion between rubbed bodies and between rubbed and non-rubbed bodies by carrying out different experiments. Unlike traditional teaching, they will analyze these experiments on a macroscopic scale, such as those performed early in the historical development of static electricity, before considering matter at the atomic scale. Then, they will study some historical considerations on the development of the particulate view of matter. Pre-service teachers will have to confront their conceptual representations with the scientific conceptions constructed throughout history in order to acquire scientific knowledge. In this regard, we must favor conceptual change strategies [23,24], which consist of comparing their misconceptions with the conceptions commonly accepted by scientists, as presented in the analysis of the responses put forward by the students concerning each of the six situations retained. | 9,944.2 | 2022-02-25T00:00:00.000 | [
"Education",
"Engineering",
"Physics"
] |
Pulse Sequences for Efficient Multi-Cycle Terahertz Generation in Periodically Poled Lithium Niobate
The use of laser pulse sequences to drive the cascaded difference frequency generation of high energy, high peak-power and multi-cycle terahertz pulses in cryogenically cooled periodically poled lithium niobate is proposed. Detailed simulations considering the coupled nonlinear interaction of terahertz and optical waves show that unprecedented optical-to-terahertz energy conversion efficiencies > 5% and peak electric fields of hundreds of megavolts per meter at terahertz pulse durations of hundreds of picoseconds can be achieved. The proposed methods are shown to circumvent laser-induced damage at Joule-level pumping by 1$\mu$m lasers to enable multi-cycle terahertz sources with pulse energies >> 10 millijoules. Various pulse sequence formats are proposed and analyzed. Numerical calculations for periodically poled structures accounting for cascaded difference frequency generation, self-phase-modulation, cascaded second harmonic generation and laser-induced damage are introduced. Unprecedented studies of the physics governing terahertz generation in this high conversion efficiency regime, limitations and practical considerations are discussed. Varying the poling period along the crystal length and further reduction of absorption are shown to lead to even higher energy conversion efficiencies >> 10%. An analytic formulation valid for arbitrary pulse formats and closed-form expressions for important cases are presented. Parameters optimizing conversion efficiency in the 0.1-1 THz range, the corresponding peak electric fields, crystal lengths and terahertz pulse properties are furnished.
Introduction
Multi-cycle or narrowband terahertz pulses in the frequency range of 0.1 to 1 THz have garnered interest as drivers of compact particle acceleration [1,2], coherent X-ray generation [3] and electron beam diagnostics. An impediment to the widespread deployment of these applications has been the inadequate development of accessible sources of narrowband terahertz radiation (hundreds of picoseconds (ps) pulse duration) with simultaneously high pulse energy (> 10 millijoules (mJ)) and peak powers (> 100 megavolts per meter (MV/m) peak electric fields).
Among existing methods, photoconductive switches can be efficient [4] but are relatively challenging to scale to high pulse energies; vacuum electronic devices [5] are limited in their frequency of operation and peak powers; and free electron lasers [6] are relatively expensive.
With the rapid scaling of laser pulse energies, laser driven approaches employing second order nonlinear processes such as difference frequency generation (DFG) or optical rectification (OR) are promising. However, scaling this approach to high terahertz pulse energies will require achieving high optical-to-terahertz energy conversion efficiencies (or conversion efficiency for short) as well as the development of high energy optical lasers. Here, we describe approaches to improve conversion efficiencies for multi-cycle terahertz generation, which are still relatively low at the sub-percent level. This problem must be distinguished from broadband or single-cycle source development where percent level conversion efficiencies have been demonstrated [7,8,9]. Correspondingly, we only survey work pertinent to multi-cycle or narrowband sources.
Works using quasi-phase-matched (QPM) Gallium Arsenide (GaAs) [10,11,12,13,14] with conversion efficiencies of 10^-4, Gallium Phosphide (GaP) [15] with conversion efficiencies of 10^-6 have been reported. However, these materials require pumping by 1.3 µm or longer wavelengths where it is still relatively challenging to develop laser technology with the requisite pump pulse energies. Organic materials have produced multi-cycle radiation with 10^-5 conversion efficiencies [16] but are also limited by the requirement of optical pumping at wavelengths of 1.3-2 µm, large absorption coefficients at lower terahertz frequencies and laser-induced damage.
Lithium niobate (LN) possesses high second order nonlinearity and is compatible with rapidly developing 1 µm [17] and established 800 nm laser technology. Furthermore, the large absorption in LN may be reduced by cryogenic cooling. In combination with feasible Joule-class 1 µm lasers and large cross-section crystals, cryogenically cooled LN may thus offer solutions to the problem of high energy multi-cycle terahertz generation.
In LN, multi-cycle THz generation by interfering chirped and delayed copies of a pulse with tilted-pulse-fronts (TPF) has been demonstrated [18,19]. However, TPFs have limitations induced by angular dispersion [20,21,22] which also affect other non-collinear approaches [23,24] . Collinear geometries based on periodically poled lithium niobate (PPLN) [25] could circumvent angular dispersion based limitations but are characterized by a large walk-off between the optical and terahertz radiation. As a result, even the highest conversion efficiency for multi-cycle generation in PPLN crystals at cryogenic temperatures was only 0.1% [26].
For terahertz generation in the frequency range of 0.1 to 1 THz, the conversion efficiency will be limited to the sub-percent level even if every optical photon (300 THz) is converted to a terahertz photon, due to the large quantum defect. To surpass this limitation, repeated energy down-conversion of optical photons or cascaded DFG [27,28,29,30] was proposed conceptually. However, methods to utilize this concept and produce large conversion efficiencies at the percent level or more for multi-cycle sources have not been demonstrated or even proposed.
Here, we study a family of approaches comprising a sequence of pulses that can achieve very high conversion efficiencies > 5% in cryogenically cooled PPLN crystals. Employing a sequence of uniformly spaced pulses in time circumvents walk-off and coherently boosts the generated terahertz field. In combination with the low loss of cryogenically cooled PPLN crystals, this results in ~cm interaction lengths. Additionally, the low dispersion of LN in the 1 µm region permits a large number of phase-matched, repeated energy down conversions of the optical pump photons. Finally, distributing the pump energy over a long sequence mitigates nonlinear phase accumulation and laser-induced damage. For Joule-level pumping, this reduces the required PPLN aperture area to 1-2 cm^2, which has already been demonstrated [31]. As a result, the set of proposed methods can produce the desired terahertz sources with pulse energies >> 10 mJ and peak electric fields of hundreds of MV/m. It is worth pointing out that recently we proposed another set of approaches employing terahertz driven cascaded parametric amplification which yield similar performance [32].
We discuss different approaches to realize a sequence of pulses in time, such as direct generation of a burst of pulses, beating multiple frequency lines and interfering chirped and delayed broadband pulses. Related pulse formats have been briefly explored in bulk ZnTe [28][33], with TPFs in bulk LN at room temperature [19] and in quasi-phase-matched GaAs [13,11]. However, the work presented here is significantly different for a few reasons. Firstly, a different system, i.e. cryogenically cooled PPLN, is studied. Secondly, here the emphasis is on feasible designs that enable dramatic cascaded DFG to achieve unprecedented conversion efficiencies >> 1%, particularly for high energy pumping. Furthermore, unprecedented studies of the physics in this high conversion efficiency regime, limiting factors and corresponding correction mechanisms are presented. Finally, in this work, the various pulse sequence formats are analyzed within a single coupled-wave model for the terahertz and optical envelopes, Eqs.(1)-(2). Here, A_op(ω, z) represents the optical envelope at angular frequency ω and k(ω) = n(ω)ω/c is the corresponding wave number. The exact dispersion of the optical refractive index over a large bandwidth is accounted for by n(ω). In Eq.(1), the first term on the right hand side (RHS) accounts for terahertz absorption. The second term on the RHS of Eq.(1) corresponds to the aggregate of all possible DFG processes between various spectral components of the optical field. The periodic inversion of the second order nonlinearity in PPLN crystals is explicitly accounted for by the spatially varying χ(2)(z), which directly considers all orders of forward propagating QPM waves in the PPLN crystal. The specified pulse formats will only permit phase-matching of harmonics of a single terahertz frequency and hence backward phase-matched waves need not be considered. Equation (2) represents the corresponding evolution of the optical envelope. The first term on the RHS of Eq.(2) represents the generation of optical radiation due to DFG between higher optical frequencies and the terahertz radiation. This term is responsible for cascading effects. The second term corresponds to sum-frequency generation (SFG) between lower optical frequency components and terahertz radiation and causes some blue shift of the optical spectrum. The final term corresponds to the cumulative SPM effect, which is a third order process. While second harmonic generation (SHG) is highly phase mismatched in lithium niobate (∆k ~ 10^6 m^-1), phase-mismatched cascaded SHG (not to be confused with cascaded DFG) can influence the optical pump radiation as an effective third order effect [34]. The explicit consideration of SHG would significantly increase computation time. However, it may be accounted for by a cascaded SHG approximation absorbed into the effective nonlinear refractive index n_2,eff (see Appendix B). The optical-to-terahertz energy conversion efficiency (or conversion efficiency) η is readily calculated by aggregating energy over all terahertz spectral components, as in Eq.(3). Here, F_pump is the input optical pump fluence.
The model thus presented considers only one spatial dimension since transverse beam effects are not expected to be significant for the large pump beam sizes pertinent to this work. The length scales of transverse effects estimated in Section 5 justify this assumption.
Analytic formulation for arbitrary pulse formats
In addition to the above depleted calculations, analytic expressions for arbitrary pulse formats using undepleted pump approximations are derived. While they do not account for cascading effects, they show good qualitative and quantitative agreement with full numerical simulations considering cascading effects. These expressions alleviate the computational challenge of exploring the problem over large parametric spaces, provide overview and consistency checks.
We assume an undepleted pump and retain only the first term in the Fourier series expansion of χ(2)(z) in the second term on the RHS of Eq.(1). Furthermore, the optical wave numbers k(ω), k(ω+Ω) are expanded via a Taylor series around the central angular frequency ω_0 of the pump to second order, to result in Eq.(4).
In Eq.(4), β'' is the group velocity dispersion due to material dispersion (GVD-MD). Therefore, only the second order term in dispersion is accounted for in the analytic formulation, contrary to Eqs.(1)-(2), where the complete dispersion of the optical refractive index is considered. Equation (4) is then integrated to yield the analytic expression for the terahertz envelope, Eq.(5) (see Appendix A for the derivation).
In Eq.(5), F{·} denotes the Fourier transform between the time (t) and angular frequency (ω) domains. Upon setting β'' = 0, we see that the generated terahertz envelope A_THz(Ω, z) is proportional to the Fourier transform of the optical intensity profile. Note that no assumption about the optical spectrum has been made in Eq.(5). Invoking this approximation leads the conversion efficiency η from Eq.(3) to increase only linearly with crystal length z in the absence of absorption, as expressed by Eq.(6). Thus, the walk-off limitation present in lithium niobate may be deduced from Eq.(6).
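As a quick numerical illustration of this undepleted, dispersion-free limit (a sketch we add here; the pulse parameters mirror those used later in the text and the overall scale is arbitrary), the terahertz spectral power can be approximated by the squared magnitude of the Fourier transform of the optical intensity profile:

```python
import numpy as np

# Time grid: 400 ps window with 10 fs resolution
dt = 10e-15
t = np.arange(-200e-12, 200e-12, dt)

# Burst of N Gaussian pulses (FWHM 400 fs) separated by 2 ps, matching the 0.5 THz design
N, fwhm, spacing = 32, 400e-15, 2e-12
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
centers = (np.arange(N) - (N - 1) / 2) * spacing
intensity = sum(np.exp(-(t - tc) ** 2 / (2 * sigma ** 2)) for tc in centers)

# In the undepleted, beta'' = 0 limit the terahertz spectral power follows
# |Fourier transform of the intensity profile|^2 (arbitrary units here).
spectrum = np.abs(np.fft.rfft(intensity)) ** 2
freq = np.fft.rfftfreq(t.size, dt)

# The strongest component above 0.1 THz sits at the inverse pulse spacing.
mask = freq > 0.1e12
peak = freq[mask][np.argmax(spectrum[mask])]
print(f"Dominant terahertz component: {peak / 1e12:.2f} THz")
```

Running this prints a dominant component at 0.50 THz, i.e. the inverse of the 2 ps pulse spacing, with harmonics strongly suppressed by the finite single-pulse duration.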
Terahertz generation with pulse sequences: A general discussion
In this paper, three pulse formats suitable for high energy terahertz generation using cryogenically cooled PPLNs are presented. They comprise (i) a burst of pulses of equal intensity, (ii) a set of quasi-continuous wave (quasi-CW) lines and (iii) interfering chirped and delayed broadband pulses. All of these formats constitute different forms of pulse sequences in time with common underlying physics. In this section, we provide an overview of the overarching mechanisms of terahertz generation using pulse sequences, their limitations and corresponding correction mechanisms.
Physical motivation: Alleviating walk-off and laser induced damage
Consider the case of a single optical pump pulse propagating through a PPLN crystal phase-matched for terahertz generation at frequency f_THz. The electric field of the optical pump pulse is aligned with the extraordinary axis of the crystal for maximum nonlinearity. The optical group refractive index is n_g = 2.21 at 1030 nm, while the terahertz phase refractive index is n(Ω) ~ 5. In the case of such a large index mismatch ∆n = n(Ω) - n_g, the optical pump pulse walks off very rapidly from the generated terahertz radiation. As a result, rather than adding on top of the already generated terahertz radiation, the optical pump merely adds another cycle to the back of it. This produces only a linear growth of conversion efficiency with length, in accordance with the discussion following Eq.(6). Secondly, notwithstanding SPM or self-focusing effects, the permissible peak intensity (and hence fluence) of the optical pump pulse is limited by laser-induced damage (or damage). Therefore, continuously increasing the intensity of the pulse is not a feasible solution to scaling conversion efficiencies. Equivalently, this damage limitation prohibits the use of very large pump energies for feasible crystal apertures. Now consider a sequence of two optical pump pulses instead of just one, incident on a cryogenically cooled PPLN with crystal temperature T = 100 K, as simulated in Fig.1a. The PPLN with period Λ = 237.74 µm has a spatially dependent second order nonlinearity χ(2)(z) (see Table 1 for the material parameters and Appendix A for details). Each pump pulse (blue) is Gaussian with a transform limited (TL) full-width at half-maximum (FWHM) duration τ = 400 fs, optimized for terahertz generation at 0.5 THz. When the second pulse is delayed by one period of the phase-matched terahertz wave (2 ps inside the crystal), it will coherently add to the terahertz electric field (red) generated by the first ultrafast pulse, as shown in Fig.1b-c. In general, a sequence of N pump pulses will serially boost the terahertz field generated by the first pump pulse in the sequence, thereby alleviating walk-off. The laser-induced damage threshold intensity I_d reduces as the inverse square root of the pulse duration [36]. Consequently, the damage threshold fluence F_d increases as the square root of the pulse duration. The lower limit of the reported damage threshold intensities for a pulse duration of 10 ns in lithium niobate is ~ 1 GW/cm^2 [37]. Therefore, we use the empirical expressions in Eq.(7) to determine I_d and F_d. Among various factors, the quality of anti-reflection coatings, crystal growth methods, and the choice and concentration of dopants all influence laser-induced damage. For instance, in preliminary tests on Magnesium Oxide doped (5% mol.) congruent lithium niobate crystals, we recorded damage threshold values four times larger than those in Eq.(7) [38].
For a sequence of N pulses, the relevant duration for damage is that of the entire sequence, since the time interval between pulses (~ps) is small compared to carrier decay time scales (> ns). Therefore, the permissible peak intensity scales as N^(-1/2), but due to the serial reinforcement by N pump pulses, the conversion efficiency increases as N × N^(-1/2), or as √N. Simultaneously, the damage fluence also scales as √N, which increases the energy loading capacity of the crystal. Thus, the approach of using a sequence of N pump pulses alleviates walk-off, permits larger pump fluences and results in higher conversion efficiencies.
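The scaling argument can be made concrete with a short back-of-the-envelope script (our sketch; the 10 ns reference intensity is the literature anchor quoted above, and the proportionality constants are illustrative only):

```python
import numpy as np

# Empirical damage scaling (Eq. 7): I_d ~ tau^(-1/2) and F_d = I_d * tau ~ tau^(1/2),
# anchored here to the ~1 GW/cm^2 value reported for 10 ns pulses quoted above.
I_REF, TAU_REF = 1e9, 10e-9   # W/cm^2 at 10 ns

def damage_intensity(tau):
    return I_REF * np.sqrt(TAU_REF / tau)

def damage_fluence(tau):
    return damage_intensity(tau) * tau   # J/cm^2

# For a burst of N pulses spaced by T0 = 2 ps, the relevant duration is ~ N * T0.
T0 = 2e-12
for N in (1, 4, 16, 64):
    I_d = damage_intensity(N * T0)     # permissible peak intensity ~ N^(-1/2)
    F_d = damage_fluence(N * T0)       # permissible fluence ~ N^(1/2)
    eta_rel = N * np.sqrt(1 / N)       # efficiency relative to N = 1 scales as N * N^(-1/2) = sqrt(N)
    print(f"N={N:2d}: I_d = {I_d / 1e9:5.1f} GW/cm^2, F_d = {F_d:5.2f} J/cm^2, "
          f"relative efficiency ~ {eta_rel:.1f}")
```

The printout shows the permissible per-pulse intensity dropping as N^(-1/2) while both the total fluence the crystal can accept and the relative conversion efficiency grow as √N.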
The scenario illustrated in Fig.1 addresses a case when every pump pulse in the sequence is of equal intensity. However, the above arguments are generally true even when this is not so. It is worth mentioning that the scaling of the damage threshold depends on the envelope of the pulse, which may result in quantitative alterations. The ramifications of pulse envelopes on damage shall be reported elsewhere. In Fig.2, we plot the conversion efficiency (η), according to Eq.(3), as a function of PPLN crystal length for a burst of N optical pump pulses of equal intensity. As in Section 3.1, the simulated pump pulses have Gaussian full-width at half-maximum (FWHM) durations τ = 400 fs, separated by 2 ps each, optimized for generation of 0.5 THz radiation. The remaining material parameters are presented in Table 1.
Terahertz generation mechanisms, limiting factors and correction mechanisms
In Fig.2a, undepleted calculations using Eq.(5) at a crystal temperature T = 300 K are presented (PPLN period Λ = 219 µm). From Fig.2a, we see that the optimal interaction lengths are < 0.5 cm due to large absorption (~7.5 cm^-1 at 0.5 THz), which limits the conversion efficiency to < 1%. In Fig.2b, undepleted calculations at a crystal temperature of 100 K (Λ = 238 µm) are presented. The interaction lengths increase to ~ 2 cm due to a reduction in absorption (~1.36 cm^-1 at 0.5 THz). The simultaneous increase in interaction length and reduction of absorption drastically enhances conversion efficiencies to 8%.
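The role of terahertz absorption in setting these interaction lengths can be seen with a deliberately simplified toy model (our sketch, not the full Eqs.(1)-(2)): a phase-matched, undepleted source term feeding a terahertz field that decays with the field absorption coefficient α/2:

```python
import numpy as np

def efficiency(z_cm, alpha_cm):
    """Toy model of phase-matched, undepleted growth with terahertz absorption:
    dA/dz = -(alpha/2) * A + g  =>  A(z) ~ 1 - exp(-alpha * z / 2); efficiency ~ |A|^2,
    normalized here to its asymptotic (z -> infinity) value."""
    return (1 - np.exp(-alpha_cm * z_cm / 2)) ** 2

# Power absorption coefficients at 0.5 THz quoted in the text, together with the
# interaction lengths mentioned alongside them.
for label, alpha, L in (("300 K", 7.5, 0.5), ("100 K", 1.36, 2.0)):
    print(f"{label}: 2/alpha = {2 / alpha:.2f} cm, "
          f"eta({L} cm) = {efficiency(L, alpha):.2f} of the saturated value")
```

The characteristic scale 2/α shrinks from ~1.5 cm at 100 K to ~0.3 cm at 300 K, which is consistent with the shorter useful interaction lengths and lower efficiencies of the room-temperature case.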
The values with larger N initially increase at a slower rate since the intensity of a single pulse in the sequence is lower due to damage limitations (Eq.7a). However, due to mitigation of walk-off, they grow monotonically over a longer length, eventually resulting in higher conversion efficiencies. The optimal number of pulses in the sequence will be proportional to the optimal interaction lengths. Consequently, there is not as much benefit in increasing the number of pump pulses in the sequence from N=16 to 32 at T=300K as it is for T=100K.
Conversion efficiencies > 0.1% are only possible when cascading is present, i.e. the optical pump undergoes repeated energy down conversions. Therefore, we evaluate the same cases via numerical solutions to Eqs.(1)-(2) in Fig.2c. These depleted pump calculations incorporating cascading effects agree quantitatively and qualitatively with those in Fig.2b during the initial increase of conversion efficiency. However, after reaching a smaller maximum value of ~ 5% at shorter interaction lengths of ~1.5 cm, a fall in conversion efficiency is seen, contrary to the saturation observed in the analytic undepleted calculations (Figs. 2a, 2b).
The fall in conversion efficiency is attributed to a change in the phase-matching condition caused by the spectral broadening and red-shift of the optical spectrum due to cascading. In Fig.2d, the optical spectrum as a function of crystal length is plotted for the N=32 case, which shows a steady red-shift due to repeated energy down conversion of the optical photons to terahertz photons. The corresponding terahertz spectrum (Fig.2e) is quasi-monochromatic at 0.5 THz, with a large suppression of higher harmonics. Fig.3. (a) Conversion efficiency as a function of crystal length with various effects selectively switched on or off for N=32 pulses. The drop in conversion efficiency observed is due to alteration of phase-matching conditions caused by spectral broadening and red-shift of the optical spectrum. Therefore, further abatement of losses and gradual variation of PPLN periods can yield conversion efficiencies >> 10% (blue, solid). (b) Optical spectrum for the case of a dispersion and absorption free medium (blue curve, Fig.3a) shows a dramatic red-shift > 100 THz.
In Fig.3a, we perform simulations to identify the reasons for the drop in conversion efficiency observed in Fig.2c. Equations (1)-(2) are solved with various effects switched on or off to identify their relative influence. As in Fig.2, a burst of N=32 pulses of equal intensity, with FWHM durations τ = 400 fs each, separated by 2 ps, is incident on a PPLN crystal phase-matched for 0.5 THz. When a medium with no dispersion (denoted by β''=0 in Fig.3; n(ω) is set to a constant value in Eqs.(1)-(2)), no terahertz absorption (α=0) and no SPM (n_2,eff = 0) is assumed, the conversion efficiency even increases to ~ 30% (blue, solid curve). The inclusion of SPM does little to change this result. This suggests that further reduction of absorption by cooling to 10 K (0.25 cm^-1 at 0.5 THz) and reduction of phase-slippage by gradually varying the PPLN period along the crystal length can lead to significantly higher conversion efficiencies. However, when dispersion was switched on but absorption was zero (red, solid curve), the conversion efficiency was dispersion limited. In this case, the change in the phase-matching condition caused by cascading leads to a saturation of conversion efficiency. If absorption was switched on but dispersion was switched off (blue, dashed curve), phase-slippage is minimal but the conversion efficiency is limited by absorption. Consequently, it saturates when the interaction length is on the order of the absorption length, similar to Fig.2b.
When both dispersion and absorption are switched on (red, dashed curve), the conversion efficiency drops rapidly after reaching a maximum value. This is because, after phase-matching is compromised and no net energy is transferred to the terahertz field, the inclusion of absorption can only lead to an effective dissipation of THz energy. Further inclusion of SPM does not alter the results significantly (green, dashed curve overlaps with red, dashed curve). In Fig.3b, the optical spectrum corresponding to the case without absorption or dispersion (blue, solid curve in Fig.3a) is depicted. A dramatic spectral red shift and broadening is evident. The simulations in Fig.3 thus show that the drop in conversion efficiency observed in Fig.2c is caused by the change in phase-matching condition produced by the shift of the optical spectrum due to cascading. However, further mitigation of losses and reduction of phase-slippage can lead to significantly higher conversion efficiencies.
Temporal properties of terahertz pulses
Here, we examine the temporal properties of terahertz pulses generated by a sequence of optical pump pulses. First, the case of a burst of pump pulses with equal intensity is considered. The understanding is readily extended to other pulse sequence formats. The burst has a temporal extent τ_p, which is the main quantity influencing the properties of the generated terahertz pulse. In Fig.4b, the relative time scales of the optical pump pulse and the generated terahertz pulse are depicted in the top panel. The bottom panel depicts the absorption experienced by the terahertz pulse over its duration. From Fig.4b, we see that (i) the first pump pulse in the sequence, entering the crystal at time t = 0, emerges after a time L n_g c^-1, and with it appears the first terahertz radiation generated near the crystal exit; (ii) the terahertz generated at the beginning of the crystal by the last pump pulse in the burst of duration τ_p will emerge after a time of approximately τ_p + L n(Ω) c^-1. Based on (i)-(ii), we understand the terahertz pulse schematic depicted in Fig.4b. It shows an initial increase in amplitude due to overlap and a subsequent drop due to absorption. In Fig.4c-d, we plot the temporal waveforms of the terahertz electric field obtained from numerical simulations. The waveforms are plotted at the location of maximum conversion efficiency.
In Fig.4c, the terahertz electric field is shown for a burst of N = 32, τ = 400 fs pulses separated by 2 ps each, incident on a PPLN crystal phase-matched for 0.5 THz (Fig.2c). The conversion efficiency is maximized at L = 1.35 cm (Fig.2c). The optical pump pulse sequence is depicted in Fig.4d. The inset in Fig.4d shows the sequence of pulses separated by 2 ps each, corresponding to generation of 0.5 THz radiation. In line with previous reasoning, the amplitude of the terahertz electric field increases up until the duration of the pump pulse has elapsed. Thereafter, a decline, initially due to increasing absorption, is observed. A third, steeper drop, corresponding to a decline in overlap, occurs later in the transient. In general, the terahertz electric field is proportional to the convolution of the pump intensity envelope with a sinusoidal function at the frequency of the phase-matched terahertz wave. However, the critical time instants described in the overlap process above are similar for the various pulse sequence formats.
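A rough time-domain picture of this convolution can be sketched numerically (our illustration: undepleted, dispersion-free; the kernel is a sinusoid truncated to the walk-off window ∆n L/c and damped by terahertz absorption, both of which are assumptions layered on top of the statement above):

```python
import numpy as np

c = 3e8
dt = 20e-15
t = np.arange(-20e-12, 280e-12, dt)

# Burst of N Gaussian pulses (FWHM 400 fs) separated by T0 = 2 ps, as in Fig.4c-d
N, fwhm, T0, f_thz = 32, 400e-15, 2e-12, 0.5e12
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
centers = np.arange(N) * T0
intensity = sum(np.exp(-(t - tc) ** 2 / (2 * sigma ** 2)) for tc in centers)

# Kernel: sinusoid at the phase-matched frequency, truncated to the walk-off window
# Delta_n * L / c and damped by terahertz field absorption over the corresponding
# propagation distance. Delta_n, L and alpha are assumed values consistent with the text.
delta_n, L, alpha = 2.75, 1.35e-2, 136.0   # index mismatch, crystal length (m), power absorption (1/m)
t_walkoff = delta_n * L / c                # ~ 124 ps
tk = np.arange(0, t_walkoff, dt)
kernel = np.sin(2 * np.pi * f_thz * tk) * np.exp(-alpha * (tk / t_walkoff) * L / 2)

e_thz = np.convolve(intensity, kernel)[: t.size] * dt   # arbitrary units
print(f"walk-off window ~ {t_walkoff * 1e12:.0f} ps, "
      f"transient extent ~ {(N * T0 + t_walkoff) * 1e12:.0f} ps, "
      f"peak near t ~ {t[np.argmax(np.abs(e_thz))] * 1e12:.0f} ps")
```

The resulting transient rises while pump pulses keep arriving, then decays, and its overall extent of roughly 190 ps is consistent with the ~200 ps pulse durations quoted later for 0.5 THz.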
Optimizing the number of pulses in a sequence
A recurring theme in the various pulse formats described will be the optimal number of pump pulses in the sequence, or the optimal value of the effective pulse duration τ_p (see Fig.4a). An intuitive estimate of this is obtained by examining the time scales of the optical pump pulse sequence and generated THz pulses as viewed at the end of a crystal of length L in Fig.4b. For most efficacious terahertz generation, the temporal overlap between optical and terahertz pulses must be maximized. This translates to the condition τ_p ≈ ∆n L_max/c. However, due to the absorption delineated in Fig.4b, a trade-off exists: terahertz radiation generated early in the sequence is attenuated, so the optimal τ_p can be somewhat shorter than this estimate. The maximum crystal length L = L_max itself is set by the strongest limitation to terahertz generation. This approximate condition on the value of τ_p agrees well with various simulations from the ensuing sections. Quantitative corrections to this picture appear upon considering details of the pump pulse envelope.
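As a quick numeric check of this estimate (our sketch; the index values are those quoted earlier in the text and L_max is taken as the ~2 cm absorption-limited length at 100 K):

```python
c = 3.0e8                 # speed of light (m/s)
n_g, n_thz = 2.21, 5.0    # optical group index at 1030 nm and THz phase index (values quoted in the text)
L_max = 2e-2              # ~2 cm absorption-limited interaction length at 100 K (text)
f_thz = 0.5e12

tau_p = (n_thz - n_g) * L_max / c   # optimal sequence duration ~ walk-off accumulated over L_max
N_opt = tau_p * f_thz               # pulses separated by one terahertz period T0 = 1/f_thz
print(f"tau_p ~ {tau_p * 1e12:.0f} ps, N_opt ~ {N_opt:.0f} pulses")
```

This gives τ_p of roughly 190 ps and an optimal pulse number of order 100 at 0.5 THz, in line with the saturation behaviour discussed in Section 4.1.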
Simulation methods and parameters
All ensuing calculations assume MgO doped (5% mol.) congruent lithium niobate as the nonlinear material. The crystal temperature is fixed at 100 K. Material and other parameters used in the calculations are tabulated below. The large cascading of the optical spectrum and long terahertz pulses necessitate simulations with large time-bandwidth products. The accuracy of the simulations was verified by examining the numerical conservation of total energy (Appendix B.2). A bandwidth of 300 THz, a frequency resolution of 0.5 GHz and a spatial resolution of 1.25 µm are necessary. For available large aperture PPLN lengths of ≤ 5 cm, this translates to computational domain sizes with dimensions of 600,000 × 40,000, or 24 billion points.
Table 1. Parameters used in the analytic and numerical calculations, including the second order susceptibility (T = 100 K and T = 300 K).
The direct optimization of the numerical solution to Eqs (1)-(2) is thus impractical. Fast analytic calculations based on Eq.(5) are therefore used to obtain optimal pump parameters for maximizing conversion efficiency. Full numerical solutions are then performed to obtain accurate values of (i) conversion efficiency, (ii) peak electric field, (iii) terahertz pulse durations and (iv) optimal crystal lengths corresponding to these optimal pump parameters.
Burst pulse format
We further analyze terahertz generation due to a burst of N pump pulses with equal intensity.
The electric field envelope of such a train is a sum of N identical pulses separated by the time period of the phase-matched terahertz wave (or integral multiples thereof), as was illustrated in Section 3.1. Substituting this envelope into Eq.(5) yields Eq.(8). Assuming this condition, it is evident from Eq.(8) that the conversion efficiency scales as √N, since the total pump fluence F_pump scales as √N due to damage limitations. From the first exponential term within brackets, we see how the GVD-MD term β'' results in the spreading of the pump pulse in time. The second term within brackets corresponds to terahertz absorption. For β'' ≠ 0, the conversion efficiency gradually vanishes due to the reduction in peak intensity caused by the spreading of the pulse in time, albeit rather slowly for the small β'' of lithium niobate (Table 1). For example, a τ = 400 fs pulse only suffers a pulse duration increase of less than 5 fs over 2 cm of propagation in lithium niobate. Furthermore, the pulse-spreading effect is less prevalent when the optical spectrum of the pulse is continuously increasing, as is the case with the numerical simulations based on Eqs.(1)-(2). In Fig.5a, we plot the conversion efficiency for various τ and numbers of pump pulses N in the sequence using Eq.(8) and the simulation parameters from Table 1. The solid lines in Fig.5a represent the case of N=2. For small pump pulse durations (i.e. large bandwidth), the dominant limitation is the low pump fluence due to damage limitations. On the other hand, for large τ, there is insufficient bandwidth for efficient terahertz generation. Consequently, an optimal τ is observed, which decreases for larger terahertz frequencies. The dashed lines represent the case of N=4 pulses. The expected √N scaling is evident upon comparing the two cases.
In Fig.5b, we plot the conversion efficiency as a function of N using Eq.(8) at optimal values of τ for various f_THz. After the initial √N scaling, at larger values of N a saturation of conversion efficiency is observed. This is understood by recognizing that the parameter N T_0, where T_0 is the period of the phase-matched terahertz wave, assumes the role of the effective pump pulse duration τ_p described in Sections 3.3-3.4.
Since it was deduced that the optimal τ_p ~ ∆n L_max/c, and since in the undepleted limit the conversion efficiency begins to saturate at interaction lengths of ~ 2 cm at 0.5 THz (see Fig.2b), this translates to a saturation of conversion efficiency for N ~ 100 pulses, which is well supported by the calculations in Fig.5b. The scaling is however not quadratic, as simplistic expectations from Eqs.(6),(8) might suggest, due to trade-offs incurred by terahertz absorption. The conversion efficiency may be expected to saturate at even higher terahertz frequencies due to the increasing values of absorption. In Fig.5c, the optimal conversion efficiencies obtained analytically (solid lines) and numerically (markers) are compared for various terahertz frequencies. For lower conversion efficiencies or smaller terahertz frequencies, almost exact quantitative agreement between analytic and numerical calculations is observed. However, at larger terahertz frequencies, greater deviation is observed due to a greater impact of the modification of phase-matching conditions by cascading effects. As described in Section 3, this results in a reduction of the optimal interaction lengths and conversion efficiencies. The properties of the emergent terahertz pulse are also altered. Despite these limitations, conversion efficiencies in excess of 5% at 0.5 THz are still predicted by numerical calculations, which is significantly higher in relation to existing approaches.
In Fig.5d, the peak free-space (blue) and focused (green) terahertz electric fields obtained from analytic calculations (solid lines) and numerical simulations (square markers) are presented. The focused terahertz fields are estimated by scaling the free-space values by the ratio of the input beam radius to the focused beam radius. The scaling reflects the reduction in terahertz beam radius from w_in to ~ c f_THz^-1 (or the terahertz wavelength, as may be obtained in linear accelerator structures [1]) upon focusing. Focused terahertz fields approaching the GV/m range are thus obtained using numerical calculations. Consistent with the case of conversion efficiencies in Fig.5c, the numerically calculated peak terahertz electric fields are smaller compared to the analytically obtained values at larger terahertz frequencies.
In Fig.5e, the numbers of terahertz field cycles corresponding to the e^-2 pulse durations (full width) are presented using analytic and numerical calculations. Naturally, lower terahertz frequencies contain a smaller number of cycles, due to their longer time period. At 0.5 THz, ~100 cycles or 200 ps terahertz pulses are predicted. While the total temporal extent of the terahertz pulses is altered due to the modification of interaction lengths in the case of numerical solutions, due to overlap effects the e^-2 terahertz pulse duration remains roughly similar to the analytic calculations.
In Fig.5f, the optimal interaction lengths obtained from numerical and analytic calculations are plotted. Quantitative agreement at 0.1 THz is obtained, which is consistent with trends from Figs.5c-5e. For larger terahertz frequencies, the modification of phase-matching conditions by cascading leads to shorter interaction lengths for the case of numerical calculations. The optimal crystal lengths obtained above are accessible with current PPLN technology.
Difference frequency generation with multiple lines
Beating two or more quasi-CW lines separated by the terahertz frequency f_THz, with each line corresponding to a pulse of transform limited duration of hundreds of picoseconds, also generates a sequence of pulses separated by a time interval 1/f_THz. Substituting this format into Eq.(5), we obtain Eq.(9) in the undepleted limit. In Fig.6a, the conversion efficiency is plotted as a function of the quasi-CW pulse duration τ. For example, at 0.5 THz, an optimal τ ~ 200 ps is obtained, which unsurprisingly resembles the optimal values of N T_0 for the burst pulse format (N ~ 100) in Section 4.1. In Fig.6b, the conversion efficiency for various f_THz as a function of the number of lines M is plotted using Eq.(9). As deduced in the preceding paragraph, the optimal number of lines is M ≤ 5 for all frequencies.
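To see why a few equally spaced lines already act like a pulse train, here is a small sketch (ours; unit line amplitudes, arbitrary overall scale) that synthesizes M quasi-CW lines separated by 0.5 THz and inspects the resulting intensity:

```python
import numpy as np

df, M = 0.5e12, 4          # line spacing (terahertz frequency) and number of quasi-CW lines
dt = 50e-15
t = np.arange(0, 20e-12, dt)

# Envelope of M equal-amplitude lines separated by df (optical carrier factored out)
env = sum(np.exp(2j * np.pi * m * df * t) for m in range(M))
intensity = np.abs(env) ** 2

# The intensity maxima repeat every 1/df = 2 ps: effectively a sequence of pulses
peaks = t[1:-1][(intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])]
print("intensity maxima (ps):", np.round(peaks * 1e12, 2))
```

The maxima are spaced by exactly 1/df = 2 ps, and increasing M narrows the individual sub-pulses while keeping the same repetition interval.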
In Fig.6c, the optimal conversion efficiencies for various f_THz are calculated analytically using Eq.(9) (solid lines) and numerically using Eqs.(1)-(2) (square markers). Conversion efficiencies of > 5% at 0.5 THz are predicted by the numerical calculations including cascading and SPM. As with the previous case, the deviation of numerical from analytic results for larger terahertz frequencies or conversion efficiencies is owed to the greater impact of the modification of phase-matching conditions by cascading effects. In Fig.6d, the free-space (blue) and focused (green) terahertz electric fields are shown, obtained analytically (solid lines) and numerically (square markers). Numerical calculations reveal peak focused electric fields of several hundred MV/m. The lower conversion efficiencies obtained numerically in relation to the analytic results yield proportionally smaller peak electric fields. The corresponding numbers of terahertz cycles obtained analytically and numerically are shown in Fig.6e. Due to overlap effects, as in the case of Fig.5e, the number of cycles remains roughly similar for analytic and numerical calculations. However, in contrast to Section 4.1, due to the Gaussian envelope of the sequence of sub-pulses, the generated terahertz pulse also has a roughly Gaussian shape.
As with the burst pulse format, the optimal interaction lengths obtained numerically are smaller than those obtained analytically for larger terahertz frequencies, due to a modification of phase-matching conditions by cascading effects, as evident in Fig. 6f.
Chirp and delay
Interfering copies of chirped broadband pulses delayed with respect to each other offers an alternate method of generating a sequence of pulses. Since the bandwidth of the pulse is spread over time, the approach is similar to DFG between two long narrowband pulses. In particular, the method is attractive for use with off-the-shelf high-energy broadband 800 nm titanium:sapphire (Ti:Sa) lasers. Here, we calculate the efficiency of the approach for cryogenically cooled PPLNs. We consider the optical laser field produced by a chirp and delay setup, i.e., two linearly chirped, mutually delayed copies of the same pulse. The total pump fluence F_pump is bound by damage according to Eq. (7b). The damage threshold duration is equal to that of two narrowband pulses, or the case of M = 2 from Section 4.2, and is given by τ_d = τ/2 (see Appendix B.3). Equation (10b) indicates that the generated terahertz radiation is centered about a terahertz frequency set by the chirp rate and the delay; thus, the center frequency can be tuned with an appropriate chirp rate b and delay ∆t. However, a large delay causes P_THz(Ω, ∆t) to drop due to insufficient overlap, as evident in the exponential dependence on ∆t² in Eq. (10b). For example, in Fig. 7b, the optimal chirped pulse duration is τ ~ 200 ps at 0.5 THz, which is similar to the values obtained for both the M quasi-CW line case (Fig. 6a) and the burst pulse case (Fig. 5b). The conversion efficiencies obtained with the chirp and delay approach are similar to, but lower than, those for the case of M = 2 quasi-CW lines (see Figs. 6b and 7b), owing to the overlap factor produced by the relative delay between optical pulses in Eq. (10b). In Fig. 7c, the optimal conversion efficiency as a function of terahertz frequency is plotted along with numerical calculations at the corresponding points. Consistent with the trends in Sections 4.1 and 4.2, the conversion efficiencies obtained via numerical calculations are smaller at larger terahertz frequencies due to a more adverse impact of the modification of phase-matching conditions by cascading effects. Despite these limitations, conversion efficiencies of ~2% are predicted at 0.5 THz. The peak terahertz fields are plotted in Fig. 7d and the corresponding numbers of cycles in Fig. 7e. As with the prior cases (Sections 4.1 and 4.2), the peak electric fields obtained numerically are smaller than the analytic values at larger terahertz frequencies due to the modification of phase-matching conditions by cascading effects. Focused field strengths of a few hundred MV/m are observed. The obtained number of terahertz field cycles is similar for numerical and analytic calculations, consistent with the trends from the preceding sections. The reduction of interaction lengths due to cascading effects at larger frequencies is evident in Fig. 7f, as in the other pulse sequence formats.
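The tunability of the beat frequency with chirp rate and delay can be checked with a short numerical sketch. This is purely illustrative: the parameters (800 nm carrier, 200 ps stretched pulses, 5 ps delay, and the chirp rate) are assumptions chosen for the example, not values from the paper. The sketch only verifies that interfering a chirped pulse with a delayed copy of itself produces an intensity beat at the intended terahertz frequency.

```python
import numpy as np

# Assumed, illustrative parameters (not taken from the paper).
c = 3e8
lam0 = 800e-9                              # Ti:Sa carrier wavelength
w0 = 2 * np.pi * c / lam0                  # carrier angular frequency
tau = 200e-12                              # stretched (chirped) pulse duration
dt_delay = 5e-12                           # relative delay between the two copies
f_thz_target = 0.5e12
b = 2 * np.pi * f_thz_target / dt_delay    # angular chirp rate giving a 0.5 THz beat

t = np.linspace(-tau, tau, 200_000)
envelope = np.exp(-t**2 / (tau / 2)**2)

def chirped_phase(t):
    return w0 * t + 0.5 * b * t**2

# Interfere the pulse with a delayed copy of itself (same envelope for simplicity).
field = envelope * np.exp(1j * chirped_phase(t)) + envelope * np.exp(1j * chirped_phase(t - dt_delay))
intensity = np.abs(field)**2

# The carrier cancels in the intensity; look for the beat, ignoring the slow envelope near DC.
spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
mask = freqs > 0.1e12
print(f"dominant beat frequency: {freqs[mask][spec[mask].argmax()] / 1e12:.2f} THz")  # ~0.5 THz
```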
Self-focusing
Since our studies were limited to one-dimensional spatial calculations, we estimate the self-focusing distance [42] as follows:

$$z_{sf} = w_{in}\sqrt{\frac{n}{n_{2,eff}\, I}} \qquad (11)$$

Here λ_0 is the central wavelength of the optical pump, n_2,eff is the effective nonlinear refractive index (Table 1), w_in is the input pump beam radius and I is the peak pump intensity. The peak intensity is calculated considering laser-induced damage limitations from Eq. (7a). In Fig. 8, we plot the damage-limited beam radii w_in, Rayleigh lengths and self-focusing lengths at 0.5 THz as a function of the effective pulse duration τ_d. It is seen that the Rayleigh lengths are significantly larger than the optimal interaction lengths required. For effective damage threshold durations τ_d = 100-500 ps, the maximum crystal apertures (2w_in × 2w_in) required would be < 1.2 cm², which can be realized with demonstrated technology [31].

Multi-photon absorption

For pump wavelengths in the range of ~1 µm, 4-photon absorption of the fundamental and 2-photon absorption of the second harmonic are possible. The former is expected to be weak even for intensities as large as 100 GW cm⁻² [21,43], while the latter is not expected to be significant since the SHG is not phase-matched. However, for 800 nm, 3-photon absorption of the fundamental and 2-photon absorption of second harmonic + fundamental are possible [44]. These multi-photon processes are stronger than those for the case of ~1 µm wavelengths. Therefore, for 800 nm, lower damage thresholds and conversion efficiencies are anticipated.
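Returning to the self-focusing estimate above, a minimal sketch of the comparison plotted in Fig. 8 is given below. It uses Eq. (11) as reconstructed here, and all numerical values (effective nonlinear index, damage-limited fluence, beam radius) are assumptions chosen only for illustration; they are not taken from the paper's Table 1 or Eq. (7a).

```python
import numpy as np

# Illustrative values only (assumptions, not the paper's numbers).
lam0 = 1030e-9          # pump wavelength [m]
n0 = 2.15               # refractive index of lithium niobate at 1030 nm
n2_eff = 1e-19          # effective nonlinear index [m^2/W] (assumed)
F_damage = 0.5e4        # damage-limited fluence [J/m^2], i.e. ~0.5 J/cm^2 (assumed)
w_in = 4e-3             # input beam radius [m] (assumed)

for tau_d in (100e-12, 250e-12, 500e-12):        # effective pulse durations
    I_peak = F_damage / tau_d                    # crude damage-limited peak intensity [W/m^2]
    z_R = np.pi * w_in**2 * n0 / lam0            # Rayleigh length inside the crystal
    z_sf = w_in * np.sqrt(n0 / (n2_eff * I_peak))  # Eq. (11) as reconstructed above
    print(f"tau_d={tau_d*1e12:.0f} ps: I={I_peak/1e13:.1f} GW/cm^2, "
          f"z_R={z_R:.0f} m, z_sf={z_sf*100:.0f} cm")
# Both lengths come out much larger than centimetre-scale interaction lengths,
# consistent with the qualitative statement in the text.
```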
Practical implementation of the pulse formats
To generate highly efficient terahertz fields, three pulse formats have been presented: a burst of pulses, multiple quasi-CW lines, and chirped and delayed broadband pulses. From a laser engineering point of view, the chirp and delay pulse format can be readily implemented with existing titanium:sapphire lasers. It involves stretching a pulse (which is typically done anyway during chirped pulse amplification), splitting it and delaying the copies. However, high pulse energies from titanium:sapphire lasers are typically limited to low repetition rates. For higher average powers, ytterbium-based lasers [17] are strong candidates. However, the limited bandwidth of these lasers makes chirp and delay approaches less attractive. Instead, a burst of pulses with a pulse separation of just a few picoseconds can be achieved by splitting and delaying one laser pulse. The multiple quasi-CW line approach relies on spatially superimposing multiple highly stable narrowband lasers. To achieve high pulse energies, modulators may be needed prior to pulse amplification. Such multi quasi-CW line sources may also be self-generated, starting with a single quasi-CW pump and a weak quasi-CW optical seed pulse. The terahertz radiation, initially generated by DFG between the pump and seed pulses, then rapidly causes the optical spectrum to cascade, thus generating a series of quasi-CW lines [32].
Conclusion
Highly efficient terahertz generation approaches employing cascaded difference frequency generation (DFG) of pulse sequences in periodically poled lithium niobate (PPLN) were proposed and analyzed over a large parameter space. Optimal energy conversion efficiencies > 5% and peak electric fields of several hundred MV/m for terahertz pulses with durations of hundreds of picoseconds are predicted using calculations that include pump depletion. At Joule-level pumping, this translates to terahertz pulse energies >> 10 mJ. Analytic formulations for calculating the conversion efficiencies of arbitrary pulse formats were presented, which included the effects of dispersion and absorption. These showed good qualitative and quantitative agreement with detailed numerical simulations including cascaded DFG, self-phase modulation, cascaded second harmonic generation and laser-induced damage, particularly at low conversion efficiencies and terahertz frequencies. The physics of terahertz generation using pulse sequences in PPLNs was discussed. At sufficiently high conversion efficiencies, significant cascading of the optical spectrum results in a change in the phase-matching condition, thereby limiting the conversion efficiency. Changing the PPLN period along the propagation direction and further mitigation of losses could lead to even higher conversion efficiencies >> 10%.

Dynamics and Control of Matter at the Atomic Scale" of the Deutsche Forschungsgemeinschaft.
A. Analytic formulation
The Fourier transform of the functions appearing in these integrals is evaluated using standard results [45].
Proceeding along similar lines for other integrals, Eq.(5) is obtained.
B. Simulation details
B.1 Material properties
Since the terahertz frequency is much smaller than the optical frequency, the relevant second-order nonlinear effect is the electro-optic effect. The electro-optic tensor element which maximizes terahertz generation in lithium niobate is r_33. This corresponds to extraordinarily polarized terahertz and optical fields. For lithium niobate, r_33 ~ 32 pm/V, from which the effective nonlinear coefficient for bulk lithium niobate at a pump wavelength λ_0 follows (sgn denoting the signum function). The terahertz absorption coefficients at 300 K and 100 K are obtained from [40] and shown in Fig. 9a. At 0.5 THz, the absorption coefficient at 300 K is 7.2 cm⁻¹. At 100 K, the value reduces to about 1.4 cm⁻¹, representing a fivefold decrease. An even further decrease in absorption may be obtained by cooling the crystal to 10 K, which yields an absorption coefficient of 0.25 cm⁻¹ at 0.5 THz [40]. The optical refractive index data are shown in Fig. 9b [39]. The group refractive index is n_g(λ_0) = 2.21 and is relatively insensitive to temperature change. Second harmonic generation is highly phase-mismatched in bulk lithium niobate. Corresponding to the fundamental at 1030 nm with refractive index 2.15, the second harmonic at 515 nm has a refractive index n_SHG = 2.24. The corresponding phase mismatch is ∆k = 1.02×10⁶ m⁻¹, which corresponds to a coherence length of π∆k⁻¹ ~ 3 µm, significantly smaller than the PPLN periods under consideration. However, phase-mismatched SHG can manifest itself as a third-order nonlinearity which can considerably alter the self-phase-modulation effect. These cascaded SHG effects have an effective third-order susceptibility given in [34], expressed in terms of the SHG coefficient and the m-th order phase mismatch. The value of d_SHG = 25 pm/V for lithium niobate. The total third-order susceptibility is given by Eq. (13b).
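The quoted phase mismatch and coherence length follow directly from the refractive indices given above. A two-line check is sketched below; because it uses the rounded indices 2.15 and 2.24, it only approximately reproduces the quoted 1.02×10⁶ m⁻¹.

```python
import numpy as np

lam0 = 1030e-9           # fundamental wavelength [m]
n_fund = 2.15            # refractive index at 1030 nm (rounded value quoted above)
n_shg = 2.24             # refractive index at 515 nm (rounded value quoted above)

# delta_k = k(2w) - 2 k(w) = (4*pi/lam0) * (n_shg - n_fund)
delta_k = 4 * np.pi / lam0 * (n_shg - n_fund)
coherence_length = np.pi / delta_k
print(f"delta_k ~ {delta_k:.2e} 1/m")                        # ~1.1e6 m^-1
print(f"coherence length ~ {coherence_length*1e6:.1f} um")   # ~3 um, far below the PPLN period
```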
Finite amplitude method in linear response TDDFT calculations
The finite amplitude method is a feasible and efficient method for the linear response calculation based on the time-dependent density functional theory. It was originally proposed as a method to calculate the strength functions. Recently, new techniques have been developed for computation of normal modes (eigenmodes) of the quasiparticle-random-phase approximation. Recent advances associated with the finite amplitude method are reviewed.
Introduction
The nuclear energy density functional methods are extensively utilized in studies of nuclear structure and reaction [1]. For the linear response calculations, the quasiparticle-random-phase approximation (QRPA), based on the time-dependent density-functional theory [2,3,4,5], is a standard theory [6]. However, for realistic energy functionals, it demands both complicated coding and large-scale computational resources. To resolve these issues, there have been several developments [7,8,9], including the finite amplitude method (FAM) [10].
The FAM allows us to avoid explicit evaluation of complex residual fields, which significantly reduces the necessary coding effort for realistic energy functionals. In addition, the use of an iterative solution of the FAM equations also reduces the computational task and the memory requirement. The FAM was first adopted for linear response calculations of the electric dipole mode, using the Skyrme energy functionals without the pairing correlations [11,12]. The method was soon extended to superfluid systems [13]. Then, the FAM was adopted in the code hfbtho to study monopole resonances in superfluid deformed nuclei with axial symmetry. Mario Stoitsov played a leading role in this project, which started during his visit to RIKEN in 2010 [14]. Later, there have been further developments of the FAM in relativistic and non-relativistic frameworks [15,16,17]. The FAM was originally proposed as a feasible method to calculate the response function of a given one-body operator. Recently, new methodologies based on the FAM have been developed for calculating discrete eigenstates. In this article, we review some of these recent developments.
The finite amplitude method
In this section, we briefly illustrate the essential idea of the FAM. Let us start from the Hartree-Fock-Bogoliubov (HFB) equations, $H\Phi_i = E_i \Phi_i$, where H, E_i > 0, and $\Phi_i = (U_i, V_i)^T$ are the HFB Hamiltonian including the particle-number cranking term (−µN), the quasiparticle energies, and the quasiparticle states associated with the HFB ground state, respectively.
The states $\bar\Phi_i = (V_i^*, U_i^*)^T$ are conjugate to $\Phi_i$, with negative quasiparticle energies [6]. They are orthonormalized as $\Phi_i^\dagger \Phi_j = \bar\Phi_i^\dagger \bar\Phi_j = \delta_{ij}$ and $\Phi_i^\dagger \bar\Phi_j = 0$. The generalized density matrix R [6] at the ground state can be written in the simple form $R = \sum_i \bar\Phi_i \bar\Phi_i^\dagger$. In the small-amplitude (linear) approximation, the density fluctuation δR can be decomposed into normal modes δR^{(n)}. In the linear order, the matrix elements of δR^{(n)} between $\Phi_i$ and $\bar\Phi_j$ are denoted by $X^{(n)}_{ij}$ and $Y^{(n)}_{ij}$. The other matrix elements of δR vanish in the quasiparticle basis. Note that δR^{(n)} are in general non-Hermitian; nevertheless δR(t) is Hermitian. The density fluctuation δR^{(n)} induces a residual field in the HFB Hamiltonian, H → H + δH^{(n)}, where δH^{(n)} depends linearly on δR^{(n)}. The essential idea of the FAM is that, instead of explicitly expanding δH^{(n)} to linear order in δR^{(n)}, we compute δH^{(n)} by a finite difference using a small parameter η, as $\delta H^{(n)} = \eta^{-1}\left( H[R + \eta\,\delta R^{(n)}] - H[R] \right)$. The Hamiltonian $H[R + \eta\,\delta R^{(n)}]$ should be evaluated at the corresponding density, constructed from quasiparticles modified by the amplitudes. For a given set of forward and backward amplitudes, {X^{(n)}, Y^{(n)}}, the calculation of Eq. (7) is relatively easy, because all we need to calculate are one-body quantities with two-quasiparticle indices, such as $\delta H^{(n)}_{ij}$. In contrast, the QRPA matrices have four-quasiparticle indices [6], A_{ij,kl} and B_{ij,kl}. The calculation of these matrix elements demands significant coding effort.
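The finite-difference idea can be illustrated with a toy density-dependent Hamiltonian. The sketch below is not a nuclear energy density functional; it only shows that a one-sided finite difference with a small parameter η reproduces the linearized induced field without ever coding the explicit derivative, which is the point of the FAM.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 6

# A toy density-dependent "Hamiltonian" h[rho] = h0 + g1*rho + g2*rho@rho,
# standing in for a realistic energy density functional (purely illustrative).
h0 = rng.standard_normal((dim, dim)); h0 = 0.5 * (h0 + h0.T)
g1, g2 = 0.3, 0.1

def hamiltonian(rho):
    return h0 + g1 * rho + g2 * rho @ rho

rho0 = rng.standard_normal((dim, dim)); rho0 = 0.5 * (rho0 + rho0.T)   # "ground-state" density
drho = rng.standard_normal((dim, dim)); drho = 0.5 * (drho + drho.T)   # density fluctuation

# FAM prescription: induced field from a finite difference with a small parameter eta
eta = 1e-6
dh_fam = (hamiltonian(rho0 + eta * drho) - hamiltonian(rho0)) / eta

# Exact linearization of this toy functional, for comparison only
dh_exact = g1 * drho + g2 * (rho0 @ drho + drho @ rho0)
print("max deviation:", np.abs(dh_fam - dh_exact).max())   # O(eta): no explicit derivative was coded
```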
Calculation of the QRPA matrices
In this section, we recapitulate the matrix-FAM (m-FAM) proposed in Ref. [15]. This provides a simple numerical method to calculate the QRPA matrices using the principle of the FAM, Eqs. (7), (8), and (9).
The QRPA equation in the matrix form is given by $\mathcal{H} Z_n = \omega_n \mathcal{N} Z_n$, where $\mathcal{H} = \begin{pmatrix} A & B \\ B^* & A^* \end{pmatrix}$, $\mathcal{N} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, and $Z_n = \begin{pmatrix} X^{(n)} \\ Y^{(n)} \end{pmatrix}$. The most demanding part is the calculation of the matrix elements A_{ij,kl} and B_{ij,kl}, which are formally defined by the derivative of the HFB Hamiltonian with respect to the density. However, since the FAM allows us to evaluate $\mathcal{H}Z$ for a given vector Z, these matrix elements can be provided by the FAM in the following way. Let us define the "forward" unit vector $\hat e_{kl}$ by X_{ij} = δ_{ik}δ_{jl} and Y_{ij} = 0, and the "backward" one $\tilde e_{kl}$ by Y_{ij} = δ_{ik}δ_{jl} and X_{ij} = 0. Then, it is trivial to see that the upper components of $(\mathcal{H}\hat e_{kl})_{ij}$ and $(\mathcal{H}\tilde e_{kl})_{ij}$ are identical to A_{ij,kl} and B_{ij,kl}, respectively.
On the other hand, using the FAM, the upper component $(\mathcal{H}\hat e_{kl})^{\rm up}_{ij}$ can be evaluated directly, where the quantity $\Phi_i^\dagger\, \delta H\, \Phi_j$ appearing on the right-hand side is computed according to Eq. (7). Here, R + ηδR is constructed from quasiparticles equal to the ground-state ones, $\Psi_i^\dagger = \Phi_i^\dagger$ and $\Psi'_i = \Phi_i$, except for i = k and l, for which they are modified as given by Eq. (14). Following the same procedure with the "backward" unit vector $\tilde e_{kl}$, we can calculate B_{ij,kl}. The numerical coding of the present method is extremely easy: all we need to calculate is the HFB Hamiltonian at the density R + ηδR defined by Eq. (14). After constructing the QRPA matrix, the QRPA normal modes of excitation are obtained by diagonalizing the QRPA matrix [6].
Iterative FAM with a contour integral in the complex frequency plane
In this section, we present a contour integral technique combined with the FAM developed in Ref. [17].
The iterative FAM (i-FAM) with a complex frequency ω provides a solution of the linear response equation $(\mathcal{H} - \omega\mathcal{N})\, Z(\omega) = -F$, where F is the one-body external field [10,13].
Here, we assume that $\hat F$ is a Hermitian operator. The QRPA response function is calculated from Z(ω) as in [10]. It can be expressed in terms of the QRPA normal modes, which shows that the transition strength for the n-th normal mode, $|\langle n|\hat F|0\rangle|^2$, is the residue at ω = ω_n. Therefore, if we choose a contour C_n in the complex ω-plane that encloses ω = ω_n > 0, the contour integral of the response function over C_n yields $|\langle n|\hat F|0\rangle|^2$. The corresponding QRPA normal modes, X^{(n)} and Y^{(n)}, are also given by contour integrals of the FAM amplitudes [17]. The contour integral method with the i-FAM is complementary to the m-FAM. In the present approach, the eigenmodes are obtained by solving the i-FAM equations for complex frequencies combined with the contour integral. In the m-FAM, we do not resort to an iterative algorithm to solve the FAM equations, but we need to diagonalize the QRPA matrix at the end. For a small model space, the m-FAM has a significant advantage; however, the computational task of the m-FAM strongly depends on the size D of the QRPA matrix, typically scaling as D³.
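A minimal numerical illustration of the contour-integral extraction is given below: a toy response function with two poles plays the role of the QRPA strength function, and a discretized contour around one pole returns the corresponding transition strength. The pole positions and strengths are arbitrary illustrative numbers, not results of an actual QRPA calculation.

```python
import numpy as np

# Toy response function with two QRPA-like poles; the strengths play the role of |<n|F|0>|^2.
omegas = np.array([8.0, 12.5])        # "QRPA eigenfrequencies" (arbitrary units)
strengths = np.array([0.7, 1.9])

def response(w):
    # sum_n [ B_n/(w - w_n) - B_n/(w + w_n) ]
    return np.sum(strengths / (w - omegas) - strengths / (w + omegas))

# Circular contour C_1 enclosing only omega_1 = 8.0
center, radius, npts = 8.0, 1.0, 64
thetas = 2 * np.pi * (np.arange(npts) + 0.5) / npts
zs = center + radius * np.exp(1j * thetas)
dz = 1j * radius * np.exp(1j * thetas) * (2 * np.pi / npts)

integral = sum(response(z) * d for z, d in zip(zs, dz)) / (2j * np.pi)
print(integral.real)   # ~0.7, the transition strength of the enclosed mode
```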
Minimax Bridgeness-Based Clustering for Hyperspectral Data
Hyperspectral (HS) imaging has been used extensively in remote sensing applications like agriculture, forestry, geology and marine science. HS pixel classification is an important task to help identify different classes of materials within a scene, such as different types of crops on a farm. However, this task is significantly hindered by the fact that HS pixels typically form high-dimensional clusters of arbitrary sizes and shapes in the feature space spanned by all spectral channels. This is even more of a challenge when ground truth data is difficult to obtain and when there is no reliable prior information about these clusters (e.g., number, typical shape, intrinsic dimensionality). In this letter, we present a new graph-based clustering approach for hyperspectral data mining that does not require ground truth data nor parameter tuning. It is based on the minimax distance, a measure of similarity between vertices on a graph. Using the silhouette index, we demonstrate that the minimax distance is more suitable to identify clusters in raw hyperspectral data than two other graph-based similarity measures: mutual proximity and shared nearest neighbours. We then introduce the minimax bridgeness-based clustering approach, and we demonstrate that it can discover clusters of interest in hyperspectral data better than comparable approaches.
Introduction
Hyperspectral (HS) imaging combines spectroscopy and imaging to capture the reflectance (or radiance) of surfaces within a scene. It is used in remote sensing applications to determine nutrient deficiency in crops [1], for vegetation mapping [2] or to model phytoplankton dynamics in the ocean [3]. The data produced by HS sensors is, however, very large (spatially) and HS pixels typically form high-dimensional clusters of arbitrary sizes and shapes in the feature space spanned by the spectral channels, which significantly hinders HS data mining. In remote sensing applications, ground truth data is often used for validation and information recovery in what is referred to as the supervised framework. However, obtaining such data is costly and time-demanding. "Ground truthing" involves surveying and processing campaigns with trained personnel, high-end equipment and laboratory tests. Furthermore, preparing ground truth data can be error-prone [4]. Semi-supervised or unsupervised methods are then highly desirable as they require little to no training data. In this paper, we focus on unsupervised classification, also known as clustering. Our objective is that each extracted cluster of pixels corresponds to a meaningful class of material within the scene. Furthermore, we also aim for the method to be easy to use and interpret, with no need for user input.
One of the main challenges in HS data clustering is that traditional distance metrics, such as the Euclidean distance, are meaningless in high-dimensional spaces [5,6]. Dimensionality reduction techniques, such as Principal/Independent Component Analysis [7][8][9], manifold learning [10] or band selection [11], have been used to address this issue, but they are also subject to the aforementioned problems, i.e., the need for ground truth data and the parameter tuning. In particular, the choice of an appropriate number of dimensions remains an open problem. However, we leave dimensionality reduction aspects outside the scope of this paper, as we focus solely on the clustering technique. In terms of comparing high-dimensional vectors, graph-based measures of similarity such as the number of shared nearest neighbours [12] and mutual proximity [13] have been proven useful to alleviate the curse of dimensionality [14]. The minimax distance [15] is another type of such measure, defined as the longest edge on a path between two distinct nodes of a graph, minimised over the range of all possible paths between the nodes. It is also the longest edge on the path between two nodes of a minimum spanning tree. The minimax distance is very powerful to separate clusters of arbitrary shape, size and dimensionality, especially if the data is relatively noise-free [16,17]. Furthermore, unlike the shared nearest neighbours, the minimax distance is completely parameter-free. In this paper, we propose a clustering method that harnesses the advantages of the minimax distance for HS data mining. The method creates a minimum spanning tree and determines which edges are most likely bridges between clusters based on four intuitive and parameter-free heuristics that we introduce. It then works as a sequential graph partitioning approach using the silhouette index as an objective function. In the next sections, we review related work, present the minimax distance and demonstrate its suitability for the task. We then introduce the proposed clustering approach, present results and conclude.
Related Work
Unsupervised classification of HS data has been an active research topic for several decades with many existing methods based on template-matching, spectral decomposition, density analysis or hierarchical representation [18]. Recent trends have also seen the emergence of deep models [19,20] that can model the distribution of information in high-dimensional data. However, these methods currently do not generalise nor scale well and require a tremendous amount of training data. The main drawback with template-matching methods (e.g., centroid-based, mixture of Gaussians) is that the shape of the clusters is presumed known a priori, i.e., they are parametric. For example, K-means and its variants are primarily meant to detect convex-shaped clusters, which occur only rarely in HS data. In spectral clustering [21,22], the eigenvectors of a Laplacian matrix representing the data are used to project the clusters in a subspace where they are more separable. Performance depends on how the Laplacian is defined and how clustering is eventually performed after projecting the data.
Clustering based on density analysis has received a lot of attention in the geoscience and remote sensing communities. The rationale is that pixels are sampled from a multivariate probability density function (PDF) of unknown shape and parameters, which can be estimated from the data. Most methods are based on a search for the local maxima of the PDF, also referred to as modes. Most existing mode-seeking approaches, such as Mean Shift [23,24] or Fast Density Peak Clustering [25], assume that each cluster contains a single dominant mode, although it may not always be true (consider for instance the case of a ring-shaped cluster). This motivates methods, such as those based on space partitioning [26], or support vector machines [27], that seek local minima of the density functions as they represent the boundaries between clusters. These methods are independent of cluster shape. DBSCAN (Density-based spatial clustering of applications with noise) [28] is another adaptive approach in which "core points" are selected to better separate clusters and facilitate their extraction. Unfortunately, the performance of DBSCAN and most density-based clustering methods depends heavily on parameter tuning, which generally comes down to finding the right amount of smoothing for density estimation. This problem also applies to k nearest neighbours (kNN)-based methods [29][30][31] as k is generally not an intuitive parameter for end-users. Automatic tuning with the elbow method [32,33] gives no theoretical guarantee of finding the optimal parameter. Adaptive density estimation (e.g., based on diffusion [34]) requires a lot of data to find significant patterns in high dimensions, and there is currently no consensus on whether they can outperform global methods on high-dimensional data [35].
Parameter-laden algorithms present an important pitfall as incorrect settings may prevent the retrieval of the true patterns and instead lead the algorithm to greatly overestimate the significance of less relevant ones [36]. This makes parameter setting a burden to the end-user. Existing parameter-free approaches are based on finding either a natural cutoff or peaks in some distribution [37,38] or rely on maximising a quality criterion [39] or combining strategies [40]. The recently proposed FINCH [41] is based on a simple and intuitive nearest-neighbour rule and hierarchically combines clusters based on the position of their respective means.
To address the problem of similarity in high dimensions, an effective approach consists of comparing points in terms of their neighbourhoods rather than only their coordinates. The number of shared nearest neighbours [14,42] is based on this principle, but it requires setting k, the neighbourhood size. A parameter-free alternative known as mutual proximity [13], has also shown promise for clustering but, as we will demonstrate, it leads to sub-optimal performance, and it comes at a high computational cost. The minimax distance is another type of graph-based similarity measure that shows promise for data classification [15]. In this paper, our main contributions are an evaluation of graph-based similarity measures based on the silhouette coefficient, as well as a new parameter-free clustering algorithm named MBC (Minimax Bridgeness-based Clustering).
Definition
Consider a connected, undirected and edge-weighted graph G(V, E) where V = {v_i | i = 1..N} is a set of N vertices and E = {e_i | i = 1..M} is a set of M edges, with N < M. Let P(v_i, v_j) denote the set of all loopless paths between vertices i and j in G. The largest of all edge weights along a given path p(v_i, v_j) is denoted w_max(p). Then, the path that satisfies

$$p^{mnx}(v_i, v_j) = \arg\min_{p \in P(v_i, v_j)} w_{max}(p)$$

is the minimax path between vertices i and j. The edge whose weight is $w_{max}(p^{mnx}(v_i, v_j))$ will be referred to as the minimax edge, and its weight is the minimax distance between vertices i and j.
The minimax distance matrix, similarly to the mutual proximity and shared nearest neighbours distance matrices, is computed from E, the weighted edges of the graph. These weights typically represent the Euclidean distance. They can be obtained from the data in O(n²), with n the number of data points, but parallel computing can be used to increase efficiency [43,44]. The minimax distance matrix is then computed based on a minimum spanning tree (MST) of G, which constitutes a subset of E. An MST is defined as a set of edges that connects all the data points, without any cycles and with the smallest total edge weight. Each edge of an MST is a minimax edge [15], and the distance matrix can then be obtained in linear time from the MST [45], which can itself be constructed in linear time [46].
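A minimal sketch of how the minimax distance matrix can be filled from an MST is shown below (Python with SciPy). It exploits the fact that, when MST edges are processed in increasing weight order, the edge that merges two components is the minimax edge for every pair of points across them. This is one possible implementation written for clarity rather than speed, not necessarily the one used by the authors.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist

def minimax_distances(X):
    """All-pairs minimax distances, filled by merging MST edges in increasing weight order."""
    D = squareform(pdist(X))                      # Euclidean weights of the complete graph
    mst = minimum_spanning_tree(D).tocoo()        # MST edges as (row, col, data)
    order = np.argsort(mst.data)
    n = len(X)
    comp = [{i} for i in range(n)]                # one component per point initially
    comp_id = np.arange(n)
    M = np.zeros((n, n))
    for k in order:
        i, j, w = mst.row[k], mst.col[k], mst.data[k]
        a, b = comp_id[i], comp_id[j]
        # The merging edge is the largest edge on the MST path between the two components,
        # i.e. the minimax distance for every cross pair.
        for p in comp[a]:
            for q in comp[b]:
                M[p, q] = M[q, p] = w
        comp[a] |= comp[b]
        for q in comp[b]:
            comp_id[q] = a
        comp[b] = set()
    return M

# Toy usage: two well-separated blobs in a 5-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 5)), rng.normal(3, 0.1, (20, 5))])
M = minimax_distances(X)
print(M[:20, :20].max(), M[:20, 20:].min())   # intra-cluster minimax << inter-cluster minimax
```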
Minimax Silhouette
An ideal measure of similarity should be such that it returns a small value if the data points belong to the same class and a large value otherwise. This property is captured by the silhouette index, which measures how similar a data point x is to its own cluster compared to other clusters:

$$s(x) = \frac{b(x) - a(x)}{\max\{a(x),\, b(x)\}}$$

where s(x) is the silhouette of x, a(x) is the average similarity between x and all other points in the same cluster, and b(x) is the smallest average similarity between x and the points of any other cluster. A small average silhouette indicates that clusters are poorly separated, and negative values mean that they overlap significantly. While the silhouette index is typically used to assess a clustering result, we use it here to compare similarity measures. We calculated the average silhouette of the ground truth of each of the data-sets described in Section 5.2, with the squared Euclidean norm, the spectral angle mapper and their respective minimax, mutual proximity and shared nearest neighbours versions.
In Table 1, we report the results obtained on the whole data-set (limited to 10,000 points; see Section 5.2) or only on a core set Γ50 consisting of the 50% of data points of highest density (see next paragraph). We then compared the regular measures, i.e., Squared Euclidean norm (SE) and Spectral Angle Mapper (SAM), to their corresponding shared nearest neighbour (SNN), mutual proximity and minimax distance counterparts. Note that SNN requires tuning the parameter k, i.e., the size of the set of NN within which the shared neighbours are searched for each pair of pixels. The results we report are the best from all values of k between 5 and N, with a step size of 5. These results indicate that even the largest silhouette (0.260 with Minimax SE on Γ50) is relatively low, confirming that HS clusters are generally not well separable in the feature space spanned by all spectral bands. Nevertheless, we can observe the clear inferiority of the regular measures, with a full-set silhouette of −0.428 at most. The minimax distance and mutual proximity surpass the shared nearest neighbours overall, except on the Massey data. Further, note the improvement obtained by discarding the 50% least dense (i.e., noisiest) pixels, particularly for the minimax distance, which gives the best results overall. This suggests that the minimax distance is better suited to extract classes of interest from HS data, especially on core sets. Using the core set Γ50 makes it possible to tackle two drawbacks of the minimax distance: sensitivity to noise and computational complexity. Core sets have previously been used for similar purposes [28,40,47]. To select a representative core set in a computationally efficient and parameter-free manner, we estimate the underlying probability density function of the data and discard the 50% least dense points. We compared several parameter-free and scalable density estimators in terms of their ability to produce core sets with compact and well-separated classes at several threshold values [48]. It is particularly noteworthy that diffusion-based approaches [34] scale poorly to high-dimensional data and suffer from the Hughes phenomenon: they require a tremendous amount of sampling points to correctly estimate the multivariate density. Instead, we found that convolving the data with an isotropic Gaussian kernel with a bandwidth equal to the average distance to a point's nearest neighbour allowed for a good balance between low computational footprint and usefulness in identifying core-sets. We used this approach to estimate density in the remainder of our experiments.
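One plausible reading of the core-set construction described above is sketched below: a Gaussian kernel density estimate whose bandwidth equals the average nearest-neighbour distance, followed by keeping the densest 50% of points (Γ50). The function name and the exact thresholding are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import squareform, pdist

def core_set(X, keep=0.5):
    """Indices of the `keep` fraction of densest points, using an isotropic Gaussian kernel
    whose bandwidth is the average distance to a point's nearest neighbour."""
    D = squareform(pdist(X))
    np.fill_diagonal(D, np.inf)
    h = D.min(axis=1).mean()                     # mean nearest-neighbour distance
    np.fill_diagonal(D, 0.0)
    density = np.exp(-(D / h) ** 2 / 2).sum(axis=1)
    threshold = np.quantile(density, 1 - keep)
    return np.where(density >= threshold)[0]     # e.g. Gamma_50 for keep=0.5

# Usage: idx = core_set(X); X_core = X[idx]
```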
Minimax Distance-Based kNN Clustering
In order to further demonstrate the usefulness of the minimax distance, we compared the performance of two state-of-the-art kNN-based algorithms: KNNCLUST [49] and GWENN [50]. Both rely on a measure of distance to perform clustering. We evaluated whether using the minimax distance can improve performance in terms of overall accuracy (OA) and cluster purity [51] on the five data-sets presented in Section 5.2. Figure 1 shows results obtained on one of these data-sets. Note that, for a given data point x in the feature space, all its neighbours that are at the same minimax distance to x are sorted based on regular distance. Results indicate that using the minimax distance can improve the performance of kNN-based clustering on HS data. On all five data-sets, the peak overall accuracy was improved by at least 3% and up to 8% (on the Kennedy Space Centre scene), and cluster purity was improved for most values of k. We also found that, as k increases, the number of clusters decreases. This should be considered when evaluating cluster purity, which is known to increase with the number of clusters. Purity is a more meaningful measure of clustering quality when this number is low, which is the case in our experiments.
Minimax Bridgeness-Based Clustering
As we demonstrated, the minimax distance is well suited for class separation in HS data. However, a large minimax distance informs only of the existence of a gap on a path between two points, but not so much of its significance. The latter can be established when the gap's existence is confirmed by multiple pairs of end-nodes. The number of these pairs gives what we refer to as the minimax bridgeness: β(E), where E is an edge in G. It can also be defined as the number of paths on which E is a minimax edge. A high bridgeness indicates a consensus between data points that the edge crosses a border between clusters. Figure 2 illustrates the concept of minimax bridgeness with a simple example.
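One way to compute the minimax bridgeness of every MST edge without enumerating all paths is sketched below: processing MST edges in increasing weight (as in Kruskal's algorithm), the edge that merges components of sizes s1 and s2 is the minimax edge for exactly s1·s2 pairs. Whether this matches the authors' implementation is not stated in the text; it is offered as one consistent realisation of the definition.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist

def minimax_bridgeness(X):
    """Return MST edges (sorted by weight) and, for each, the number of point pairs
    for which it is the minimax edge (its bridgeness)."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).tocoo()
    order = np.argsort(mst.data)
    parent = np.arange(len(X))
    size = np.ones(len(X), dtype=int)

    def find(a):                                  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges, beta = [], []
    for k in order:
        i, j, w = mst.row[k], mst.col[k], mst.data[k]
        ri, rj = find(i), find(j)
        beta.append(int(size[ri]) * int(size[rj]))   # pairs joined for the first time by this edge
        edges.append((int(i), int(j), float(w)))
        parent[rj] = ri
        size[ri] += size[rj]
    return edges, np.array(beta)
```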
The proposed clustering algorithm, hereby referred to as Minimax Bridgeness-based Clustering (MBC), works as a sequential graph partitioning with four main steps (see Figure 3):
• Step 1: Extract a minimum spanning tree (MST) of the data.
• Step 2: Discard edges that are unlikely to separate clusters.
• Step 3: Rank the remaining edges based on minimax bridgeness.
• Step 4: Remove the next highest-ranking edge that does not significantly decrease the minimax silhouette. Repeat until all edges have been assessed.

For Step 1, there are numerous algorithms to extract the MST of the data efficiently, but we consider these aspects outside the scope of this paper. Note that the MST is unique if all pairwise distances between pixels are different.
In Step 2, to find edges that are unlikely to be inter-cluster bridges (ICBs), we first identify four important properties of ICBs:
1. They are longer than most edges.
2. They have a higher minimax bridgeness than most edges.
3. They have a lower density point at their centre than most edges.
4. Neither of the vertices they connect is the other's first neighbour, nor do they have the same nearest neighbour (see [41]).
With regards to the first three properties listed above, we found that the distributions of edge length, minimax bridgeness and central point density typically have a single dominant mode each. Specifically, we use diffusion-based density estimation [34] to determine the peak locations and estimate these modes. We observed that, for at least one of these three attributes, ICBs always have a value significantly larger than the mode. We then established that edges that do not satisfy this property are unlikely to be ICBs. Note that we tested various density estimation methods, keeping in mind that we need a parameter-free and computationally efficient method to increase clustering performance, first by allowing for better identification of ICBs, and then by creating a representative subset Γ of the data. The latter can be used to create a pseudo-ground truth of the data and apply a more computationally-efficient clustering on the remaining data points. As previously mentioned, we found that an isotropic Gaussian kernel with a bandwidth equal to the average distance to any point's nearest neighbour gave the best results overall.
Finally, in Steps 3 and 4, we use the minimax silhouette as the guiding criterion. In an approach similar to that employed in GWENN [50], candidate edges are ranked in order of descending β and removed one by one in a sequential manner. The edge with the largest β is systematically removed. The minimax silhouette is then calculated after each edge removal. If the edge removed last caused the silhouette to decrease by more than half its value at the previous iteration, or to become negative, the edge is put back in the graph. This ad hoc rule is particularly efficient when the data contains well-separated clusters. As previously demonstrated, the minimax silhouette captures cluster separation better than other measures of distance on HS data. Our experiments showed that removing an edge that does not separate clusters tends to decrease the minimax silhouette by more than half its current value.
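Steps 3 and 4 can be summarised by the sketch below, which builds on the minimax-distance and bridgeness sketches given earlier and uses scikit-learn's silhouette with a precomputed minimax distance matrix. The acceptance rule is a paraphrase of the ad hoc rule described above; function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.metrics import silhouette_score

def mbc_partition(n, mst_edges, candidates, beta, minimax_D):
    """Sketch of Steps 3-4: remove candidate edges in decreasing order of bridgeness,
    keeping a removal only if it does not collapse the minimax silhouette.
    mst_edges: list of (i, j, w); candidates: indices into mst_edges; beta: their bridgeness."""
    removed, prev_sil = set(), None

    def labels(removed):
        keep = [(i, j) for k, (i, j, _) in enumerate(mst_edges) if k not in removed]
        rows = [i for i, _ in keep]; cols = [j for _, j in keep]
        graph = coo_matrix((np.ones(len(keep)), (rows, cols)), shape=(n, n))
        return connected_components(graph, directed=False)[1]

    order = sorted(candidates, key=lambda k: -beta[k])
    for rank, k in enumerate(order):
        trial = removed | {k}
        lab = labels(trial)
        sil = silhouette_score(minimax_D, lab, metric="precomputed")
        if rank == 0:                                  # the largest-beta edge is always removed
            removed, prev_sil = trial, sil
        elif sil > 0 and sil >= prev_sil / 2:          # silhouette neither negative nor halved
            removed, prev_sil = trial, sil
        # otherwise the edge is put back (simply not added to `removed`)
    return labels(removed)
```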
Alternative Clustering Methods
To validate the proposed method, we compared it to five state-of-the-art clustering methods, which are summarised in Table 2. Note that the latter two are parameter-free. Fuzzy C-means requires the number of clusters K and a (typically parametrised) de-fuzzification parameter m. For FDPC and GWENN, we manually tuned their respective parameter to obtain the right number of clusters. We also implemented a variant of MBC, which allows specifying the number of clusters K. We named it K-MBC. In this case, the iterative approach of step 4 stops when K − 1 edges have been removed or when all edges not discarded in step 2 have been examined, whichever comes first. Finally, we evaluated MBC on Γ 50 for each data-set to determine how it performs on a smaller and cleaner set of points.
Data
We used five HS images: Pavia University, Kennedy Space Centre, Salinas, Botswana and Massey University.
The KSC image was acquired over the Kennedy Space Centre, Florida (Lat-Long coordinates of scene centre: 28°37′50″ N, 80°46′45″ W), by the airborne AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) NASA instrument. It contains 224 spectral bands (400 to 2500 nm), but only 176 of them were kept after water absorption bands removal. The ground resolution is 18 m and the ground truth is made of 13 classes (Scrub, Willow swamp, CP hammock, CP/Oak, Slash pine, Oak/Broadleaf, Hardwood swamp, Graminoid marsh, Spartina marsh, Cattail marsh, Salt marsh, Mud flats, Water).
The Botswana HSI was acquired over the Okavango Delta, Botswana (Lat-Long coordinates of scene centre: 19°33′33″ S, 23°07′37″ E), by the Hyperion sensor onboard the EO-1 satellite (within the same spectral range as AVIRIS), with a ground resolution of 30 m. It has 1476 × 256 pixels and 145 bands (from 242 original bands) after water absorption bands removal. The ground truth map includes 14 classes (Water, Hippo grass, Floodplain Grasses 1, Floodplain Grasses 2, Reeds, Riparian, Firescar 2, Island interior, Acacia woodlands, Acacia shrublands, Acacia grasslands, Short mopane, Mixed mopane, Exposed soils).
The Massey University scene [53] was captured in Palmerston North, New Zealand (Lat-Long coordinates of scene centre: 40°23′17″ S, 175°37′07″ E), with an airborne AisaFENIX hyperspectral sensor covering visible to short-wave infrared (380 to 2500 nm). It has 339 bands (after removal of water absorption and noisy bands) and 9564 pixels with corresponding ground truth. The latter contains a total of 23 different land-cover classes, which include nine different types of roof tops, five vegetation types, water, soil, and two different shadows.
The characteristics of these five data sets are summarised in Table 3 and their colour composites are shown in Figure 4.
Table 3. Description of data-sets used: number of labelled pixels N, dimensionality D and number of classes.
The data was pre-processed to remove noisy spectral bands, burnt pixels and inconsistent ground truth data (see [4]). Furthermore, the number of pixels was capped to 10,000 per scene, mostly due to the memory complexity incurred by the computation of the similarity matrix. For each data-set with more than 10,000 pixels, we performed 20 random selections of 10,000 points (the same proportion of each class was kept after subsampling) and computed the average results.
Criteria
The criteria used to validate the proposed approach and compare it to other state-of-the-art methods are as follows:
• Overall Accuracy (OA) (see Table 4) is the proportion of pixels that are correctly classified. It is calculated as the sum of diagonal elements of the confusion matrix divided by the total number of pixels. Here, the confusion matrix is obtained owing to the Hungarian (Munkres) algorithm [54].
• Purity [51] (see Table 5) measures the tendency of clusters to contain a single class.
• Normalised Mutual Information (NMI) (see Table 6) measures the probabilistic resemblance between cluster and class labels.
• The average difference between the number of classes and the number of clusters (see Table 7).
In terms of computational cost, we found that it takes about 2 min to cluster 10,000 points with MBC with a Matlab implementation (hardware: x64-based Intel Core i7-8750H CPU @ 2.20 GHz, 32 GB of RAM), mostly spent on the creation of graph G from the data.
Results
From these results, we make the following observations:
• K-MBC significantly outperforms FCM, FDPC and GWENN on all data-sets except Botswana in terms of OA and NMI. It also yields the best pixel purity on all data-sets except Massey University, where it is surpassed only by GWENN by 0.08.
• The clustering maps in Figure 5 confirm that our approach performs particularly well at creating clusters of high purity.
• Although it is expected that MBC would give the best overall results when applied to the core set Γ50, or when the number of clusters is known, this does not appear as obvious from our results. However, we note that MBC tends to find too few clusters in the data (especially on the Botswana data-set, where it misses six classes of pixels).
• FINCH and LPC generally perform poorly overall and especially in terms of number of clusters. They each tend to detect too many clusters on all data-sets. Interestingly, they also yield poor pixel purity values. Usually, high purity is expected when the number of clusters is high. The fact that we observe the contrary indicates that these two methods are really not well suited to deal with raw hyperspectral data.
• On the core set Γ50, MBC performs very well and even finds the right number of clusters in the Pavia University and Salinas scenes. It over-estimates this number by one on the Massey University scene and under-estimates it by two on the KSC scene.
• The Botswana scene seems to be the most challenging for the proposed methods. The only case where the MBC clustering surpasses the benchmark with this scene is in terms of pixel purity. We noted that this particular scene contains classes of pixels that are among the least pure in the benchmark, with strong variations in reflectance spectra within classes. We hypothesise that this is the main reason for our method under-performing on this scene. On the other hand, it is well known that clustering is generally an ill-posed problem and that different applications may require different types of clustering approaches. Clearly, in this case, FDPC performs better, but it should be noted that it was tuned manually, unlike MBC.

Overall, these results indicate that MBC can handle high-dimensional data well and recognise meaningful classes of materials and surfaces in a scene. While it comes with a certain computational cost, it performs better than existing clustering methods, even without parameter tuning. These results also validate our hypothesis that the minimax distance is well suited for hyperspectral data exploration.
Conclusions
We introduced MBC, a parameter-free clustering algorithm based on a new graph-based measure of similarity, which we named minimax bridgeness. We demonstrated its ability to automatically discover clusters in high-dimensional remote sensing data without user input, as well as its superiority over other graph-based similarity measures such as the number of shared nearest neighbours. The proposed method has two drawbacks: its sensitivity to noise and its high computational cost. To address these, we used a simple approach based on density estimation to select a subset of relevant data points, a so-called core set, and applied MBC to it, which resulted in stronger performance across the board. Future work should focus on the selection of core sets for a more efficient exploitation of the minimax distance and minimax bridgeness. Also, the use of unsupervised dimensionality reduction techniques based, for instance, on band selection is expected to improve performance by reducing the curse of dimensionality. Lastly, future work should also focus on developing efficient methods to produce graph-based similarity matrices that scale to large data-sets.
Author Contributions: S.L.M. conceived of the paper, designed the experiments, generated the dataset, wrote the source code, performed the experiments, and wrote the paper. C.C. provided detailed advice during the writing process and revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Acknowledgments:
The authors wish to thank Reddy Pullanagari and Gabor Kereszturi for providing the Massey University data-set.
Conflicts of Interest:
The authors declare no conflict of interest.
Mechanical Properties and Microstructure of AZ31B Magnesium Alloy Processed by I-ECAP
Incremental equal channel angular pressing (I-ECAP) is a severe plastic deformation process used to refine grain size of metals, which allows processing very long billets. As described in the current article, an AZ31B magnesium alloy was processed for the first time by three different routes of I-ECAP, namely, A, BC, and C, at 523 K (250 °C). The structure of the material was homogenized and refined to ~5 microns of the average grain size, irrespective of the route used. Mechanical properties of the I-ECAPed samples in tension and compression were investigated. Strong influence of the processing route on yield and fracture behavior of the material was established. It was found that texture controls the mechanical properties of AZ31B magnesium alloy subjected to I-ECAP. SEM and OM techniques were used to obtain microstructural images of the I-ECAPed samples subjected to tension and compression. Increased ductility after I-ECAP was attributed to twinning suppression and facilitation of slip on basal plane. Shear bands were revealed in the samples processed by I-ECAP and subjected to tension. Tension–compression yield stress asymmetry in the samples tested along extrusion direction was suppressed in the material processed by routes BC and C. This effect was attributed to textural development and microstructural homogenization. Twinning activities in fine- and coarse-grained samples have also been studied.
MICHAL GZYL, ANDRZEJ ROSOCHOWSKI, RAPHAEL PESCI, LECH OLEJNIK, EVGENIA YAKUSHINA, and PAUL WOOD
I. INTRODUCTION
MAGNESIUM alloys are metallic materials that can be used for lightweight, energy-saving structures, and medical devices. Unfortunately, their low formability at room temperature makes their practical usage very challenging. Many thermomechanical processes were developed to enhance the ductility of magnesium alloys. The most promising method that can improve their strength and formability is severe plastic deformation (SPD). [1] In SPD processes, a large plastic strain is imposed on the material (without changing its dimensions) to refine its microstructure. SPD is commonly used to convert coarse-grained metallic materials into ultrafine-grained (UFG) materials with an average grain size of less than 1 micrometer.
Equal channel angular pressing (ECAP) is one of the most developed SPD processes. [2] In ECAP, a billet is pressed through a die with two channels; the deformation zone is localized at the channels' intersection; and the deformation mechanism is simple shear. The amount of plastic strain imposed on the material depends on the channel intersection angle and its outer curvature; it can be calculated theoretically. [3] Due to friction-related force limitation, only relatively short billets can be processed using this method. The solution of the problem can be incremental ECAP (I-ECAP) developed by Rosochowski and Olejnik. [4] In I-ECAP, the force needed to conduct the process is reduced significantly because of the separation of material feeding and deformation. This method can be used to produce UFG rods, plates [5] and sheets. [6] Due to low formability of magnesium alloys at room temperature, ECAP must be realized at elevated temperatures. A typical processing temperature varies from 448 K to 523 K (175°C to 250°C); the lower the temperature, the smaller the grain size obtained. The mean grain sizes reported for different conditions were 1 µm at 423 K (150°C), [7] 2 µm [8,9] at 473 K (200°C), and 6 µm at 523 K (250°C). [10,11] In order to enable further grain refinement, the processing temperature must be decreased along with a simultaneous decrease of strain rate. Different experimental plans were proposed to conduct ECAP at temperatures lower than 473 K (200°C) without crack occurrence. They are based on the idea of gradual temperature decrease with subsequent passes. [12][13][14] The smallest reported grain size obtained using this method was equal to 0.37 µm at the final temperature of 388 K (115°C). [13] The grain-refinement mechanism during the processing of magnesium alloys by ECAP is significantly different from that for fcc metals. [15] According to a model proposed by Figueiredo and Langdon [16] and experimental results, [8,9,17,18] the grain-refinement process is controlled by dynamic recrystallization (DRX). Observations of necklace-like and bimodal microstructures after ECAP of initially coarse-grained materials have confirmed this hypothesis. A critical grain size term was introduced in this model to account for a complete microstructural homogenization. [16] The occurrence of DRX also explains why grains with only limited size can be obtained at a given temperature. The studies on DRX in AZ31 magnesium alloy during compression testing show that the size of recrystallized grains is dependent on temperature and strain. However, the relation between inverse grain size and strain is not linear, and after reaching a maximum point, the size of recrystallized grains remains constant despite further deformation. [19,20] Similarly in ECAP, after reaching a minimum mean grain size for a given temperature, subsequent passes do not lead to further grain refinement but only to microstructural homogenization. The multiscale cellular automata finite element (CAFE) model of microstructural evolution during ECAP of magnesium alloys based on the DRX approach was developed by the authors of this article, and the numerical results were found to be in good agreement with experiments. [21] In contrast to fcc metals, the yield stress of magnesium alloys is usually decreased by ECAP processing. The inverse Hall-Petch effect, a negative slope of yield stress versus inverse mean grain size, was reported for ECAPed AZ61. [17] The decrease of the yield strength with subsequent passes of ECAP was also shown in other articles.
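For reference, the theoretical per-pass strain mentioned at the beginning of this section is commonly evaluated with the Iwahashi relation; whether Ref. [3] uses exactly this expression is an assumption here. For a 90 deg die with a sharp outer corner, it gives roughly 1.15 per pass, i.e. about 4.6 after four passes.

```python
import math

def ecap_strain_per_pass(phi_deg, psi_deg):
    """Equivalent plastic strain per ECAP pass from the commonly used Iwahashi relation:
    eps = (1/sqrt(3)) * [2*cot((phi+psi)/2) + psi*cosec((phi+psi)/2)],
    where phi is the channel intersection angle and psi the outer-corner angle."""
    phi, psi = math.radians(phi_deg), math.radians(psi_deg)
    half = (phi + psi) / 2
    return (2 / math.tan(half) + psi / math.sin(half)) / math.sqrt(3)

# 90 deg channels with a sharp outer corner (psi = 0), as assumed for illustration:
print(ecap_strain_per_pass(90, 0))   # ~1.15 per pass, ~4.6 after four passes
```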
[10,22] On the other hand, an increase of yield stress after four passes was reported in Reference 18. Significant improvement in the strength of AZ31 magnesium alloy (0.2 pct proof strength equal to 372 MPa) was reported only for grain sizes smaller than 1 µm. [13] Mukai et al. [23] reported similar properties (yield stress 400 MPa) for a mean grain size of 1.5 µm after multidirectional rolling. Ductility of magnesium alloys is usually enhanced by ECAP. Maximum elongation can exceed 40 pct after ECAP and subsequent annealing. [24,25] A strong texture is produced in magnesium alloys processed by ECAP. [13,26] Mukai et al. [24] suggested that the majority of grains change their orientation in such a way that their basal planes are coincident with a shear plane. This observation was later confirmed by textural measurements. [25] Textural development is believed to be responsible for the decrease of yield stress and enhancement of ductility in ECAPed Mg alloys despite the significant grain refinement. Nevertheless, Mukai et al. [24] also showed that ECAP followed by annealing can remarkably improve room temperature ductility of AZ31 magnesium alloy. The mean grain size was about 1 µm after ECAP and 15 µm after subsequent annealing. Agnew et al. [25] reported that annealing had not changed the texture developed during ECAP. Therefore, the enhancement of ductility was explained by recrystallization effects and grain growth during heat treatment.
The ECAP process can be realized using different processing routes. [15] Route A means that a billet is not rotated between subsequent passes of I-ECAP, while routes B_C and C indicate rotation by 90 and 180 deg, respectively. Experiments showed a strong dependence between the ECAP route and the generated texture. [13,26,27] After processing by routes B_C and C, basal planes are inclined at nearly 45 deg to the extrusion direction (ED). However, for route A the basal planes remain nearly parallel to ED. In Reference 13, the higher strength obtained after processing by route A was attributed to textural development, since the measured grain sizes were almost equal for different routes (1.8 ± 0.1 µm).
The results presented suggest that mechanical properties of magnesium alloys are dependent on the combination of grain size and texture.
The most favorable deformation mechanism occurring in magnesium alloys at room temperature is slip on the basal plane. Critical resolved shear stresses (CRSS) for pyramidal and prismatic slip are much higher, so that their contribution to plastic deformation at room temperature is small. [28][29][30][31] Activation of basal slip is dependent on grain orientation. Experiments [32] and theoretical calculations [33] showed that the Schmid factor for basal slip is increased when basal planes are inclined by ~45 deg to the deformation direction. Since only two independent slip systems can operate on the basal plane, twinning is necessary to accommodate plastic strain and fulfill the von Mises criterion. [34] Moreover, slip on the basal plane is impossible if the c-axis of the crystallographic cell is aligned exactly parallel to the tension/compression direction. In this case, twinning or nonbasal slip needs to be activated to accommodate plastic deformation. It was shown that twinning activity is dependent on grain size, and it is more likely to occur in coarse rather than fine grains. [35] A comprehensive study on the grain size influence on compressive deformation of AZ31 magnesium alloy has been conducted by Barnett et al. [36] The flow stress curves were obtained for hot-extruded AZ31 with grain sizes varying from 3 to 22 µm at temperatures ranging from ambient to 473 K (200°C). A distinctive concave shape of the flow stress curve, typical for twinning-dominated deformation, was reported at room temperature even for grains as small as 3 µm. Transition from twinning- to slip-dominated deformation was presented as a function of temperature and grain size. It was concluded that twinning in compression can be avoided in fine-grained AZ31 by temperature increase up to 423 K (150°C). However, recent reports [37,38] show that ECAP processing can lead to twinning suppression during compression along ED at room temperature. Therefore, a textural effect should also be taken into account when investigating the twinning behavior of magnesium alloys.
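The statement that basal slip is favoured when basal planes lie at about 45 deg to the loading axis follows from the Schmid factor m = cos φ · cos λ. The snippet below evaluates it for an idealised single-slip geometry (slip direction in the plane containing the loading axis and the plane normal); this is an illustrative simplification, not a full polycrystal texture calculation.

```python
import numpy as np

def schmid_factor(phi_deg, lam_deg):
    """m = cos(phi) * cos(lambda): phi is the angle between the loading axis and the
    slip-plane normal, lambda the angle between the loading axis and the slip direction."""
    return np.cos(np.radians(phi_deg)) * np.cos(np.radians(lam_deg))

# Idealised single-slip case: lambda = 90 - phi, so the factor peaks (0.5) at phi = 45 deg
# and vanishes when the plane is parallel or perpendicular to the loading axis.
for phi in (0, 15, 30, 45, 60, 75, 90):
    print(phi, round(schmid_factor(phi, 90 - phi), 3))
```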
In the current article, the effects of processing route of I-ECAP on mechanical properties, grain size, and texture of an AZ31B magnesium alloy are investigated. Microstructural characterizations of the tensioned and compressed coarse-and fine-grained samples are performed to determine the dominating deformation mechanism. The tension-compression yield stress anisotropies of coarse-and fine-grained materials are also studied. Moreover, the influence of texture on the twinning activity in fine-grained samples obtained by different routes of I-ECAP and subjected to compression is shown.
A. Material
Bars with a square cross section of 10 × 10 mm² and length equal to 120 mm were obtained from commercially extruded AZ31B rods (16 mm in diameter). As seen in Figure 1, the as-received microstructure was very heterogeneous, with coarse grains (~83 μm) surrounded by colonies of small grains (~12 μm). Small grains were arranged in long strings aligned along ED, which were separated by coarse-grain regions. The occurrence of small-grain colonies was attributed to DRX during hot extrusion. Twins were hardly observed in the as-received material.
B. Experimental Procedure
Initially, the mechanical properties and microstructure of the as-received material were examined. Then, bars machined from hot-extruded AZ31B were subjected to I-ECAP at 523 K (250°C) using different routes, namely A, B_C, and C. Images of the fine-grained microstructures after 4 passes of I-ECAP were taken. Then, samples machined from the processed material were subjected to tension and compression tests. Microstructural images were taken in the region of uniform deformation in the samples tested to fracture in tension. Compression tests were stopped after reaching a true strain of 0.1 ± 0.01. Specimens for the microstructural characterization were cut out from the middle of the deformed cylinders. Samples with different processing histories are referred to in the text as AR (as-received hot-extruded), A (processed by route A), B (processed by route B_C), and C (processed by route C).
C. Details of I-ECAP
A double-billet variant of I-ECAP, with a 90 deg angle between channels, was realized using a 1 MN hydraulic servo press. A schematic illustration of the process is shown in Figure 2. The billets were fed using a motor-driven screw jack, the action of which was synchronized with the reciprocating movement of the punch. The feeding stroke was equal to 0.2 mm. The punch movement followed an externally generated sine waveform with a frequency of 0.5 Hz and a peak-to-peak amplitude equal to 2 mm.
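A minimal sketch of the punch and feed kinematics described above, using only the quoted values (0.5 Hz sine, 2 mm peak-to-peak, 0.2 mm feed per cycle); the actual synchronization of the rig is assumed, not reproduced.

```python
import numpy as np

freq_hz = 0.5            # punch frequency quoted above
amp_pp_mm = 2.0          # peak-to-peak punch amplitude
feed_mm = 0.2            # billet feed per punch cycle

t = np.linspace(0.0, 10.0, 1001)                  # 10 s covers 5 punch cycles
punch = 0.5 * amp_pp_mm * np.sin(2 * np.pi * freq_hz * t)
completed_cycles = np.floor(t * freq_hz)
billet_advance = feed_mm * completed_cycles       # stepwise feed between strokes

print(f"punch stroke range: {punch.min():.2f} .. {punch.max():.2f} mm")
print(f"billet advance after 10 s: {billet_advance[-1]:.1f} mm")
```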
A composite coating was applied on the billet surface before lubrication with conventional molybdenum disulfide (MoS2). The process is classified as a nonchromate conversion coating with precipitation of a ceramic phase by a sol-gel mechanism. [39] The billet preparation procedure included: (1) surface cleaning; (2) pickling; (3) application of the composite coating; and (4) lubrication with MoS2 grease. The composite coating layer improved the grease adhesion significantly.
Heating of billets was realized by holding them for 15 minutes before processing in the die preheated to 523 K (250°C). The die temperature during processing was kept constant within ±2 K, based on the readings obtained from a thermocouple located near the deformation zone. Each billet was subjected to four passes of I-ECAP with rotation about its axis adequate for a particular route.
D. Mechanical Testing and Microstructural Characterization
Mechanical properties of the as-received and I-ECAPed materials were investigated using an Instron 5969 testing machine with a maximum load capacity of 50 kN. Tension and compression tests were carried out at room temperature with an initial strain rate equal to 1 × 10⁻³ s⁻¹. All specimens were cut out along ED. Flat tensile specimens, with a thickness equal to 2 mm and gauge section dimensions of 2.5 mm × 14 mm, were machined using wire electrical discharge machining. The height and diameter of the compression specimens were 8 and 7 mm, respectively. Microstructural characterizations were performed for the as-received rod, the I-ECAPed bars, and the tensile and compression specimens after testing. Samples from the as-received rod were cut out along ED. Billets after I-ECAP were characterized on planes perpendicular and parallel to ED. No significant difference in grain size between the two examined planes was observed. Tensile samples were analyzed on a plane parallel to the tested specimen's direction. Compressive specimens were examined on a plane perpendicular to the compression direction. Samples were ground, polished, and etched after mechanical testing.
An Olympus GX51 optical microscope and a Quanta FEG 250 scanning electron microscope were used to perform the microstructural characterization. The sample preparation procedure included grinding, polishing, and etching. Samples were ground using SiC paper P600 and P1200. Then, they were mechanically polished using polycrystalline suspensions with particle sizes of 9, 3, and 1 μm. Colloidal silica was used for final polishing. After polishing, specimens were etched using acetic picral to reveal twins and grain boundaries. Mean grain size was measured by the linear intercept method using Olympus analysis software. Representative images were chosen for grain size measurement; at least 500 grains were measured for each specimen.
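The linear intercept measurement mentioned above amounts to dividing the test-line length by the number of grain-boundary crossings. A small sketch follows; the line lengths and crossing counts are illustrative placeholders, not measured values.

```python
import numpy as np

def mean_intercept_length(line_length_um, boundary_crossings):
    """Mean intercept length = test-line length / number of boundary crossings."""
    return line_length_um / boundary_crossings

# (line length in micrometres, number of grain-boundary crossings) per test line
lines = [(500.0, 92), (500.0, 88), (500.0, 95)]
sizes = [mean_intercept_length(L, n) for L, n in lines]
print(f"mean linear-intercept grain size: {np.mean(sizes):.2f} um")
```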
The EBSD images of samples processed by different routes of I-ECAP were obtained using a Jeol 7001FLV scanning electron microscope equipped with a Nordlys II EBSD detector from Oxford Instruments. After the grinding and polishing steps, samples were additionally chemically etched using 10 pct nital and polished with 0.02 μm alumina. The analyzed zones were about 400 μm × 270 μm with a 0.2 μm step size to be representative of the whole material; the indexation fraction was over 90 pct.
A. Mechanical Properties
The influence of the I-ECAP processing route on mechanical behavior is shown in Figure 3. True tensile stress-strain curves are plotted in Figure 3(a). The strain at failure was increased from the initial value of 0.09 reported for the hot-extruded rod, irrespective of the route used. However, the enhancement of ductility for the sample A was less effective than for B and C. Tensile strains at failure increased by 55, 156, and 155 pct for routes A, B_C, and C, respectively. Processing by routes B_C and C had the same effect on the enhancement of ductility at room temperature. Using different processing routes had only a limited effect on the tensile strength of the material. True tensile strength was equal to 290 ± 5 MPa for the samples machined from the as-received rod and the I-ECAPed billets processed by routes A and B_C. However, tensile strength decreased to 262 MPa for route C. Yield stress, defined as the 0.2 pct offset proof stress, was decreased from the initial 220 to 150 MPa using route A. The increases in tensile ductility reported for the samples B and C came along with yield stress decreases to 95 and 60 MPa, respectively.
Results of the room-temperature compression tests are shown in Figure 3(b). It is apparent from the presented plots that the shapes of the flow stress curves are strongly dependent on the processing route. A concave shape of the flow stress curve is observed for the extruded rod as well as the billet processed by route A. Yield stress and ductility of those samples are also very similar; however, only tensile strength is decreased by I-ECAP processing using route A. Similar to the tensile test results, yield stresses were decreased for routes B_C and C, but the difference was within 50 MPa. Remarkable enhancement of ductility was reported for those routes, and the compressive true strain at failure was increased from ~0.13 to 0.2 ± 0.02. Moreover, a convex shape of the flow stress curves and a decreased strain hardening rate, in comparison with the samples AR and A, were observed for the samples B and C. Tension and compression flow stress curves for each processing route are plotted in Figure 4 to display the tension-compression asymmetry. The yield stress asymmetry, A_YS, defined as the ratio of tensile yield stress to compressive yield stress, was calculated for each sample. The largest asymmetry is reported for the sample AR, A_YS = 2.25. Yield stress asymmetry is decreased by I-ECAP to 1.5 and 1.25 for the samples A and B, respectively. Processing by route C led to suppression of the tension-compression yield stress anisotropy (A_YS = 1.01). It is apparent from Figure 4 that the flow behavior in tension is significantly different from that in compression for the samples AR and A. The higher strain hardening rates observed in compression tests could be attributed to the occurrence of {10-12} tension twins. [40] The strain hardening rate in compression decreased in the samples B and C, which makes their compressive flow behavior similar to the tensile one.
Tensile and compressive samples after testing are shown in Figure 5. The samples obtained from the as-received material and from a billet processed by route C of I-ECAP (samples from billets processed by other routes behaved in a similar way) are displayed to compare the different phenomena observed during the deformation of coarse- and fine-grained magnesium alloys. It is clear from Figures 5(a) and (b) that the fracture mechanisms of the samples AR and C are different. The failure of the sample AR can be described as abrupt, only very moderately ductile (almost brittle), across a very short neck. The I-ECAPed samples exhibited uniform deformation without evident signs of necking. Moreover, each fine-grained sample fractured along a shear plane coinciding with the orientation of the I-ECAP shear plane. AR and C specimens compressed to a strain of 0.1 are shown in Figure 5(c). The compression flow behaviors of the two samples are quite different. The anisotropic flow observed for the sample C is not present in the sample AR. Material flow prevails in a direction perpendicular to ED; however, it is not clear whether it is in the transverse or normal direction. The ratio of the largest to the smallest radius, A_R, was introduced as a measure of anisotropy. Anisotropic flow was not observed for the sample AR, but the A_R parameter was equal to 1.08, 1.1, and 1.13 for the samples A, B, and C, respectively.
B. Microstructure
Remarkable homogenization and grain refinement were achieved after four passes of I-ECAP, irrespective of the route used. The measured mean grain sizes were very similar for all three processing routes and were equal to 5.53, 5.42, and 5.37 μm for routes A, B_C, and C, respectively. No effect of the billet rotation between subsequent I-ECAP passes on the microstructural homogenization and grain shape was observed. A representative microstructural image of the sample A, with the corresponding grain size distribution chart, is shown in Figure 6. More than 50 pct of grains are within the range from 3 to 6 μm; they are equiaxed grains, most likely arising from DRX taking place at 523 K (250°C). Nevertheless, serrated coarse grains, in the range as large as 20 to 30 μm, account for 4 pct of all grains. They could be interpreted as unrecrystallized grains observed in the initially extruded sample (Figure 1). Similar grain size and microstructural homogeneity after ECAP processing at 523 K (250°C) were reported in Reference 10. Experimental results confirming the occurrence of DRX in ECAP of magnesium alloys were also presented in References 9 and 18.
C. Texture
Textures of the I-ECAPed samples calculated from the EBSD images are shown in Figure 7. The supplied rods were obtained by hot direct extrusion, and it was reported in the past that a strong ring fiber texture is formed in this forming operation. [33] Basal planes are aligned almost parallel to ED; the angle between the basal plane and the extrusion plane is ~5 deg. Strong basal textures, with intensities in the range from 12 to 16, were obtained after I-ECAP. The sample A exhibited basal plane inclinations of 20 deg (textural intensity 12) and 13 deg (textural intensity 6). Grains are orientated more favorably for basal slip in the other two I-ECAPed samples. The inclination angles of basal planes in the samples B and C are 46 and 35 deg, respectively. Moreover, the basal planes are inclined at 45 deg to the transverse direction in the sample B, which is not observed in the sample C. This would result in similar mechanical properties along ED but different flow behaviors when testing along the transverse direction (through-thickness).
D. Microstructures of Deformed Coarse-Grained Samples
Microstructural images of the sample AR subjected to tension and compression are shown in Figure 8. As seen in Figures 8(a) and (b), almost every coarse grain (bigger than ~50 μm) subjected to tension underwent massive twinning. It is also apparent from those images that some grains are deformed by twins operating on only one plane, whereas there is a relatively large group of grains in which the activity of two or more twinning systems is observed. In Reference 41, the EBSD technique was used to show that different {10-12} twin variants can operate in the same grain. It is apparent from Figure 8(b) that twins as well as slip lines are hardly observed for grains smaller than ~20 μm. Similar results were obtained for the samples compressed to a strain of 0.1 (Figures 8(c) and (d)), where massive twinning in coarse grains, sometimes operating on more than one plane, is not accompanied by deformation in smaller grains. Voids or shear bands were not revealed in the tested samples.
E. Microstructures of Deformed Fine-Grained Samples
As shown in Figure 9, microstructural images of fine-grained samples subjected to tension revealed the occurrence of deformation bands in each sample. In the sample A, some bands of extremely deformed grains are parallel to the tensile direction (TD) and some are inclined at ~45 deg to TD, while in the samples B and C only the latter are observed. For the samples B and C, a large number of twins as well as slip lines are observed within a band. However, dark regions, caused by a larger amount of deformation than in the sample A, make detailed observation more difficult. A small relief on the surface visible in grains outside the bands is attributed to slip-dominated flow. This region is also almost free from twins. The number of shear bands observed in different samples is not the same. The samples B and C exhibit more extensive shear banding than the sample A.
Microstructures of the samples subjected to compression are shown in Figure 10. Twins are observed in each sample, irrespective of the route used. However, twinning is more intensive in the sample A, compared with B and C. No relation between grain size and twinning activity was observed in the examined samples. As seen in Figures 10(b) and (c), twins are observed in small (~5 μm) and large grains (~15 μm). However, large grains without signs of twinning are also revealed. Twinned grains form small colonies or bands. In the sample A, colonies of twinned grains are bigger than in the samples B and C, but grains free from twins are also observed. It was not revealed whether any "mesoscopic" effects, e.g., shear bands, have arisen from those small colonies/bands. In contrast to the coarse-grained sample, the occurrence of only one twin variant in most of the twinned grains is reported.
A. Effects of Grain Size and Texture on Mechanical Properties
Mechanical properties of the hot-extruded AZ31B magnesium rod were significantly changed by I-ECAP. The initial high strength and low ductility of the sample AR can be explained by the occurrence of coarse grains and the texture produced during extrusion. Alignment of basal planes almost parallel to ED resulted in increased yield strength and reduced ductility. Twinning in extruded coarse-grained samples subjected to tension and compression, observed in the current study, was also reported in Reference 42. It was shown in the same article that twinned coarse-grained samples (70 μm) fractured earlier in tension as well as in compression than fine-grained ones (8 μm), which did not undergo twinning. Grain refinement resulting in enhancement of ductility is also confirmed in the current article.
The mean grain size obtained after processing by I-ECAP (~5 μm) is comparable with results obtained after conventional ECAP at the same temperature. [10] Grain refinement led to formability enhancement accompanied by a yield stress decrease, which is usually observed for AZ31 magnesium alloy subjected to ECAP. [17,22] However, the results obtained for the sample A indicate that grain size cannot be the only explanation for the change of mechanical properties after I-ECAP. Different textural developments, arising from the processing route, also contribute to the changes in mechanical properties. The results of the textural measurements show that slip on the basal plane is a favorable mode of deformation in the samples B and C. This explains the lower yield stress and enhanced ductility of those samples. In the sample A, basal planes are inclined by 13 to 20 deg to ED, which makes slip on the basal plane much more difficult. Activation of "harder" deformation mechanisms gives rise to an improvement in strength, compared with the samples B and C. However, it was not confirmed that processing by route A leads to both strength and ductility improvement, as has been shown in Reference 13. This inconsistency can be explained by the fact that samples with different initial textures were used in the two studies.
The tension-compression anisotropy of the yield stress observed for the sample AR was almost suppressed for the samples B and C. This can also be explained by the textural development in those samples. The measured grain orientations show that the c-axes of the hcp cells are inclined at ~45 deg, so the strength in tension and compression is almost the same due to the symmetric alignment of basal planes with respect to both deformation directions. In the case of the sample A, basal planes are aligned similarly to the extruded sample, and slip on the basal plane is limited in compression. Activation of twinning leads to greater strain hardening and lower ductility. Distinctive concave shapes of the compressive flow stress curves observed for the samples AR and A were also reported for extruded fine-grained samples with a grain size of 4 μm subjected to compression along ED. [36] This clearly shows that grain refinement alone is not enough to suppress the tension-compression anisotropy; generation of an appropriate texture is also required.
Anisotropic flow was also reported during compression of the fine-grained samples. However, it is not clear whether the lower yield stress is observed along the transverse or the normal direction; therefore, this effect needs further investigation. This phenomenon was also studied by conducting a set of compression tests along three different directions of an ECAPed billet. [37] An increased tendency for twinning was observed when a sample was tested along the through-thickness direction, compared with testing along ED. The textural measurement performed for the sample C shows that basal planes are inclined at ~35 deg, which favors basal slip along ED. However, their orientation in the through-thickness direction requires activation of different deformation mechanisms. This explains why the largest anisotropy was observed for the sample C, smaller for A and B, and isotropy for the extruded sample. Deformation bands were revealed in each fine-grained sample subjected to tension. It was difficult to define the deformation mechanism operating within a band because of the high strain accumulation. However, twins as well as slip lines were identified in most of the grains lying within a band. The mechanism of shear band formation is not clear. Lapovok et al. [43] showed shear band formation in AZ31 subjected to a single pass of ECAP at 523 K (250°C). Nevertheless, the effect of shear bands on the flow behavior during tension was not studied. In the current article, shear bands were not revealed even after four passes of I-ECAP, but they appeared during tensile testing. There is no evidence of any relation between grain size and band formation, because the average grain size within and outside a band is similar. Therefore, the observed strain localization was attributed to the initial grain orientation.
A mechanism of shear band formation during tension in the I-ECAPed samples is proposed in the current article. It is suggested that grains within a band are deformed by twinning, which was also shown for biaxial tension. [44] However, the strong basal texture reported for the samples B and C indicates that slip on the basal plane is also an important deformation mechanism operating both within and outside a band. The lower yield stress of the sample C, compared with the sample B, can be attributed to suppression of twinning activity. It is apparent from Figure 9 that more twinned grains are observed in the sample B than in the sample C, which gives rise to the strength improvement of the former. A quantitative analysis of the twinned area ratio was not possible because of the high deformation level.
The texture of the sample A does not favor slip on the basal plane; therefore, its ductility is lower compared with the other fine-grained samples. The higher strength of the sample A is attributed to the greater activity of twinning. The deformation mechanism, different from those shown for the samples B and C, is illustrated in Figure 11. The following hypothesis is proposed to explain the origin of the observed microstructure: grains are first twinned, which reorients them to a position that favors basal slip. Twinning behavior resulting in a reorientation favorable for basal slip was identified elsewhere as {10-11}-{10-12} double twinning. [45] Since slip on the basal plane occurs much more easily than any other deformation mechanism, remarkable strain localization appears. The observed strain localization is probably the main reason for the earlier failure of the sample A, compared with the samples B and C. It should be emphasized that the proposed explanation is only a hypothesis and requires further verification using more sophisticated techniques than optical microscopy.
C. Deformation Mechanisms in Fine-Grained Samples Subjected to Compression
The shear failure along the diagonal of the cylindrical sample observed during compression of coarse- and fine-grained samples could be explained in terms of shear band formation. It was already shown that strain localization occurs in magnesium samples subjected to compression and leads to earlier failure; this mechanism was described in Reference 41. Initially twinned grains undergo secondary twinning and form double twins. They reorient the parent grain to a position that favors slip on the basal plane; the Schmid factor for basal slip in double-twinned grains was calculated to be 0.5. [45] Moreover, regions previously occupied by double twins are expected to act as sites of void formation, since twin-sized voids were revealed in a compressed magnesium sample using the EBSD technique. [41] According to the presented fracture mechanism in compression, suppression of twinning can lead to enhancement of the ductility of magnesium alloys. Indeed, the grain refinement obtained by I-ECAP limited the twinning activity and led to enhancement of ductility in the current study. Due to the texture developed during I-ECAP, most of the grains could be deformed by slip on the basal plane instead of twinning, which delayed the process of double twinning and void formation. It should be noted that the previous statement holds when the sample is compressed along ED; there is no experimental evidence in the current study to show that it is also true along other directions.
The roles of grain size and grain orientation in twin formation are ambiguous. Although grain size is believed to control twinning behavior, [36] twins were observed in grains as small as 1 to 3 μm, as shown in Figure 12. Moreover, large grains surrounding twinned regions were completely free from twins. This proves that not only grain size but also grain orientation has a strong effect on twinning behavior. Twinned grains are arranged in small colonies, which can indicate that not individual grains but whole regions exhibit an orientation that favors twinning. It is suggested that, in the current study, those twinned colonies form shear bands where voids are formed at subsequent stages of deformation. This would explain why the extensively twinned sample A fractured earlier than the samples B and C. The higher twinning activity in the sample A, compared with the samples B and C compressed to a true strain of 0.1, was confirmed by calculating the total twinned area, and the results are shown in Table I. It is apparent that the twinned area in the sample A is more than twice that in the samples B and C. These results are also supported by the distinctive concave shape of the compressive flow stress curve observed for the sample A and not in the cases of the samples B and C, as shown in Figure 3(b). Since the grain size is almost the same (5.45 ± 0.1 μm) for each fine-grained sample tested in the current study, the increased twinning activity in the sample A was attributed to the texture produced during I-ECAP.
V. CONCLUSIONS
In the current study, AZ31B magnesium alloy was processed for the first time using three different routes of I-ECAP at 523 K (250°C). The effect of the route used on mechanical properties was investigated. Microstructures of the coarse- and fine-grained samples subjected to tension and compression were studied using OM and SEM. The following conclusions are drawn from the current study:
1. I-ECAP, a continuous SPD process, can be used to improve the ductility of AZ31B magnesium alloy, and the obtained grain sizes and mechanical properties are comparable with conventional ECAP.
2. Texture controls the mechanical properties of AZ31B magnesium alloy subjected to I-ECAP at 523 K (250°C). The results obtained showed that the processing route did not influence the grain size but significantly changed the mechanical properties. The EBSD measurements confirmed the effect of texture on the flow behavior of the I-ECAPed magnesium alloy.
3. Tension-compression asymmetry is suppressed by using routes B_C and C of I-ECAP in the samples tested along the extrusion direction. This effect was also attributed to textural development and microstructural homogenization after processing.
4. Increased ductility after I-ECAP was attributed to suppression of twinning and facilitation of slip on the basal plane. Shear bands were observed in the I-ECAPed samples subjected to tension. A larger number of shear bands and smaller spacings between them were observed in samples which exhibited improved ductility.
5. Twinning activity in compression is higher for the samples processed by route A than by B_C and C. This conclusion is supported by the concave shape of the flow stress curve and quantitative analysis of microstructural images. Since the average grain size was almost the same for the samples A, B_C, and C, this effect was attributed to the texture produced during different routes of I-ECAP. | 8,545.6 | 2014-03-01T00:00:00.000 | [
"Materials Science"
] |
GEP- and MLR-based equations for stable channel analysis
For decades, research on stable channel hydraulic geometry has been based on the following parameters: river discharge, dimensionless discharge, the median size of bed material and the slope. Although significant research has been conducted in this area, including the application of machine learning to increase the geometry model prediction accuracy, there has been no remarkable improvement, as the variables used to describe the geometry relationship remain the same. The novelty of this study is demonstrated by the parameters used in the stable channel geometry equations, which outperform the accuracy of the existing equations. In this research, sediment transport parameters are introduced and analysed by applying the multiple linear regression (MLR) and gene expression programming (GEP) methods. The new equations for the width, depth and bed slope give much-improved results in efficiency and lower errors. Furthermore, a new parameter B/y is introduced in this study to solve the restriction issue in either width or depth prediction. The results from MLR and GEP show that, in addition to the existing hydraulic geometry parameters, the B/y parameter is also able to give high accuracy results for width and depth predictions. Both calibration and validation for the B/y parameter yield high R and NSE values with low mean squared errors and mean absolute errors.
seem to be complicated and require a deep understanding of fluid flow, which is often time-consuming. Bonakdari et al. (2020) suggested including different key geomorphological variables to increase the prediction accuracy.
Currently, the use of machine learning in water resources engineering has been the method of choice for many researchers (Muhammad et al. 2018; Kargar et al. 2019; Montes et al. 2020; Roushangar & Shahnazi 2020; Sharafati et al. 2020). This method was extensively investigated by Gholami et al. (2019a, 2019b) and Shaghaghi et al. (2018) for use in a stable channel. Machine learning can be applied by using various methods such as artificial neural networks, adaptive network-based fuzzy inference systems, support vector machines, gene expression programming (GEP) and evolutionary polynomial regression (EPR) (Giustolisi & Savic 2009; Ebtehaj et al. 2019; Khosravi & Javan 2019; Roushangar & Ghasempour 2019; Yahaya 2019; Asheghi et al. 2020; Najafzadeh & Oliveto 2020). The research done by Bonakdari et al. (2020) showed that GEP and EPR improved the accuracy of the model prediction; however, the improvement is not remarkable compared to the original equation.
These earlier findings suggest that a new equation should be developed to increase the accuracy of stable channel prediction. To date, many researchers have focused on the parameters Q, d50, So and Q* to determine stable channel geometry. This paper focuses on introducing a new equation for stable channel geometry by adopting sediment transport parameters that resemble the hydraulic characteristics of the river. The width/depth (B/y) parameter is introduced to the equation for predicting stable channel geometry to solve the problem when there is a restriction in width or depth, as discussed earlier in the analytical approach. The proposed equation is further enhanced by using the machine learning method to increase the model prediction accuracy.
Multiple linear regression
This study uses ordinary multiple regression in linearised form after log transforming all the variables included in the equation. Since all the possible test cases are described by one power law equation, the fitting of all regressions can be determined using the statistical analysis software Statistical Product and Service Solutions (SPSS). The dependent variables used for this equation are B*, y*, So and B/y.
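The log-transformed regression described above, which SPSS was used for, can be sketched with any ordinary least-squares routine: fitting log-variables and exponentiating the intercept recovers a power-law equation. The data below are placeholders, not the measured river data, and the predictor choice (Q*, R/d50) is only an example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
Q_star = rng.lognormal(mean=2.0, size=80)          # placeholder dimensionless discharge
R_d50 = rng.lognormal(mean=5.0, size=80)           # placeholder relative roughness
B_over_y = 0.5 * Q_star**0.3 * R_d50**0.2 * rng.lognormal(sigma=0.05, size=80)

X = np.column_stack([np.log(Q_star), np.log(R_d50)])   # linearised predictors
y = np.log(B_over_y)                                    # linearised response

model = LinearRegression().fit(X, y)
a = np.exp(model.intercept_)                            # power-law coefficient
b1, b2 = model.coef_                                    # power-law exponents
print(f"B/y ~ {a:.3f} * Q*^{b1:.3f} * (R/d50)^{b2:.3f}")
```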
Gene expression programming
GEP has been developed as an extension to genetic programming (Koza 1994). This program involves several search techniques, such as decision trees and logical expressions, that help to evolve computer programs. GEP is encoded in linear chromosomes that can be represented by an expression tree (ET). With the aim of analysing particular data through genetic modification, the population of ETs adapts and discovers the traits of the data. The decision tree and logical expression have to be carefully determined at the start to ensure the program yields the best results (Ferreira 2001b). The chromosome of each individual is randomly selected, evaluated by the fitness function and selected by the genetic operator to reproduce with modification (mutation). The same process occurs in the new individuals, and the cycle is repeated until the required accuracy is obtained (Ferreira 2001a, 2001b). Figure 1 summarises the steps taken to execute the GEP. The data are first randomised before continuing to the next step. Approximately 80% of the data are used for training and the remaining data for testing purposes. The setting of the function and chromosome architecture includes chromosome length, gene number, head size, linking function and genetic operators. The root-mean-square error (RMSE) was used as a fitness function to fit the curve to the target value. The stopping criterion was set at 25,000 generations for all the runs of the various models. The summarised parameters used in the GEP model are shown in Table 2.
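A rough stand-in for the procedure above can be run with gplearn's symbolic regression; note that gplearn evolves expression trees rather than GEP's linear chromosomes, so this only approximates the method, and the settings below are illustrative rather than those of Table 2. The 80/20 split and RMSE fitness follow the text.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(3)
X = rng.lognormal(size=(100, 2))                  # placeholder predictors, e.g. [Q*, R/d50]
y = 0.5 * X[:, 0]**0.3 * X[:, 1]**0.2             # placeholder target

split = int(0.8 * len(X))                         # 80 pct training, 20 pct testing
est = SymbolicRegressor(population_size=500,
                        generations=50,           # the paper ran up to 25,000 generations
                        function_set=('add', 'sub', 'mul', 'div', 'sqrt', 'log'),
                        metric='rmse',            # RMSE used as the fitness function
                        random_state=0)
est.fit(X[:split], y[:split])

pred = est.predict(X[split:])
print(est._program)                               # the evolved expression
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```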
Study area
The river data used in this study were obtained from the Malaysian Department of Irrigation and Drainage. Three rivers were chosen for the study to represent different hydraulic characteristics of a large river (Muda River), a medium river (Langat River) and a small river (Kurau River). The locations of these rivers are shown in Figure 2.
Muda River Basin
The Muda River originates in the highland areas of Kedah, a northern state of Malaysia, which borders Thailand. It is the largest river in Kedah and is important in supplying water to the three states of Kedah, Perlis and Pulau Pinang (Sim et al. 2015). The Muda River Basin covers a drainage area of 4,210 km². In terms of length, the Muda River is approximately 180 km long, with a slope of 1/2,300 (or 0.00043). The channel width of the Muda River is typically around 10 m upstream, 100 m in midstream and widest at its estuary, averaging 300 m (DID 2009). In terms of depth, bathymetric surveys show that the shallowest point in the river is located 2.5 km upstream from the river mouth, resulting in difficulty for navigation during low tides (Julien et al. 2010).
Langat River Basin
The Langat River forms one of the four major river systems in the state of Selangor. The Langat River is a medium-sized river, approximately 180 km long with an average annual flow of 35 m³/s and a mean annual flood of 300 m³/s. The Langat River system flows from the north-eastern state of Selangor to Negeri Sembilan and the Federal Territory of Putrajaya, finally emptying into the Straits of Malacca. The Langat River Basin has a total catchment area of 2,396 km² (DID 2009).
Kurau River Basin
The Kurau River Basin is a typical small river basin, draining an area of approximately 682 km². Elevations at the river headwaters are moderately high, 900-1,200 m. The slope of the upper 6.5 km of the river averages 12.5%, while the slopes lower down the valleys are significantly lower, in the order of 0.25-5%. The average velocity of the Kurau River ranges from 0.45 to 0.636 m/s, with the highest sediment load being 0.878 kg/s (Saleh et al. 2018). A reservoir was constructed at the middle section of the river, approximately 65 km upstream. Two major systems, the Kurau River system and the Merah River system, are upstream of this reservoir, and both rivers drain into the reservoir (DID 2009).
Field data collection
River surveys, flow measurements and field data collection were conducted based on the Guidelines for Field Data Collection and Analysis of River Sediment by Ab. Ghani et al. (2003). The data collection includes flow discharge, bed and bank material, suspended load, bedload and water surface slope. In addition, bed elevation, water surface and thalweg measurements were also conducted at selected cross-sections. The range of the data for this study is shown in Table 3.
Development of new equations for stable channel geometry
The new equations were formulated by using dimensionless hydraulic geometry relations as proposed by Kaless et al. (2014), Parker (1978, 1979) and Pitlick & Cress (2002). Along with the dimensionless stable geometry equations in terms of B*, y* and S, a new dimensionless B/y equation was introduced to solve the restriction issue in either width or depth prediction. The equations also use the dimensionless full-bank discharge Q* as one of the parameters in their development.
Researchers have divided the significant parameters that contribute to sediment transport into four parameter classes, namely, mobility, transport, sediment and flow resistance (Ebtehaj & Bonakdari 2016; Harun et al. 2020). This concept has been widely used by researchers to define sediment transport in rivers (Sinnakaudan et al. 2006; Harun et al. 2020) and in closed channels or pipes (Ebtehaj et al. 2019; Danandeh Mehr & Safari 2020). These parameters were also found to be prevalent in the application of sewer sedimentation analysis (Kargar et al. 2019). The details of the parameters are shown in Table 4. These parameters were combined to get the best relationship with regard to the dimensionless hydraulic geometry relations for width, depth, slope and width/depth. The parameters governing the stable channel geometry are presented in dimensionless form, where ψ is the flow parameter, R is the hydraulic radius, U* is the shear velocity, So is the bed slope, Ss is the specific gravity of the sediment, vs is the fall velocity of the bed material, V is the cross-sectional velocity, g is the acceleration due to gravity, A is the area of the cross-section, ν is the kinematic viscosity of water, ϕ is the transport parameter, Dgr is the dimensionless grain size and λs is the friction factor.
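A few of the parameters listed above have standard forms that can be evaluated directly; the sketch below uses common definitions (shear velocity from the depth-slope product, Dgr in the Ackers-White form) and assumed input values for one illustrative cross-section, not survey data.

```python
import numpy as np

g, nu = 9.81, 1.0e-6          # gravity (m/s^2), kinematic viscosity of water (m^2/s)
Ss = 2.65                     # specific gravity of sediment (assumed quartz)
d50 = 1.0e-3                  # median bed-material size (m), assumed
R, So, V = 1.2, 0.0004, 0.8   # hydraulic radius (m), bed slope, velocity (m/s), assumed

U_star = np.sqrt(g * R * So)                          # shear velocity
D_gr = d50 * (g * (Ss - 1) / nu**2) ** (1.0 / 3.0)    # dimensionless grain size
mobility = V / np.sqrt(g * d50 * (Ss - 1))            # mobility-type parameter

print(f"U* = {U_star:.4f} m/s, Dgr = {D_gr:.1f}, "
      f"V/sqrt(g*d50*(Ss-1)) = {mobility:.1f}")
```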
Goodness of fit of model performance
The equations were evaluated by using several indices, namely, R², the Nash-Sutcliffe efficiency coefficient (NSE), mean squared error (MSE), mean absolute error (MAE) and discrepancy ratio (DR). The statistical indices used are presented in Equations (1)-(5), where Oi and Pi are the observed and predicted values, while Ō and P̄ are the mean observed and predicted values. R² represents the correlation between measured and modelled values, MSE is the calculated mean error in the form of data units squared and MAE represents the absolute error between the measured and modelled values. In large events, MAE helps to reduce bias (Bennett et al. 2013). The NSE is used to describe how large the modelling error is relative to the variance of the observed data. One is the perfect value for the NSE; an NSE less than 0 indicates that the predicted model is unreliable and an NSE value closer to 1 represents high accuracy between the observed and the predicted model (Gholami et al. 2017). The perfect correlation of computed and measured stable channel geometry should be one (Yadav et al. 2019). This study uses a DR of 0.8-1.2 (20% accuracy) to analyse the prediction model in multiple linear regression (MLR).
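The indices above can be written out explicitly as in the short sketch below. The DR reported here is interpreted as the percentage of predictions whose predicted/observed ratio falls within 0.8-1.2, which is an assumption based on how the text describes it; the observed and predicted arrays are placeholders.

```python
import numpy as np

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - residual variance / observed variance."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def metrics(obs, pred):
    err = pred - obs
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    ratio = pred / obs
    dr = np.mean((ratio >= 0.8) & (ratio <= 1.2)) * 100.0   # pct within 20 pct accuracy
    return r2, nse(obs, pred), mse, mae, dr

obs = np.array([12.0, 35.0, 80.0, 150.0])       # placeholder observed widths (m)
pred = np.array([13.1, 33.0, 90.0, 140.0])      # placeholder predicted widths (m)
print("R2 = %.3f, NSE = %.3f, MSE = %.2f, MAE = %.2f, DR = %.0f%%" % metrics(obs, pred))
```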
Stable channel geometry by using MLR analysis
The significance of each parameter in Table 4 was assessed to ascertain which parameters influence the stable channel geometry. The correlation analysis determines the significance of each parameter: a probability (p) value < α = 0.05 indicates convincing evidence that the parameter influences the stable channel geometry. The results from the correlation analysis show that the stable channel geometry variables (B/y, B*, y*, S) were significant with respect to the parameters Q*, ψ and several others. The selected best combinations of characteristic parameters, together with the related equations from the multiple linear analyses, were further assessed using the analysis of variance (ANOVA) approach. Figure 3 explains the different input combinations for the determination of the dimensionless width/depth, width, depth and slope of a stable channel.
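The significance screening described above can be sketched as a correlation test on log-transformed variables, keeping a candidate parameter when the p-value is below α = 0.05. The arrays and candidate names below are placeholders for the measured data and Table 4 parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
geometry = rng.lognormal(size=60)                     # placeholder, e.g. observed B/y
candidates = {"Q*": rng.lognormal(size=60),           # placeholder candidate parameters
              "R/d50": rng.lognormal(size=60)}

alpha = 0.05
for name, values in candidates.items():
    r, p = stats.pearsonr(np.log(geometry), np.log(values))
    decision = "keep" if p < alpha else "drop"
    print(f"{name:6s}  r = {r:+.2f}  p = {p:.3f}  -> {decision}")
```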
Width/depth (B/y) prediction
The B/y parameter was evaluated with different combinations of inputs, as shown in Figure 3. This study shows that B/y was significant with respect to the combination of parameters Q*, ψ, U*/V, V/√(gd50(Ss − 1)), VSo/vs, Dgr, U*/vs and R/d50. In total, there were 37 combinations that were significant for the prediction of B/y. The best five combinations of the parameters are given in Table 5.
The R² value for model one is the highest among all the models (0.890), followed by model two (0.638), model three (0.598), model four (0.542) and model five (0.453). In contrast, model one yields the lowest MSE (45.549), followed by model two (157.900), model three (187.70), model four (208.298) and model five (253.270). As for the NSE, model one yields the highest value (0.889), followed by model two (0.614), model three (0.542), model four (0.492) and model five (0.382). A comparison between all models shows that model one yields the highest proportion of DR values in the range 0.8-1.2 (79.06%), and model five yields the lowest (31.62%). The results of this study indicate that B/y is best described by the combination of parameters Q*, V/√(gd50(Ss − 1)) and R/d50. The above findings lead to the conclusion that model one is superior to the other models.
Width prediction by using dimensionless width (B*)
Results from the correlation analysis revealed that B* was significant with respect to the parameters Q*, ϕ, ψ, U*/V, VSo/vs, V/√(gd50(Ss − 1)), Dgr, U*/vs, λs and R/d50. There were 51 combination models altogether for the B* prediction. Table 6 lists the best five combinations of the parameters that are significant for the B prediction.
Model one has the highest DR (93.16%), followed by model two (64.10%), model four (55.56%), model three (46.15%) and model five (37.61%). In terms of R², model one has the highest value (0.964) and model five the lowest (0.745). The trend is the same for NSE, with model one showing the highest value (0.964), indicating that it predicts better than the other models. Moreover, model one yields the lowest MAE (3,031.499), while model five has the highest MAE among the models (10,167.036). Model one is considered superior to the other models because it has the highest R² and NSE and gives the best DR, with 93.16% accuracy.
Mean depth prediction by using dimensionless depth (y*)
Significant parameters were combined to yield the best equation for predicting the average depth of the river. In the present study, it is observed that the parameters Q*, Cv, ϕ, ψ, VSo/vs, V/√(gd50(Ss − 1)), U*/V, Dgr, λs and R/d50 were significant for the development of the depth prediction. The analysis showed that 51 parameter combinations were significant for the y* prediction. Table 7 lists the best five combinations of parameters that are significant for the depth prediction. The bold values highlight the best model for every stable channel geometry prediction.
Models one and two have the highest DR value (92.31%), followed by model three (91.88%), model four (91.45%) and model five (91.03%). Among all the models, model two has the lowest MSE (58,451.679) and the highest R² (0.977) and NSE (0.977). Model two is selected to predict the depth instead of model one because model two gives a higher R² and NSE and the lowest MSE compared to the other models.
Slope (S) prediction
The prediction of the slope was calculated by combining the significant parameters, as discussed in Figure 3. In this study, the parameters Cv, ϕ, ψ, VSo/vs, Dgr and R/d50 were observed to be significant for the slope prediction. The study revealed that 57 models were significant for the slope prediction. The best five combinations for the slope prediction are listed in Table 8.
Model one has the highest R² (1), followed by model three (0.807), model two (0.795), model four (0.789) and model five (0.763). The same trend also applies to the NSE, but for MSE the trend is in the reverse order: model one has the lowest MSE and model five the highest. Model one is found to be superior to the other models, as it has the highest DR ratio (100%), followed by model two (55.56%), model three (54.70%), model four (53.85%) and model five (50.43%).
The summary of the stable channel geometry prediction model in dimensionless form by using the MLR method is shown in Table 9.
Stable channel geometry by using GEP
The best model for every stable channel parameter was further determined by using GEP. The function for the stable hydraulic geometry was analysed based on the significant parameters, as shown in Table 9. The formulations of GEP for stable hydraulic geometry are shown in Table 10. The bold values highlight the best model for every stable channel geometry prediction.
The performance of the training and testing of the GEP model is shown in Table 11. The GEP models for all stable channel geometries have a high correlation for both training and testing data. The same trend is also observed for the errors in terms of MSE and MAE.
The performance of the GEP model in comparison to the MLR model is shown in Table 12. Both models were analysed based on R², NSE, MSE and MAE. In all parameter predictions, the GEP models outperformed the MLR models: the GEP model yields better R² and NSE and produces the lowest errors in terms of MSE and MAE. It can be concluded that the GEP model is far superior to the traditional MLR method.
Evaluation of the existing equation and comparison with the newly developed equation
The performance of the newly developed equations using MLR and GEP was compared to the previous equations (Figures 4-6), and the results are shown in Tables 13 and 14. The channel width, depth and slope of the stable channel are estimated by using the existing equations. As the equations are presented in dimensionless form, the width and the depth were calculated by multiplying the equations by d50. For the B/y equation, the width is calculated by multiplying the predicted B/y by the depth of the channel, whereas the depth is obtained by dividing the width by the predicted B/y.
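The back-conversion just described is a simple scaling step, sketched below with illustrative numbers only (the d50 and dimensionless values are assumptions, not results from the tables).

```python
# Converting dimensionless predictions back to channel dimensions:
# B = B* x d50, y = y* x d50, and from the B/y equation either
# B = (B/y) x y (depth fixed) or y = B / (B/y) (width fixed).

d50_m = 0.0012                               # assumed median bed-material size (m)
B_star, y_star, B_over_y = 45000.0, 2500.0, 18.0   # assumed dimensionless predictions

B = B_star * d50_m                           # width from dimensionless width
y = y_star * d50_m                           # depth from dimensionless depth
y_when_width_fixed = B / B_over_y
B_when_depth_fixed = B_over_y * y

print(f"B = {B:.1f} m, y = {y:.2f} m")
print(f"width fixed: y = {y_when_width_fixed:.2f} m; "
      f"depth fixed: B = {B_when_depth_fixed:.1f} m")
```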
In the prediction of channel width (B), the majority of the previous equations had moderate R² values but low or negative NSE values. This shows that several of the equations underpredicted, which explains the higher observed values compared to the predicted data. Only Pitlick & Cress (2002), Lee & Julien (2007) and Haron et al. (2019) have a positive NSE. Lee & Julien (2007) has the highest NSE of all, which is 0.61. Bonakdari et al. (2020) have the highest R² values with EPR and GEP, namely, 0.685 and 0.655; however, these lack accuracy in the prediction, as the NSE values turned out to be negative (−0.06 and −0.54, respectively). In terms of error, Julien & Wargadalam (1995) have the highest MSE and MAE values, which are 1,335.68 and 24.76, respectively. In the prediction of flow mean depth y, many of the existing equations have high R² and NSE values, indicating that most of the equations are able to predict the depth of the river with low error (MSE and MAE). Hey & Thorne (1986), Lee & Julien (2007) and Bonakdari et al. (2020) EPR were particularly good at predicting the depth of the river, with DR (0.8-1.2) values of 39.32, 37.6 and 52.99%, respectively. Hey & Thorne (1986) have the lowest MSE (0.122) and MAE (0.282) among all the existing equations. Conversely, Simons & Alberstone (1960) have the highest MSE (40.49) and MAE (6.311). The newly developed equation is able to predict with better accuracy, coupled with less error. The current study GEP (y) equation has the highest R² and NSE values (R² = 0.960, NSE = 0.960), followed by the current study MLR (y) equation. In the slope prediction, many existing equations are unable to predict the slope accurately, and many of them have low R² and NSE values. The Julien & Wargadalam (1995) and Lee & Julien (2007) equations were found to be excellent in predicting the slope, with extremely high R² values (1, 0.982) and high NSE values of 1.0 and 0.861, respectively. The MSE and MAE for both equations are also fairly low, being 3.131 × 10⁻¹¹ and 4.985 × 10⁻⁸ (MSE), and 3.8467 × 10⁻⁶ and 2.054 × 10⁻⁴ (MAE). Of all the equations, the EPR and GEP equations of Bonakdari et al. (2020) were found to have the highest error. This may be due to the sensitivity of the equations used, which are therefore not suitable for studying Malaysian rivers. The newly developed slope equations using GEP and MLR are able to predict with high R² and NSE values (R² = 1, NSE = 1). The MSE for both equations is fairly low, namely, 1.773 × 10⁻¹² and 6.6217 × 10⁻¹², respectively, and the MAE is 1.092 × 10⁻⁶ and 5.766 × 10⁻⁶, respectively. Both Julien & Wargadalam (1995) and the current study equations (MLR and GEP), with fewer variables, are determined to be suitable for predicting the stable channel slope.
The results from Table 15 show that both the MLR and GEP equations can predict the width of the river precisely. The R² and NSE values are high for all the validation data except for the current study GEP (B/y) equation with the Ariffin (2004) data, for which the R² and NSE values are 0.880 and 0.543, respectively. The predicted data are highly correlated but lack accuracy. This could be due to the fact that the GEP equation developed is sensitive to the characteristics of the current river data; the data from the Ariffin (2004) study are generally based on small rivers. Table 16 shows that the developed MLR and GEP equations have an excellent degree of depth prediction accuracy, based on the evidence of high R² and NSE values with low MSEs and MAEs. The current equations also outperformed the existing equations, as many of the existing equations yielded underpredicted values. However, the current study MLR (B/y) and current study GEP (B/y) equations yield low accuracy on the Saleh et al. (2017) data, whereby low R² and negative NSE values were observed. The model prediction is poorly correlated and inaccurate, as the predicted model performed worst. This could be due to the characteristics of the river, which has a higher discharge compared to the current study data. This indicates that the equation is not suitable for application to rivers with high discharge.
The slope validation results shown in Table 17 reveal that the equation from this study is also able to predict the slope with high accuracy, particularly the current study S (GEP) equation, which yields better R² and NSE values and lower errors. The improved slope equation also uses only two parameters, namely, the flow parameter and R/d50.
As shown in Table 18, the new B/y equation also performs well on the validation data. The current study GEP (B/y) and current study MLR (B/y) equations have high R² and NSE values for all the validation data, with low MSEs and MAEs. This indicates that the parameter B/y is suitable to represent the geometry of the river. In the case of a restriction in either the width or the depth of the river, this equation is fundamental in helping decision-makers to determine a suitable channel dimension.
Limitation of the proposed model
The current study focuses on developing new equations for stable channel geometry that are suitable for median bed material sizes between 0.29 and 3.0 mm. Different key geomorphological variables, such as bank profile and vegetation types, need to be considered in future studies to enhance researchers' understanding of the effect of changes on the stable channel geometry. Different machine learning techniques with higher accuracy can be applied to increase the accuracy of the model prediction. For the B/y model prediction, different variables that resemble the hydraulic characteristics of the river should be further explored to increase the efficiency of the equation, especially for rivers with a high volume of discharge.
CONCLUSION
The current study set out to develop a new equation that can improve the stable channel geometry prediction accuracy. This study also aims to establish an equation in the dimensionless form of B/y to provide an alternative solution for the width and depth prediction. Existing equations predict stable channel geometry with low accuracy, as many model predictions use the same parameters. This research has enhanced our knowledge in predicting stable channel geometry by associating the sediment transport parameters with the stable channel geometry. The newly proposed equations using MLR and GEP give a better prediction for stable channel hydraulic geometry with far better R², NSE and DR. Moreover, it was established that B/y is a good parameter that can be used to predict the geometry of the river. This result also suggests that the proposed MLR and GEP models are robust in predicting stable channel geometry, with lower MSEs and MAEs compared to the existing equations. | 6,202 | 2021-08-27T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Visible Light Communication System Using Silicon Photocell for Energy Gathering and Data Receiving
A silicon photocell acts as the detector and energy converter in the VLC system. The system model was set up and simulated in the Matlab/Simulink environment. A 10 Hz square wave was modulated on the LED and restored in voltage mode at the receiver. An energy gathering and signal detecting system was demonstrated at a baud rate of 19200; the DC signal is about 2.77 V and the AC signal is around 410 mV.
Introduction
Solar cells have drawn great interest over the past 30 years, and there is a tendency to use them more widely and practically. Visible light communication is also very attractive [1] as a new kind of wireless communication technology with less energy consumption, higher response speed, and more privacy.
An energy gathering and signal detecting system is a new idea. Energy harvesters are widely used in sensor networks, but energy gathering can hardly be seen in VLC. We noticed that silicon-based solar panels could receive VLC data and gather energy at the same time.
Research works in this area can be found in [2]; the researchers from Korea used a solar cell as a simultaneous receiver of solar power and visible light communication (VLC) signals. Some research on the efficiency and frequency response of solar cells has also been conducted.
In our work, the solar cell was studied entirely under visible light. We set up models similar to real lighting conditions and ran simulations in Matlab/Simulink. Simulation results indicate that it is possible to gather energy and receive data through the same solar panels. We implemented the system using commercial components. Our experiments based on the prototype show that the solar panels can gather energy for a low power circuit and detect the VLC signal at the same time.
Model Analysis
In this section, we analyze the models of the LED and the solar cell and then formulate their relationship with some approximations.
Model of LED Light Source.
The LED conforms to the Lambert emission rule. When the transmitted optical power is given, the received power (W/m²) is expressed as in [3], where the quantities involved are the distance between the LED and the PD, the irradiance angle, the incidence angle at the PD, the optical filter gain, the optical concentrator gain, the field of view of the PD, and the Lambert emission order. The SNR for VLC and the illuminance value on the PD are given as follows: (2) The solar cell output can be formulated in terms of the number of solar cells in parallel, the series number, the light current, the diode saturation current, the output voltage of the solar cell, the output current, and an ideality constant which is typically in the range 1 to 3. Under the usual approximation, (3) can be written in a simplified form. For the solar cell, the light current is positively proportional to the received illuminance power; the standard sunlight illuminance power at normal room temperature is 1000 W/m², and the short circuit current I_SC is defined accordingly. If the output current is set to 0, the solar cell works in the open-circuit state; then (5) can be expressed as (7).
2.3. Model of the System.
For our system, the solar cell is used as the PD. The two models can be connected by equating the received optical power of the LED model with the illuminance power of the solar cell model.
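Because the original symbols were lost in extraction, the sketch below restates the two models in standard notation: the Lambertian received irradiance of an LED and a single-diode I-V relation for an Np × Ns photocell array. The notation and all numerical values are assumptions, not the paper's parameters.

```python
import numpy as np

def lambertian_irradiance(Pt, d, phi, psi, m=1.0, Ts=1.0, g=1.0):
    """Received optical power density (W/m^2) from a Lambertian source:
    Pt*(m+1)/(2*pi*d^2) * cos(phi)^m * Ts * g * cos(psi)."""
    return Pt * (m + 1) / (2 * np.pi * d**2) * np.cos(phi)**m * Ts * g * np.cos(psi)

def array_current(V, I_light, Np=2, Ns=8, I0=1e-9, n=1.5, Vth=0.02585):
    """Single-diode array model: I = Np*I_light - Np*I0*(exp(V/(Ns*n*Vth)) - 1)."""
    return Np * I_light - Np * I0 * (np.exp(V / (Ns * n * Vth)) - 1.0)

E = lambertian_irradiance(Pt=15.0, d=1.8, phi=0.0, psi=0.0, m=1.0)   # 15 W LED at 1.8 m
I_light = 1e-3 * E / 10.0            # assumed photocurrent per unit irradiance
V = np.linspace(0.0, 4.0, 400)
I = array_current(V, I_light)
Voc = V[np.argmin(np.abs(I))]        # voltage where the array current crosses zero
print(f"irradiance ~ {E:.2f} W/m^2, estimated Voc ~ {Voc:.2f} V")
```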
In this way, (3) can be expressed accordingly. Combining (5), (6), and (7), and introducing the load resistance of the solar cell, three constants related to the array configuration, the short circuit current, and the open circuit voltage are obtained, so the relationship between the output voltage of the solar cell and the LED power can be formulated as (11).
Results and Discussions
We set up the two models in Matlab/Simulink and combined them for simulation. The solar cell model was simulated separately first. The model is based on equations (5), (6), and (7). Assuming that it works at a stable room temperature of 298 K, we chose the solar cell AM-5308 for our experimental study. Parameters are set in Table 1.
The LED illumination model and the Si photocell array model were combined to simulate the practical system. Figure 2 shows that V_OC for the 4 × 4 array and I_SC for the 2 × 8 array are each half of the values for the 4 × 8 array. V_OC of the 2 × 8 and 4 × 8 arrays is 3-3.5 V, which can possibly charge a lithium battery. In Figure 3, the I-V curves of the 2 × 8 solar cell array were obtained through simulations under different illumination levels from 300 lx to 1000 lx. These numbers represent typical daily indoor illumination values; in living rooms, libraries, and hospitals, the illumination value is above 500 lx, which is stable for supplying power. The simulated and experimental values matched well in Figure 6.
Then, a square signal as in Figure 7 is modulated onto the LED as the transmitted data. The period of the square signal is 0.1 s and the duty cycle is 50%.
The output power of the solar cell depends on the load resistance. A maximum output power of about 1.2 × 10^-3 W is achieved when the load resistance is 4 kΩ under an illuminance of 300 lx. The output power of the solar cell for different load resistances is shown in Figure 8; the x-axis unit is 10 kΩ.
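How the delivered power varies with the load can be illustrated with the same single-diode model: the sketch below sweeps the load resistance and solves for the operating point by bisection. The light current under 300 lx and the diode parameters are assumed values chosen only to land in a plausible range, so the resulting optimum (around a milliwatt at a few kΩ) is indicative rather than a reproduction of Figure 8.

```python
import numpy as np

# Hypothetical 2 x 8 array parameters under ~300 lx (illustrative, not the AM-5308 datasheet)
N1, N2   = 2, 8
n, V_th  = 1.3, 0.0257     # ideality factor, thermal voltage at 298 K [V]
I0, I_L  = 1e-9, 3e-4      # diode saturation current and assumed light current [A]

def array_current(V):
    """Output current of the array at terminal voltage V (single-diode model)."""
    return N1 * I_L - N1 * I0 * (np.exp(V / (n * N2 * V_th)) - 1.0)

def operating_voltage(R_load):
    """Solve array_current(V) = V / R_load for V by bisection."""
    lo, hi = 0.0, 5.0                      # bracket: f(lo) > 0 > f(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if array_current(mid) - mid / R_load > 0.0:
            lo = mid
        else:
            hi = mid
    return lo

R_values = np.logspace(2, 5, 200)                           # 100 ohm ... 100 kohm
P_values = np.array([operating_voltage(R)**2 / R for R in R_values])
best = int(np.argmax(P_values))
print(f"max power ~ {P_values[best]*1e3:.2f} mW at R_load ~ {R_values[best]/1e3:.1f} kOhm")
```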
The output voltage of the solar cell rises to 2.5 V after several pulses. The waveform of the output voltage under continuous pulse modulation is shown in Figure 9; the x-axis unit is 1 s.
An energy-gathering and signal-detecting system was demonstrated as in Figure 10. To match the working conditions of the solar cell, we used a 15 W LED that can reproduce different indoor lighting conditions. The distance between the 2 × 8 photocell array and the 15 W LED is 1.8 m. The illuminance on the photocell was 690 lx when the LED was not modulated and 637.5 lx when it was modulated. The baud rate of the computer output was 19200. The output data was a repetition of "A5" in hexadecimal form, with the polarity reversed by an RS485 converter chip. The yellow line in Figure 11 represents the DC-coupled output signal of the silicon photocell, which is about 2.77 V. The green line in Figure 11 represents the AC-coupled output signal of the silicon photocell, filtered by a 0.1 µF coupling capacitor; the AC signal is around 410 mV. The baud rate and AC amplitude could be increased with a one-stage amplifier circuit [5].
Conclusion
In our work, we set up a model of a solar-cell VLC system and simulated it in Matlab/Simulink. We verified the correctness of the model and gave a reasonable design to optimize the system.
The energy-gathering and signal-detecting system was demonstrated with a data rate of 19200 bps. The DC voltage of the photocell was about 2.77 V, which is sufficient for low-voltage power-supply circuits. The AC voltage of the photocell was about 410 mV and could be increased with a one-stage amplifier circuit. This shows that a solar cell can act simultaneously as an energy converter and a detector in a VLC system.
Channel effects [6], the frequency response of the solar cell, room lighting conditions, and other factors were ignored in our model. Further studies can take these factors into consideration. At the same time, we will optimize the design for the actual application.
Figure 1: The equivalent circuit diagram of a typical solar cell.
Figure 10: Energy gathering and signal detecting demo system.
Table 1: Parameters for solar cell.
I (A) | 1,740.2 | 2017-01-11T00:00:00.000 | ["Engineering", "Physics"] |
An Iterative Method for Solving Split Monotone Variational Inclusion Problems and Finite Family of Variational Inequality Problems in Hilbert Spaces
The purpose of this paper is to study the convergence analysis of an intermixed algorithm for finding a common element of the set of solutions of the split monotone variational inclusion problem (SMVIP) and the set of solutions of a finite family of variational inequality problems. Under suitable assumptions, a strong convergence theorem has been proved in the framework of a real Hilbert space. In addition, by using our result, we obtain some additional results involving split convex minimization problems (SCMPs) and split feasibility problems (SFPs). Also, we give some numerical examples supporting our main theorem.
Introduction
Let H_1 and H_2 be real Hilbert spaces whose inner product and norm are denoted by 〈·, ·〉 and ‖·‖, respectively, and let C, Q be nonempty closed convex subsets of H_1 and H_2, respectively. For a mapping S: C ⟶ C, we denote by F(S) the set of fixed points of S (i.e., F(S) = {x ∈ C : Sx = x}). Let A: C ⟶ H_1 be a nonlinear mapping. The variational inequality problem (VIP) is to find x* ∈ C such that 〈Ax*, y − x*〉 ≥ 0, ∀y ∈ C, (1) and the solution set of problem (1) is denoted by VI(C, A). It is known that the variational inequality, as a strong and versatile tool, has already been investigated for an extensive class of optimization problems in economics and equilibrium problems arising in physics and many other branches of pure and applied sciences. Recall that a mapping A: C ⟶ C is said to be α-inverse strongly monotone if there exists α > 0 such that 〈Ax − Ay, x − y〉 ≥ α‖Ax − Ay‖^2 for all x, y ∈ C. A multivalued mapping M: H_1 ⟶ 2^{H_1} is called monotone if for all x, y ∈ H_1, 〈x − y, u − v〉 ≥ 0 for any u ∈ Mx and v ∈ My. A monotone mapping M: H_1 ⟶ 2^{H_1} is maximal if the graph G(M) of M is not properly contained in the graph of any other monotone mapping. It is generally known that M is maximal if and only if, for (x, u) ∈ H_1 × H_1, 〈x − y, u − v〉 ≥ 0 for all (y, v) ∈ G(M) implies u ∈ Mx. Let M: H_1 ⟶ 2^{H_1} be a multivalued maximal monotone mapping. The resolvent mapping J^M_λ: H_1 ⟶ H_1 associated with M is defined by J^M_λ(x) = (I + λM)^{-1}(x), x ∈ H_1, where I stands for the identity operator on H_1. We note that for all λ > 0, the resolvent J^M_λ is single-valued, nonexpansive, and firmly nonexpansive.
where c ∈ (1, 1/L) with L being the spectral radius of the operator T * T.
He obtained the following weak convergence theorem for algorithm (6).
Theorem 1 (see [1]). Let H_1, H_2 be real Hilbert spaces. Let T: H_1 ⟶ H_2 be a bounded linear operator with adjoint T*. For i = 1, 2, let A_i: H_i ⟶ H_i be α_i-inverse strongly monotone with α = min{α_1, α_2}, and let M_i: H_i ⟶ 2^{H_i} be two maximal monotone operators. Then, the sequence generated by (6) converges weakly to an element x* ∈ Ω provided that Ω ≠ ∅, λ ∈ (0, 2α), and c ∈ (1, 1/L) with L being the spectral radius of the operator T*T.
On the other hand, Yao et al. [20] presented an intermixed Algorithm 1.3 for two strict pseudo-contractions in real Hilbert spaces. They also showed that the suggested algorithms converge strongly to the fixed points of two strict pseudo-contractions, independently. As a special case, they can find the common fixed points of two strict pseudo-contractions in Hilbert spaces (recall that a mapping S: C ⟶ C is said to be κ-strictly pseudo-contractive if there exists a constant κ ∈ [0, 1) such that ‖Sx − Sy‖^2 ≤ ‖x − y‖^2 + κ‖(I − S)x − (I − S)y‖^2, ∀x, y ∈ C). Algorithm 2. For arbitrarily given x_0, y_0 ∈ C, let the sequences x_n and y_n be generated iteratively by x_{n+1} = (1 − β_n)x_n + β_n P_C[α_n f(y_n) + (1 − k − α_n)x_n + kTx_n], n ≥ 0, and y_{n+1} = (1 − β_n)y_n + β_n P_C[α_n f(x_n) + (1 − k − α_n)y_n + kSy_n], n ≥ 0, where α_n and β_n are two sequences of real numbers in (0, 1) and T, S: C ⟶ C are λ-strictly pseudo-contractions. Under some control conditions, they proved that the sequence x_n converges strongly to P_{F(T)}f(y*) and y_n converges strongly to P_{F(S)}f(x*), respectively, where x* ∈ F(T), y* ∈ F(S), and P_{F(T)} and P_{F(S)} are the metric projections of H onto F(T) and F(S), respectively. After that, many authors have developed and used this algorithm to solve fixed-point problems for many nonlinear operators in real Hilbert spaces (see, for example, [21][22][23][24][25][26][27]). Question: can we prove a strong convergence theorem of two sequences for split monotone variational inclusion problems and fixed-point problems of nonlinear mappings in real Hilbert spaces? The purpose of this paper is to modify an intermixed algorithm to answer the question above and to prove a strong convergence theorem of two sequences for finding a common element of the set of solutions of (SMVI) (4) and (5) and the set of solutions of a finite family of variational inequality problems in real Hilbert spaces. Furthermore, by applying our main result, we obtain some additional results involving split convex minimization problems (SCMPs) and split feasibility problems (SFPs). Finally, we give some numerical examples supporting our main theorem.
Preliminaries
Let H be a real Hilbert space and C be a nonempty closed convex subset of H. We denote the strong convergence of x_n to x and the weak convergence of x_n to x by "x_n ⟶ x as n ⟶ ∞" and "x_n ⇀ x as n ⟶ ∞," respectively. Definition 1. Let H be a real Hilbert space and C be a closed convex subset of H. Let S: C ⟶ C be a mapping. Then, S is said to be It is well known that if S is α-inverse strongly monotone, then it is 1/α-Lipschitz continuous, and every nonexpansive mapping S is 1-Lipschitz continuous. We note that if S: H ⟶ H is a nonexpansive mapping, then it satisfies the following inequality (see Theorem 3 in [28] and Theorem 1 in [29]): Particularly, for every x ∈ H and y ∈ F(S), we have the corresponding estimate. For every x ∈ H, there is a unique nearest point P_C x in C. Such an operator P_C is called the metric projection of H onto C.
Lemma 1 (see [30]). For a given z ∈ H and u ∈ C, Furthermore, P C is a firmly nonexpansive mapping of H onto C and satisfies Moreover, we also have the following lemma.
Lemma 2 (see [31]). Let H be a real Hilbert space, let C be a nonempty closed convex subset of H, and let A be a mapping of C into H. Let u ∈ C. Then, for λ > 0, u ∈ VI(C, A) if and only if u = P_C(u − λAu), where P_C is the metric projection of H onto C.
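As a quick numerical illustration of this fixed-point characterization (a toy example, not taken from the paper), the sketch below takes C as a box in R^2 and A as the gradient of a convex quadratic (hence inverse strongly monotone), runs the iteration x ← P_C(x − λAx), and checks the variational-inequality condition at the limit. All data (Q, b, the box, the step size) are arbitrary choices for illustration.

```python
import numpy as np

# Toy VI: C = [0,1]^2, A(x) = Qx + b with Q symmetric positive definite,
# so A is the gradient of a convex quadratic and therefore inverse strongly monotone.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 1.0])
A = lambda x: Q @ x + b
P_C = lambda x: np.clip(x, 0.0, 1.0)          # metric projection onto the box

lam = 0.5                                     # step size, chosen inside (0, 2*alpha)
x = np.array([0.5, 0.5])
for _ in range(200):                          # fixed-point iteration of Lemma 2
    x = P_C(x - lam * A(x))

# Check <A(x*), y - x*> >= 0 for sampled points y in C (the VI characterization)
ys = np.random.default_rng(1).uniform(0.0, 1.0, size=(1000, 2))
print("solution x* ~", x)
print("min over samples of <A(x*), y - x*> :", np.min((ys - x) @ A(x)))
```

For this data the iteration settles at x* ≈ (0.5, 0), and the printed minimum is nonnegative, matching the characterization in Lemma 2.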
Lemma 3.
Let C be a nonempty closed and convex subset of a real Hilbert space H. For every i = 1, 2, . . . , N, let the mappings and weights be given, where 0 < a_i < 1 for all i = 1, 2, . . . , N and ∑_{i=1}^{N} a_i = 1. Moreover, I − λ∑_{i=1}^{N} a_i A_i is a nonexpansive mapping for all λ ∈ (0, 2α).
Proof. By Lemma 4.3 of [32], we have that
Let λ ∈ (0, 2α) and let x, y ∈ C. By the same argument as in the proof of Lemma 8 in [16], we have that I − λ∑_{i=1}^{N} a_i A_i is nonexpansive.
Next, we give an example to support Lemma 7.
□ Lemma 8 (see [34]). Let s_n be a sequence of nonnegative real numbers satisfying s_{n+1} ≤ (1 − α_n)s_n + δ_n, ∀n ≥ 0, where α_n is a sequence in (0, 1) and δ_n is a sequence in R such that ∑_{n=1}^{∞} α_n = ∞ and lim sup_{n⟶∞} δ_n/α_n ≤ 0 (or ∑_{n=1}^{∞} |δ_n| < ∞). Then lim_{n⟶∞} s_n = 0.
Main Results
In this section, we introduce an iterative algorithm of two sequences which depend on each other by using the intermixed method. Then, we prove a strong convergence theorem for solving two split monotone variational inclusion problems and a finite family of variational inequality problems.
Theorem 2.
Let H_1 and H_2 be Hilbert spaces, and let C be a nonempty closed convex subset of H_1. Let T: H_1 ⟶ H_2 be a bounded linear operator, and let f, g: H_1 ⟶ H_1 be ρ_f- and ρ_g-contraction mappings. Let x_n and y_n be sequences generated by the intermixed scheme for all n ≥ 1, where δ_n, σ_n, η_n, α_n ⊆ [0, 1] with δ_n + σ_n + η_n = 1, a^x_1, a^x_2, . . . , a^x_N, a^y_1, a^y_2, . . . , a^y_N ⊂ (0, 1), and μ^x_n, μ^y_n ⊂ (0, ∞). Assume the following conditions hold: Then, x_n converges strongly to x = P_{F_x}f(y) and y_n converges strongly to y = P_{F_y}g(x).
Proof. We divided the proof into five steps.
□
Step 1. We will show that x_n and y_n are bounded. Let x* ∈ F_x and y* ∈ F_y. Then, from Lemma 7 and Lemma 6, we get the first estimate. From (21), Lemma 3, and (22), we have the corresponding bound for x_n. Similarly, from the definition of y_n, we have the analogous bound. Hence, from (23) and (24), we obtain the combined estimate. By induction, the estimate holds for every n ∈ N. Thus, x_n and y_n are bounded.
Step 2. We will show that the required limits hold. By applying Lemma 3, we get the corresponding estimate. From the definition of x_n, (27), and (28), we have the next inequality. By the same argument as in (27) and (29), we also have a similar bound. From (29) and (31), we obtain the combined estimate. From (32), conditions (1), (2), and (5), and Lemma 8, we obtain the desired limit.

Step 3. We have that the corresponding estimates hold. Then, we have the following. Observe that from (33) and (37), we have the next estimate. By the same argument as above, we also have the analogue. Note that by (37) and (39), we get the corresponding limit. By the same argument as (41), we also obtain the analogue. By (33) and (39), we get the next limit. However, it follows from (33) and (45) that the related estimate holds. Consider: from (39) and (47), we obtain the corresponding limit. Applying the same method as (48), we also have the analogue.

Step 4. We will show that lim sup_{n⟶∞} 〈f(y) − x, z_n − x〉 ≤ 0 and lim sup_{n⟶∞} 〈g(x) − y, z_n − y〉 ≤ 0, where x = P_{F_x}f(y) and y = P_{F_y}f(x). First, we take a subsequence z_{n_k} of z_n such that limsup_{n⟶∞} 〈f(y) − x, z_n − x〉 = lim_{k⟶∞} 〈f(y) − x, z_{n_k} − x〉.
(51) Since x_n is bounded, there exists a subsequence x_{n_k} of x_n such that x_{n_k} ⇀ q_1 as k ⟶ ∞. From (39), we get that z_{n_k} ⇀ q_1.
Next, we need to show that q_1 ∈ F_x. Assume that q_1 ∉ Ω_x. By Lemma 7, we get that q_1 ≠ G_x q_1. Applying Opial's condition and (49), we arrive at a contradiction. Thus, q_1 ∈ Ω_x. Assume that q_1 ∉ ∩_{i=1}^{N} VI(C, B^x_i). Then, from Lemma 3 and Lemma 2, we have q_1 ∉ F(P_C(I − μ^x_n ∑_{i=1}^{N} a_i B^x_i)). From Opial's condition and (42), we again obtain a contradiction. Thus, q_1 ∈ ∩_{i=1}^{N} VI(C, B^x_i), and so (54) holds. However, z_{n_k} ⇀ q_1. From (54) and Lemma 1, we can derive that limsup_{n⟶∞} 〈f(y) − x, z_n − x〉 = lim_{k⟶∞} 〈f(y) − x, z_{n_k} − x〉 ≤ 0 (55). By the same method as (55), we also obtain that limsup_{n⟶∞} 〈g(x) − y, z_n − y〉 ≤ 0. (56)

Step 5. Finally, we show that the sequences x_n and y_n converge strongly to x = P_{F_x}f(y) and y = P_{F_y}f(x), respectively. From the definition of z_n, we have an estimate which implies (58). From the definition of x_n and (58), we get (59). Applying the same argument as in (58) and (59), we get the analogous estimate for y_n. From (58) and (59), we have the combined estimate. According to conditions (2) and (4), (61), and Lemma 8, we can conclude that x_n and y_n converge strongly to x = P_{F_x}f(y) and y = P_{F_y}g(x), respectively. Furthermore, from (39) and (40), we get that z_n and w_n converge strongly to x = P_{F_x}f(y) and y = P_{F_y}g(x), respectively.
This completes the proof. □ One of the most important special cases of the SMVIP is the split variational inclusion problem, which has a wide variety of application backgrounds, such as split minimization problems and split feasibility problems.
If we set A^x_i = 0 and A^y_i = 0 in Theorem 2, for all i = 1, 2, then we get the following strong convergence theorem for the split variational inclusion problem and the finite families of variational inequality problems (with ∩_{i=1}^{N} VI(C, B^y_i) ≠ ∅). Let x_n and y_n be sequences generated by x_1, y_1 ∈ H_1 and the corresponding scheme involving Ty_n, for all i = 1, 2, and 0 < c_x, c_y < 1/L with L being the spectral radius of T*T. Assume the following conditions hold: Then, x_n converges strongly to x = P_{F_x}f(y) and y_n converges strongly to y = P_{F_y}g(x).
Applications
In this section, by applying our main result in Theorem 2, we prove strong convergence theorems for approximating the solutions of split convex minimization problems and split feasibility problems.
Split Convex Minimization Problems.
Let φ: H ⟶ R be a convex and differentiable function and ψ: H ⟶ (−∞, ∞] be a proper convex and lower semicontinuous function. It is well known that if ∇φ is 1/α-Lipschitz continuous, then it is α-inverse strongly monotone, where ∇φ is the gradient of φ (see [10]). It is also known that the subdifferential ∂ψ of ψ is maximal monotone (see [35]). Next, we consider the following split convex minimization problem (SCMP): find x* ∈ H_1 that solves (64) and such that y* = Tx* ∈ H_2 solves (65), where T: H_1 ⟶ H_2 is a bounded linear operator with adjoint T*, and φ_i and ψ_i are defined as above, for i = 1, 2. We denote the set of all solutions of (64) and (65) by Θ. That is, applying Theorem 2, we get the following strong convergence theorem for finding a common solution of the split convex minimization problems and the finite families of variational inequality problems. For i = 1, 2, . . . , N, let the mappings B^y_i be as above, and let x_n and y_n be sequences generated by x_1, y_1 ∈ H_1 and y_{n+1} = δ_n y_n + σ_n P_C(I − μ^y_n ∑_{i=1}^{N} a^y_i B^y_i) y_n + η_n [α_n g(x_n) + (1 − α_n) G_y y_n], for all n ≥ 1, where δ_n, σ_n, η_n, α_n ⊆ [0, 1] with δ_n + σ_n + η_n = 1. Then, x_n converges strongly to x = P_{F_x}f(y) and y_n converges strongly to y = P_{F_y}g(x).
The Split Feasibility Problem.
Let H_1 and H_2 be two real Hilbert spaces, and let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. The split feasibility problem (SFP) is to find a point x ∈ C such that Ax ∈ Q.
(67) The set of all solutions of the SFP is denoted accordingly. This problem was introduced by Censor and Elfving [8] in 1994. The split feasibility problem has been investigated extensively as a widely important tool in many fields such as signal processing, intensity-modulated radiation therapy problems, and computer tomography (see [36][37][38] and the references therein).
Let H be a real Hilbert space, and let h be a proper lower semicontinuous convex function of H into (−∞, +∞]. The subdifferential ∂h of h is defined by ∂h(x) = {z ∈ H : h(x) + 〈z, y − x〉 ≤ h(y), ∀y ∈ H}. Then, ∂h is a maximal monotone operator [39]. Let C be a nonempty closed convex subset of H, and let i_C be the indicator function of C, i.e., i_C(x) = 0 if x ∈ C and i_C(x) = ∞ if x ∉ C. Then, i_C is a proper, lower semicontinuous and convex function on H, and so the subdifferential ∂i_C of i_C is a maximal monotone operator. Then, we can define the resolvent operator J^{∂i_C}_λ. Recall that the normal cone of C at x ∈ C is N_C(x) = {z ∈ H : 〈z, y − x〉 ≤ 0, ∀y ∈ C}. We note that ∂i_C = N_C, and for λ > 0, we have that u = J^{∂i_C}_λ x if and only if u = P_C x (see [31]).
Setting M_1 = ∂i_C, M_2 = ∂i_Q, and the remaining data as above in (SMVI) (4) and (5), then (SMVI) (4) and (5) are reduced to the split feasibility problem (SFP) (67). Now, by applying Theorem 2, we get the following strong convergence theorem to approximate a common solution of SFP (67) and a finite family of variational inequality problems.
Theorem 4.
Let H_1 and H_2 be Hilbert spaces, and let C and Q be nonempty closed convex subsets of H_1 and H_2, respectively. Let T: H_1 ⟶ H_2 be a bounded linear operator with adjoint T*, and let f, g: H_1 ⟶ H_1 be ρ_f-, ρ_g-contraction mappings with ρ = max{ρ_f, ρ_g}. For i = 1, 2, . . . , N, let the mappings be as above. Let x_n and y_n be sequences generated by x_1, y_1 ∈ H_1 and y_{n+1} = δ_n y_n + σ_n P_C(I − μ^x_n ∑_{i=1}^{N} a^y_i B^y_i) y_n + η_n [α_n g(x_n) + (1 − α_n) P_C(y_n − c_y T*(I − P_Q)Ty_n)], for all n ≥ 1, where δ_n, σ_n, η_n, α_n ⊆ [0, 1] with δ_n + σ_n + η_n = 1, a^x_1, a^x_2, . . . , a^x_N, a^y_1, a^y_2, . . . , a^y_N ⊂ (0, 1), μ^x_n, μ^y_n ⊂ (0, ∞), λ^x_i, λ^y_i ∈ (0, ∞) for all i = 1, 2, and 0 < c_x, c_y < 1/L with L being the spectral radius of T*T. Assume the following conditions hold: Then, x_n converges strongly to x = P_{F_x}f(y) and y_n converges strongly to y = P_{F_y}g(x). □ The split feasibility problem is a significant part of the split monotone variational inclusion problem. It is extensively used to solve practical problems in numerous situations, and many excellent results have been obtained. In what follows, an example of a signal recovery problem is introduced.
Example 2. In signal recovery, compressed sensing can be modeled as the following under-determined linear equation system: where x ∈ R^N is a vector with m non-zero components to be recovered, y ∈ R^M is the observed or measured data with noise δ, and A: R^N ⟶ R^M (M < N) is a bounded linear observation operator. An essential point of this problem is that the signal x is sparse; that is, the number of nonzero elements in the signal x is much smaller than the dimension of the signal x. To solve this situation, a classical model, the convex constrained minimization problem, is used to describe the above problem. It is known that problem (69) can be seen as solving the following LASSO problem [40]: where t > 0 is a given constant and ‖·‖_1 is the ℓ_1 norm. In particular, LASSO problem (70) is equivalent to the split feasibility problem (SFP) (67) when C = {x ∈ R^N : ‖x‖_1 ≤ t} and Q = {y}.
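The following is a small Python sketch of this SFP reformulation solved by a generic projected-gradient (CQ-type) iteration x ← P_C(x − γAᵀ(Ax − y)) with C the ℓ_1 ball; it is not the intermixed scheme of Theorem 2, and the problem sizes, sparsity level, and step size are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, t):
    """Euclidean projection of v onto {x : ||x||_1 <= t} via sorting."""
    if np.sum(np.abs(v)) <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - t) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - t) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
M, N, m = 60, 200, 8                     # measurements, dimension, sparsity (toy sizes)
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, m, replace=False)] = rng.standard_normal(m)
y = A @ x_true                           # noiseless observations for simplicity

t = np.sum(np.abs(x_true))               # radius of the l1 ball C
L = np.linalg.norm(A, 2) ** 2            # spectral radius of A^T A
gamma = 1.0 / L                          # step size

x = np.zeros(N)
for _ in range(500):                     # x <- P_C(x - gamma * A^T (A x - y))
    x = project_l1_ball(x - gamma * A.T @ (A @ x - y), t)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```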
Also, it is easy to see that all parameters satisfy the conditions of Theorem 2. Then, by Theorem 2, we can conclude that the sequence x_n converges strongly to (1, 1) and y_n converges strongly to (−2, −2). Table 1 and Figure 1 show the numerical results for x_n and y_n, where x_1 = (−10, 10), y_1 = (−10, 10), and n = N = 50.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare no conflicts of interest.
| 5,342.4 | 2021-12-31T00:00:00.000 | ["Mathematics"] |
A light-curve analysis of the X-ray flash first observed in classical novae
An X-ray flash, expected in a very early phase of a nova outburst, was at last detected with the {\it SRG}/eROSITA in the classical nova YZ Reticuli 2020. The observed flash timescale, luminosity, and blackbody temperature substantially constrain the nova model. We present light curve models of the X-ray flash for various white dwarf (WD) masses and mass accretion rates. We have found the WD mass in YZ Ret to be as massive as $M_{\rm WD}\sim 1.3 ~M_\odot$ with mass accretion rates of $\dot M_{\rm acc}\sim 5 \times 10^{-10}- 5\times 10^{-9} ~M_\odot$ yr$^{-1}$, including the case that the mass accretion rate is changing between them, to be consistent with the {\it SRG}/eROSITA observation. The X-ray observation confirms the luminosity to be close to the Eddington limit at the X-ray flash. The occurrence of optically thick winds, with the photospheric radius exceeding $\sim 0.1~R_\odot$, terminated the X-ray flash of YZ Ret by strong absorption. This sets a constraint on the starting time of wind mass loss. A slight contamination of the core material into the hydrogen-rich envelope seems to be preferred to explain the very short duration of the X-ray flash.
INTRODUCTION
A nova is a thermonuclear runaway event on a massaccreting white dwarf (WD) (see, e.g., Kato et al. 2022, for a recent self-consistent calculation).
A hydrogen-rich envelope on the WD quickly brightens up to L_ph ∼ 10^38 erg s^−1, or several 10^4 L_⊙, just after the start of a runaway of the hydrogen-shell burning. Here, L_ph is the photospheric luminosity. In an early phase of expansion of the photosphere, its surface temperature increases up to T_ph ∼ 10^6 K and the nova emits supersoft X-rays at a rate of L_X ∼ 10^38 erg s^−1. The duration of the bright soft X-ray phase is so short that it is called the "X-ray flash" of the rising phase of a nova. Such X-ray flashes had long been expected to be observed, but none was detected until UT 2020 July 7, when the eROSITA instrument on board Spectrum-Roentgen-Gamma (SRG) scanned the region of YZ Reticuli (König et al. 2022).
The nova outburst of YZ Ret (Nova Ret 2020) was first reported by McNaught (2020) at a visual magnitude of 5.3 on UT 2020 July 15. This object was known as a cataclysmic variable (MGAB-V207), a novalike VY Scl-type variable with irregular variations in the V magnitude range 15.8−18.0 mag (Kilkenny et al. 2015). The nova was classified as a He/N type by Carr et al. (2020). The distance to the nova is estimated to be d = 2.53^{+0.52}_{−0.26} kpc by Bailer-Jones et al. (2021) based on the Gaia/eDR3 data. The galactic coordinates are (ℓ, b) = (265.°3975, −46.°3954) (ep = J2000), so the nova is located 1.8 kpc below the galactic disk. The galactic absorption toward YZ Ret is as low as E(B − V) ∼ 0.03 (Sokolovsky et al. 2022). The orbital period was obtained by Schaefer (2022) to be P_orb = 0.1324539 ± 0.0000098 days (= 3.17889 hr). Thanks to the short distance of d = 2.5 kpc from the Earth and the very low galactic absorption E(B − V) ∼ 0.03, the nova was observed at multiple wavelengths, including optical, X-ray, and gamma-ray, from a very early phase of the outburst until a very late phase through a supersoft X-ray source (SSS) phase. X-ray and γ-ray observations are reported by Sokolovsky et al. (2022) and König et al. (2022).
The most remarkable characteristic of YZ Ret observation is the detection of an X-ray flash. This is the first positive detection among any types of nova outbursts. König et al. (2022) reported the X-ray flash on UT 2020 July 7 observed with the SRG/eROSITA. This detection is serendipitous during its all sky survey. Since the X-ray flash of a nova is a brief phenomenon that occurs before the optical brightening, it is not possible to exactly predict when it occurs.
Historically, there were a few attempts to detect an X-ray flash. Morii et al. (2016) searched MAXI data for X-ray flashes that could possibly have occurred during the MAXI survey at the positions and times of known nova outbursts, but were unsuccessful. Kato et al. (2016) attempted to detect an X-ray flash just before the expected nova outburst of the one-year-period recurrent nova M31N 2008-12a. This was the first planned observation, but no X-ray flash was detected in its 2015 outburst.
Theoretical models predict X-ray flashes detectable only for a very short time (≲ 1 day), depending on the WD mass and mass-accretion rate (Kato et al. 2016). In low-mass WDs, the surface temperature does not rise high enough to emit much X-rays, and most of the photon energy is emitted in the far-UV instead of X-rays (e.g., Kato et al. 2017, 2022). Thus, the X-ray flash should be detectable only in massive WDs. The time interval between the X-ray flash and optical maximum also depends on the WD mass and mass-accretion rate, which is, however, poorly understood. The very early phase, before a nova brightens optically, is one of the frontiers in nova studies. Because no planned observations had been successful, only the serendipitous detection of the X-ray flash with the SRG/eROSITA gives us invaluable information on the very early phase of a nova.
In this paper, we present theoretical light curve models of X-ray flashes whose duration is short enough to match the SRG/eROSITA observation. Only massive WDs are responsible for a flash like that in YZ Ret. This paper is organized as follows. Section 2 presents our numerical method and results for the X-ray flash and compares them with the observational data for YZ Ret. Conclusions follow in Section 3.
MODEL CALCULATION OF X-RAY FLASH
We have calculated models of nova outbursts with a Henyey-type time-dependent code combined with steady-state optically thick wind solutions. The numerical method is the same as that in Kato et al. (2022). We list our model parameters in Table 1. From left to right: model name, WD mass, mass accretion rate, additional carbon mixture in the hydrogen-rich envelope, recurrence period of nova outbursts, starting time of winds since the onset of thermonuclear runaway, maximum nuclear burning rate (which represents the strength of a flash), ignition mass, and whether or not the model passes the requirement from the scan detection. The 1.0 M_⊙ WD model (Model A) is taken from Kato et al. (2022). The mass accretion rate Ṁ_acc = 5 × 10^−9 M_⊙ yr^−1 is close to the median value of the distribution of mass-accretion rates for classical novae obtained by Selvelli & Gilmozzi (2019), while the mass accretion rate Ṁ_acc = 5 × 10^−10 M_⊙ yr^−1 is close to the empirical rate (Knigge et al. 2011) for cataclysmic variable systems with an orbital period of P_orb = 3.18 hr. Since many old novae are observed to fade significantly on timescales of ∼100 years (e.g., Duerbeck 1992; Johnson et al. 2014), we have taken into account a gradual decrease of the accretion rate in Model H, in which accretion resumes at a rate of Ṁ_acc = 5 × 10^−9 M_⊙ yr^−1 just after the end of the previous flash, while the accretion rate then gradually decreases.
We have assumed solar composition for the accreted matter (X = 0.7, Y = 0.28, and Z = 0.02) for all the models except Model J. In many classical novae, heavy element enrichment is observed in the ejecta (e.g., Gehrz et al. 1998). To mimic such a heavy element enrichment, one may replace the envelope composition with one polluted by the WD core composition at the onset of thermonuclear runaway (e.g., Starrfield et al. 2020), or assume a CO enhancement in the accreting matter (e.g., Chen et al. 2019). In Models E, F, G, H, and I we have increased the carbon mass fraction by 0.1 and decreased the helium mass fraction by the same amount at the onset of thermonuclear runaway. In Model J, we have assumed a carbon-rich mixture (X = 0.6, Y = 0.28, X_C = 0.1, and Z = 0.02) for the accreting matter. In these high-temperature phases, the WD photosphere emits X-ray/UV photons, which corresponds to an X-ray/UV flash. After that, the envelope expands and the photospheric temperature begins to decrease. Optically thick winds start when the envelope expands and the surface temperature decreases to log T_ph (K) = 5. a Starting time of optically thick winds since the L_nuc peak (t = 0).
Cycle of nova evolution in the HR diagram
b pass or not the detection requirement of 22nd (no), 23rd (yes), and 24th (no) scan.
c Model taken from Kato et al. (2022).
d Increased carbon mass-fraction by 0.1 at ignition.
f Mass accretion of carbon rich matter by 0.1.
The filled red circle with error bars indicates the position of the X-ray flash of YZ Ret observed by SRG/eROSITA. The point lies on the evolution track of Model I (1.35 M ⊙ ) just before optically thick winds start, which is important because the winds possibly self-absorb soft X-rays as discussed in the next subsection.
When the photospheric radius attains its maximum expansion, the wind mass loss rate also reaches maximum. The hydrogen-rich envelope mass quickly decreases mainly due to wind mass loss. The photospheric radius begins to shrink while the photospheric temperature turns to increase. In a later phase, optically thick winds stop (at each open square) and the photospheric temperature becomes as high as log T ph (K) =5.5 -6.1 and the WD again emits X-ray/UV photons. This phase is called the supersoft X-ray source (SSS) phase in the decay phase of a nova outburst.
The wind phase after the optical maximum and the following SSS phase have been observed well in a number of nova outbursts. However, an early phase before the optical maximum has rarely been studied.
Optically thick winds absorb X-rays
Parameters of our models soon after the start of winds are R_ph ∼ 10^10 cm and Ṁ_wind ∼ 10^−7 M_⊙ yr^−1. Using these values in equation (1), we find the optical depth to X-rays in the winds to be as high as τ_X ∼ 10^6 for E_X ∼ 0.1 keV. Thus, soft X-ray emission would be absorbed in the wind phase. In other words, the X-ray flash could be terminated by the start of the winds. König et al. (2022) fitted the X-ray spectrum of YZ Ret with a blackbody spectrum and obtained T_BB = 3.27^{+0.11}_{−0.33} × 10^5 K (kT_BB = 28.2^{+0.9}_{−2.8} eV). They also derived the absolute luminosity to be L_ph = (2.0 ± 1.2) × 10^38 erg s^−1. These estimates are plotted in Figure 2 by an open red circle with error bars. The estimated blackbody flux has a large error bar, but it is consistent only with relatively massive WDs (M_WD ≳ 1.2 M_⊙).
The estimated blackbody temperature is located to the left of the open circles, that is, before the optically thick winds start. Thus, theoretically, no strong emission lines are expected. This is consistent with the absence of prominent emission lines in the observed X-ray spectrum (König et al. 2022). Sokolovsky et al. (2022) estimated the galactic hydrogen column density to be 1 × 10^19 cm^−2 ≲ N_H ≲ 1.86 × 10^20 cm^−2. König et al. (2022) obtained N_H < 1.4 × 10^20 cm^−2 toward the nova based on their X-ray spectrum analysis and concluded that there is no major intrinsic absorption during the X-ray flash. Such a small hydrogen column density is consistent with our models, in which the optically thick winds are absent at the stage of kT_ph = 28.2 eV.
2.3. Very short duration of X-ray flash. König et al. (2022) reported that the SRG/eROSITA scanned the region of YZ Ret 28 times (every 4 hours) and detected YZ Ret at the 23rd scan for about 36 s, but did not detect it 4 hours before or after this scan. This means that the duration of the X-ray flash is shorter than 8 hours. Figure 3 shows the X-ray light curves during the flash for our models in Table 1: from upper to lower, black line (Model J), red (I), blue (C), and green (D), all for 1.35 M_⊙ WDs; Model E (black) and F (magenta) for 1.3 M_⊙; B (1.2 M_⊙: black); and A (1.0 M_⊙: black). The red dot with the error bars denotes kT_ph = 28.2^{+0.9}_{−2.8} eV, L_ph = (2.0 ± 1.2) × 10^38 erg s^−1, and radius R_ph = 50000 ± 18000 km (= 0.07 ± 0.026 R_⊙) estimated for YZ Ret (König et al. 2022). The X-ray flux 4 hours before and after this scan must be lower than the level of the red dot by four orders of magnitude, because the SRG/eROSITA did not detect X-rays.
Models A, B, and C evolve slowly and should have detectable X-ray fluxes even before and/or after four hours of the detection epoch by the SRG/eROSITA, contradicting the non-detection. The upward (downward) arrows indicate the theoretical prediction of detectable (non-detectable) X-ray flux. Model D is marginally consistent with the requirement.
The carbon-mixture models evolve much faster and easily fulfill the requirements. The CNO enrichment in the hydrogen-rich envelope makes the flash evolution faster because the CNO reaction rates increase. A faster evolution is favorable for consistency with the constraints from the SRG/eROSITA observation. Furthermore, the appearance of the optically thick winds would contribute to shortening the duration of the X-ray flash of YZ Ret.
In Model H the mass accretion restarts after a shell flash ends with a rate of Ṁ_acc = 5 × 10^−9 M_⊙ yr^−1, which gradually decreases to 5 × 10^−10 M_⊙ yr^−1 in the first 100 years of the quiescent phase and is kept constant after that. Until the next outburst, long after 1600 years, the WD thermal structure is adjusted to the lower accretion rate. As a result, the outburst properties should be similar to those of a model with a mass accretion rate of 5 × 10^−10 M_⊙ yr^−1. Model H in fact shows properties similar to Model I, but with slightly stronger flash properties, i.e., a shorter flash duration and a larger L^max_nuc than in Model I. König et al. (2022) also reported that the X-ray flux of YZ Ret decreased by about a few to 10% even during the very short 36 s observation period. We estimated the X-ray decay rates of our models and find a few % in Models F, G, H, and I during a 30 s period near the point denoted by the large red dot, broadly consistent with the observation.
YZ Ret is a novalike VY Scl-type star, in which dwarf nova outbursts are suppressed. To suppress the thermal disk instability of a dwarf nova, the mass transfer rate must be higher than Ṁ_crit. This critical rate is estimated to be Ṁ_crit ∼ 2 × 10^−9 M_⊙ yr^−1 for P_orb = 3.18 hr and an assumed total binary mass of M_1 + M_2 ∼ 1.3 + 0.3 = 1.6 M_⊙ (see, e.g., equations (3) and (4) of Osaki 1996). Our Models E and G (1.3 and 1.35 M_⊙ WD models with a relatively high mass accretion rate of Ṁ_acc = 5 × 10^−9 M_⊙ yr^−1) satisfy this requirement of novalike stars, i.e., Ṁ_acc > Ṁ_crit.
We should emphasize the importance of low energy sensitivity of detector. The blue line in Figure 3a shows an X-ray light curve of the 0.3-10 keV band corresponding to the Swift/XRT. The flux is about ten times smaller than the SRG/eROSITA flux (0.2-10 keV band), clearly showing that, for an efficient detection of X-ray flashes, the low energy sensitivity (down to 0.2 keV or lower) is important.
Clue to the origin of the super-Eddington luminosity
We should remark on the importance of the X-ray flash for the super-Eddington problem in novae. The bolometric luminosity of a star in hydro-static balance cannot exceed the Eddington limit as long as spherical symmetry is assumed. The Eddington limit is defined by L_Edd = 4πcGM_WD/κ for massive WDs with mass M_WD, where κ = 0.2(1 + X) cm^2 g^−1 is the electron scattering opacity. YZ Ret reached its optical peak V = 3.7 four days after the X-ray flash (McNaught & Phillips 2020). This brightness corresponds to an absolute V magnitude of M_V = 3.7 − (m − M)_V = −8.4, which is several times larger than the Eddington limit. Here, the distance modulus in the V band is estimated to be (m − M)_V = 5 log(d/10 pc) + A_V = 12.0 + 0.1 = 12.1.
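As a quick check of these numbers (a small arithmetic sketch; the constants and the 1.3 M_⊙ mass are simply the values quoted in the text), the Eddington luminosity and the peak absolute magnitude can be computed as follows.

```python
import math

# Physical constants in CGS units
G     = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c     = 2.998e10        # speed of light [cm s^-1]
M_sun = 1.989e33        # solar mass [g]

X     = 0.7                    # hydrogen mass fraction
kappa = 0.2 * (1.0 + X)        # electron-scattering opacity [cm^2 g^-1]
M_wd  = 1.3 * M_sun            # WD mass adopted in the text

L_edd = 4.0 * math.pi * c * G * M_wd / kappa
print(f"L_Edd ~ {L_edd:.2e} erg/s")      # ~2e38 erg/s, close to the observed L_ph

# Distance modulus and absolute magnitude at the optical peak
d_pc, A_V, V_peak = 2530.0, 0.1, 3.7
mu  = 5.0 * math.log10(d_pc / 10.0) + A_V
M_V = V_peak - mu
print(f"(m - M)_V ~ {mu:.1f}, M_V ~ {M_V:.1f}")   # ~12.1 and ~-8.4
```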
At the X-ray flash of YZ Ret, however, the total photospheric luminosity was estimated to be L ph = (2.0 ± 1.2) × 10 38 erg s −1 (König et al. 2022). Thus, the photospheric luminosity did not largely exceed the Eddington limit (see Figures 1 and 2). This clearly confirms that the nova envelope is in hydro-static balance at the X-ray flash, when the optically thick winds had not yet started. Thus, we may conclude that the origin of super-Eddington luminosity is closely related to the occurrence of optically thick winds.
CONCLUSIONS
An X-ray flash in the rising phase of a nova was first detected in the classical nova YZ Ret, which provides us with invaluable information for the nova physics. We may conclude our theoretical analysis as follows.
• König et al. (2022) found X-ray spectrum at the flash to be consistent with an unabsorbed blackbody of 3 × 10 5 K. This is consistent with our hydrostatic evolution models just before optically thick winds start.
• The blackbody temperature T_BB ≈ 3 × 10^5 K and luminosity of L_ph ≈ 2 × 10^38 erg s^−1 are consistent with our models of very massive WDs (M_WD ≳ 1.2 M_⊙).
• The very short duration of the X-ray flash (≲ 8 hr) further constrains the mass of the WD (M_WD ≳ 1.3 M_⊙), depending on the degree of WD core material mixing into the hydrogen-rich envelope. The generation of optically thick winds when the photospheric radius exceeds ∼ 0.1 R_⊙ might have terminated the X-ray flash of YZ Ret.
• The nova envelope is in hydro-static balance at the X-ray flash, just before optically thick winds start. In a few days later, the optical luminosity highly exceeds the Eddington limit. This suggests that the origin of super-Eddington luminosity is closely related to the occurrence of optically thick winds.
We are grateful to the anonymous referee for useful comments, which improved the manuscript.
| 4,412.8 | 2022-08-01T00:00:00.000 | ["Physics"] |
Virtual Synchronous Machine integration on a Commercial Flywheel for Frequency Grid Support
With increasing penetration of inverter-connected power sources, such as renewable energy sources (RESs), the equivalent inertia in the grid decreases. Employing maximum power point tracking controllers, RESs behave like constant power sources, not offering damping to support the frequency during disturbances. Novel control algorithms have been proposed that can mimic the inertial behavior of generators or can provide grid support to counter the decline in system inertia. In this letter, we explore the capability of a commercially available high-speed flywheel energy storage system (FESS) to provide virtual inertia and damping services to microgrids. We demonstrate how a virtual synchronous machine algorithm can increase the grid inertia by controlling the FESS active power. A power hardware in the loop evaluation was performed considering the real limitations of a commercial flywheel with different virtual inertia and damping droop settings.
Letters
Florian Reißner and Giovanni De Carne, Senior Member, IEEE
I. INTRODUCTION
Motivated by the energy transition and the ensuing rise in inverter-interfaced power generation in the grid, control algorithms for inverters integrating grid forming (GFM) features have become an important area of research [1], [2], [3]. GFM inverters implement frequency and voltage support features, such as droop and virtual inertia. It has been shown that such inverters can form stable microgrids (MGs) [4] and improve the system stability [5]. Numerous approaches exist to implement frequency support measures in inverter-interfaced power stations. While it is possible to implement grid support features also on phase locked loop (PLL)-controlled inverters (often referred to as fast frequency support), cf. [6], controllers emulating synchronous machines do not require a PLL at all and, therefore, offer superior stability and robustness in grids with high penetration of inverter-interfaced power sources. Reissner and Weiss [7] showed that virtual synchronous machine (VSM)-based inverters can also improve the region of attraction of a stable operating point of a grid. Virtual inertia and active damping services can be provided only if energy is available on demand. Various ideas have been proposed to exploit existing energy reservoirs to provide grid support, such as the rotational energy of wind turbines [8] or the module capacitor energy in modular multilevel converters [9]. However, such approaches may only have very limited impact, since the available energy storage is often small. To overcome this, dedicated power reservoirs, such as super capacitors, batteries, or flywheels can be used [10], [11], [12], [13]. Theoretically, such energy storage provides instantly available power, only limited by the power rating of the relevant equipment and potential manufacturer constraints.
In this letter, we investigate the provision of active damping and virtual inertia services by a commercially available 120-kW high-speed flywheel energy storage system (FESS) in MGs, implemented using a VSM controller.We investigate the technological limitations of the FESS, such as the maximal power ramp rate, on offering frequency support in small MGs.Particular focus was placed on the potential overshoots and oscillations created by poor tuning.All necessary auxiliaries of commercially available FESS (vacuum pumps, air conditioning, etc.) have been included in the analysis in order to give a realistic estimate of the system performance.
The VSM and the MG were implemented on an Opal-RT 5700 real-time simulator, integrated with a 1-MVA power hardware in the loop (PHIL) test field in the Energy Lab 2.0 at the Karlsruhe Institute of Technology.The rest of this letter is organized as follows.In Section II, we briefly describe the VSM and the grid model used in the experiments.In Section III, we present the experimental results.Finally, Section IV concludes this letter.
II. SYSTEM DESCRIPTION
Virtual Synchronous Machine: The VSM used in this letter is based on [14] and [15]. We show a simplified block diagram in Fig. 1 containing the active and reactive power control loops. The main part of the VSM consists of a second-order swing equation, expressed as a torque balance, where T_m, T_d, and T_e are the mechanical, damping, and electrical torques, respectively, J is the inertia constant, and ω is the angular frequency of the rotor of the VSM. S denotes a saturating integrator (cf. [14]). In the following, we will also refer to the inertia using M = Jω_n, where ω_n is the nominal grid frequency. The damping torque T_d is obtained from the following equation, expressed in the Laplace domain, where τ is the time constant of the lead-lag filter, and D_l and D_h are the low- and high-frequency damping constants, respectively. When controlling a flywheel, the low-frequency droop constant D_l is set to zero, so as not to deplete the flywheel energy during long frequency deviations. We refer to the damping ratio of the VSM as ρ = D_h/J in the following. T_w(s) in (2) denotes the damper winding torque, as defined in [16], which we do not explain here for brevity. It essentially prevents oscillations of the VSM rotor, but has little impact on the output power.
Integrating ω (modulo 2π), the rotor angle θ is obtained, which is then used for all required Park transformations. The virtual rotor field current i_f is obtained from the following equation, which implements voltage droop and reactive power control, where m > 0 is the mutual inductance (a constant), K_m > 0 is the controller gain, D_q is the voltage droop factor, V_r and Q_set are the voltage and reactive power references, respectively, and V and Q are the measured voltage and reactive power. The synchronous internal voltages e are obtained from the following equation. Using a virtual impedance with resistance R_g, inductance L_g, and capacitance C_g, the virtual currents i_virt are obtained from e − v. This allows us to calculate Te = −m i_f i_virt,q, which after low-pass filtering by the low-pass filter (LPF) block gives T_e. MG model: The MG is represented by an infinite bus with frequency dynamics similar to those of a synchronous generator, cf. [17, Sec. 11.1]. The voltage amplitude V_∞ is assumed constant, and the equivalent inertia and droop constants are denoted by M_∞ and D_∞, respectively. The governor and turbine characteristics of this equivalent generator are represented by a first-order lead-lag filter with time constants T_z and T_p. A loss of a generation unit is simulated by a drop in the power production by ΔP, which subsequently causes a typical frequency drop with a rate of change of frequency (ROCOF) defined by the equivalent inertia M_∞ and a steady-state frequency deviation dictated by D_∞. T_z and T_p are chosen such that the nadir is about three times the steady-state deviation and occurs approximately after 6 s. The output power of the flywheel P_FW is calculated from voltage and current measurements and added to ΔP. The block representation of this grid model is shown in Fig. 2.
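The following is a rough, self-contained Python sketch of this kind of grid model together with a much-simplified inertia/damping response. The swing-equation form, the simplified FESS controller (an inertial term plus a high-pass damping term with a ramp-rate limit), and all closed-form details are assumptions for illustration only; they are not the actual VSM of [14], [15] or the Simulink model used in the experiments.

```python
import numpy as np

# Approximate grid and FESS parameters from the experiments
M_inf = 23.9e3       # equivalent grid inertia [W s^2/rad]
D_inf = 63.7e3       # grid droop [W s/rad] (5 % droop on a 1 MW base)
T_z, T_p = 3.0, 15.0 # governor lead-lag time constants [s]
dP    = 80e3         # lost generation [W]

M_v, rho, tau = 31.8e3, 2.0, 2.0   # virtual inertia, damping ratio, washout time constant
D_h   = rho * M_v                  # power-equivalent high-frequency damping gain
r_max = 80e3                       # FESS ramp-rate limit [W/s]
P_max = 120e3                      # FESS power rating [W]

def simulate(use_fess, dt=1e-3, T=30.0):
    n = int(T / dt)
    dw, ddw, x_gov, x_hp, P_fw = 0.0, 0.0, 0.0, 0.0, 0.0
    freq = np.empty(n)
    for k in range(n):
        # Governor: droop -D_inf*dw shaped by a first-order lead-lag (1+T_z s)/(1+T_p s)
        u = -D_inf * dw
        x_gov += dt * (u - x_gov) / T_p
        P_gov = x_gov + (T_z / T_p) * (u - x_gov)

        if use_fess:
            # Simplified VSM-like response: inertial term + washed-out (high-pass) damping
            x_hp += dt * (dw - x_hp) / tau
            dw_hp = dw - x_hp
            P_ref = np.clip(-(M_v * ddw + D_h * dw_hp), -P_max, P_max)
            P_fw += np.clip(P_ref - P_fw, -r_max * dt, r_max * dt)   # ramp-rate limit

        ddw = (P_gov + P_fw - dP) / M_inf      # swing-equation power balance
        dw += dt * ddw
        freq[k] = 50.0 + dw / (2 * np.pi)
    return freq

f_no, f_yes = simulate(False), simulate(True)
print(f"nadir without FESS: {f_no.min():.2f} Hz, with FESS: {f_yes.min():.2f} Hz")
```

Even this crude model reproduces the qualitative behavior reported below: the steady-state deviation is set by the droop (about 49.8 Hz for an 80 kW loss), while the added inertia and damping mainly lift the nadir and barely change the initial ROCOF when the ramp limit is active.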
III. EXPERIMENTAL RESULTS
The experimental setup is shown in Fig. 3.A Stornetic highspeed FESS is used as an energy storage system with a maximum capacity of 7.2 kWh (cf., Fig. 4).The FESS consists of two high-speed 60-kW flywheels.Several auxiliaries of the FESS, such as the temperature unit, a vacuum pump, and a control equipment add an average consumption of 2-6 kW, according to ambient temperature and operating point [18].The dynamics of the inner control loops of the FESS are assumed to be negligible for the present analysis.The test grid is powered by two 200-kVA Compiso Egston power amplifiers, connected with an Opal-RT 5700 real-time simulator through fiber optics.The simulated MG shown in Section II is connected to a VSM through a line impedance of L t = 0.1 mH and R t = 0.1Ω.The VSM sends a power reference in the range of P s ∈ [−120 120] kW to the FESS which, in order to ensure low latency, is encoded as an analog voltage between 0 and 5 V.The ramp rate r of this power setpoint P s is configurable, with a maximum value of ±80 kW/s, limited by manufacturer safeguards.The reactive power setpoint is Q s ≡ 0 kVAr.In total, three probes measure the voltages v and currents i at the terminals of the PHIL amplifier, which then are used to estimate the output power P FW of the flywheel.In our tests, we simulate the loss of a generation unit in the MG by imposing a step change of ΔP = −80 kW on the grid model in Fig. 2 and observe the frequency dynamics of the system.We then investigate the influence of the ramp rate r, the VSM parameters inertia M = Jω n , the high frequency droop expressed as ρ = D h /J, and droop time constant τ on the nadir, ROCOF, and the amplitude of the occurring oscillations.The MG is assumed to have a nominal power of 1 MW with an inertia constant of M ∞ = 23.9kWs 2 which, expressed in seconds, corresponds to H ∞ = 3.75 s.The droop constant is 5%, i.e., D ∞ = 63.7 kWs and the nominal grid voltage is V ∞ = 230 Vrms.T p = 15 s and T z = 3 s are chosen similar to values in [4].The tuning parameters of the VSM are chosen such that for a frequency drop with a nadir of 49 Hz (the minimum tolerated frequency of the FESS), the VSM produces approximately the maximum power output of 120 kW.For the VSM, we use τ ∈ {0, 0.5, 2}, ρ ∈ {0, 1, 2}, and M ∈ {31.8, 47.7}kWs 2 , and all other parameters are tuned according to [15].The ramp rates tested are r ∈ {40, 80} kW/s.In the following, we show plots of frequency and power for selected experiments, before analyzing the influence of the tuning parameters on the grid behavior.
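As a small consistency check of the quoted grid parameters (using only the numbers given above), the equivalent inertia constant in seconds follows directly from M_∞ and the nominal frequency:

```python
import math

S_n   = 1.0e6                 # nominal power of the microgrid [VA]
w_n   = 2 * math.pi * 50.0    # nominal angular frequency [rad/s]
M_inf = 23.9e3                # equivalent inertia constant [W s^2/rad]
D_inf = 63.7e3                # droop constant [W s/rad]

J_inf = M_inf / w_n                  # equivalent moment of inertia [kg m^2]
E_kin = 0.5 * J_inf * w_n**2         # kinetic energy stored at nominal speed [J]
H_inf = E_kin / S_n                  # inertia constant in seconds

droop_pu = D_inf * (0.05 * w_n) / S_n   # power at a 5 % frequency deviation, in p.u.
print(f"H_inf = {H_inf:.2f} s, power at 5% deviation = {droop_pu:.2f} p.u.")  # ~3.75 s and ~1.0
```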
A. Frequency and Power Response
Figs. 5 and 6 show the influence of the time constant τ of the high-pass filter and the ramp rate r on the frequency of the system and the output power of the FESS. Clearly, a higher τ (meaning a slower washout time of the high-pass filter) improved the nadir and decreased the initial oscillation caused by the ramp rate limitation. (Note that τ = 0 means no frequency droop.) The ramp rate had no significant impact on the nadir, but a slower ramp rate causes more initial overshoot. For better readability, we omitted plotting the power output for τ = 0.5 s in Fig. 6; instead we plot both the measured output power P_FW (solid lines) and the reference power P_s (dashed lines). Clearly, the effect of the ramp limitation can be seen in the time interval between 0 and 5 s.
Fig. 5. Frequency of the grid (dashed line) and the VSM (solid line) subsequent to a ΔP step of 80 kW. Higher τ decreases the nadir, and a lower r causes higher oscillations. ρ = 2 s⁻¹ and M = 31.8 kWs².
Fig. 6. Output power of the FESS (solid line) and the VSM (dashed line) subsequent to a loss of an 80-kW power unit. The impact of the rate limitation r is clearly visible. ρ = 2 s⁻¹ and M = 31.8 kWs².
Fig. 7. Frequency response of the grid (solid line) and the VSM (dashed line) for a variation of M and ρ. τ = 2 s and r = 40 kW/s.
Figs. 7 and 8 show a comparison of the frequency and power response for a variation of M and ρ. Clearly, increasing either parameter improves the frequency nadir, while the initial ROCOF and the initial oscillation (with a minimum of 312 rad/s) remain unchanged. This can be understood from Fig. 8, where the ramp rate limitation on P_s forces identical P_FW for all parameters, up to approximately 2 s. In consequence, the inertia initially cannot slow down the frequency decline such that the slope of the frequency is dictated by M_∞.
Fig. 9 shows the amount of energy extracted from the flywheels, ΔE kin (red) and the integral of the power P FW flowing to the MG for an experiment with M = 47.7 kWs 2 , ρ = 2s −1 , and τ = 2s.The difference between the two (yellow) is the energy consumed by the auxiliaries of the flywheels.A first-order line fit (black dashed line) indicates that the auxiliaries consumed approx.6 kW in this experiment.Due to the consumption of the auxiliaries, in a real scenario, P s must account for these power losses.Fig. 10 shows the currents and voltages at the terminals of the amplifiers during this experiment.A peak of 180 A per line is reached.
B. Parameter Study
We show the impact of all four parameters r, ρ, τ , and M on nadir, ROCOF, and oscillations in the following analysis.Where applicable, solid lines are used for r = 40 kW/s and dashed lines for r = 80 kW/s.Recall that if ρ = 0, also τ = 0, and vice versa.None of the parameters shows relevant impact on the ROCOF, cf., Fig. 11(a).This is due to the RoCoF being maximal at the onset of the disturbance, while due to the finite response time of the flywheel, P FW starts to impact the system only after approx.0.5 s.Clearly, A osc tends to zero for increasing r, cf., Fig. 11(b).Indeed, for most tunings, A osc is already negligible at 80 kW/s, such that r is not required to be much faster.While higher M , ρ, and τ all increase A osc , a higher r decreases oscillations, cf., Fig. 12(a).Finally, the nadir improves with τ , ρ, and M , see Fig. 12(b), suggesting that both HF-droop and Fig. 11.Impact of r, τ , and ρ on ROCOF (l) and oscillations (r).A osc = 0 for ρ = 0s −1 , such that the black curve is hidden.Fig. 12. Impact of ρ, τ , and M on oscillations (l) and nadir (r).inertia can improve system performance, giving the designer relative freedom to tune these parameters to exploit a maximum amount of energy from the FESS according to the application.
IV. CONCLUSION
This letter shows the performance of a commercially available high-speed FESS controlled by a VSM to provide frequency support to an MG.The flywheel used in these experiments has a nominal power of 120 kW and is able to emulate an inertia of 47.7 kWs 2 , which is comparable with the one of a 1-MW synchronous generator.The power setpoint was rate-limited by the flywheel, which causes damped oscillations in the first few seconds subsequent to a simulated loss in power generation in the MG.The ROCOF in such a scenario can only be impacted marginally, if such a ramp limitation is too aggressive.For higher ramp rates and adequate tuning, only little oscillations occur and the frequency nadir can be improved significantly by the FESS.Further work is planned to combine the FESS with a supercapacitor to compensate for the ramp rate limitation and to test both in a realistic MG.
Fig. 1.Block diagram of the VSM with the main control loops for frequency, voltage, and reactive power, and a virtual impedance block.
Fig. 3. FESS PHIL test field consisting of two Stornetic high-speed flywheels, interfaced with two 200-kVA Egston power amplifiers.An Opal-RT 5700 simulator executes both the VSM and the MG model.
Fig. 8. FESS output power (solid line) initially cannot follow P s (dashed line) due to the ramp limitation.τ = 2s and r = 40 kW/s.
Fig. 9. Relative depletion of the flywheel during one experiment.
Fig.10.Current at the terminals of the FESS during an experiment (left-hand side) and a zoomed in section at the peak including the voltages (dashed lines) on the right-hand side.
Manuscript received 24 January 2024; revised 13 February 2024; accepted 22 February 2024. Date of publication 26 February 2024; date of current version 4 September 2024. The work of Florian Reißner was supported in part by the Israel Science Foundation under Grant 2802/21. The work of Giovanni De Carne was supported in part by the Helmholtz Association through the program "Energy System Design," and in part by the Helmholtz Young Investigator Group "Hybrid Networks" under Grant VH-NG-1613. (Corresponding author: Florian Reißner.) Florian Reißner is with the School of Electrical Engineering, Tel Aviv University, Ramat Aviv 69978, Israel (e-mail: reissner@tauex.tau.ac.il).
Giovanni De Carne is with the Institute for Technical Physics, Karlsruhe Institute of Technology, 76344 Karlsruhe, Germany (e-mail: giovanni.carne@kit.edu).
| 4,039 | 2024-10-01T00:00:00.000 | ["Engineering", "Physics"] |
Brief Story on Prostaglandins, Inhibitors of their Synthesis, Hematopoiesis, and Acute Radiation Syndrome
Prostaglandins and inhibitors of their synthesis (cyclooxygenase (COX) inhibitors, non-steroidal anti-inflammatory drugs) were shown to play a significant role in the regulation of hematopoiesis. Partly due to their hematopoiesis-modulating effects, both prostaglandins and COX inhibitors were reported to act positively in radiation-exposed mammalian organisms at various pre- and post-irradiation therapeutical settings. Experimental efforts were targeted at finding pharmacological procedures leading to optimization of therapeutical outcomes by minimizing undesirable side effects of the treatments. Progress in these efforts was obtained after discovery of selective inhibitors of inducible selective cyclooxygenase-2 (COX-2) inhibitors. Recent studies have been able to suggest the possibility to find combined therapeutical approaches utilizing joint administration of prostaglandins and inhibitors of their synthesis at optimized timing and dosing of the drugs which could be incorporated into the therapy of patients with acute radiation syndrome.
Introduction
Prostaglandins, as well as inhibitors of their synthesis, act in hematopoiesis through a spectrum of pleiotropic effects [1,2]. Therefore, there is no wonder that both groups of substances have appeared among those tested as potential modulators of radiation damage in mammals [3]. One of the proposed needs for the development of efficient and non-toxic modulators of radiation damage is that of the necessity to apply such drugs in connection with contingent radiation accidents or terrorist attacks [4][5][6].
The story indicated in the title began in the 1970s and 1980s. At that time, reports appeared informing nearly simultaneously about protective effects of either prostaglandins or inhibitors of their synthesis on ionizing radiation-induced acute radiation damage of the mammalian organism. The same period is characterized also by emerging studies on hematological effects of both the groups of substances mentioned [7,8].
Subsequent studies, especially of radiobiological and hematological targeting, brought new pieces of information about the contribution of prostaglandins and inhibitors of their synthesis to the regulation of hematopoiesis under normal and perturbed conditions. Some of these studies also uncovered new possibilities on how to enhance recovery following an exposure to sublethal or lethal radiation doses [9,10]. Significant progress in the methodological spectrum of the studies on the above topics was obtained when selective inhibitors of cyclooxygenase-2 (COX-2), one of the enzymes of the prostaglandin synthesis pathway, appeared [2].
This article deals with the outlined story and shows where the story can be found at present.
Hematological Effects of Prostaglandin E 2 (PGE 2 )
Prostaglandin E 2 (PGE 2 ) was shown to dose-dependently inhibit mouse and human myeloid progenitor cell proliferation in semisolid culture assays [11,12]. The same finding was obtained during in vivo studies and a hypothesis was postulated that prostaglandins played an important role in the negative hematopoietic feedback control [2,[13][14][15][16]. In a later study, the inhibitory effect of PGE 2 on granulocyte-macrophage progenitor cells was shown to synergize with that of interferons and to be mediated by tumor necrosis factor [17].
In contrast to the inhibitory action of PGE 2 on myeloid progenitor cells, this substance was repeatedly reported to stimulate proliferation of the erythroid progenitor cells [18][19][20][21][22]. The effect of PGE 2 could be direct or mediated through factors released by T cells [23].
In vitro studies from 1982 reported a stimulated production of cycling hematopoietic progenitor cells from a population of quiescent, non-cycling cells, most likely stem cells, in mouse or human bone marrow exposed to PGE 2 [24]. Later findings from 1998 showed an increased formation of both myeloid and erythroid progenitor cells from purified human blood CD34 + cells [25]. PGE 2 has been reported to increase hematopoietic stem cell long-term engraftment [26] and homing efficiency [27], to decrease their apoptosis [26,27], as well as to increase their entry into the cell cycle [26].
As shown above, PGE 2 influences various components of the hematopoietic system in different directions. Therefore, the resulting hematological effects of PGE 2 are manifestations of complex regulatory PGE 2 actions. Consequently, hematological effects of PGE 2 cannot be evaluated on a universal "good or bad" or "stimulatory or inhibitory" basis, but only from partial viewpoints and taking into account dosing and timing of the drug, evaluation time interval, the way of prostaglandin action (in vitro, ex vivo, in vivo) etc. Detailed analysis of methodological aspects of hematological studies on PGE 2 and other eicosanoids, as well as of mechanisms of their effects, can be found in a separate review [1].
Concise Overview of Acute Radiation Syndrome
Acute radiation syndrome (radiation sickness) is caused by exposure of a mammalian organism to a high dose of penetrating, ionizing radiation over a short period of time [28]. Differences in cellular sensitivity to ionizing radiation underlie the classic division of the acute radiation syndrome into three syndromes characterized by the radiation dose range by which they are produced, namely hematopoietic (bone marrow) syndrome, gastrointestinal syndrome, and neurovascular syndrome [28]. In man, the hematopoietic radiation syndrome is caused by radiation doses between 2 and 6 Gy [28]. This syndrome is the most probable to appear, e.g., as a consequence of radiation accidents. When appropriately treated by hematological interventions, survival is possible [29]. If untreated, about half of all people exposed to a dose of more than 3.5 Gy will die within 60 days from infection and bleeding [30]. The clinical importance of the hematopoietic radiation syndrome explains the emphasis on hematological pharmacological intervention in this article. The gastrointestinal syndrome occurs at radiation doses between 6 and 10 Gy; survival is possible in the lower part of this dose range [29], but most patients irradiated with doses in this range succumb within weeks of exposure [31]. The neurovascular syndrome (over 10 Gy in man) is absolutely lethal [28]; at doses in excess of 50 Gy victims will die within 48 h [32].
Radiosensitivity differs between species [33]. Table 1 shows approximate values of LD 50/30 (the radiation dose that kills 50 per cent of irradiated individuals within 30 days after exposure) for X-ray whole-body irradiation of various mammalian species, including man (for man, the LD 50/30 cannot be obtained experimentally and, therefore, the value shown was estimated [34]). As many data show, in the mouse, the experimental species most often used for radiobiological studies, the absolute radiation doses for the individual radiation syndromes are much higher than in man [33].
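As a concrete illustration of how an LD 50/30 value can be estimated from survival data, the following Python sketch fits a logistic dose-response curve to 30-day survival fractions. The dose values and survival fractions are invented for illustration only; they are not taken from the studies cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 30-day survival data (dose in Gy, fraction surviving).
# These numbers are illustrative only, not from the cited studies.
doses = np.array([5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
surviving_fraction = np.array([1.00, 0.95, 0.70, 0.30, 0.10, 0.00])

def logistic_survival(dose, ld50, slope):
    """Logistic dose-response: survival falls from 1 to 0 around ld50."""
    return 1.0 / (1.0 + np.exp(slope * (dose - ld50)))

# Fit the curve; the LD 50/30 is the dose at which predicted survival is 50%.
(ld50, slope), _ = curve_fit(logistic_survival, doses, surviving_fraction,
                             p0=[7.5, 2.0])
print(f"Estimated LD50/30: {ld50:.2f} Gy (slope parameter {slope:.2f})")
```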
Prostaglandins Act Radioprotectively
Much work on the topic of pharmacological modulation of radiation damage by prostaglandins was done by the group of Hanson et al. They showed that various prostaglandin derivatives, like 16,16-dimethyl PGE 2 [9], or misoprostol, a prostaglandin E 1 analog [35], protected intestinal stem cells from the deleterious effects of ionizing radiation. Of interest was also their finding that the radioprotection by 16,16-dimethyl PGE 2 was induced not only in the compartment of intestinal stem cells but also in that of the hematopoietic stem cells [36]. This finding is in agreement with those of hematologists [24,25].
A significantly increased survival in mice administered a pre-irradiation dose of 16,16-dimethyl PGE 2 was also reported [37]; the authors stated that the administration of the drug extended the LD 50/30 (the radiation dose killing 50 percent of the animals by day 30 after irradiation) from 9.39 Gy in the controls to 16.14 Gy in the mice treated with 16,16-dimethyl PGE 2 [37]. Further research revealed that misoprostol (a normal tissue protector [35]) did not protect tumors from radiation injury and could, thus, achieve therapeutic gain [38]. Also of interest is the finding of Wang et al. [39] that total-body irradiation in the dose range for the hematopoietic radiation syndrome also induced an intestinal injury. Therefore, the radioprotective efficacy of prostaglandins on gastrointestinal tissues can also be beneficial following radiation exposure within the dose range of the hematopoietic radiation syndrome.
Cyclooxygenases Carry out Prostaglandin Synthesis, Their Inhibition Can Be Selective
The formation of prostaglandins from arachidonic acid takes place in a series of steps; prostaglandin H synthase catalyzes the reduction of prostaglandin G 2 to prostaglandin H 2 [40]. Prostaglandin H synthase exists in two isoforms, namely cyclooxygenase-1 (COX-1), which is expressed constitutively in a variety of tissues including the gastrointestinal tract, and cyclooxygenase-2 (COX-2), which is inducible and responsible for the production of prostaglandins during inflammatory states [41][42][43][44]. Figure 1 summarizes the synthesis of prostaglandins and related prostanoids from arachidonic acid. Non-steroidal anti-inflammatory drugs (NSAIDs) inhibit the production of prostaglandins by blocking the access of arachidonic acid to the active site of the cyclooxygenases (COXs) [45]. Individual NSAIDs differ in their selectivity for COX-1 and COX-2 but this selectivity is never absolute for one of the COX isoforms [46]; e.g., meloxicam, whose effects will be discussed in detail later, has a six-fold selectivity for COX-2 [46] and it is classified among "COX-2-selective NSAIDs", but sometimes among "COX-2-preferential NSAIDs" [47]. Experimental and clinical data demonstrated a reduced risk of the undesirable gastrointestinal side effects after administration of COX-2-selective inhibitors as compared with the effects induced by classical non-selective cyclooxygenase inhibitors, as an example, see Reference [48]. The importance of COX-1 for the gastrointestinal tissues was emphasized by Cohn et al. [49] who reported that PGE 2 produced through COX-1 promoted crypt stem cell post-irradiation survival and proliferation. The selectivity of some of the COX inhibitors can be utilized in modulation of radiation damage in a mammalian organism, as discussed below.
Effects of Non-Selective COX Inhibitors in Sublethally and Lethally Irradiated Experimental Animals
Since the hematopoiesis-modulating effects of non-selective NSAIDs and the actions of these drugs in the radiation-damaged mammalian organism are discussed jointly in many studies, they will also be dealt with jointly here.
In 1982, indomethacin, a non-selective COX inhibitor, was reported to increase numbers of myeloid progenitor bone marrow cells [51]. Subsequent studies showed that prostaglandins [52] and non-selective NSAIDs [53,54] oppositely influenced production of cytokines by monocytes. Based on these findings, it could be suggested that NSAIDs removed the prostaglandin-mediated negative feedback control (see Section 2) of some important hematopoietic compartments.
A number of studies were performed testing the effects of non-selective COX inhibitors on hematopoiesis suppressed by ionizing radiation. The results of these studies are presented in more detail in earlier reviews [2,55]. Briefly, non-selective COX inhibitors, like indomethacin, diclofenac, and flurbiprofen, were reported to enhance mouse hematopoiesis when administered singly before or after one-time-sublethal irradiation, in the course of fractionated irradiation [56][57][58], as well as when given concomitantly with immunomodulators [59][60][61] or chemical radioprotectors [62].
The desirable action of non-selective COX inhibitors did not result in an enhanced survival of experimental mice after their lethal radiation exposure. Reduced survival was found in lethally irradiated mice treated with non-selective COX inhibitors either before [63] or after irradiation [64]. Since non-selective COX inhibitors are known for inducing a high incidence and intensity of undesirable side effects on the gastrointestinal tissues [65,66] and since lethal radiation doses can induce, besides the bone marrow radiation syndrome, also the gastrointestinal radiation syndrome (see Section 3), it could be deduced that it was just the manifestations of the gastrointestinal radiation syndrome which were aggravated by non-selective COX inhibitors in lethally irradiated mice. Thus, possible use of these drugs in the treatment of the acute radiation syndrome in man has been found to be significantly restricted.
Effects of Selective COX-2 Inhibitors in Sublethally and Lethally Irradiated Experimental Animals
The first report on hematopoiesis-stimulating action of selective COX-2 inhibitors appeared in 1998 when the selective COX-2 inhibitor NS-398 was reported to increase numbers of total white blood cells and neutrophils in experimentally burned rats [67]. Further studies on COX-2-selective inhibitors of prostaglandin production connected meloxicam, another clinically available selective COX-2 inhibitor [68], with hematopoiesis and radiation. Meloxicam was found to stimulate hematopoiesis when given in a single dose before irradiation [69,70], or repeatedly after irradiation [69,71]. Details and discussion of these findings can be found in a separate review [2]. It could be deduced from these reports that the selective COX-2 inhibitors of prostaglandin synthesis retained the hematopoiesis-stimulating effects of the non-selective ones.
Contrary to the above-mentioned aggravated survival of lethally irradiated mice following administration of non-selective COX inhibitors, promising results were obtained when mice exposed to lethal radiation doses were given meloxicam, a COX-2-selective NSAID. A significantly increased post-irradiation survival was observed when a single meloxicam dose was administered either shortly (1 h) before or shortly (1 h) after radiation exposure [70,72]. Nevertheless, Jiao et al. [73] published findings concerning a decreased survival of experimental mice if lethally irradiated animals were administered meloxicam repeatedly, namely seven times during the post-irradiation period. It follows from the observations obtained from survival experiments that dosing and timing of meloxicam in relation to the time of exposure is of crucial importance for obtaining a desirable outcome.
Summarization and Considerations on Hematological and Radiation-Modulating Effects of Selective COX-2 Inhibitor Meloxicam
Meloxicam is the only selective COX-2 inhibitor investigated in more detail with respect to its hematopoiesis-stimulating abilities in the setting of radiation exposure inducing the acute radiation syndrome. Therefore, this section deals solely with meloxicam, though it can be roughly supposed that other selective COX-2 inhibitors would act similarly in the relevant situations.
Meloxicam's primary use has been in the treatment of rheumatic disease and postoperative pain. In these indications, meloxicam was reported to be at least as effective as the non-selective COX inhibitors (non-selective NSAIDs) while being accompanied by more favorable gastrointestinal tolerability [74]. Due to the complexity of the acute radiation syndrome (see Section 3), these findings are highly relevant also from the point of view of the topics of this article.
As concerns the mechanisms of its unquestionable hematopoiesis-stimulating action (see Section 7), the ability of meloxicam to stimulate endogenous production of granulocyte colony-stimulating factor (G-CSF) was revealed and described [70][71][72]. Certainly, the previously postulated capability of meloxicam to remove the negative feedback control exerted by prostaglandins over the production of hematopoietic progenitor cells (see Section 6) is also of importance in its hematopoiesis-stimulating action.
An intricate issue is that of the timing of meloxicam administration in attempts to modulate radiation damage in the context of survival experiments. Our group reported successful improvement of post-irradiation survival when administering meloxicam in a single pre-irradiation dose 1 h before irradiation [70], or in a single post-irradiation dose 1 h after irradiation as a monotherapy [72], as well as in combination with the adenosine A 3 receptor agonist IB-MECA [75]. We hypothesized that the success of meloxicam administered nearly immediately after irradiation consisted in its ability to induce G-CSF production; this is consistent with the proposal of Hérodin et al. that the therapeutic action of hematopoietic cytokines is most beneficial when they act very shortly after the radiation exposure [76,77]. A shift of the single meloxicam injection to the time interval of 24 h after irradiation resulted in nearly equal survival in meloxicam-treated and control mice and, thus, was unsuccessful [70]. Similarly unsuccessful was the extension of meloxicam administration to daily dosing on days 1 to 9 after irradiation [72]. The latter result was in accordance with that of Jiao et al. [73], who also tested repeated post-irradiation administration of meloxicam without the desirable effect of the drug on post-irradiation survival. It was hypothesized that the failure of the repeated post-irradiation meloxicam dosing could be due to its vascular [78] or hepatic [79] side effects manifesting under repeated dosing and serious post-irradiation stress. Around the turn of the first decade of this century it thus seemed that the end of the story lay in the recommendation to employ the hematopoiesis-stimulating abilities of selective COX-2 inhibitors in the therapy of acute radiation syndrome in the form of a one-time application shortly after the radiation exposure.
Considerations Concerning Connecting Pharmacological Interventions with Prostaglandins and Inhibitors of Their Synthesis into One Treatment Scheme
The effects of prostaglandins and inhibitors of their synthesis on acute radiation syndrome are summarized in Figure 2.
A shift in the approach to the topic of this story appeared when hematological and radiation-modulating effects of both prostaglandins and inhibitors of their synthesis were dealt with simultaneously. Hoggatt et al. [80] published an article in which they presented and discussed possibilities of how to increase the levels of important hematological parameters and the post-irradiation survival of lethally irradiated mice both by PGE 2 and by the selective COX-2 inhibitor of prostaglandin synthesis, meloxicam. For PGE 2 , they suggest a treatment regimen of a single PGE 2 dose at 6 or 24 h after lethal irradiation; for meloxicam, four daily doses starting 6 or 48 h post-irradiation. As concerns hematological parameters, the authors counted blood platelets and three types of hematopoietic progenitor cells under the above dosing and timing regimens. All the treatment regimens using PGE 2 or meloxicam showed significantly better survival and status of hematological parameters in comparison with the controls [80]. The PGE 2 treatment regimen used is, in our opinion, the only one testing post-irradiation treatment with the drug and is in no contradiction to the previous observations on survival of lethally irradiated mice given a PGE 2 analog before irradiation [37]. However, the enhanced survival following the treatment regimen of repeated post-irradiation doses of meloxicam reported by Hoggatt et al. [80] is in disagreement with the previously observed unchanged or decreased post-irradiation survival after repeated post-irradiation dosing of meloxicam reported by Hofer et al. [72] and Jiao et al. [73]. Consequently, we consider it necessary to repeat the investigations on the profitability of various post-irradiation treatment regimens with meloxicam in order to define and confirm the best treatment scheme.
Though the structure of the article by Hoggatt et al. [80] invited a further step, namely combining prostaglandins and inhibitors of their synthesis in the treatment of acute radiation syndrome, no findings have been published interconnecting the administration of prostaglandins and COX inhibitors in a suitable regimen within one experiment. As follows from the previous paragraphs, both prostaglandins and COX inhibitors show hematopoiesis-stimulating and radiation damage-suppressing actions, they achieve their effects by different mechanisms, and they follow different timing and dosing schedules. Therefore, the idea of their interconnection into one treatment scheme is tempting. It has been repeatedly stated that one of the general approaches to the treatment of acute radiation syndrome aims at reducing undesirable side effects of the therapy while enhancing the overall therapeutic outcome by combining two (or more) agents [81][82][83]. Combined administration of prostaglandins and inhibitors of their synthesis, targeted at stimulation of hematopoiesis and survival enhancement in an irradiated mammalian organism, would thus represent a promising way to treat acute radiation syndrome and an interesting completion of the story briefly described here.
Supplementary Note on COX-2-Deficient Mice, Hematopoiesis, and Myelosuppression
Studies examined in Sections 7-9 have dealt with pharmacological inhibition of COX-2, which is acute, potentially not absolute, and may result in partial non-selective co-inhibition of cyclooxygenase-1 [46]. On the other hand, loss of COX-2 activity in COX-2-deficient (COX-2 knock-out, COX-2 KO) mice is life-long, complete, and absolutely selective. The first information on the behavior of hematopoiesis in COX-2 KO mice appeared in 1989; Lorenz et al. reported on delayed and deteriorated recovery of 5-fluorouracil-induced hematopoietic damage in COX-2 KO mice [84]. In our laboratory we have recently found, using a complex hematological analysis, that in non-treated mice, hematological parameters in COX-2 KO animals are either at the same level as in wild-type controls or, in some instances, significantly higher (peripheral blood neutrophils, bone marrow granulocyte/macrophage progenitor cells) [85]. However, in mice with radiation-induced myelosuppression, the overall hematological picture was found to be distinctly worse in the COX-2 KO animals [85]. The latter finding was subsequently supported by the observation of significantly impaired post-irradiation survival of COX-2 KO mice [86]. A hypothesis was formulated that radiation-induced systemic inflammation is beneficial for post-irradiation hematological recovery and that this inflammation is suppressed by the absence of the inducible COX-2 in COX-2 KO mice [85]. Hematological and radiobiological findings in COX-2 KO mice, with their chronic and absolute COX-2 absence, do not have the potential for direct practical use in clinical medicine. However, they contribute to the understanding of the role of prostaglandins and inhibition of their synthesis in hematopoiesis.
Conclusions
It can be concluded from the data summarized above that both prostaglandins and inhibitors of their synthesis possess the ability to positively influence the acute radiation syndrome in a mammalian organism. The results of recent studies suggest that, at appropriate dosing and timing, administration of prostaglandins and inhibitors of their synthesis could be utilized in one pharmacological treatment regimen with the aim of strengthening the processes of post-irradiation regeneration.
Conflicts of Interest:
The authors declare no conflict of interest.
Flexohand: A Hybrid Exoskeleton-Based Novel Hand Rehabilitation Device
Home-based hand rehabilitation has excellent potential as it may reduce patient dropouts due to travel, transportation, and insurance constraints. Being able to perform exercises precisely, accurately, and in a repetitive manner, robot-aided portable devices have gained much traction these days in hand rehabilitation. However, existing devices fall short in allowing some key natural movements, which are crucial to achieving full potential motion in performing activities of daily living. Firstly, existing exoskeleton type devices often restrict or suffer from uncontrolled wrist and forearm movement during finger exercises due to their setup of actuation and transmission mechanism. Secondly, they restrict passive metacarpophalangeal (MCP) abduction–adduction during MCP flexion–extension motion. Lastly, though a few of them can provide isolated finger ROM, none of them can offer isolated joint motion as per therapeutic need. All these natural movements are crucial for effective robot-aided finger rehabilitation. To bridge these gaps, in this research, a novel lightweight robotic device, namely “Flexohand”, has been developed for hand rehabilitation. A novel compliant mechanism has been developed and included in Flexohand to compensate for the passive movement of MCP abduction–adduction. The isolated and composite digit joint flexion–extension has been achieved by integrating a combination of sliding locks for IP joints and a wire locking system for finger MCP joints. Besides, the intuitive design of Flexohand inherently allows wrist joint movement during hand digit exercises. Experiments of passive exercises involving isolated joint motion, composite joint motions of individual fingers, and isolated joint motion of multiple fingers have been conducted to validate the functionality of the developed device. The experimental results show that Flexohand addresses the limitations of existing robot-aided hand rehabilitation devices.
Introduction
Stroke, trauma, sports injuries, occupational injuries, spinal cord injuries, and orthopedic injuries are common occurrences in human life, often resulting in hand and finger impairment. The human hand is the most used external part of the human body for activities of daily living (ADL) [1,2]. A person's life can be severely impacted by limitations of motion or even a tiny scar in their body [3]; the impairment of a hand causes a significant deficit in the performance of everyday tasks. Stroke reduces mobility in more than half of stroke survivors aged 65 and over [4].
Robotic devices capable of providing continuous isolated and combined digit joint motions to all digits can expand the horizon of robot-aided hand rehabilitation both in a clinical setting and at home. Furthermore, ease of wearability and portability of the device can increase the efficacy of robot-aided home-based rehabilitation. An important factor that cannot be ignored in dealing with digit ROM limitations is that many of the digit tendons travel across the wrist; changing wrist and forearm positions can alter the dynamics of how the tendons work. Some of these motions include wrist flexion/extension, radial and ulnar deviation, and forearm pronation/supination. Additionally, wrist and forearm positions are often required or encouraged to achieve full potential motion during patients' daily life activities. Therefore, restricting these motions of the wrist can hinder the potential of robot-aided therapy. Another key issue in the continuous passive motion of fingers using devices is that passive MCP abduction-adduction motion naturally occurs during MCP flexion-extension. These passive motions should be considered while developing such devices for hand rehabilitation.
Robotic device-aided upper limb rehabilitation has been very popular for reducing the burden of going through therapy [17][18][19][20][21][22][23][24]. This mode of rehabilitation has shown great promise among therapeutic interventions. Robotic rehabilitation devices are mainly of two types: end-effector/endpoint type [11,[25][26][27], and exoskeleton type [28][29][30][31]. The end-effector type devices for hand rehabilitation must remain stationary, and the patient is required to place the affected hand onto the device to receive the treatment. End-effector/endpoint devices for hand rehabilitation [32][33][34] are mechanisms that act on the distal tips of the fingers, propagating motion to the DIP, PIP, and MCP joints. These devices can accommodate a variety of hand sizes, but isolated finger movement cannot be achieved effectively. Due to their easily manufacturable design, quite a few end-effector type devices have become commercially available on the market, commonly known as continuous passive motion (CPM) devices. Currently, CPM devices such as Waveflex CPM [35] and Kinetec Maesta [36] are used in clinical settings and at home and are often covered by Medicare or other health insurance policies. However, these products cannot provide isolated flexion and extension movement to finger joints. There are a few commercially available devices such as Reha-Digit [37] and Amadeo [38] which can provide isolated finger ROM but cannot offer isolated joint motion. Vinesh et al. [39] developed a non-actuated sensored hand glove integrated with a computer game (Flappy Bird) to engage patients in playing a game involving the subject's single/multiple fingers, representing fine motor skill occupational therapeutic exercises. There are also some non-actuated peripherals for hand rehabilitation available on the market, such as the SAEBO Glove [40] and MusicGlove [41], which function to strengthen the finger muscles. Still, these cannot provide any passive movement to the patient's hand, which is paramount towards recovery from hemiplegia due to stroke.
Due to the limitations of end-effector type devices, over the past few years, researchers have been leaning towards exoskeleton type robotic devices for hand rehabilitation [42]. Exoskeleton-based design approaches are more suitable for generating isolated finger joint motions and digit movements but can become quite complex due to hand morphology. The bones of the human hand can be quite small while having 27 joints and associated degrees of freedom (DoF). Even when only the flexion-extension motion of the DIP, PIP, and MCP joints of the fingers and the IP and MCP joints of the thumb are considered, 14 DoF (three per finger plus two for the thumb) need to be accounted for. A wearable exoskeletal rehab device provides motion to digit joints by maintaining virtual joint axes during motion or by aligning the joint axes of the structural parts with the digit joint axes. Gonzalez et al. developed a novel virtual joint-based exoskeletal device, ExoK'ab [43], capable of providing isolated motion to digits and to the fingers' PIP and MCP joints and the thumb's MCP joint. Their device utilized a combination of worm-geared motors and a telescopic mechanism mounted on a forearm-supported base. The user's hand is attached to the device using Velcro straps at the middle and proximal phalanges during exercises. The ExoK'ab adds 731 g of wearable weight to the user's hand, and the base structure restricts any wrist motion during hand exercises. Virtual joint-based design [43][44][45][46][47] requires extensive integrated parts to make sure the virtual axis of rotation of the exoskeleton matches the human hand during flexion-extension motion. Soft robotic devices based on artificial muscles [48] or tendons [49,50] have shown great promise in designing simpler mechanisms capable of producing digit motions in the user's hand. These devices produce the external forces required to achieve ROM without utilizing solid structural parts, alleviating the necessity for maintaining a virtual joint axis through an external mechanism. It should be noted that the pneumatic muscle-based SIFREHAB, marketed by SIFSOF, US [51], is commercially available. However, this device lacks a provision for practical tendon glide exercises that require isolated joint movements. In addition, artificial muscles actuated through pressurized elements may leak and reduce system reliability over time unless explicit maintenance is carried out. Electrically actuated tendon-driven soft robotic devices for hand rehabilitation, such as those from Bernocchi et al. [50] and Chen et al. [49], have the same issue of not being able to provide isolated joint movements. Exoskeleton type rehab devices with aligned joint-based mechanisms can provide isolated digit joint motion while having fewer structural parts than virtual joint axis-based mechanisms. However, this approach requires space at the sides of the finger for positioning structural elements. It is suitable for the index finger and thumb, where there is enough space to demonstrate the workability of the designed device, but applying the same mechanism to the middle, ring, and small fingers poses a problem due to the space restriction between the index-middle, middle-ring, and ring-small fingers. This issue is further compounded by the fact that all four fingers come together to achieve a full range of motion, namely a full fist, due to passive MCP adduction reducing the inter-finger space even more. Many researchers [52,53] have demonstrated good motion and control of the index finger and thumb.
Still, we have yet to see the application of those novel designs to rehabilitate middle, ring, and small fingers. Moshaii et al. [54] have shown a scheme for isolated phalange motion in addition to isolated digit motion. Their design is such that the user's hand is fixed on a stationary platform that restricts wrist motions during finger motion therapy.
In this research, a robotic device, namely "Flexohand," has been developed to fulfill the therapeutic needs (see Table 1) for hand rehabilitation. Flexohand addresses the limitations of current solutions, comprising both research prototypes and commercial solutions, based on the following criteria:
3. The device should not restrict wrist motion;
4. The device should accommodate natural motion during finger flexion-extension by compensating MCP abduction-adduction motion;
5. Lower added weight burden to the user's hand.
The main contribution of this research is the development and incorporation of a novel compliant mechanism for passive compensation of MCP abduction-adduction during the fingers' MCP flexion and extension exercises. In addition, a combination of sliding locks for IP joints and a wire locking system for finger MCP joints has been integrated for achieving isolated and composite digit joint flexion-extension. Finally, a tendon transmission system has been designed to reduce the wearable weight of the device. Moreover, this system allows wrist joint motions during hand digit exercises. The rest of the paper is structured as follows: Section 2 presents a detailed description of the Flexohand. The kinematic modeling of Flexohand and the relationship between joint angles and actuator rotation are presented in Section 3. Section 4 describes the donning and doffing method of the device. The experimental evaluation and discussion are summarized in Section 5. Finally, the paper ends with the conclusions presented in Section 6.
Anatomically Inspired Design
Anatomically, human fingers are classified into two types: the thumb and the other four fingers. The human thumb consists of three joints: the interphalangeal (IP) joint, metacarpophalangeal (MCP) joint, and the trapeziometacarpal (TMC) joint (see Figure 1A). Anatomically, index, middle, ring, and small fingers differ from the thumb. Each finger is composed of three joints: the distal interphalangeal (DIP) joint, the proximal interphalangeal (PIP) joint, and the metacarpophalangeal (MCP) joint (see Figure 1B). The range of motion (ROM) of the digits is achieved by the contraction of muscles, which generate the necessary force for movement, while tendons transmit the muscular forces to the joints and thereby induce flexion and extension of the fingers. Muscles are connected to tendons, which are connected to bones at their insertion points and at the muscles' origin points. Annular ligaments or pulleys serve to ensure the tendons stay in the correct path or position in the fingers and amplify the pulling force of finger flexion (Figure 2). The primary finger motions are flexion, extension, abduction, adduction, and rotational movements. For designing Flexohand, only flexion and extension of the finger joints were considered. Two types of muscles and tendons are responsible for such motion. Extensor digitorum muscles and extensor tendons are responsible for the extension motion, and flexor digitorum muscles and flexor tendons are responsible for the flexion motion. Each finger of the hand consists of various bones and joints and can be considered a robotic manipulator, where muscles actuate the revolute joints, tendons transmit the power, and pulleys guide the tendons.
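Since each finger is described above as a serial chain of revolute joints, a minimal planar forward-kinematics sketch in Python can make the idea concrete. The segment lengths, joint angles, and sign convention below are illustrative assumptions, not measurements or modeling from the paper.

```python
import numpy as np

def fingertip_position(joint_angles_deg, segment_lengths):
    """Planar forward kinematics for a finger modeled as a serial chain
    of revolute joints (MCP, PIP, DIP), all flexing in one plane.

    joint_angles_deg: flexion angles of MCP, PIP, DIP in degrees.
    segment_lengths: lengths of the proximal, middle, distal phalanges.
    Returns the (x, y) fingertip position relative to the MCP joint.
    """
    x = y = 0.0
    cumulative_angle = 0.0
    for angle_deg, length in zip(joint_angles_deg, segment_lengths):
        cumulative_angle += np.deg2rad(angle_deg)
        x += length * np.cos(cumulative_angle)
        y -= length * np.sin(cumulative_angle)  # flexion bends toward the palm
    return x, y

# Illustrative values (cm): proximal, middle, distal phalanx lengths
# and a mid-flexion posture of the MCP, PIP, and DIP joints.
print(fingertip_position([45.0, 30.0, 20.0], [4.5, 2.5, 1.8]))
```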
In our design, we leveraged the knowledge of human anatomy by developing a compliant exoskeleton type device. The index, middle, ring, and small fingers exoskeleton is composed of a distal phalange shell (DPS), middle phalange shell (MPS), and proximal phalange shell (PPS), and the thumb exoskeleton is composed of a DPS and PPS. Figure 3 illustrates the associated shells for housing the finger phalanges. The DIP and PIP joints of the exoskeleton are aligned with the axes of rotation of the finger joints.
The flexor sheathings of the DPS, PPS, and MPS segments are mechanical versions of the pulleys in the human hand (Figure 4). In the DPS, PPS, and MPS, the angled section serves as a hardware limit that eliminates the possibility of moving the mechanism to a position beyond the human's anatomical ROM, denoted as "Flexion Limits" in Figure 3.
The open type shells are designed to accommodate the finger phalanges in such a way that, while donning the device, the finger slides into the associated exoskeleton shells and remains housed in the shells during flexion-extension movement of the finger phalanges by the extended parts of the DPS, PPS, and MPS (Figure 4). The extended part of the shells encompasses the palmar region of the finger exterior between the interphalangeal digit creases. This approach reduces the need for adding Velcro straps or other methods of keeping the exoskeleton connected with the fingers during rehabilitative exercises. These extended parts are also used as a sheath to pass the gliding flexor wire.
Compliant Mechanism
Human fingers have varying gaps between adjacent fingers during flexion-extension, which conforms to natural hand motion. This gap is lowest when making a fist with a hand. The extruded portions of the exoskeletal shells at the DIP and PIP joints occupy a 7 mm space between the index-middle, middle-ring, and ring-small fingers. During the flexion motion of the fingers, the exoskeleton of each finger comes together due to the passive adduction of the MCP joints. This causes mechanical interference between two adjacent exoskeleton modules. Therefore, we have designed a novel MCP-compliant mechanism. The MPS of each finger is connected to the respective MCP-compliant module via a frictional sliding lock (Figure 6). For a specific user, the relative position of the associated MPS and MCP-compliant module is adjusted by external force the first time they don the device. The device retains this position for future usage via interbody friction between the MPS and MCP-compliant modules. This adjustability serves four key purposes: (i) the DIP and PIP joints of the index, middle, ring, and small finger and associated exoskeletal segments can be aligned properly; (ii) individual fingers' exoskeletons do not collide during isolated or multi-finger movements; (iii) minimal resistive force is generated by the MCP-compliant mechanism during MCP flexion-extension while maintaining the hand's natural motion during the movements; and (iv) passive compliance of MCP abduction-adduction of the index, middle, ring, and small finger during flexion-extension exercises.
Figure 7c shows the middle MCP-compliant module with an oriented frictional sliding lock for the connecting MPS. The four MCP-compliant modules are connected via a general-purpose elastic cord. One elastic cord of Ø2.5 mm diameter is routed through the compliant modules, passing through three holes in each module, and is locked at the outer sides of the index and small fingers' MCP modules. The use of a single elastic cord allows uniform force distribution through all four fingers' MCP-compliant modules during passive compliance of MCP abduction-adduction in MCP flexion-extension motion. The frictional sliding lock slots in the MCP-compliant modules are angled so that all four fingers are spread out during finger extension. This reduces interference due to friction between two adjacent exoskeletal shells. The orientation of the MCP-compliant mechanism can be seen in Figure 7a.
Transmission and Actuation Mechanism
We implemented sets of flexor and extensor tendon wires comparable to the extensor digitorum and flexor digitorum profundus muscles in the hand. The flexor wire is routed through the flexor wire sheathings (Figure 8) at the palmar side of the DPS, PPS, and MPS, through the palm module, and via a Bowden tube towards the motor assembly. Similarly, the extensor wire is routed through the extensor wire sheath at the dorsal side of the DPS, PPS, and MPS and then passed through the back palm module towards the motor assembly. The back palm module is worn at the dorsal side of the hand, and the palm module is worn at the palmar side of the hand. The ends of the flexor, extensor, and MCP lock wires are connected to V-grooved disks attached directly to the motor hubs. For each finger (index, middle, ring, and small), a set of three motors (Lewansoul LX-16A [55]) was used (see Figure 9). Two motors, namely the flexor motor and extensor motor, are responsible for providing flexion and extension motion, and the third motor is used for restricting the movement of the MCP joint. For the thumb, we used two motors for flexion and extension. In total, this prototype of Flexohand uses 14 actuators, which are mounted on a motor assembly board. The flexor, extensor, and MCP lock wires are connected to the motors so that when a motor rotates counterclockwise (CCW), the wire is pulled relative to the Bowden tube, generating tension. In cable-driven transmission systems, there is a very high possibility of cable slack and self-winding. To solve this issue, both the flexor and extensor motors work together during flexion and extension. For flexion motion, the flexor wire is pulled by the flexor motor's CCW rotation, and the extensor motor rotates clockwise (CW) at a higher velocity to prevent slack in the wire.
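The coordinated pull-and-release behaviour of the flexor and extensor motors described above can be sketched in a few lines of Python. The servo interface (`set_velocity`), the V-groove disk radius, and the velocity ratio are assumptions for illustration; this is not the actual Flexohand control code or the LX-16A driver API.

```python
import math

DISK_RADIUS_MM = 8.0          # assumed radius of the V-grooved disk on the motor hub
EXTENSOR_SPEED_FACTOR = 1.5   # extensor releases faster than the flexor pulls (assumed)

def wire_travel_to_motor_angle(wire_travel_mm: float) -> float:
    """Wire wound onto a disk of radius r by a rotation of theta radians
    travels r * theta, so theta = travel / r (returned here in degrees)."""
    return math.degrees(wire_travel_mm / DISK_RADIUS_MM)

def command_flexion(flexor_motor, extensor_motor, wire_travel_mm: float, speed: float):
    """Pull the flexor wire (CCW) while the extensor motor releases (CW)
    at a higher velocity so the antagonist wire never goes slack."""
    angle_deg = wire_travel_to_motor_angle(wire_travel_mm)
    flexor_motor.set_velocity(+speed)                            # CCW: winds the flexor wire
    extensor_motor.set_velocity(-speed * EXTENSOR_SPEED_FACTOR)  # CW: pays out the extensor wire
    return angle_deg  # target rotation for the flexor motor

class FakeServo:
    """Stand-in for a velocity-controlled servo, for demonstration only."""
    def __init__(self, name): self.name = name
    def set_velocity(self, v): print(f"{self.name}: velocity {v:+.1f}")

if __name__ == "__main__":
    target = command_flexion(FakeServo("flexor"), FakeServo("extensor"),
                             wire_travel_mm=20.0, speed=30.0)
    print(f"flexor motor target rotation: {target:.1f} deg")
```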
The MCP lock wire, when pulled by the MCP lock motor, restricts the motion of the MCP joint. The MCP slider lock can slide and be positioned between two adjacent MCP-compliant modules to make the finger exoskeleton modules rigid and thereby improve donning and doffing. For the thumb, we only considered IP and MCP flexion-extension. Therefore, the motion of the carpometacarpal (CMC) joint was limited by a thumb CMC brace. For the thumb exoskeleton, a similar flexor and extensor wire is routed through the Bowden tube connected to the CMC brace.
Isolated Digit and Digital Joint Motion
In this hand rehabilitation device, any desired isolated finger motion or isolated joint motion can be achieved by restricting the movements of the other unintended finger joints by configuring the motors' position. To achieve isolated finger motion, the motors associated with other fingers are actively kept at zero position while the desired finger's associated motors rotate according to the positional command. The isolated joint motion is achieved by introducing a simple slider lock for DIP and PIP joints and a cable-based lock mechanism for the MCP joint. When DIP motion is to be restricted, the associated DIP slider lock (Figure 3) is pushed between DPS and PPS. Similarly, to lock PIP motion, the PIP slider lock is moved between PPS and MPS. Removing the DIP/PIP slider lock frees that joint, allowing that joint's motion. To restrict the motion of MCP joints, the MCP lock motor is kept fixed at its position while the MCP joint is at the extension position. For DIP and/or PIP joints' flexion motion, the flexor motor pulls the flexor wire, the extensor motor releases the extensor wire, and the MCP joint is restricted, resulting in flexion-extension motion of either the DIP, PIP, or both joints based on the configuration. During flexion motion of the finger, the majority of the tension generated in the flexor wire by the flexor motor first acts on the MCP joint, causing MCP flexion. However, as the flexor wire is passed through the palm module, when the MCP joint is fully flexed, the flexor wire encounters a high frictional force that limits the movement of the DIP and PIP joints. Therefore, during DIP and/or PIP flexion motion, the MCP joint is locked until DIP and/or PIP flexion has been achieved. Afterward, the MCP motor releases the MCP lock wire simultaneously with the flexor motor's flexor wire winding, thus allowing for the flexion of all the joints. For extension motion, the flexor motor and MCP lock motor release the associated tendon wires at a higher velocity while the extensor motor pulls the extensor wire at a comparatively lower velocity. Various isolated and composite finger joint motions and associated device configurations have been shown in Table 2. Table 2. Isolated and composite finger joint flexion-extension exercises.
(Table 2 in the original lists each exercise with a Finger Joint Motions label, a Configuration figure, and a Description; the labels and figures are not recoverable from this extract, so only the unique descriptions are reproduced below.)
• The figure shown in the right column shows the nominal/zero position of the device while the user is wearing it. The same configuration is achieved during extension exercises: the extensor motor pulls the extensor wire at a slower rate while the MCP lock motor (if moved from the zero position) and the flexor motors release the associated wires. The DIP and PIP locks can be slid off from the exoskeletal shells.
• The DIP lock has been removed while the PIP lock stays. The MCP lock motor stays at the zero position throughout DIP flexion to prevent MCP joint motion. The extensor motor releases the extensor wire at a faster rate while the flexor motor pulls the flexor wire.
• The DIP lock has been removed; the PIP lock has been slid to the DIP lock's position and is used as the DIP lock. The motor configuration is the same as that of the isolated DIP flexion.
• Both the DIP and PIP locks stay at the locking position. During this motion, the extensor and MCP lock motors release the extensor and MCP lock wires, respectively, faster than the flexor motor pulls the flexor wire.
• During this composite joint motion, both the DIP and PIP locks are slid off. The motor configuration is the same as that of the isolated MCP flexion exercise.
• One sliding lock is kept to lock the DIP joint's motion and is thus named the DIP lock. At Step 1, PIP flexion is achieved through the same motor configuration as that of the isolated PIP flexion. At Step 2, the MCP lock wire is released simultaneously so that MCP flexion is performed.
• The PIP lock is kept at the joint locking position. At Step 1, DIP flexion is achieved through the same motor configuration as that of the isolated DIP flexion. At Step 2, the MCP lock wire is released simultaneously so that MCP flexion is achieved.
• At Step 1, DIP and PIP flexion is achieved by keeping the MCP lock motor at the zero position until the extensor and flexor motors have achieved DIP and PIP flexion. At Step 2, the MCP lock wire is released simultaneously with the extensor motor to achieve MCP flexion.
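For illustration only, the lock and motor configurations summarized above can be encoded as a simple data structure, which is convenient when sequencing exercises in software. The sketch below is our own restatement of the table; all field names and enumeration values are hypothetical and are not part of the Flexohand implementation.

```python
# Hypothetical encoding of the Table 2 exercise configurations.
# The keys and values below merely restate the lock placements and relative
# motor actions described in the text; they are not the authors' code.
EXERCISES = {
    "isolated_DIP_flexion": {
        "dip_lock": "removed", "pip_lock": "locking",
        "mcp_lock_motor": "hold_zero",            # keeps the MCP joint extended
        "flexor_motor": "pull", "extensor_motor": "release_fast",
    },
    "isolated_PIP_flexion": {
        "dip_lock": "pip_lock_moved_here", "pip_lock": "removed",
        "mcp_lock_motor": "hold_zero",
        "flexor_motor": "pull", "extensor_motor": "release_fast",
    },
    "isolated_MCP_flexion": {
        "dip_lock": "locking", "pip_lock": "locking",
        "mcp_lock_motor": "release_fast",
        "flexor_motor": "pull", "extensor_motor": "release_fast",
    },
    "composite_DIP_PIP_MCP_flexion": {
        "dip_lock": "removed", "pip_lock": "removed",
        # Step 1: hold the MCP lock at zero until DIP/PIP flexion is reached;
        # Step 2: release the MCP lock wire to allow MCP flexion.
        "steps": ["hold_mcp_then_flex_ip_joints", "release_mcp_lock"],
    },
}
```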
Modelling of Structural Parts
Anthropomorphic references from the healthy adult (participant-A, age: 30 yrs., height: 64 in., weight: 152 lbs.) subject's right hand were taken before designing the Flexohand. Exoskeletal shells, locks, and other modules were designed in Creo Parametric software, 6.0.2.0, PTC, Boston, MA, USA. It should be mentioned that, while designing such exoskeleton type devices for finger rehabilitation, it is essential that the device's structural joint pivots and the hand digits' rotation axes are aligned. The exoskeletal shells were designed to conform to this requirement with the use of data from the subject. The DIP and PIP/IP joint axes of the hand digits were aligned with the DPS-PPS and PPS-MPS pivotal joints. Fingers' MCP joint axes of rotation are compensated by the compliant mechanism and, in the case of the thumb, the combination of a thumb brace and the orientation of tendon wires ensures that thumb MCP flexion-extension is achieved nominally. It is to be noted that the current prototype of Flexohand is specific to an individual, participant-A. For a different participant, associated anthropomorphic parameter values would need to be updated. The parametric design capability of Creo Parametric software enables us to input the updated parameter values into the CAD environment and achieve new models of exoskeletal shells and compliant mechanism parts which are specific to each participant. Afterward, the new parts with different dimensions can be 3D printed and assembled for usage.
Kinematic Analysis
Each finger combined with its associated exoskeleton can be described as a 4-DoF (2R-R-R) serial manipulator in which MCP abduction-adduction, MCP flexion-extension, PIP flexion-extension, and DIP flexion-extension motions are considered. In contrast, the thumb can be defined as a 2-DoF (R-R) serial manipulator considering IP flexion-extension and MCP flexion-extension motions. Figure 10 shows the link frame assignment for a finger and the exoskeletal segments, where L1 is the length of the proximal phalange, L2 is the length of the middle phalange, and L3 is the distance between the DIP joint and the fingertip. In Table 3, we summarize the modified Denavit-Hartenberg (DH) parameters [56] associated with the developed kinematic model.
The transformation matrix for the kinematic model is expressed in Equation (1). Here, (Px, Py, Pz) defines the position of the fingertip with respect to the corresponding MCP joint, and the remaining elements of the transformation matrix are given in Appendix A.
In this device, the MCP abduction/adduction-associated angle q1 adjusts passively using the compliant mechanism during MCP flexion/extension. Therefore, each finger exoskeleton can be defined as a simple 3-DoF (R-R-R) planar manipulator. This paper focuses on estimating digit joint angles by developing a relation between motor rotation and effective tendon wire length.
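Under this planar simplification, the fingertip position follows from the three flexion angles by ordinary planar forward kinematics. The sketch below is a minimal reconstruction of that relation and is not the authors' implementation; the angle and length symbols follow Figure 10 (MCP, PIP, DIP flexion and segment lengths L1, L2, L3), while the sign convention and the example values are our own assumptions.

```python
import numpy as np

def fingertip_position(q_mcp, q_pip, q_dip, L1, L2, L3):
    """Planar forward kinematics of one finger modelled as a 3-DoF R-R-R chain.

    q_* are flexion angles in radians measured from the fully extended pose;
    L1, L2, L3 are the proximal phalange length, middle phalange length, and
    DIP-joint-to-fingertip distance. Returns the fingertip (x, y) relative to
    the MCP joint, with flexion taken as the negative y direction (assumption).
    """
    a1 = q_mcp
    a2 = q_mcp + q_pip
    a3 = q_mcp + q_pip + q_dip
    x = L1 * np.cos(a1) + L2 * np.cos(a2) + L3 * np.cos(a3)
    y = -(L1 * np.sin(a1) + L2 * np.sin(a2) + L3 * np.sin(a3))
    return x, y

# Example: 30 degrees at each joint for hypothetical 45/25/20 mm segments.
print(fingertip_position(*np.radians([30, 30, 30]), 45.0, 25.0, 20.0))
```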
Angles corresponding to digit joints' flexion-extension motion can be defined by relating joint angle values to the varying effective tendon wire length responsible for achieving the motion. Figure 11 shows the procedure for estimating isolated DIP joint flexion angles. To estimate flexion angle first, we drew a circle with radius OA = OB = r, where O denotes DIP joint's center of rotation, A denotes the flexion wire exit point of DPS, B denotes the flexion wire entry point of PPS, OA is the distance between O and A, and OB is the distance between O and B.
For both DIP extension and intermediate positions, AC = CB and AB = AC + CB, where C denotes the point on AB with ∠OCA = ∠OCB = 90°; thus, ∆OAB is an isosceles triangle in these cases. During DIP flexion (see Figure 11(iii)), the points A, C, and B coincide; therefore, OA = OC = OB. Using Figure 12, we derived the relation between the DIP flexion angle, δ = 2β (left), and the flexor motor's angular rotation, θ (right). The left figure corresponds to the DIP joint's extended position (see Figure 11(i)), where the angle between DPS and PPS, δ = ∠AOB, is maximum, and the intermediate position corresponds to Figure 11(ii). According to the shell design in the CAD environment, we found ∠AOB = δ = 80°, and therefore β = 40° and OC = 5.5 mm. Knowing β allowed us to find α, as ∠OCA = ∠OCB = 90°; we then calculated the radius of the constructed circle, OA, for the DIP extended configuration using Equation (2) and the chord length ACB using Equation (3). Then, we found the varying length of line ACB = L', corresponding to varying ∠AOB = δ', for different DIP intermediate positions using Equations (4) and (5), where ∆L is the relative change in effective tendon wire length. In Figure 12, ACBQP is the total flexor wire length; during all positions of DIP flexion, the length of the BQ section remains constant. According to the flexor wire's connection to the flexor motor (see Figures 8 and 12), the relative decrement of effective tendon wire length is achieved by rotating the flexor motor counterclockwise and thus varying θ, the motor's angular position. The flexor wire end is connected to the V-grooved disk via a wire lock at point P. Q is the virtual fixed point where the flexor wire always touches the V-groove due to tension. O' denotes the center of rotation of the flexor motor, and R is the effective radius of the V-grooved disk. The flexor wire is connected such that, when the DIP joint is fully extended, points P and Q coincide, resulting in θ = 0. As θ increases, S, the arc length between P and Q, increases, and due to the connection configuration, Equations (6) and (7) follow.
We calculated S with respect to θ using Equation (8). Using Equations (4)-(8), the relation between the flexor motor's angular position, θ, and the DIP joint's flexion angle, δ', for isolated DIP joint flexion motion can be formulated as Equation (9). The same approach is used for determining the required rotation of the associated motors for isolated DIP extension, isolated PIP flexion-extension, and isolated MCP flexion and extension. Note that, with this approach, the DIP and PIP angles cannot be individually computed during composite DIP and PIP flexion-extension, so in this case we describe the motion by summing the DIP and PIP joint angle values. Furthermore, the computed MCP joint angles are expected to be less accurate due to the nonuniform positions of the MPS tendon wire exit points and the tendon wire entry points of the back-palm and palm modules. During experimentation, the motor angular position values, θ, were obtained from the corresponding Equation (9) to generate the intermediate joint angle positions, δ', for the DIP, PIP, and MCP joints.
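Since Equations (2)-(9) themselves are not reproduced in this extract, the sketch below reconstructs the described geometry under our own reading of Figures 11 and 12: C is taken as the foot of the perpendicular from O onto chord AB, so OA = OC / cos β; the chord length at an opening angle δ is 2·OA·sin(δ/2); the shortening of the chord is wound onto the V-grooved disk as an arc S = R·θ. These closed forms and the disk radius value are assumptions, not the paper's equations.

```python
import numpy as np

def motor_angle_for_dip_flexion(delta_prime_deg,
                                delta_max_deg=80.0,   # angle AOB at full extension (from the CAD shell design)
                                oc_mm=5.5,            # perpendicular distance OC at full extension
                                disk_radius_mm=6.0):  # effective V-groove radius R (hypothetical value)
    """Estimate the flexor-motor rotation theta (rad) for a target DIP opening angle.

    Reconstructed relation: circle radius OA from OC and beta (assumed form of
    Eq. (2)), chord length 2*OA*sin(delta/2) (assumed form of Eqs. (3)-(4)),
    wire shortening wound as arc S = R*theta (assumed form of Eq. (8)).
    """
    beta = np.radians(delta_max_deg) / 2.0
    oa = oc_mm / np.cos(beta)                                   # circle radius
    chord = lambda d: 2.0 * oa * np.sin(np.radians(d) / 2.0)    # length of ACB at angle d
    delta_L = chord(delta_max_deg) - chord(delta_prime_deg)     # change in effective wire length
    return delta_L / disk_radius_mm                             # theta in radians

# Motor rotation (degrees) needed to bring the DIP opening angle from 80 to 40 degrees.
print(np.degrees(motor_angle_for_dip_flexion(40.0)))
```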
Donning and Doffing of the Device
Donning this rehabilitation device is akin to wearing a glove (Figure 13). First, the user wears the thumb exoskeleton module mounted on the CMC brace by strapping Velcro straps around the palm. Second, the finger exoskeletons are locked together by sliding the MCP sliding locks between the adjacent fingers' respective MCP lock modules; this creates a rigid structure for increased wearability. Then, the user slides their fingers (index, middle, ring, and small) into the respective finger exoskeletons. Afterward, the palm module and back-palm module are strapped together around the hand using Velcro straps. The critical point here is to tighten the straps only to the degree that the compression due to the strap does not restrict the motion of the tendons in the hand. Finally, the elastic cords connected to the palm module and back-palm module are pulled around the wrist so that the modules are fixed to the hand. Afterward, the MCP sliding locks are slid back into the respective fingers' MCP compliant modules, allowing the MCP compliant mechanism to work freely during exercise. To take off the device, the steps mentioned above are performed in reverse. Experimentally, we found that it takes about one and a half minutes to don and doff this device.
Fabrication and Experimental Setup
The modelled parts were printed using an SLA type 3D printer (printer: Elegoo MARS [57]; material: UV-curing photopolymer rapid resin [58]). Nylon-coated fishing wires were used as tendon wires and a commercially available Bowden tube (PTFE) was used as a tendon wire shell. This device has low wearable weight, meaning when the user wears the device, it adds around 280 g of weight to the user's hand. The motor assembly can be placed on any stationary surface, which reduces the burden on the user's hand. The schematic of the experimental setup of the device can be seen in Figure 14. The motors are controlled by using Arduino Sketch running on a personal computer. For different therapeutic exercises and ranges of motion of the hand digit joints for a user, the program input parameters can be modified by anyone with access to the motion program at the current stage of development. Note that, in the future, the device is intended to be used for personalized therapy where both the user and therapist can modify the motion program from a graphical user interface (GUI).
Actuation Calibration
This device prototype was tested on participant-A, and a variety of isolated finger, isolated finger joint, and composite joint motion exercises were performed. During the experiments, we found that the actuator angular positions (flexor, extensor, and MCP lock motors) required to achieve deterministic joint flexion-extension angles deviate from the angular positions obtained using Equation (9). These deviations arise because the Bowden tube shell was considered rigid during the formulation of Equation (9) with respect to joint angles, whereas in the experiments it was found that, during flexion-extension motions, the tension generated in the tendon wires deforms the Bowden tube, changing the required effective tendon wire length. Therefore, the flexor, extensor, and MCP lock motors' angular positions were determined empirically and calibrated using goniometers for experimentation. To calibrate joint angles, we sampled each motor position combination five times, manually measuring the joint angles with a goniometer each time. Afterward, we averaged the angle values to generate a calibration chart associated with the different motions of each digit of the hand. The calibrated motor angle values and associated joint angles for the fingers are presented in Table 4. Note that human error is to be expected, as the joint angle measurements are taken manually; therefore, we treat the calibrated joint angle values as close approximations to the actual digit joint angles. The calibration data were generated for participant-A as a one-time process, and they remain valid for subsequent usage and experimentation with participant-A and the developed Flexohand.
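At run time, a calibration chart of this kind can be used by interpolating between the recorded pairs of joint angle and motor angle. The sketch below illustrates this lookup; the numeric values are placeholders and do not come from Table 4, and the function names are ours.

```python
import numpy as np

# Placeholder calibration pairs for one joint: measured joint angle (deg)
# versus the averaged motor angle (deg) that produced it. Real values would
# come from the goniometer-based calibration described above (Table 4).
joint_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
motor_deg = np.array([0.0, 40.0, 95.0, 160.0, 240.0])

def motor_command(target_joint_deg):
    """Linearly interpolate the calibrated motor angle for a target joint angle."""
    return float(np.interp(target_joint_deg, joint_deg, motor_deg))

print(motor_command(25.0))  # motor angle to request for a 25-degree joint flexion
```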
Experiments with Flexohand
The efficacy of the developed prototype was evaluated based on the device's capability to generate various isolated and composite motions of the digit joints. Using the calibration chart, positional commands are sent to the motors to achieve the respective joint motions. A selected few joint ROM exercises among all possible configurations are presented in Figures 15-20. In Figure 15, the PIP joint is locked via a PIP lock and the DIP lock is removed; a positional command is then passed to the motor control module to achieve DIP flexion motion. Figure 16 shows PIP flexion of the index finger, where the PIP joint is unlocked by sliding the PIP lock towards the DIP joint and removing the DIP joint lock. In the case of thumb MCP extension (see Figure 17), the IP joint is locked, and the thumb MCP joint angle is changed from 25° to 0°. Similarly, PIP flexion of the index, middle, ring, and small fingers is shown in Figure 18. Figure 19 shows the simultaneous MCP flexion of all fingers and the thumb, where all DIP and PIP/IP joints are locked. To achieve composite flexion of the DIP, PIP, and MCP joints, all interphalangeal locks (fingers: DIP, PIP; thumb: IP) are removed; DIP and PIP flexion is then achieved in Step 1, and finally MCP flexion is achieved in Step 2 (see Figure 20).
From the experiments conducted in this research, it was seen that the participant was able to receive various passive hand digit exercises by using the prototype of Flexohand. These passive exercise routines were generated to show the therapy-providing capabilities of the developed mechanism. The routines comprise various isolated joint motions, composite joint motions of individual fingers, and isolated joint motions of multiple fingers. The series of snapshots taken from the video recorded during experimentation shows some of the therapy routines. The wrist joint is not restricted during the finger exercises, as the tendon sheath does not travel across the wrist. With the combination of sliding locks and the MCP locking mechanism, various isolated joint and composite joint motions are achieved with Flexohand.
Conclusions
In this research, a robotic device for hand rehabilitation, namely Flexohand, was developed. Flexohand incorporates multiple mechanisms for providing isolated digit joint motion of all fingers and the thumb. The prototype of Flexohand was built using low-cost 3D printers, printing materials, and actuators. The current prototype was used to provide various rehabilitative passive therapies to participant-A. The efficacy of the mechanism used for Flexohand will be evaluated further by improving fabrication processes and with the addition of better actuators.
Institutional Review Board Statement: IRB approval is not required to demonstrate proof of concept.
| 14,613.2 | 2021-10-20T00:00:00.000 | [ "Engineering" ] |
A Fast and Reliable Matching Method for Automated Georeferencing of Remotely-Sensed Imagery
Due to the limited accuracy of exterior orientation parameters, ground control points (GCPs) are commonly required to correct the geometric biases of remotely-sensed (RS) images. This paper focuses on an automatic matching technique for the specific task of georeferencing RS images and presents a technical frame to match large RS images efficiently using the prior geometric information of the images. In addition, a novel matching approach using online aerial images, e.g., Google satellite images, Bing aerial maps, etc., is introduced based on the technical frame. Experimental results show that the proposed method can collect a sufficient number of well-distributed and reliable GCPs in tens of seconds for different kinds of large-sized RS images, whose spatial resolutions vary from 30 m to 2 m. It provides a convenient and efficient way to automatically georeference RS images, as there is no need to manually prepare reference images according to the location and spatial resolution of sensed images.
Introduction
Direct geo-location of remotely-sensed (RS) images is based on the initial imaging model, e.g., the rigorous sensor model and the Rational Polynomial Coefficients (RPC) model without ground control, and the accuracy of the model is limited by the interior and exterior orientation parameters. The accurate interior orientation parameters can be achieved by performing on-board geometric calibration, but the exterior orientation parameters, which are directly observed by on-board GPS, inertial measuring units and star-trackers, usually contain variable errors. Even the most modern satellite geo-positioning equipment results in varying degrees of geo-location errors (from several meters to hundreds of meters) on the ground [1]. In practical applications, the reference image is of great importance to collect ground control points (GCPs) and to perform precise geometric rectification. However, the reference images are commonly difficult or expensive to obtain, and an alternative approach is to use GCPs obtained by GPS survey, which is time consuming and labor intensive. In recent years, many online aerial maps (e.g., Google satellite images [2], Bing aerial images [3], MapQuest satellite maps [4], Mapbox satellite images [5], etc.) and interactive online mapping applications (e.g., Google Earth [6], NASA World Wind [7], etc.) have become available, and they show high geometric accuracy according to the authors' recent GPS survey experiments. The surveyed GCPs are distributed in 17 different areas around China, where the latitude varies from 18° N to 48° N and the longitude varies from 75° E to 128° E. The accuracy of the online satellite maps (Google satellite images, Bing aerial images and Mapbox satellite images) in the surveyed areas is shown in Table 1. Note that the accuracy of MapQuest satellite maps is not included, as MapQuest satellite maps of high zoom levels (higher than 12) are not available in China. Although some areas lack high resolution images or the positioning errors of the images are around 10 m, most of the surveyed areas are of high geometric accuracy, and the root mean square (RMS) values of the positioning errors of these online resources are less than 5 m. Moreover, the areas lacking high resolution images are decreasing, and the geometric accuracy of the online resources is increasingly improving. These online resources provide another alternative to manually collecting GCPs, and they should be used more widely in the future as their accuracies increase. As far as we know, however, automatic solutions have not been reported yet. Automatic image matching is one of the most essential techniques in remote sensing and photogrammetry, and it is the basis of various advanced tasks, including image rectification, 3D reconstruction, DEM extraction, image fusion, image mosaic, change detection, map updating, and so on. Although it has been extensively studied during the past few decades, image matching remains challenging due to the characteristics of RS images. A practical image matching approach should have good performance in efficiency, robustness and accuracy, and it is difficult to perform well in all of these aspects, as the RS images are usually of a large size and scene and are acquired in different conditions of the spectrum, sensor, time and geometry (viewing angle, scale, occlusion, etc.).
The existing image matching methods can be classified into two major categories [8,9]: area-based matching (ABM) methods and feature-based matching (FBM) methods.
Among the ABM methods, intensity correlation methods based on normalized cross-correlation (NCC) and its modifications are classical and easy to implement, but the drawbacks of high computational complexity and flatness of the similarity measure maxima (due to the self-similarity of the images) prevent them from being applied to large-scale and multi-source images [9].Compared to intensity correlation methods, phase correlation methods have many advantages, including high discriminating power, numerical efficiency, robustness against noise [10] and high matching accuracy [11].However, it is difficult for phase correlation methods to be extended to match images with more complicated deformation, although Fourier-Mellin transformation can be applied to deal with translated, rotated and scaled images [12].Moreover, as phase correlation methods depend on the statistical information of the intensity value of the image, the template image should not be too small to provide reliable phase information, and phase correlation may frequently fail to achieve correct results if the template image covers changed content (e.g., a newly-built road).In least squares matching (LSM) methods, a geometric model and a radiometric model between two image fragments are modeled together, and then, least squares estimation is used to find the best geometric model and matched points [13].LSM has a very high matching accuracy potential (up to 1/50 pixels [14]) and is computationally efficient and adaptable (can be applied to complicated geometric transformation models and multispectral or multitemporal images [15]).However, LSM requires good initial values for the unknown parameters, as the alignment/correspondence between two images to be matched generally has to be within a few pixels or the process will not converge [14,16].
In contrast to the ABM methods, the FBM methods do not work directly with image intensity values, and this property makes them suitable for situations when illumination changes are expected or multisensor analysis is demanded [9].However, FBM methods, particularly line-and region-based methods, are commonly less accurate than ABM methods [15] (fitting these high-level features usually introduces additional uncertainty [17] to the matching result).FBM methods generally include two stages: feature extracting and feature matching.As automatic matching of line-and region-features is more difficult and less accurate, the point-based methods are much more widely used.Among the point-based methods, scale-invariant feature transform (or SIFT) [18] is one of the most important ones, which is invariant to image rotation and scale and robust across a substantial range of affine distortion, the addition of noise and changes in illumination, but imposes a heavy computational burden.More recently-proposed point detectors, e.g., Speeded Up Robust Features (SURF) [19], Features from accelerated segment test (FAST) [20], Binary Robust Invariant Scalable Keypoints (BRISK) [21], Oriented FAST and Rotated BRIEF (ORB) [22] and Fast Retina Keypoint (FREAK) [23], provide fast and efficient alternatives to SIFT, but they are proven not as robust as SIFT.However, SIFT-based methods face the following challenges when directly used in RS images: large image size, large scene, multi-source images, accuracy, distribution of matched points, outliers, etc.
During the last ten years, many improvements have been made to cope with the drawbacks of SIFT: Efficiency: In the PCA-SIFT descriptor [24], the 3042-dimensional vector of a 39 × 39 gradient region is reduced to a 36-dimensional descriptor, which is fast for matching, but it is proven to be less distinctive than SIFT [25] and to require more computation to yield the descriptor.Speeded-up robust features (SURF) is one of the most significant speeded up versions of SIFT, but can only slightly decrease the computational cost [26] when becoming less repeatable and distinctive [22].Some GPU (graphic process unit)-accelerated implementations of SIFT (e.g., SiftGPU [27] and CudaSift [28]) can get comparable results as Lowe's SIFT [18], but are much more efficient.However, these implementations require particular hardware, such as the GPU, which is not available for every personal computer (PC), and they are not robust enough when applied to very large satellite images.
Multi-source image: [29] refined the SIFT descriptor to cope with the different main orientations of corresponding interesting points, which are caused by the significant difference in the pixel intensity and gradient intensity of sensed and reference images.The work in [30] proposed an improved SIFT to perform registration between optical and SAR satellite images.The work in [31] introduced a similarity metric based on local self-similarity (LSS) descriptor to determine the correspondences between multi-source images.
Distribution control: Uniform robust SIFT (UR-SIFT) [32] was proposed to extract high-quality SIFT features in the uniform distribution of both the scale and image spaces, while the distribution of matched points is not guaranteed.More recently, the tiling method was used to deal with large RS images [26,33] and to yield uniform, distributed ground control points.
Outliers' elimination: Scale restriction SIFT (SR-SIFT) [34] was proposed to eliminate the obvious translation, rotation and scale differences between the reference and the sensed image. The work in [35] introduced a robust estimation algorithm called the HTSC (histogram of TAR sample consensus) algorithm, which is more efficient than the RANSAC algorithm. The mode-seeking SIFT (MS-SIFT) algorithm [36] performs mode seeking (similarity transformation model) to eliminate outlying matched points, and it outperformed SIFT-based RANSAC according to the authors' experiments. The similarity transformation, nevertheless, is not suitable for all kinds of RS images when the effects of image perspective and relief are serious.
In summary, despite the high matching accuracy, ABM methods do not have good performance for RS images due to the complex imaging conditions and geometric distortions.On the other hand, FBM methods are more suitable for multisensor analysis.SIFT is one of the most successful FBM methods, but it still faces many difficulties when directly applied to RS images.Although a number of improved versions of SIFT have been proposed to cope with the drawbacks, all of these methods do not make full use of the prior information (initial imaging model and possible geometric distortions) of the RS image and the requirement of a specific task.In this work, we focus on the task of image rectification (e.g., geometric correction, orthorectification and co-registration), while the tasks of 3D reconstruction and DEM extraction, which require densely-matched points, are not considered.Commonly, tens of uniform, distributed and accurate control points are sufficient to perform rectification of RS images, and more control points do not necessarily improve the accuracy of the result [37].The purpose of this paper is to overcome the difficulties of SIFT and to develop a practical online matching method, which is efficient, robust and accurate, for the georeferencing task of RS images.The original contribution of this work mainly includes the following aspects: (i) a convenient approach to perform point matching for RS images using online aerial images; (ii) a technical frame to find uniformly-distributed control points for large RS images efficiently using the prior geometric information of the images; and (iii) an improved strategy to match SIFT features and eliminate false matches.
The rest of this paper is organized as follows.Section 2 introduces the technical frame of the proposed matching method, and Section 3 states the approach to utilize online aerial images in detail.Experimental evaluation is presented in Section 4, and the conclusion is drawn in Section 5.
Technical Frame
The proposed point matching method is mainly based on the following scheme: (1) Image tiling: The geometric distortion of the RS image is complicated, resulting from the distortion of the camera, projective deformation, the effect of the interior and exterior orientation parameters, Earth curvature, reliefs, and so on, and the rational function model (RFM) of 78 coefficients (RPCs) is usually used to model the deformation of the RS image [38]. However, the local distortion, e.g., that of a small image patch of 256 × 256, can be approximated by much simpler transformations (affine or similarity transformation).
In a remotely-sensed image of a large scene, SIFT may be computationally difficult and error-prone, and dividing the large image into small tiles can avoid this drawback.
The tiling strategy also helps to control the distribution and quantity of the matched points, and the computational cost can be notably reduced if the number of target matches is limited.
(2) Make use of prior geometric information: The prior geometric information of RS images, e.g., ground sample distance (or spatial resolution) and coarse geographic location, can be utilized to make the image matching process more efficient and robust.
(3) Make use of the attributes of SIFT feature: The attributes of a SIFT feature, including location, scale, orientation and contrast, can be used to eliminate false matches and evaluate the quality of the feature.
(4) Refine the results of SIFT: The matched points of SIFT are extracted from the sensed and reference image independently and are less accurate than those of area-based methods.However, the results of SIFT provide good initial values for least squares matching (LSM) and can be refined to achieve very high accuracy by LSM.
The process of the proposed matching method can be summarized as the flowchart in Figure 1, and the detailed techniques of the method will be introduced in the following sections (Section 2.1 to Section 2.6).
Image Tiling
In the proposed method, image tiling consists of three steps:
• The region of interest (the whole region of the sensed image or the intersection region of the sensed and reference image) is divided into blocks according to the number of target matches.
• Each block of the image is divided into small tiles (processing units) to perform SIFT matching; in this work, the size of an image tile is 256 × 256.
• The corresponding tile is extracted from the reference image (online aerial maps) according to the tile in the sensed image and the initial geometric model.
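A minimal sketch of the blocking and tiling step is given below, assuming the sensed image is addressed by pixel coordinates and that one control point is sought per block. The near-square block grid, the function names, and the variables are our own assumptions; only the 256 × 256 tile size follows the text.

```python
import numpy as np

TILE = 256  # processing-unit size used in this work

def block_grid(height, width, n_target_points):
    """Split the region of interest into a near-square grid of blocks,
    roughly one block per target control point (assumed layout)."""
    n_rows = max(1, int(np.floor(np.sqrt(n_target_points))))
    n_cols = int(np.ceil(n_target_points / n_rows))
    ys = np.linspace(0, height, n_rows + 1, dtype=int)
    xs = np.linspace(0, width, n_cols + 1, dtype=int)
    return [(ys[i], xs[j], ys[i + 1], xs[j + 1])
            for i in range(n_rows) for j in range(n_cols)]

def tiles_of_block(block):
    """Enumerate the 256 x 256 tiles inside one block as (top, left, bottom, right)."""
    top, left, bottom, right = block
    for y in range(top, bottom - TILE + 1, TILE):
        for x in range(left, right - TILE + 1, TILE):
            yield (y, x, y + TILE, x + TILE)
```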
(Figure 1 is a flowchart of the matching process; only its node labels, such as "Image blocking", "All blocks processed?", "All tiles processed?" and "Finish", are recoverable here.) Figure 2 illustrates the blocks of an image and the tiles of a block. The aim of image matching is to achieve a reliable control point in each block, and the process will move on to the next block once any tile of the current block succeeds in yielding a reliable control point. When extracting the corresponding tile from the reference image, the initial geometric model should be utilized, which can be of various types: the affine transformation model contained in a georeferenced image or any kind of imaging model, such as a rigorous sensor model, a polynomial model, a direct linear transformation model, a rational function model (RFM), etc.
Commonly, these imaging models can be defined as a forward model (from the image space to the object space) or an inverse model (from the object space to the image space):
X = F X (x, y, Z), Y = F Y (x, y, Z) (1)
x = F x (X, Y, Z), y = F y (X, Y, Z) (2)
where (x, y) are the coordinates in image space, (X, Y, Z) are the coordinates in object space, Z is the elevation, F X and F Y are the forward transforming functions of the X and Y coordinates, respectively, and F x and F y are the inverse transforming functions of the x and y coordinates, respectively.
In the forward model, the image coordinates (x, y) and elevation Z are needed to determine the ground coordinates (X, Y, Z). With the help of DEM data, however, the ground coordinates (X, Y, Z) can be determined from the image coordinates (x, y) after several iterations. Therefore, the forward model can also be denoted by Equation (3) if DEM data are available.
With the help of the initial geometric model of the sensed image, the reference image tile can be extracted by calculating its approximate extent.Moreover, to make SIFT matching more efficient and robust, the reference image tile is resampled to a similar resolution as the sensed image tile.The detailed techniques of fetching the reference image tile from online aerial maps will be introduced in Section 3.
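To make the extraction of the corresponding reference tile concrete, the sketch below projects the four corners of a sensed-image tile into ground coordinates through a user-supplied forward model (any of the imaging models above wrapped as a callable) and pads the resulting bounding box to absorb the initial geo-location error. The padding factor and all names are assumptions for illustration, not part of the paper's implementation.

```python
def reference_extent(tile, forward_model, pad=0.15):
    """Approximate ground bounding box covered by a sensed-image tile.

    tile          -- (top, left, bottom, right) pixel window in the sensed image
    forward_model -- callable (x, y) -> (X, Y) ground coordinates, e.g. an
                     RFM/RPC evaluation supported by DEM data as in Equation (3)
    pad           -- relative margin added to absorb geo-location error (assumed value)
    """
    top, left, bottom, right = tile
    corners = [(left, top), (right, top), (right, bottom), (left, bottom)]
    xs, ys = zip(*(forward_model(x, y) for x, y in corners))
    dx, dy = (max(xs) - min(xs)) * pad, (max(ys) - min(ys)) * pad
    return (min(xs) - dx, min(ys) - dy, max(xs) + dx, max(ys) + dy)
```

The returned extent can then be used to fetch and resample the reference tile to a resolution similar to that of the sensed tile, as described in Section 3.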
Extracting SIFT Features
As the reference image tile is resampled to a similar resolution as the sensed image tile, the SIFT detector can be performed in only one octave to get the expected results, and the process becomes much more efficient. In the only octave, the scale space of the image tile is defined as a function, L(x, y, σ), produced from the convolution of a variable-scale Gaussian, G(x, y, σ), with the input image tile, I(x, y) [18]:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where G(x, y, σ) = (1/(2πσ²)) e^(−(x² + y²)/(2σ²)) and * is the convolution operation. Then, D(x, y, σ), the convolution of the difference-of-Gaussian (DoG) function and the image tile, which can also be computed from the difference of two nearby scales separated by a constant multiplicative factor k, i.e., D(x, y, σ) = L(x, y, kσ) − L(x, y, σ), is used to detect stable keypoint locations in the scale space by searching for scale space extrema.
Once a keypoint candidate has been found, its location (x, y), scale σ, contrast c and edge response r can be computed [18], and the unstable keypoint candidates whose contrast c is less than threshold T c (e.g., T c = 0.03) or whose edge response r is greater than threshold T r (e.g., T r = 10) will be eliminated. Then, image gradient magnitudes and orientations are sampled around the keypoint location to compute the dominant direction θ of local gradients and the 128-dimensional SIFT descriptor of the keypoint.
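With OpenCV, this extraction step can be approximated as follows. The contrast and edge-response thresholds follow the values above; note that OpenCV's SIFT does not expose the number of octaves, so the single-octave detection used in the paper is only approximated here, and the function name is ours.

```python
import cv2

# Thresholds follow the text: contrast threshold T_c = 0.03 and
# edge-response threshold T_r = 10.
sift = cv2.SIFT_create(contrastThreshold=0.03, edgeThreshold=10)

def extract_features(tile_gray):
    """Detect SIFT keypoints and 128-D descriptors on one image tile.

    Each keypoint carries the attributes used later for filtering:
    kp.pt (location), kp.size (scale), kp.angle (dominant orientation)
    and kp.response (contrast).
    """
    keypoints, descriptors = sift.detectAndCompute(tile_gray, None)
    return keypoints, descriptors
```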
Matching SIFT Features
In standard SIFT, the minimum Euclidean distance between the SIFT descriptors is used to match the corresponding keypoints, and the ratio of the closest to the second-closest neighbor distance of a reliable keypoint should be less than an empirical threshold T dr, e.g., T dr = 0.8 [18]. However, [29,32] pointed out that the T dr constraint was not suitable for RS images and would lead to the elimination of numerous correct matches.
In this work, both the T dr constraint and a cross matching [32] strategy are applied to find the initial matches.Denoting by P and Q the keypoint sets in the sensed and reference image tiles, once either of the following two conditions is satisfied, the corresponding keypoints p i ∈ P and q j ∈ Q will be included in the match candidates.
T dr constraint: The ratio of the closest to the second-closest neighbor distance of the keypoint p i is less than T dr = 0.75, and the keypoint q j is the closest neighbor of p i. Here, we chose a smaller T dr than the value of 0.8 recommended by [18] to reduce the chance of including too many false matches for RS images. Cross matching: The keypoint p i is the closest neighbor of q j in P, and the keypoint q j is also the closest neighbor of p i in Q.
Of course, the match candidates usually include a number of false matches, which will be eliminated in the following step.
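Before moving to outlier elimination, a minimal sketch of the combined acceptance test above is given below, taking the union of the two conditions as described in the text. The helper names and the brute-force matcher are our own choices, not the paper's implementation.

```python
import cv2

def initial_matches(des_sensed, des_ref, t_dr=0.75):
    """Union of ratio-test matches (d1/d2 < T_dr) and cross matches."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    accepted = set()

    # T_dr constraint: closest/second-closest distance ratio below T_dr.
    for pair in bf.knnMatch(des_sensed, des_ref, k=2):
        if len(pair) == 2 and pair[0].distance < t_dr * pair[1].distance:
            accepted.add((pair[0].queryIdx, pair[0].trainIdx))

    # Cross matching: p_i is the nearest neighbour of q_j and vice versa.
    fwd = {m.queryIdx: m.trainIdx for m in bf.match(des_sensed, des_ref)}
    bwd = {m.queryIdx: m.trainIdx for m in bf.match(des_ref, des_sensed)}
    for i, j in fwd.items():
        if bwd.get(j) == i:
            accepted.add((i, j))

    return sorted(accepted)  # list of (sensed keypoint index, reference keypoint index)
```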
Eliminating False Matches
Commonly, some well-known robust fitting methods, such as RANSAC or least median of squares (LMS), are applied to estimate an affine transformation, as well as the inliers, from the match candidates. However, these methods perform poorly when the percentage of inliers falls much below 50%. In this work, the false matches are eliminated in four steps, applied one by one: rejecting by scale ratio, rejecting by rotation angle, rejecting by the coarse similarity transformation (Equation (6)) using RANSAC, and rejecting outliers by the precise affine transformation (Equation (7)).
x r = s(x s cos θ + y s sin θ) + t x
y r = s(−x s sin θ + y s cos θ) + t y (6)
x r = a 0 + a 1 x s + a 2 y s
y r = b 0 + b 1 x s + b 2 y s (7)
where s and θ are the scale and rotation angle parameters of the similarity transformation, t x and t y are the translation parameters of the similarity transformation in the x direction and the y direction, and a 0, a 1, a 2, b 0, b 1, b 2 are the parameters of the affine transformation.
There are a number of reasons for choosing the similarity transformation to perform RANSAC estimation instead of the affine transformation. Firstly, a similarity transformation is able to model the geometric deformation coarsely in a small tile of an RS image. Secondly, the similarity transformation solution requires fewer point matches than the affine transformation solution and is also more robust. In addition, the similarity transformation can make full use of the geometric information, such as the scale and dominant direction, of the SIFT keypoints.
(1) Rejecting by scale ratio: The scale has been computed for each keypoint in the phase of extracting SIFT features (Section 2.2) and the scale ratio of a pair of corresponding keypoints in the sensed image tile and reference image tile indicates the scale factor between the two image tiles.By computing a histogram of the scale ratios of all match candidates, the peak of the histogram will locate around the true scale factor between the two image tiles [36].The match candidates whose scale ratio is far from the peak of the histogram are not likely to be correct matches and, therefore, are rejected from the match candidates.Denoting the peak scale ratio by σ peak , the acceptable matches should satisfy the following criterion: where ∆σ is the scale ratio of a match candidate, T σ is the scale ratio threshold and T σ = 0.8 is used in this work.The selection of T σ will be discussed later at the end of Section 2.4.Note that the reference image tile is resampled to a similar resolution as the sensed image tile; the computation of the scale ratio histogram is not necessary.The σ peak is expected to be around 1.0, even if we do not check the scale ratio histogram.
(2) Rejecting by rotation angle: Similarly, as the difference of the dominant directions of corresponding keypoints indicates the rotation angle between the two image tiles, a rotation angle histogram can be computed from the dominant directions of the SIFT keypoints in all match candidates. The rotation angle histogram has 36 bins covering the 360 degree range of angles, and match candidates whose difference of dominant direction is far from the peak of the histogram are rejected. Denoting the peak rotation angle by θ peak, an acceptable match must have a dominant-direction difference ∆θ sufficiently close to θ peak, with the tolerance controlled by the rotation angle threshold T θ; T θ = 15° is used in this work. The selection of T θ will be discussed at the end of Section 2.4.
(3) Rejecting by similarity transformation: After the first two steps, most of the false matches will have been rejected, and the RANSAC algorithm is robust enough to estimate a coarse similarity transformation from the remaining match candidates. Meanwhile, outliers with respect to the similarity transformation are also excluded.
(4) Rejecting by affine transformation: In order to achieve accurate matching results, the remaining match candidates should be further checked by an affine model.Specifically, all of the remaining match candidates are used to find the least-squares solution of an affine transformation, and inaccurate matches, which do not agree with the estimated affine transformation, should be removed.The process will iterate until none of the remaining matches deviates from the estimated affine transformation by more than one pixel.
Note that once fewer than four match candidates remain before or after any of the four steps, the match will be terminated for this tile immediately.
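A compact sketch of the first two rejection steps is given below. The rotation test follows the 36-bin histogram described above; for the scale test, the sketch assumes that T σ bounds the ratio between a candidate's scale ratio and the histogram peak (so a larger T σ means a stricter test), which is an assumption consistent with the trend reported in Figure 3a rather than the paper's exact inequality:

```python
import numpy as np

def histogram_peak(values, bins):
    """Centre of the most populated histogram bin."""
    hist, edges = np.histogram(values, bins=bins)
    k = int(hist.argmax())
    return 0.5 * (edges[k] + edges[k + 1])

def keep_by_scale_ratio(scale_ratios, t_sigma=0.8):
    """Assumed criterion: ratio to the peak must lie in [T_sigma, 1/T_sigma]."""
    scale_ratios = np.asarray(scale_ratios, dtype=float)
    peak = histogram_peak(scale_ratios, bins=50)
    r = scale_ratios / peak
    return (r >= t_sigma) & (r <= 1.0 / t_sigma)   # boolean mask of survivors

def keep_by_rotation(angle_diff_deg, t_theta=15.0):
    """36 bins of 10 degrees over the full circle, as described in the text."""
    a = np.mod(np.asarray(angle_diff_deg, dtype=float), 360.0)
    peak = histogram_peak(a, bins=np.arange(0.0, 361.0, 10.0))
    d = np.abs(a - peak)
    d = np.minimum(d, 360.0 - d)                   # wrap-around angular distance
    return d <= t_theta
```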
Next, we will provide a discussion on the recommended values of T σ and T θ , and this is based on a matching task using a collection of 1000 pairs of real image tiles that were extracted from different sources of RS images, including Landsat-8, ZY-3, GF-1, etc.
The matching task includes two groups of tests: (1) set T θ = 15° and let T σ vary from zero to one; (2) set T σ = 0.8 and let T θ vary from 0° to 360°. A pair of image tiles is called a "matched pair" if the matching process for this pair yields at least four matches after all four filtering steps. However, for a matched pair, it is possible that not all of the false matches were excluded by the filtering steps, in which case the results will be untrustworthy. Therefore, only the correctly-matched pairs, whose output matches are all correct, are reliable, and we refer to the percentage of correctly-matched pairs out of all matched pairs as the "correct match rate". In each group of tests, the numbers of matched pairs and the correct match rates were obtained for different values of T σ or T θ. Figure 3a shows the matching results with respect to different values of T σ, while Figure 3b shows the results with respect to different values of T θ. According to Figure 3a, as T σ increases from zero to one, the number of matched pairs declines, but the correct match rate rises. According to Figure 3b, as T θ increases from 0° to 360°, the number of matched pairs rises, while the correct match rate declines. To ensure a high correct match rate and enough matches, the value of T σ should be between 0.70 and 0.85, and the value of T θ should be between 15
Refining Position
After the step of eliminating false matches, all of the remaining matches have good agreement (within one pixel) with the local affine transformation.However, the accuracy of the matched points is not high enough considering that they are extracted from the sensed image and reference image independently.Consequently, least squares matching (LSM) is applied to refine the matching results.
It is possible to further recover, from the candidates rejected in error during the phase of eliminating false matches, any matches that agree with the final affine transformation, making the set of matches more complete. However, only one pair of matched points is needed per sensed image tile in the proposed method, even if a large number of matches are found. Therefore, the step of adding missed matches is not included in this work, for the sake of efficiency.
Considering that features with high contrast are stable under image deformation [32], the keypoint with the highest contrast is chosen from the output of the phase of eliminating false matches. In fact, high contrast not only guarantees the stability of the keypoint, but also benefits the accuracy of LSM.
LSM is performed in a small region around the SIFT keypoint in the sensed image tile, e.g., a template of 11 × 11 pixels, and it is quite efficient. In order to cope with both the geometric deformation and the radiometric difference, a geometric model and a radiometric model between the two image fragments are estimated together [16], and the condition equation of a single point is:

I_s(x_s, y_s) = k_1 I_r(x_r, y_r) + k_2 (8)

where x_r = a_0 + a_1 x_s + a_2 y_s and y_r = b_0 + b_1 x_s + b_2 y_s. As Equation (8) is nonlinear, good initial values are required to find the optimal models. Fortunately, the previously-calculated affine transformation provides very good initial values for the geometric parameters, and those of the radiometric parameters can be set as k_1 = 1 and k_2 = 0 [16]. Finally, the Levenberg-Marquardt algorithm [39] is applied to solve the problem.
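The sketch below shows one way such a least squares matching step could be set up, with an affine geometric model and a gain/offset radiometric model solved by Levenberg-Marquardt; it is a minimal formulation of ours using SciPy, not the authors' code, and the interpolation scheme and parameter ordering are illustrative choices:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import map_coordinates

def lsm_refine(template, search, p0):
    """Refine the affine (a0..b2) and radiometric (k1, k2) parameters between
    a small sensed template and a reference search window (2-D float arrays).
    p0 = [a0, a1, a2, b0, b1, b2, k1, k2]; k1 = 1, k2 = 0 are typical starts."""
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    xs = xs.ravel().astype(float)
    ys = ys.ravel().astype(float)

    def residuals(p):
        a0, a1, a2, b0, b1, b2, k1, k2 = p
        xr = a0 + a1 * xs + a2 * ys            # geometric (affine) model
        yr = b0 + b1 * xs + b2 * ys
        warped = map_coordinates(search, [yr, xr], order=1, mode="nearest")
        # Radiometric model: gain k1 and offset k2 between the two fragments.
        return k1 * warped + k2 - template.ravel()

    return least_squares(residuals, p0, method="lm").x
```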
Below is an example to show the effect of position refinement. Figure 4 illustrates matched keypoints in the sensed image tile and the reference image tile, and Figure 5 shows the image fragments around the keypoints, as well as the matched points before and after the phase of position refinement. Figure 5a,b is the original matching result of SIFT, and it is very difficult to tell whether the matched points in the sensed image and the reference image are exactly corresponding points. However, by applying least squares matching, the warped image fragment in Figure 5c is in good agreement with the search image fragment in Figure 5b, both in geometry and radiometry. Consequently, it is very clear that the marked point in Figure 5c (transformed from the keypoint in Figure 5a) and that in Figure 5d (refined by LSM) are corresponding. Meanwhile, one can see that the original SIFT keypoint in Figure 5b is not accurate enough when compared to the point in Figure 5d. Note that the images in Figure 5 are enlarged by eight times using cubic interpolation, and the actual deviation between Figure 5b,d is about one pixel.
Summary
According to the number of required control points, the sensed image is divided evenly into a number of blocks, and only one control point is needed for each block. Then, each block is divided into a number of tiles according to the previously-defined tile size, and once any of the tiles succeeds in producing a control point, the process moves on to the next block. We do not intend to exhaust all of the possible matches, but to find a moderate number of control points that are very reliable. Obviously, this method is greedy and therefore efficient.
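A sketch of this greedy block/tile strategy is shown below; the `tile_trial` callable stands in for the whole per-tile pipeline (SIFT matching, the four filtering steps and LSM refinement) and, like the grouping of tiles into blocks, is a hypothetical placeholder:

```python
from typing import Callable, Iterable, List, Optional, Tuple

# (x_sensed, y_sensed, longitude, latitude) of one ground control point.
ControlPoint = Tuple[float, float, float, float]

def collect_control_points(
    blocks: Iterable[Iterable[object]],                     # tiles grouped per block
    tile_trial: Callable[[object], Optional[ControlPoint]],  # placeholder pipeline
) -> List[ControlPoint]:
    """Greedy collection: one control point per block; stop trying tiles in a
    block as soon as one tile trial succeeds, then move to the next block."""
    points: List[ControlPoint] = []
    for tiles in blocks:
        for tile in tiles:
            match = tile_trial(tile)     # SIFT matching + filtering + LSM
            if match is not None:
                points.append(match)
                break                    # this block is done
    return points
```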
After the matching of all image blocks is finished, it is easy to further identify potential outliers by checking for agreement between each matched point and a global geometric model, e.g., rigorous sensor model, rational function model, etc.However, hardly any false matches were found in the final matches according to the results of all our tests.
We call the process of matching a pair of a sensed image tile and a reference image tile a "tile trial" (as shown in Figure 1), and the actual efficiency of the method is decided by the number of tile trials. If the success rate of the tile trial is 100% (the best case), only one tile trial is performed for each control point, and the number of tile trials is not related to the size of the sensed image; if the success rate is 0% (the worst case), all of the tiles of the sensed image will be tested, and the number of tile trials is decided by the size of the sensed image. The success rate of the tile trial is related to the similarity and distinctiveness of the sensed image and the reference image, which can be affected by a number of factors, e.g., the quality (cloud coverage, contrast, exposure, etc.) of the images, the scale difference, the spectral difference, changes caused by different imaging times, etc. Additionally, as the tile trial is based on SIFT matching, the success rate is limited if the test images cover a region of low texture, such as water, desert or forest.
Furthermore, the high independence among the image blocks enables a parallel implementation, which can further accelerate the proposed method. The processing of image blocks can be assigned to multiple processors and nodes (computers) in a cluster and, therefore, run concurrently. Parallelization makes full use of the computing resources and substantially shortens the time consumed by image matching. In this work, we implemented a parallel version on multiple processors, but not on multiple computers.
In addition, the SIFT implementation designed for the graphics processing unit (GPU) [27] may considerably accelerate the tile trial, but it is not yet included in this work.
Fetch Reference Image from Online Aerial Maps
In Section 2.1, we mentioned that the reference image tile can be extracted by calculating its approximate extent, and this section will introduce the detailed techniques to fetch reference image tiles from online aerial maps, i.e., Google satellite images, Bing aerial images, MapQuest satellite maps and Mapbox satellite images.
Static Maps API Service
In this work, we use the Static Maps API service of online aerial maps, i.e., sending a URL (Uniform Resource Locator) request, to fetch the required reference image tiles automatically. For example, the URL request formats of Google satellite images, Bing aerial images, MapQuest satellite maps and Mapbox satellite images are listed below.
Mapbox satellite images:
In these URLs, the parameters inside "{}" should be specified, i.e., the longitude and latitude of the center point, the zoom level, the width and height of the image tile, and the API keys. One can apply for either free or enterprise API keys on the corresponding websites, free of charge or at low cost, and the calculation of the other parameters will be introduced in the following sections.
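As an illustration, a request to Google's Static Maps API can be composed as shown below; the parameter names follow Google's public documentation, the templates for the other providers differ, and the key is a placeholder:

```python
def google_static_map_url(lat, lon, zoom, width, height, api_key):
    """Compose a Static Maps request for a satellite image tile centred at
    (lat, lon) with the given zoom level and pixel size."""
    return (
        "https://maps.googleapis.com/maps/api/staticmap"
        f"?center={lat:.6f},{lon:.6f}&zoom={int(zoom)}"
        f"&size={int(width)}x{int(height)}&maptype=satellite&key={api_key}"
    )

# Example (placeholder key); the tile can then be downloaded with any HTTP client.
# url = google_static_map_url(21.47, -157.98, 15, 512, 512, "YOUR_API_KEY")
```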
Zoom Level
For online global maps, a single projection, the Mercator projection, is typically used for the entire world to make the map seamless [40]. Moreover, the aerial maps are organized in discrete zoom levels, from 1 to 23, to be rendered at different map scales. At the lowest zoom level, i.e., 1, the map is 512 × 512 pixels, and each time the zoom level is increased by one, the width and height of the map double.
Consequently, in order to fetch the corresponding reference image tile, we first need to determine the zoom level, which is related to the ground sample distance (GSD) of the sensed image tile. Similar to the relative scale between the sensed image and the reference image, the GSDs of the sensed image are not necessarily constant over a whole image, and the local GSDs of an image tile can be calculated from the forward transforming functions (Equation (9)), where GSD x and GSD y are the ground sample distances in the x direction and y direction, x c and y c are the image coordinates of the center point of the sensed image tile, and F X and F Y are the forward transforming functions of the X and Y coordinates described in Equation (3).
On the other hand, the GSDs (in meters) of online aerial maps vary depending on the zoom level and the latitude at which they are measured, and the conversion between the GSD and the nearest zoom level is described by Equation (10), where R earth is the earth radius (6,378,137 meters is used), φ is the latitude at which the GSD is measured, GSD is the ground sample distance in meters (in both the x direction and the y direction), n is the zoom level, and ⌊·⌉ denotes rounding to the nearest integer. Equation (10) can be applied to find the nearest zoom level according to the GSD of the sensed image tile (the mean of the values in the x direction and the y direction) calculated by Equation (9).
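As a hedged sketch of the computations described by Equations (9) and (10), the snippet below approximates the local GSD by finite differences of the forward transforming functions (assuming they return projected coordinates in metres) and derives the nearest zoom level from the standard Web-Mercator ground resolution with a 512 × 512 pixel world map at zoom level 1; both formulas are our assumptions rather than the paper's exact expressions:

```python
import math

EARTH_RADIUS = 6378137.0   # metres, as stated in the text

def local_gsd(forward_x, forward_y, xc, yc, step=1.0):
    """Approximate GSD_x and GSD_y (metres/pixel) at the tile centre (xc, yc)
    by finite differences of the forward functions F_X, F_Y."""
    gsd_x = math.hypot(forward_x(xc + step, yc) - forward_x(xc, yc),
                       forward_y(xc + step, yc) - forward_y(xc, yc)) / step
    gsd_y = math.hypot(forward_x(xc, yc + step) - forward_x(xc, yc),
                       forward_y(xc, yc + step) - forward_y(xc, yc)) / step
    return gsd_x, gsd_y

def nearest_zoom_level(gsd, lat_deg):
    """Nearest zoom level n whose map GSD at latitude lat_deg matches gsd.
    Map GSD at zoom n is assumed to be 2*pi*R*cos(lat) / (256 * 2**n)."""
    circumference = 2.0 * math.pi * EARTH_RADIUS * math.cos(math.radians(lat_deg))
    return int(round(math.log2(circumference / (256.0 * gsd))))
```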
Width and Height
Given the rectangular extent of the sensed image tile, the corresponding geographic coordinates (longitude and latitude) of the four corners can be calculated by the initial forward transforming functions in Equation (3). In order to find the extent of the required reference image tile, the geographic coordinates, λ and φ, should be converted to the image coordinates, x and y, in the map of the nearest zoom level, n, according to Equation (11) [40], where λ and φ are the longitude and latitude.
Then, the extent of reference image tile is the minimum boundary rectangle of the four corners in the map of zoom level n, and the width and height of the tile are known, accordingly.
Center Point
Next, we need to calculate the geographic coordinates of the center point of the reference image tile, using the inverse transformation (Equation (12)) from image coordinates, x and y, in the map of the nearest zoom level, n, to geographic coordinates, λ and φ. Equation (12) is derived directly from Equation (11), and it is used again once the matched points in the sensed and reference image tiles are found, as the image coordinates in the reference image tile must be converted to geographic coordinates to obtain ground control points.
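The sketch below gives one reconstruction of the forward and inverse transforms referred to as Equations (11) and (12), using the standard Web-Mercator tile-coordinate formulas (512-pixel world map at zoom level 1); it should be read as our rendering of the common published convention, not as the paper's exact notation:

```python
import math

def lonlat_to_pixel(lon, lat, zoom):
    """Forward transform: longitude/latitude to global pixel coordinates
    (x, y) in the map of the given zoom level."""
    map_size = 256 * (2 ** zoom)          # 512 pixels at zoom level 1
    x = (lon + 180.0) / 360.0 * map_size
    s = math.sin(math.radians(lat))
    y = (0.5 - math.log((1.0 + s) / (1.0 - s)) / (4.0 * math.pi)) * map_size
    return x, y

def pixel_to_lonlat(x, y, zoom):
    """Inverse transform: global pixel coordinates back to longitude/latitude,
    e.g. to convert matched reference-tile points into ground control points."""
    map_size = 256 * (2 ** zoom)
    lon = x / map_size * 360.0 - 180.0
    lat = 90.0 - 360.0 / math.pi * math.atan(
        math.exp((y / map_size - 0.5) * 2.0 * math.pi))
    return lon, lat
```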
Resizing
Given the nearest zoom level, the width and height of the image tile, the longitude and latitude of the center point and the API keys, the Static Maps API service can be used to download the required reference image tile from the online aerial images.However, the GSD of the downloaded reference image tile may not be very close to that of the sensed image tile, since the zoom level is discrete.The downloaded image tile needs to be further resampled to a similar resolution as the sensed image tile for the sake of efficiency and robustness, according to the relative scale between the two image tiles, which can be calculated by dividing the GSD of the sensed image tile by that of the online reference image tile.
Summary
The scene of a RapidEye image captured in 2010 is used to show an example of online matching.The image is of Oahu, Hawaii, and the spatial resolution is 5 m. Figure 6 shows a tile of the RapidEye image matched with different online aerial maps, including Google satellite images, Bing aerial images, MapQuest satellite maps and Mapbox satellite images.Note that we intentionally chose a scene in the USA, as the MapQuest satellite maps of high zoom levels (higher than 12) are provided only in the United States.
From Figure 6, one can see that, over the same area, the data sources of the four kinds of online aerial maps are not the same. In practical applications, different online aerial maps can therefore be used to complement one another.
Experiments and Analysis
In this section, several groups of experiments are carried out to check the validity of the proposed method, and all experiments are performed on a 3.07-GHz CPU with four cores.
Robustness
To show the superiority of the matching strategy of the proposed method, we carry out comparative tests with three methods: the proposed method, the ordinary SIFT [18] and SR-SIFT [34], which is claimed to be more robust than ordinary SIFT. In the ordinary SIFT matching method, match candidates are found using the distance ratio constraint of closest to second-closest neighbors, and outliers are eliminated using the RANSAC algorithm and an affine transformation. In the SR-SIFT method, a scale restriction is applied to exclude unreasonable matches before the RANSAC filtering. The distance ratio threshold T dr = 0.75 is applied in all of the methods. Figure 7 shows the results of the three methods when applied to a pair of AVIRIS image tiles (visible and infrared). From Figure 7, we can see that the original SIFT yields the poorest matching results, while SR-SIFT provides more correct matches. However, the best results come from the proposed method, in terms of not only the quantity of correct matches, but also the distribution of the matched points.
From Table 2, the following points can be drawn:
• The results of ordinary SIFT and SR-SIFT are similar, and a simple scale restriction filter does not seem helpful in finding correct matches. Specifically, in most cases (except for Dataset 5), the distance ratio constraint excludes a number of correct matches and sometimes results in failure (e.g., in Dataset 6). By applying cross matching, the proposed method includes many more initial match candidates, although many of them are false matches.
• The percentage of outliers in the initial match candidates is usually greater than 90%, and the RANSAC algorithm is not robust enough to identify correct subsets of matches; thus, it frequently fails or yields untrustworthy results. In the proposed method, on the other hand, the four outlier-rejection steps eliminate all of the false matches. In fact, after the first two rejection steps (by scale ratio and by rotation angle), most of the outliers are already cast out.
• Commonly, only a few matches are added in Step 6 of the proposed method, except for Dataset 5, in which correct matches are plentiful. Consequently, Step 6 can be omitted without affecting the final result too much.
• The ordinary SIFT matching method performs quite well for Dataset 5, in which the sensed image and reference image were captured simultaneously from the same aircraft and at the same view angle. With little variation in content, illumination and scale, the SIFT descriptor is very robust and distinctive, and the distance ratio constraint identified most of the correct matches. The RANSAC algorithm then manages to find reliable results, since the match candidates contain fewer than 50% outliers.
Efficiency
In order to show the efficiency of the proposed method, SiftGPU [27], which is the fastest version of SIFT to the best of our knowledge, is used to carry out comparative tests against the proposed method. The implementation of SiftGPU is provided by Wu C. in [27].
Two scenes of GF-2 PAN images of Beijing, China, which were acquired on 3 September 2015 and 12 September 2015, respectively, are used to perform the matching tests. GF-2 is a newly-launched Chinese resource satellite; the spatial resolution of its panchromatic image is around 0.8 m, and the size of an image scene is around 29,000 × 28,000 pixels. SiftGPU spends 19.4 s to find 82 matched points (including 11 incorrectly matched points) from the two scenes of GF-2 images, while the proposed method spends 20.6 s to find 30 matched points, and the matching results are shown in Figure 8. Although SiftGPU yields more matches in less time, the distribution and the correctness of the results of the proposed method (as shown in Figure 8b) are obviously superior to those of SiftGPU (as shown in Figure 8a). SiftGPU makes full use of the computing resources of the computer devices and is quite efficient when processing large images; 15,030 and 16,291 SIFT keypoints are extracted from the two images, respectively, within a dozen seconds. However, finding the corresponding keypoints is difficult, as the large scene makes the descriptors of the SIFT keypoints less distinctive. Moreover, the serious distortion of the satellite images also makes it difficult to identify the outliers from the matched points; the residual errors of the 71 correctly matched points are more than 100 pixels when fitted by a three-degree 2D polynomial function.
Actually, SiftGPU frequently fails to provide sound results for very large satellite images according to our experimental results, despite its outstanding efficiency.
In summary, the proposed method is almost as fast as SiftGPU, but provides more reliable results.
Accuracy
In Section 2.5, an example showed the effect of least squares matching (LSM) refinement, and in this section, 42 scenes of GF-1 MSS images are used to further evaluate the accuracy of the proposed method.
Firstly, the original GF-1 images are rectified based on the vendor-provided RPC model with constant height zero and projected in longitude and latitude.Secondly, 25 check points, (x, y, λ, φ, 0), are collected between each pair of the original image and the rectified image using the proposed method.Finally, the geographic coordinates, (λ, φ, 0), of the check point in the rectified images are transformed into the image coordinates, (x , y ), in the original images using the inverse transformation of the vendor-provided RPC model, and then, the biases between the matched points can be measured by the difference of the image coordinates, (x − x , y − y ).The results before and after LSM refinement are compared to show the accuracy improvement of the proposed method.
Figure 9 shows the root mean square biases of matched points before and after LSM refinement in each image, and one can see that in most of the tests, the accuracy of the matched points is notably improved after LSM refinement.
There are several reasons to use this experimental scheme.Geometric transformation, especially longitude and latitude projection, usually results in distortion of the image and then increases the uncertainty of the position of detected SIFT features, while image distortion commonly exists in practical image matching tasks.In addition, by performing image matching between the original image and the rectified image, the ground truth of the bias should be zero, since the parameters of geometric transformation are already known.Moreover, the least squares match is performed in a small patch and is independent of the global transformation (vendor-provided RPC model); thus, the agreement between the matched points and the global transformation can be applied to evaluate the accuracy of matching, objectively.Note that the position biases between matched points may not be exactly zero, due to the errors introduced by interpolation.
Practical Tests
To evaluate the proposed method, several scenes of RS images, including Landsat-5, China-Brazil Earth Resources Satellite 2 (Cbers-2), Cbers-4, ZY-3, GF-2, Spot-5, Thailand Earth Observation System (Theos) and GF-1, are used to perform online matching, while Bing aerial images are utilized as the reference images. For each image, the proposed matching method is used to automatically collect control points, which are then applied to rectify the sensed image; finally, 20 well-distributed check points are manually collected from the rectified image and the reference images (Bing aerial images) to evaluate the matching accuracy. Table 3 shows the general information of the test images used in these experiments. As shown in Table 3, different initial imaging models of the sensed images are utilized, including the rigorous sensor model, the affine transformation contained in georeferenced images, the vendor-provided RPC model, etc. The rigorous sensor model of Landsat-5 is provided by the open source software OSSIM [41], and the rigorous sensor model of the Cbers-4 image is built according to the 3D ray from the image line-sample to ground coordinates in the WGS-84 system. The RPC models of the ZY-3, GF-2 and GF-1 images are provided by the vendors. The Spot-5, Cbers-2 and Theos images are processed to the L2 level of correction, and the affine transformation models contained in the images are used as initial imaging models.
The Cbers-2 (Test 2), Cbers-4 P10 (Test 3), Spot-5 (Test 6) and Theos (Test 7) images are rectified based on the terrain-dependent RPC (TD-RPC) model. The Landsat-5 image in Test 1 is rectified based on the rigorous sensor model, and RPC refinement is applied to rectify the ZY-3 (Test 4), GF-2 (Test 5) and GF-1 (Test 8) images. Therefore, we intend to find 100 GCPs for each image in Tests 2, 3, 6 and 7, while 30 GCPs are required for each image in Tests 1, 4, 5 and 8. Note that all of the TD-RPC models in the tests are calculated using the ℓ1-Norm-Regularized Least Squares (L1LS) [42] estimation method, to cope with the potential correlation between the parameters.
Figure 10 shows the distribution of matched points in Test 1 -Test 8, and Table 4 shows more results for each test, including the number of correct matches, consumed time, the model used for geometric rectification and the RMSE of check points.Figure 11 is the comparison between the sensed images and the online Bing aerial maps in Tests 4, 6, 7 and 8 using the "Swipe Layer" tool in ArcGIS Desktop, and ArcBruTile [43] is used to display the Bing aerial maps in ArcGIS Desktop.
In this section, the spatial resolution of the test image varies from 30 m to 2 m, but the very high resolution (less than 1 m) RS images are not included, as the geometric accuracy of the online aerial maps is limited.In this sense, we can successfully find enough GCPs for very high resolution images from the online aerial maps, but the accuracy of the GCPs is not guaranteed.
According to Figure 3, and the test results in Figure 10, Figure 11 and Table 4, one can see that:
• Various kinds of geometric models can be used as the initial imaging model, including rigorous models for different sensors, the vendor-provided RPC model and the affine transformation model contained in georeferenced images.
• The proposed method is successfully applied to match images captured under different imaging conditions, e.g., by different sensors, at different times, at different ground sample distances, etc.
• Sufficient and well-distributed GCPs are efficiently collected for sensed images of different spatial resolutions, and the biases between the sensed images and online aerial maps are corrected after the process of image matching and geometric rectification.
• It is a very convenient and efficient way to automatically collect GCPs for the task of geometric rectification of RS images, as there is no need to manually prepare reference images according to the location and spatial resolution of the sensed images.
Note that the process of matching online is commonly a bit more time consuming than matching locally, since sending requests and downloading image tiles from online aerial maps may take more time than extracting image tiles from local reference images.
Conclusions
In this paper, we proposed a convenient approach to automatically collect GCPs from online aerial maps, which focuses on the automated georeferencing of remotely-sensed (RS) images and makes use of the prior information of the RS image. The proposed method is based on the SIFT feature, and the improvements accomplished in this work help to overcome the difficulties of SIFT when directly applied to RS images, e.g., large image size, distribution of matched points, limited accuracy, outliers, etc. Both local reference images and online aerial maps can be utilized to collect control points. Different kinds of large-sized RS images, whose spatial resolutions vary from 30 m to 2 m, are included in the experiments, and the results show that the matching process can be finished within tens of seconds, yielding a sufficient number of reliable ground control points (GCPs). With the help of these reliable GCPs and DEM data, the root mean square errors (RMSEs) of the check points from the georeferenced images are less than two pixels. Moreover, by utilizing the online aerial maps, there is no need to manually prepare reference images according to the location and spatial resolution of the sensed images.
Although we can successfully find GCPs for very high resolution (less than 1 m) RS images from the online aerial maps, the accuracy of the GCPs is not guaranteed.However, we believe the proposed approach will become even more useful as the accuracy of online aerial maps improves.
Figure 2 .
Figure 2. Blocks of an image and tiles of a block.
Figure 3 .
Figure 3.The numbers of matched pairs and correct match rates for different values of T σ and T θ .(a) The results with respect to different values of T σ ; (b) the results with respect to different values of T θ .
a_0, a_1, a_2, b_0, b_1, b_2 are the six parameters of the geometric transformation, k_1 and k_2 are two radiometric parameters for contrast and brightness (or equivalently gain and offset), and I_s(x_s, y_s) and I_r(x_r, y_r) are the gray values of a pixel in the source and reference image tiles. The geometric model and the radiometric model are estimated by least squares, and then we can accurately locate the corresponding point in the reference image tile.
Figure 4 .Figure 5 .
Figure 4.The matched keypoints in a sensed image tile and a reference image tile.(a) The sensed image tile; and (b) the reference image tile.
Figure 6 .
Figure 6.Example of online matching, and the matched points are marked by cross.(a) to (d) are matching results using Google satellite images, Bing aerial maps, MapQuest satellite maps and Mapbox satellite images, respectively.In each figure, the left is the RapidEye image tile, while the right is the online aerial map.
Figure 7 .
Figure 7. Matching results of the AVIRIS visible image tile (left) and the AVIRIS infrared image tile (right), using three different methods.(a) The result of original SIFT (four matches are found, including a wrong match); (b) the result of SR-SIFT (six correct matches are found); and (c) the result of the proposed method (20 correct matches are found).
Figure 8 .
Figure 8. Matching results of GF-2 PAN images, using SiftGPU and the proposed method.In each sub-figure, the left is the image captured on 3 September 2015 and the right is the image captured on 12 September 2015.The matched points are labeled by the same numbers, and red crosses stand for correct matches, while yellow crosses stand for wrong matches.(a) The result of SiftGPU (71 correct matches and 11 wrong matches); and (b) the result of proposed method (30 correct matches are found).
Figure 9 .
Figure 9. Root mean square biases of matched points before and after least squares match (LSM) refinement.
Figure 11 .
Figure 11. Layer swiping between sensed images and online aerial maps in Tests 4, 6, 7 and 8. (a) and (b) are from Test 4, and the top ones are Bing aerial maps, while the lower one in (a) is the warped ZY-3 image using the vendor-provided RPC and the lower one in (b) is the rectified ZY-3 image using RPC refinement; (c) and (d) are from Test 6, and the left are Bing aerial maps, while the right one in (c) is the Spot-5 image of Level 2 and the right one in (d) is the rectified Spot-5 image using the terrain-dependent RPC; (e) and (f) are from Test 7, and the right are Bing aerial maps, while the left one in (e) is the Theos image of Level 2 and the left one in (f) is the rectified Theos image using the terrain-dependent RPC; (g) and (h) are from Test 8, and the upper are Bing aerial maps, while the lower one in (g) is the warped GF-1 image using the vendor-provided RPC and the lower one in (h) is the rectified GF-1 image using RPC refinement.
Table 1 .
Accuracy of the online aerial maps, i.e., root mean square (RMS) values of the positioning errors according to our GPS survey results.
Figure 1 .
Flowchart of the proposed matching method.
Table 2 .
The number of remaining matches after each step in the three methods.
Table 3 .
General information of the test images.
Table 4 .
Test results of online image matching.
"Computer Science",
"Engineering",
"Environmental Science",
"Geography"
] |
An Integrative Glycomic Approach for Quantitative Meat Species Profiling
It is estimated that food fraud, where meat from different species is deceitfully labelled or contaminated, has cost the global food industry around USD 6.2 to USD 40 billion annually. To overcome this problem, novel and robust quantitative methods are needed to accurately characterise and profile meat samples. In this study, we use a glycomic approach for the profiling of meat from different species. This involves an O-glycan analysis using LC-MS qTOF, and an N-glycan analysis using a high-resolution non-targeted ultra-performance liquid chromatography-fluorescence-mass spectrometry (UPLC-FLR-MS) on chicken, pork, and beef meat samples. Our integrated glycomic approach reveals the distinct glycan profile of chicken, pork, and beef samples; glycosylation attributes such as fucosylation, sialylation, galactosylation, high mannose, α-galactose, Neu5Gc, and Neu5Ac are significantly different between meat from different species. The multi-attribute data consisting of the abundance of each O-glycan and N-glycan structure allows a clear separation between meat from different species through principal component analysis. Altogether, we have successfully demonstrated the use of a glycomics-based workflow to extract multi-attribute data from O-glycan and N-glycan analysis for meat profiling. This established glycoanalytical methodology could be extended to other high-value biotechnology industries for product authentication.
Introduction
With the growing human population and the increasing demand for food, food adulteration has become a global problem estimated to affect 10-20% of all food consumed in the world [1,2]. Such contamination by either additions or substitutions of meat from a different species is a significant dietary issue, particularly for individuals with allergies or those of a certain religious conviction [3,4]. It is thus prudent to develop techniques in authenticating meat products as a means of ensuring safe trade and ethics [2,3].
Currently, many methods have been developed for the means of food fraud detection, including microscopic, spectroscopic (NMR, FTIR), and DNA-based techniques (PCR) [2,[5][6][7]. Indeed, amidst these techniques, biomarkers identification by means of omics technology allows such quantification and distinction at a molecular level [8,9]. In fact, the significant popularity and application of these technologies in resolving food compositions in general at a high resolution has developed an entire field of "foodomics" to identify molecular traits pertaining to the production and processing of these complex mixtures [10,11].
Interestingly, within the available omics technologies in the analysis of meat from different species, the use of glycomics remains relatively unknown. Considering the multiple diverse glycan structures that can arise as a result of the complex protein glycosylation pathways, it is likely that significant structural differences in glycan compositions can be detected between samples derived from different animal species [12,13]. In particular, studies have found at least six different N-glycan structural differences between duck and meat samples as a means of differentiating these two species [13]. Additionally, the types of O-glycan structures in meat samples are not presently known, and thus the discrimination patterns between different meat samples using an O-glycan analysis have also not been resolved. With recent advances in deciphering glycan structures, such as a combined fluorescence-based quantitation with an LC-MS technique (LC-FLR-MS) in resolving N-glycan structures at great sensitivity [14,15], as well as novel methods in releasing O-glycans with free reducing-end aldehydes for O-glycan analysis [16][17][18], a deep and total structural analysis of both N-linked and O-linked glycans can be performed to characterise samples with a degree of resolution that previously would not have been possible.
In this study, we demonstrate the use of this approach to determine the diverse O-linked and N-linked glycan profiles of three meat samples (chicken, beef, and pork). In the O-glycan analysis, we find a clear difference in the distinct structures between the different meat samples. The abundance and diversity of each O-glycan structure appears to be significantly dissimilar, suggesting well-defined O-glycosylation patterns between meat samples derived from different animal species. In the N-glycan analysis, our high-resolution measurements have identified the presence of up to 17 different N-glycan structures in the meat samples, which is an increase by a factor of two compared to the previously carried out analysis [13]. The individual glycan structures, as well as the total glycosylation attributes, are clearly distinguishable between the different meat samples. Finally, a principal component analysis (PCA) is also performed with all O-glycan and N-glycan structures within the meat samples, revealing straightforward discrimination between the samples, and thus the potential use of such integrative glycomic approaches for the high-throughput authentication of meat samples in the future [19].
Meat Lysis and Protein Extraction
Meat samples for each species were purchased and processed within the same day. Chicken samples were isolated from peroneus longus, whilst pork and beef samples were isolated from extensor carpis radialis. Fat and connective tissues were trimmed off from the meat. Pea-sized meat samples were snap-frozen using nitrogen and were minced to homogeneity using a pestle and mortar. Approximately 150 µg of each homogenised meat sample was lysed in 800 µL of T-PER tissue protein extraction reagent supplemented with protease inhibitor (1:100, both from ThermoFisher, UK) for 10 min on a rotary shaker. After this, the samples were centrifuged at 10,000× g for 5 min and the supernatant was collected. Proteins extracted were stored at −80 • C before being subjected to the O-and N-glycan analysis workflow.
Release and Permethylation of O-Glycans
O-glycans were released from 100 µg of meat samples by adding 200 µL of (0.5 M) sodium borohydride in 0.05 M potassium hydroxide and incubating in a 50 • C oven for 16 h. The reaction was terminated by adding glacial acetic acid dropwise followed by a clean-up using Dowex 50W-X8(H) 50-100 mesh resin chromatography. The samples were loaded onto the pre-prepared Dowex resin column and the flowthrough was collected in a glass tube. The O-glycans were eluted using 5 mL of 5% acetic acid and combined with the flowthrough. Eluted O-glycans were evaporated to dryness using a nitrogen sample concentrator. Then, 500 µL of 10% acetic acid in methanol was added and dried to remove borate (repeated five times). Sodium hydroxide dissolved in dimethyl sulfoxide and iodomethane were added to the dried glycan samples in glass tubes. The reaction was allowed to proceed under rotation at 30 rpm for about 3 h. Next, 1 mL of deionised water was added dropwise to quench the reaction. After 2 mL of chloroform was added, the mixture was mixed thoroughly. After allowing the mixture to separate into 2 layers, the upper aqueous layer was removed. Deionised water was added to the chloroform layer and this step of mixing and removal of aqueous layer was repeated several times until the chloroform layer was clear. The chloroform layer was then evaporated to dryness using a nitrogen sample concentrator.
Sep-Pak Separation of Permethylated Glycans
C18 Sep-Pak ® cartridge (Water Corporation, Milford, MA, USA) was primed sequentially with 5 mL methanol, 5 mL deionised water, 5 mL acetonitrile and 5 mL deionised water. The dried permethylated sample was redissolved in 200 µL of 50% methanol and loaded to the Sep-Pak ® cartridge. Elution was carried out by adding 2 mL of 15, 35, 50 and 75% acetonitrile in water (v/v). Each elution fraction was collected and evaporated to dryness using a SpeedVac.
Mass Spectrometry Analysis of O-Glycans
Permethylated O-glycans from the 35% and 50% fractions were combined and reconstituted in 100 µL of 80% methanol with 0.1% formic acid. Then, 10 µL of reconstituted released O-glycans were injected into Agilent 1290 infinity LC system coupled to an Agilent 6550 iFunel qTOF mass spectrometer (Agilent Technologies, Santa Clara, CA, USA). O-glycans were separated using an Agilent Zorbax Eclipse Plus C18 RRHD column (1.8 µm, 2.1 mm × 50 mm) at 500 µL/min, with an elution gradient of 3 to 10%, 10 to 40%, 40 to 70%, and 70 to 90% of 0.1% formic acid in acetonitrile (ACN, mobile phase B) at 0 to 10 min, 10 to 25 min, 25 to 30 min, and 30 to 38 min, respectively. For mobile phase A, 0.1% formic acid in water was used. The column was flushed with 90% mobile phase B for 12 min before re-equilibrating with 3% mobile phase B for 15 min.
Mass spectra were acquired in positive ion mode over a mass range of m/z 100-2000 with an acquisition rate of 1 Hz. The following parameters were used for the acquisition: drying gas temperature 150 • C at 12 L/min, sheath gas temperature 300 • C at 12 L/min, nebuliser pressure at 45 psi and capillary voltage at 2500 V. Mass correction was enabled using an infused calibrant solution with a reference mass of m/z 121.0873 and 922.0098.
O-Glycan Assignment
LC-MS data were processed using Molecular Feature Extractor (MFE) algorithm of MassHunter Qualitative Analysis Software (version B.06.00 Build 6.0.633.10 SP1, Agilent Technologies, Santa Clara, CA, USA). A permethylated mass list was generated based on the neutral masses of O-glycans found on the GlycoStore and Consortium for Functional Glycomics database [20,21]. This list with a mass filter of 10 ppm was used to search the LC-MS data. Mass peaks were filtered with a peak height of at least 100 counts and resolved into individual ion species. Using a Glycans Isotopic distribution model, charge state of a maximum of 3 and retention time, all ion species with singly and doubly protonated ions and their sodium adducted ions associated with a single compound were summed together. The neutral compound mass was then calculated and a list of all compound peaks in the samples and standards were generated with relative abundances depicted by chromatographic peak areas.
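The composition lookup in this step amounts to matching observed neutral masses against a reference list within a 10 ppm window; a minimal sketch is given below, where the dictionary of reference masses is a placeholder to be populated from the GlycoStore/CFG databases rather than actual values:

```python
def match_glycan_masses(observed_masses, reference_masses, tol_ppm=10.0):
    """Assign observed neutral masses to candidate permethylated O-glycan
    compositions within +/- tol_ppm.

    reference_masses: dict mapping composition name -> neutral mass (Da)."""
    assignments = []
    for m_obs in observed_masses:
        for name, m_ref in reference_masses.items():
            ppm_error = abs(m_obs - m_ref) / m_ref * 1e6
            if ppm_error <= tol_ppm:
                assignments.append((m_obs, name, ppm_error))
    return assignments
```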
Targeted tandem MS was acquired in positive ion mode over a mass range of m/z 100-2000 with an acquisition rate of 1.5 Hz. A targeted mass list was generated based on the desired MFE compounds found in the samples for MS/MS analysis. The precursor masses of interest were specified, along with their charge states, retention times and peak widths. The isolation width used was medium (~4 m/z), and the collision energy (CE) was specified for each precursor. The targeted tandem MS data were processed using the MFE algorithm with the same settings used for searching the LC-MS data, and the MS/MS spectra were extracted for each of the targeted compounds.
Liquid Chromatography-Mass Spectrometry Analysis of RFMS-Labelled N-Glycan
Released N-glycans were analysed as previously described [22]. First, 10 µL of reconstituted released N-glycans were injected into an ACQUITY H-Class UPLC (Waters Corporation, Milford, MA, USA) coupled to a SYNAPT XS mass spectrometer (Waters Corporation, Milford, MA, USA). Samples were separated using an ACQUITY UPLC Glycan BEH amide column (130 A, 1.7 µm, 2.1 mm × 150 mm, Waters Corporation, Milford, MA, USA) at 60 • C and 400 µL/min, with a 40 min gradient from 25 to 49% of 50 mM Ammonium Formate (mobile phase A). As mobile phase B, 100% ACN was used. RFMS-labelled glycans were excited at 265 nm and measured at 425 nm with an ACQUITY UPLC FLR detector (Waters Corporation, Milford, MA, USA). The MS1 profile scans of m/z 400-2000 were acquired using the SYNAPT XS in positive mode with an acquisition rate of 1 Hz. The electrospray ionisation capillary voltage was set at 1.8 kV, cone voltage at 30 V, desolvation gas flow at 850 L/h, and ion source temperature and desolvation temperature were kept at 120 • C and 350 • C, respectively. Leucine Enkephalin (Waters Corporation, Milford, MA, USA) was used as the LockSpray compound for real-time mass correction. RapiFluor-MS Dextran Calibration ladder (Waters Corporation, Milford, MA, USA) was also injected into LC-MS to calibrate the retention time of sample peaks. The retention times were normalised using the dextran calibration curve to Glucose Units (GU).
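Conceptually, the dextran-ladder normalisation fits a calibration curve from retention time to Glucose Units and evaluates it at each sample peak; the cubic polynomial in the sketch below is an illustrative choice of curve, not necessarily the fit used by the instrument software:

```python
import numpy as np

def fit_gu_calibration(ladder_rt, ladder_gu, degree=3):
    """Fit a retention-time -> Glucose Units calibration from the dextran
    ladder peaks; returns a callable that converts RT to GU."""
    coeffs = np.polyfit(np.asarray(ladder_rt, dtype=float),
                        np.asarray(ladder_gu, dtype=float), deg=degree)
    return np.poly1d(coeffs)

# Usage sketch: the n-th ladder peak corresponds to n glucose units.
# rt_to_gu = fit_gu_calibration(ladder_rt, range(1, len(ladder_rt) + 1))
# sample_gu = rt_to_gu(sample_peak_rt)
```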
N-Glycan Assignment
Released N-glycans were analysed using the UNIFI Scientific Information System (Version 1.8, Waters Corporation, Milford, MA, USA). Fluorescence peaks were integrated manually using the UNIFI Scientific Information System and relative quantitation of peaks was obtained by area-under-curve measurements followed by normalisation to the total area. Glycan assignment was carried out by matching neutral mass and/or Glucose Units (GU) of each peak to the modified "N-glycan 309 mammalian no sodium" database available in the Byonic software.
Statistical Analyses
Data are presented as mean ± standard error of the mean. Statistical analyses were performed by Student's t-test or one-way ANOVA with Tukey's post hoc test using Graph-Pad Prism 8 (GraphPAD Software Inc., San Diego, CA, USA). Multiple comparisons were performed between chicken, pork, and beef in ANOVA tests. O-glycan and N-glycan relative abundance was analysed using principal component analysis (PCA). The criterion for significance was p < 0.05 (* p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001).
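A minimal sketch of the species comparison for a single glycan (or glycosylation attribute) is given below, using SciPy and statsmodels in place of GraphPad Prism; it assumes the replicate measurements are available as plain arrays:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_species(chicken, pork, beef, alpha=0.05):
    """One-way ANOVA followed by Tukey's post hoc test on the relative
    abundance of one glycan or attribute across the three species."""
    f_stat, p_value = stats.f_oneway(chicken, pork, beef)
    values = np.concatenate([chicken, pork, beef])
    groups = np.array(["chicken"] * len(chicken)
                      + ["pork"] * len(pork)
                      + ["beef"] * len(beef))
    tukey = pairwise_tukeyhsd(values, groups, alpha=alpha)
    return f_stat, p_value, tukey
```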
Framework for an Integrated Glycomic Study of Meat Samples from Different Species
The overall glycomic-based workflow consists of: (i) an experimental approach for extracting and isolating glycan structures from meat samples, (ii) a quantitative glycomic analysis of the abundance of both N-glycans and O-glycans, and finally (iii) the computation of a principal component analysis (PCA) to successfully discriminate glycans belonging to the different meat samples (Figure 1). In the first step, whole meat samples are ground and subjected to lysis using T-PER TM Tissue Protein Extraction Reagent. Subsequently, a centrifuge step is employed to pellet the tissue debris, and the final supernatant product contains all proteins successfully extracted from the meat samples, including the glycoproteins of interest. In the next step, the same samples are split for N-glycan and O-glycan analysis. In the O-glycan analysis, O-glycans are released and permethylated before the LC-MS analysis, while in the N-glycan analysis, N-glycans are released and labelled with Rapifluor-MS (RFMS) before a UPLC-FLR-MS analysis. Both workflows allow the differentiation and quantification of the abundance of distinct N-glycans and O-glycans. Finally, these results are pooled together for a PCA analysis which allows the novel discrimination of the different meat samples on a molecular level (Supplementary Figures S1-S54).
O-Glycan Characterisation of Meat
We demonstrate the application of this workflow by applying the above workflow to three types of meat samples, namely chicken, pork, and beef. The extracted proteins were first analysed based on their O-glycan profiles. O-glycan structures, in particular, have been shown to exhibit a diverse glycosylation pattern in eukaryotes, owing to the many biosynthetic pathways [23]. In the case of the three meat samples, we observe the presence of four distinct O-glycan structures found through the analysis of released permethylated O-glycans. Through the combined analysis of the retention time and MS2 fragment data of the O-glycan standards, two of the most abundant glycans were identified as Gal-GalNAc and NeuAc-Gal-GalNAc (Figures 2 and 3A,B). Gal-GalNAc, in particular, was observed to be the most abundant structure in all samples, though its relative abundance ranged from 74.7 ± 0.6% in pork meat to 45.7 ± 3.3% in chicken meat ( Figure 3A, Table 1). This significant abundance corroborates previous observations of Gal-GalNAc as one of the core, and thus most abundant, O-glycan structures that can be found, particularly in mammals [24,25]. On the other hand, the relative abundance of NeuAc-Gal-GalNAc was observed to range from 10-25% instead, depending on the sample. The presence of two other distinct glycans was also observed, though their exact linkage could not be ascertained. These two structures, labelled instead through their chemical compositions as Hex-HexNAc (RT~15.5 min), and Hex(NeuAc)HexNAc (RT~18.8 min), respectively, was observed to be more abundant in the chicken meat samples than the other animal samples (Figures 2 and 3C,D). Furthermore, a quantitative comparison of the relative abundances of all four O-glycans was made across all three meat samples ( Figure 3A-D, Table 1). ANOVA test indicated that the relative abundances of Gal-GalNAc and Hex-HexNAc (by chemical composition) in chicken, pork, and beef meat samples are significantly different from each other ( Figure 3A,C). An interesting trend was observed in the case of sialylated O-glycans within the meat samples. In particular, chicken meat samples were observed to contain a significant amount of Hex(NeuAc)HexNAc (by chemical composition) which was otherwise undetected in the beef and pork samples ( Figure 3D). However, in the case of the other sialylated O-glycan NeuAc-Gal-GalNAc, chicken meat samples had the lowest relative abundance as compared to the other two samples ( Figure 3B, 11.3 ± 1.9% versus 25.3 ± 0.6% and 23.1 ± 3.6%, respectively, p < 0.05, p< 0.01). Indeed, the combined quantification of all sialylated glycans (Hex(NeuAc)HexNAc and NeuAc-Gal-GalNAc) yield no statistically significant difference between species ( Figure 3E). This highlights the importance of high resolution glycomics in characterising individual O-glycan structures for the proper distinction between meat samples of different animal origins.
N-Glycan Characterisation of Meat
We also performed a full characterisation of the N-glycan structures of the extracted proteins from the three meat samples (Figure 4). We released N-glycans using the enzyme PNGaseF and labelled them with the RFMS fluorescence tag before subjecting them to the FLR-LC-MS workflow. Interestingly, from the FLR chromatogram, which reflects the abundance of the fluorescently labelled N-glycans, we can observe distinct overall N-glycomic signatures between the samples. For instance, in the case of the chicken meat sample, an even distribution of peaks was observed (Figure 4A), whilst the distributions of peaks from the pork and beef meat samples were skewed towards higher retention times, which also correspond to higher neutral masses (Figure 4B,C). In particular, the overall N-glycome of the pork sample appeared to be less heterogeneous, with one major peak observed at around 20.8 min. N-glycan compositions were further confirmed with neutral mass and/or glucose unit (GU) and labelled in the representative FLR chromatogram (Figure 4). We subsequently quantified the relative abundance of each N-glycan structure between the meat samples (Table 2). In each sample, up to 17 different glycan structures could be identified, which is significantly higher than that reported from previous studies [13]. We note that other rare, low-abundant glycan structures may also be present in these samples, whose signal is masked by the relatively much more abundant glycan structures. The N-glycans of these samples were grouped based on their glycosylation attributes: fucosylation, sialylation, galactosylation, and the presence of high mannose (Figure 5). We observed that the pork meat samples contained the highest abundance of fucosylated N-glycan structures (Figure 5A, p < 0.0001 and p < 0.05, for comparison between pork and chicken or beef, respectively), followed by beef and chicken samples (Figure 5A, p < 0.001, for comparison between chicken and beef). Chicken meat samples contained the lowest relative abundances of sialylated and galactosylated N-glycans compared to pork and beef meat samples (sialylation: Figure 5B, p < 0.001 and p < 0.01, for comparison between chicken and pork or beef, respectively; galactosylation: Figure 5C, p < 0.001, for both comparisons between chicken and pork or beef), with no statistically significant difference between pork and beef meat samples. In contrast, the chicken meat sample had the highest amount of N-glycans with high mannose structure (Figure 5D, p < 0.001, for both comparisons between chicken and pork or beef), while no statistically significant difference was found between pork and beef samples.
Table 2 lists the relative abundances (%) of the individual N-glycan compositions in chicken, pork, and beef (Hex, hexose; HexNAc, N-acetylhexosamine; Fuc, fucose; NeuAc, N-acetylneuraminic acid; NeuGc, N-glycolylneuraminic acid; the number after each sugar gives the number of such residues in the structure; ±, standard error of the mean; ND, not detected).
The relative abundances of the two predominant sialic acids on N-glycans, Neu5Ac and Neu5Gc, were also characterised, considering the prominent role of sialic acids in multiple biological functions [26]. In particular, the uptake of Neu5Gc from red meat has been shown to trigger inflammatory responses and cancer development [27]. Our results show that pork samples contained a significantly higher relative abundance of Neu5Ac than chicken and beef samples, with no difference between chicken and beef samples (Figure 5E, p < 0.001, p < 0.01, for comparison between pork and chicken or beef, respectively). On the other hand, no Neu5Gc was detected in chicken, with 4.8 ± 0.3% and 34.4 ± 1.9% of Neu5Gc detected in pork and beef, respectively (Figure 5F). More than half of the total sialic acid content (Neu5Ac and Neu5Gc) found in beef was Neu5Gc, compared to only a small proportion detected in pork (Figure 5G, 58.2 ± 1.8% versus 6.9 ± 0.4%). The ratio of Neu5Gc to total sialic acid in all three meats agrees with previously published work that analysed free sialic acid using an HPLC method [27]. Our high-resolution glycomic workflow also allows the full characterisation of N-glycans with α-galactose (galactose-α-1,3-galactose). It is important to characterise the abundance of α-galactose in meat samples because it is implicated in alpha-gal syndrome (also known as red meat allergy), a potentially life-threatening allergy to red meat [28]. The highest relative abundance of α-galactose amongst the meat samples was detected in the beef sample, followed by a moderate amount in the pork sample and an undetectable amount in the chicken samples (Figure 5H, 35.8 ± 1.8%, 5.1 ± 0.8%, and 0%, respectively). This agrees with our understanding that α-galactose is found more abundantly in red meat than in white meat [29].
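The grouping of N-glycans into attributes such as fucosylation or Neu5Gc content can be computed directly from composition strings and relative abundances of the kind listed in Table 2; the sketch below sums the abundance of every structure containing a given monosaccharide, which is our simplified reading of the attribute definitions rather than the authors' exact rules:

```python
import re

def attribute_abundance(compositions, abundances, monosaccharide):
    """Sum relative abundances of compositions containing `monosaccharide`
    (e.g. 'Fuc' for fucosylation, 'NeuGc' for Neu5Gc-sialylation).

    Composition strings follow the form 'HexNAc(4)Hex(5)Fuc(1)NeuAc(2)'."""
    total = 0.0
    for comp, abundance in zip(compositions, abundances):
        counts = dict(re.findall(r"([A-Za-z]+)\((\d+)\)", comp))
        if int(counts.get(monosaccharide, 0)) > 0:
            total += abundance
    return total

# Example usage with columns taken from the composition table:
# fucosylation = attribute_abundance(comps, rel_abund, "Fuc")
# neu5gc       = attribute_abundance(comps, rel_abund, "NeuGc")
```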
PCA Analysis
Our characterisation of the N-linked and O-linked glycans of the three different meat samples shows distinct differences in their relative abundances, suggesting a unique overall glycomic profile for each sample that can be distinguished with the integrative glycomic workflow. In order to achieve a unified glycome analysis of both N-linked and O-linked glycans, the datasets were pooled together and subjected to principal component analysis (PCA). The dataset was transformed into two principal components, PC1 and PC2, which explained 57.4% and 25.8% of the variance, respectively. The reliability of this PCA is supported by the fact that more than 82% of the total variance is accounted for by the first two PCs. The score plot of PC1 and PC2 for each meat allowed visual discrimination of the different species (Figure 6). This analysis clearly demonstrates the well-defined glycomic characteristics of meat samples pertaining to different animal species, and the strength of the overall integrated glycomic approach in profiling meat samples.
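A minimal sketch of this step with scikit-learn is given below; whether the abundances were scaled before the decomposition is not stated, so the standardisation shown here is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def glycan_pca(abundance_matrix, n_components=2):
    """PCA on pooled O- and N-glycan relative abundances.

    abundance_matrix: rows = samples (replicates of chicken, pork, beef),
    columns = individual glycan structures."""
    scaled = StandardScaler().fit_transform(np.asarray(abundance_matrix, float))
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(scaled)           # input for the PC1/PC2 score plot
    return scores, pca.explained_variance_ratio_
```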
Discussion
Current glycomic approaches are widely applied as biomarker tools in different areas, particularly in the medical and biotechnology fields [30][31][32]. In this study, we have described in detail the extended use of an integrated glycomics approach to characterise meat samples in terms of their O-linked and N-linked glycan structures. The abundances of individual glycan types differ significantly between samples for both the O-linked and the N-linked glycans. Previous studies have investigated the O-linked glycans found in food, such as bovine whey protein products and mucins from salmon and chicken [33][34][35]. However, to the best of our knowledge, O-linked glycan characterisation has not previously been employed in meat profiling. The discovery of the main types of O-glycan structures found in meat samples reveals distinct molecular signatures for each species and provides an orthogonal glycomic measurement for meat differentiation. Similarly, the identification of a diverse set of N-glycan structures in each sample reflects the diversity of glycosylation mechanisms in each species, resulting in their distinct structural differences. Despite the variety of N-glycan structures that can be present, it is interesting that meat samples of each animal species possess a distinct subset of these structures. This suggests a high degree of specificity, especially in the glycoenzymes involved in the synthesis of these glycans [8].
It is worth noting that meat tissues were isolated from only one region of each animal (peroneus longus for the chicken samples, and extensor carpi radialis for the pork and beef samples). As glycosylation is context- and tissue-specific, we anticipate potential differences in the glycan profiles of different tissues belonging to the same animal [36,37]. By systematically comparing the differences between tissue sources, and between animals, future studies can further reveal the potential of glycomics in its versatile applicability to different types of food samples. In addition, this study has not characterised the glycan profile of all animal species. It has been observed that meat species adulteration can occur between meat samples of closely related species, such as horse meat and beef [1,38]. Thus, further studies investigating the glycome differences between meat samples derived from closely related species are warranted.
The meat industry presents other major challenges, such as fraudulent labelling of the geographical origin and production system (e.g., organic vs. non-organic) of meat samples. Amongst the many analytical techniques available to the meat industry, only a few are capable of authenticating geographical origin and production system. For instance, DNA-based methods such as polymerase chain reaction and genomics rely on a specific DNA sequence or taxonomic marker to differentiate the species of meat, but they are unable to detect fraudulent labels of geographical origin and production system. Since dietary patterns, lifestyle, and environmental changes are known to affect protein glycosylation in humans, animals are likely to undergo similar glycan changes. Such changes in the glycan profile could therefore be exploited for the authentication of geographical origin and production system. The use of glycomic techniques in meat species profiling presented in this paper serves as a proof of concept, and its use in the authentication of other meat product features should be investigated. Importantly, glycoproteomic approaches harnessing both proteomic and glycomic potentials could also be a promising tool to further the molecular characterisation of meat samples.
Considering the traction of cultured meat products (cells grown to generate meat-like tissue structures in the laboratory) as an alternative protein source for human consumption, the glycoprofiling workflow described in this study can also be used to establish the critical quality attributes (CQA) of cultured meat products [39][40][41]. Given the considerable risk of cell line contamination and product adulteration in this growing industry, such techniques can help to establish stringent quality control of cultured meat samples in the future [42,43]. With the advent of FDA-approved genetically modified pigs lacking α-galactose for human consumption, our glycomic techniques can also be used to monitor the controlled manipulation of cultured meat [44]. This includes modifying it with respect to unwanted glycans such as α-galactose and Neu5Gc for health and safety reasons.
While more studies are warranted to investigate how these glycomic signatures may differ based on breed genetic composition, the parts from which the meat samples were obtained, and other extrinsic factors (e.g., feed intake, growth conditions, regional differences), it is evident that O-linked and N-linked glycome profiling allows successful differentiation between meat samples and, as a proof of concept, paves the way for a new high-throughput and robust approach to quantifying meat adulteration. This may be particularly advantageous over other approaches, as N-glycan profiling is unaffected by the harsh processing of meat (heat-induced treatments) that can degrade DNA and affect the accuracy of genomic approaches [8]. Given its highly quantitative and efficient procedure, the adoption of such glycome profiling approaches provides a powerful alternative to traditional methods of meat identification and authentication. | 6,913.2 | 2022-06-30T00:00:00.000 | [ "Biology" ]
Use of a Virtual Reality Simulator for Tendon Repair Training: Randomized Controlled Trial
Background Virtual reality (VR) simulators have become widespread tools for training medical students and residents in medical schools. Students using VR simulators are provided with a 3D human model whose details they can observe using multiple senses, and they can participate in an environment that closely resembles reality. Objective The aim of this study was to promote a new approach consisting of a shared and independent study platform for medical orthopedic students, to compare traditional tendon repair training with VR simulation of tendon repair, and to evaluate future applications of VR simulation in the academic medical field. Methods In this study, 121 participants were randomly allocated to VR or control groups. The participants in the VR group studied the tendon repair technique via the VR simulator, while the control group followed traditional tendon repair teaching methods. The final assessment for the medical students involved performing tendon repair with the “Kessler tendon repair with 2 interrupted tendon repair knots” (KS) method and the “Bunnell tendon repair with figure 8 tendon repair” (BS) method on a synthetic model. The operative performance was evaluated using the global rating scale. Results Of the 121 participants, 117 finished the assessment and 4 were lost to follow-up. The overall performance (out of a total score of 35) of the VR group using the KS method and the BS method was significantly higher (P<.001) than that of the control group. Thus, participants who received VR simulator training had a significantly higher score on the global rating scale than those who received traditional tendon repair training (P<.001). Conclusions Our study shows that, compared with the traditional tendon repair method, using the VR simulator to learn tendon suturing resulted in significant improvements in the medical students’ time in motion, flow of operation, and knowledge of the procedure. Therefore, VR simulator development would most likely benefit medical education and clinical practice in the future. Trial Registration Chinese Clinical Trial Registry ChiCTR2100046648; http://www.chictr.org.cn/hvshowproject.aspx?id=90180
Overall, was the app/intervention effective? Response options: yes (all primary outcomes were significantly better in the intervention group vs control); partly (some primary outcomes were significantly better in the intervention group vs control); no (no statistically significant difference between control and intervention); potentially harmful (control was significantly better than the intervention in one or more outcomes); inconclusive (more research is needed).
Approx. percentage of users (starters) still using the app as recommended after 3 months.
1a-i) Mode of delivery in the title
Identify the mode of delivery. Preferably use "web-based" and/or "mobile" and/or "electronic game" in the title. Avoid ambiguous terms like "online", "virtual", "interactive". Use "Internet-based" only if the intervention includes non-web-based Internet components (e.g., email); use "computer-based" or "electronic" only if offline products are used. Use "virtual" only in the context of "virtual reality" (3-D worlds). Use "online" only in the context of "online support groups". Complement or substitute product names with broader terms for the class of products (such as "mobile" or "smart phone" instead of "iPhone"), especially if the application runs on different platforms.
Does your paper address subitem 1a-i? * Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: The title uses "virtual" only in the context of "virtual reality" (3-D worlds).
1a-ii) Non-web-based components or important co-interventions in title
Mention non-web-based components or important co-interventions in title, if any (e.g., "with telephone support").
Does your paper address subitem 1a-ii?
Copy and paste relevant sections from the manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: No non-web-based components or important co-interventions were used with the app.
subitem not at all important 1 2 3 4 5 essential
1a-iii) Primary condition or target group in the title
Mention primary condition or target group in the title, if any (e.g., "for children with Type I Diabetes"). Example: A Web-based and Mobile Intervention with Telephone Support for Children with Type I Diabetes: Randomized Controlled Trial
Does your paper address subitem 1a-iii? * Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: The title mentions surgical (tendon repair) training.
1b-i) Key features/functionalities/components of the intervention and comparator in the METHODS section of the ABSTRACT
Mention key features/functionalities/components of the intervention and comparator in the abstract. If possible, also mention theories and principles used for designing the site. Keep in mind the needs of systematic reviewers and indexers by including important synonyms. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it) 清除選取 subitem not at all important 1 2 3 4 5 essential Does your paper address subitem 1b-i? * Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The participants in the VR group studied the tendon repair technique via the VR simulator, while the control group followed traditional tendon suture teaching methods. The final assessment was to perform tendon repair with the "Kessler tendon repair with 2 interrupted tendon repair knots" (KS) method and the "Bunnell tendon repair with figure 8 tendon repair" (BS) method on a synthetic model.
1b-ii) Level of human involvement in the METHODS section of the ABSTRACT
Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention number and expertise of providers involved, if any). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it)
Does your paper address subitem 1b-ii?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: The VR simulator did not involve human assistance.
subitem not at all important 1 2 3 4 5 essential
1b-iii) Open vs. closed, web-based (self-assessment) vs. face-to-face assessments in the METHODS section of the ABSTRACT
Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic or a closed online user group (closed usergroup trial), and clarify if this was a purely web-based trial, or there were face-to-face components (as part of the intervention or for assessment). Clearly say if outcomes were self-assessed through questionnaires (as common in web-based trials). Note: In traditional offline trials, an open trial (open-label trial) is a type of clinical trial in which both the researchers and participants know which treatment is being administered. To avoid confusion, use "blinded" or "unblinded" to indicate the level of blinding instead of "open", as "open" in web-based trials usually refers to "open access" (i.e., participants can self-enrol). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it)
Does your paper address subitem 1b-iii?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The final assessment was to perform tendon repair with the "Kessler tendon repair with 2 interrupted tendon repair knots" (KS) method and the "Bunnell tendon repair with figure 8 tendon repair" (BS) method on a synthetic model.
1b-iv) RESULTS section in abstract must contain use data Report number of participants enrolled/assessed in each group, the use/uptake of the intervention (e.g., attrition/adherence metrics, use over time, number of logins etc.), in addition to primary/secondary outcomes. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it)
INTRODUCTION 2a) In INTRODUCTION: Scientific background and explanation of rationale
Does your paper address subitem 1b-iv?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The overall performance (a total score of 35) for the VR group using the "KS method" was significantly higher (p < 0.001) than that of the control group. Moreover, for the BS method, the VR group also had a significantly better result (p < 0.001) than the control group.
1b-v) CONCLUSIONS/DISCUSSION in abstract for negative trials
Conclusions/Discussions in abstract for negative trials: Discuss the primary outcome -if the trial is negative (primary outcome not changed), and the intervention was not used, discuss whether negative results are attributable to lack of uptake and discuss reasons. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it)
Does your paper address subitem 1b-v?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The use of the VR simulator to learn tendon suturing resulted in a significant improvement in the time in motion, flow of operation, and knowledge of procedure by medical students compared with the traditional tendon suture method.
subitem not at all important Does your paper address CONSORT subitem 2b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This study compared traditional tendon repair training and VR simulation of tendon repair and evaluated future applications of VR simulation in the academic medical field.
3b) Important changes to methods after trial commencement (such as eligibility criteria), with reasons Does your paper address CONSORT subitem 3a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This study is a parallel-design randomized controlled trial comparing VR and control groups. This study was approved by the ethics committee of the First Affiliated Hospital of Jinan University and registered in the Chinese Clinical Trial Registry (Reg No.: 2019-03-0262). Information was collected from all participants after obtaining written informed consent in accordance with the Declaration of Helsinki. All participants were required to complete the final assessment, which was performing tendon repair on synthetic models with 2 different knots: the "Kessler tendon suture with 2 interrupted tendon suture knots" and the "Bunnell tendon suture with figure 8 tendon suture" (KS and BS methods, respectively).
Does your paper address CONSORT subitem 3b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study All participants provided written, informed, oral independently-witnessed consent to participate in the research study. A random allocation sequence was generated using a random number table. A sequence was used to allocate groups of participants to the VR and control groups. For the examination, the students performed tendon repair on a synthetic model. All participants were randomly assigned to one of the two groups. Participants in the VR group (n = 61) learned the technique of tendon repair through the VR simulator method, whereas the control group (n = 60) used the traditional tendon repair teaching method. The examiners were well-trained surgeons and unaffiliated with the medical school; they evaluated and assigned a score to each final product immediately without knowing the allocation list in a nonbiased manner during pre-and post-intervention assessments. In order to ensure the rigor of the examination, we have included a short training session for the examiners. Training helps to clarify the examiner's role, required behavior, review the marking guidance, marking assignment to standardize the exam, and encourage the consistency of the examiner's marking behavior. At the end of the training, examiners also did a marking exercise to scrutinize examiners' marking behavior. During the examination, medical students were asked not to tell the examiner which group they were assigned to.
subitem not at all important 1 2 3 4 5 essential 4a) Eligibility criteria for participants 3b-i) Bug fixes, Downtimes, Content Changes Bug fixes, Downtimes, Content Changes: ehealth systems are often dynamic systems. A description of changes to methods therefore also includes important changes made on the intervention or comparator during the trial (e.g., major bug fixes or changes in the functionality or content) (5-iii) and other "unexpected events" that may have influenced study design such as staff changes, system failures/downtimes, etc. [2].
Does your paper address subitem 3b-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: There were no relevant issues concerning bug fixes, downtimes, or content changes.
Does your paper address CONSORT subitem 4a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Senior medical students were eligible participants in this study. They were required to complete the following fundamental courses before entering the randomized control trial: (1) human anatomy, (2) physiology, (3) biochemistry, (4) pathology, (5) pathophysiology, (6) diagnostics, (7) internal medicine, (8) orthopedics, (9) surgical probation, and (10) other professional basic clinical courses. This study excluded any participant who did not meet the above requirement. Written informed consent with a clearly stated study plan was given to all participants. The purpose of this trial was explained to the participants. After informed consent had been signed, we asked the medical students to perform tendon repair on synthetic simulations. A baseline score was given by an orthopedic specialist. Other baseline information, including gender, age, and grade point average, was collected from the medical school database.
subitem not at all important 1 2 3 4 5 essential subitem not at all important 1 2 3 4 5 essential 4a-i) Computer / Internet literacy Computer / Internet literacy is often an implicit "de facto" eligibility criterion -this should be explicitly clarified.
Does your paper address subitem 4a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: (no answer provided)
4a-ii) Open vs. closed, web-based vs. face-to-face assessments
Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic, and clarify if this was a purely web-based trial, or there were face-to-face components (as part of the intervention or for assessment), i.e., to what degree the study team got to know the participant. In online-only trials, clarify if participants were quasi-anonymous and whether having multiple identities was possible or whether technical or logistical measures (e.g., cookies, email confirmation, phone calls) were used to detect/prevent these.
subitem not at all important 1 2 3 4 5 essential Does your paper address subitem 4a-ii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The control group participants were required to participate in the full eight hours of lectures and a six-hour practical class in medical school for two weeks. The participants learned about traumatic orthopedic theory and the fundamentals of tendon repair during the lectures. They practiced tendon repair on the synthetic models under the professor's guidance. In the practice class, students were given a PowerPoint, which provided illustrations, photographs, and step-by-step instructions. They were instructed to review the training material for one hour.
The VR group participants were required to take the same course as the control group, except the guided PowerPoint review part. Moreover, they practiced the VR simulators (including the VR version and the PC version) for one hour in class. The medical students practiced under the guidance with detailed instructions. The VR simulator is focused on every participant's performance while performing tendon repair. The operation in the VR simulator is divided into practice and examination modes. Corresponding notes for each step during the practice mode were provided; however, no notes were provided for the examination mode (Appendix 1). The students were required to finish all the required learning in the practice mode before entering the examination mode for assessment. For the VR training section, while half of the students were practicing in the VR simulator, the rest of the students were practicing on the PC version in the training center. These students shifted the training modes after 30 minutes of VR training (see Table 1). All training was performed within the classes and both groups had exactly the same opportunity for practice time.
4a-iii) Information giving during recruitment
Information given during recruitment. Specify how participants were briefed for recruitment and in the informed consent procedures (e.g., publish the informed consent documentation as appendix, see also item X26), as this information may have an effect on user self-selection, user expectation and may also bias results.
4b) Settings and locations where the data were collected
Does your paper address subitem 4a-iii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study All participants provided written, informed, oral independently-witnessed consent to participate in the research study. A random allocation sequence was generated using a random number table. A sequence was used to allocate groups of participants to the VR and control groups. For the examination, the students performed tendon repair on a synthetic model. All participants were randomly assigned to one of the two groups. Participants in the VR group (n = 61) learned the technique of tendon repair through the VR simulator method, whereas the control group (n = 60) used the traditional tendon repair teaching method. The examiners were well-trained surgeons and unaffiliated with the medical school; they evaluated and assigned a score to each final product immediately without knowing the allocation list in a nonbiased manner during pre-and post-intervention assessments. In order to ensure the rigor of the examination, we have included a short training session for the examiners. Training helps to clarify the examiner's role, required behavior, review the marking guidance, marking assignment to standardize the exam, and encourage the consistency of the examiner's marking behavior. At the end of the training, examiners also did a marking exercise to scrutinize examiners' marking behavior. During the examination, medical students were asked not to tell the examiner which group they were assigned to ( Figure 1).
subitem not at all important 1 2 3 4 5 essential Does your paper address CONSORT subitem 4b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study All participants provided written, informed, oral independently-witnessed consent to participate in the research study. A random allocation sequence was generated using a random number table. A sequence was used to allocate groups of participants to the VR and control groups. For the examination, the students performed tendon repair on a synthetic model. All participants were randomly assigned to one of the two groups. Participants in the VR group (n = 61) learned the technique of tendon repair through the VR simulator method, whereas the control group (n = 60) used the traditional tendon repair teaching method. The examiners were well-trained surgeons and unaffiliated with the medical school; they evaluated and assigned a score to each final product immediately without knowing the allocation list in a nonbiased manner during pre-and post-intervention assessments. In order to ensure the rigor of the examination, we have included a short training session for the examiners. Training helps to clarify the examiner's role, required behavior, review the marking guidance, marking assignment to standardize the exam, and encourage the consistency of the examiner's marking behavior. At the end of the training, examiners also did a marking exercise to scrutinize examiners' marking behavior. During the examination, medical students were asked not to tell the examiner which group they were assigned to ( Figure 1).
4b-i) Report if outcomes were (self-)assessed through online questionnaires
Clearly report if outcomes were (self-)assessed through online questionnaires (as common in web-based trials) or otherwise.
subitem not at all important 1 2 3 4 5 essential Does your paper address subitem 4b-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Both the control and experimental groups participated in the research study for 14 days. The results were calculated using the global rating scale (GRS). Seven dimensions were incorporated into the tool. The GRS shows different aspects of operative performance. This technology was compared using the global rating scale according to several aspects: (1) respect for tissue, (2) time in motion, (3) instrument handling, (4) tendon repair skill, (5) flow of operation, (6) knowledge of procedure and final suture, and (7)
Does your paper address subitem 4b-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: The examiners were well-trained surgeons and unaffiliated with the medical school.
5) The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
5-i) Mention names, credential, affiliations of the developers, sponsors, and owners
Mention names, credential, affiliations of the developers, sponsors, and owners [6] (if authors/evaluators are owners or developer of the software, this needs to be declared in a "Conflict of interest" section or mentioned elsewhere in the manuscript).
Does your paper address subitem 5-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
5-ii) Describe the history/development process
Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: No history/development process of the application or previous formative evaluations is reported.
5-iii) Revisions and updating
Revisions and updating. Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial.
Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).
Does your paper address subitem 5-iii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This is the first version of the application.
5-iv) Quality assurance methods
Provide information on quality assurance methods to ensure accuracy and quality of information provided [1], if applicable.
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
5-v)
Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used. Replicability (i.e., other researchers should in principle be able to replicate the study) is a hallmark of scientific reporting.
Does your paper address subitem 5-v?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Data were analyzed using the SPSS 23.0 (SPSS, Chicago, Illinois, USA) software package [35]. The baseline information, including age and GPA, was analyzed using the independent T test for parametric data [36]. Differences in the objective and semi-objective measurements between the two groups were analyzed using the Mann-Whitney analysis for nonparametric data [37]. The level of agreement between the semi-objective assessments made by the two experts was estimated by Cohen's K coefficient. P < 0.05 was considered significant [38].
5-vi) Digital preservation
Digital preservation: Provide the URL of the application, but as the intervention is likely to change or disappear over the course of the years; also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages which are accessible without login.
Does your paper address subitem 5-vi?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study http://www.ilab-x.com/details?id=2934&isView=true
5-vii) Access
Access: Describe how participants accessed the application, in what setting/context, if they had to pay (or were paid) or not, whether they had to be a member of specific group. If known, describe how participants obtained "access to the platform and Internet" [1]. To ensure access for editors/reviewers/readers, consider to provide a "backdoor" login account or demo mode for reviewers/readers to explore the application (also important for archiving purposes, see vi).
Does your paper address subitem 5-vii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
Does your paper address subitem 5-viii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Both versions were open to the students for practice according to their needs. In addition, the practical study section of tendon repair is performed using the VR simulator with individual steps for each procedure (comprising 7 steps in total). The website provides a section in which students and teachers can communicate and share ideas with each other. This program allows the students to learn in-depth while enhancing their learning through the communication section of the VR simulator. This online discussion can overcome the barriers of time and distance.
5-ix) Describe use parameters
Describe use parameters (e.g., intended "doses" and optimal timing for use). Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, heaviness of use, if any, or was the intervention used ad libitum.
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Both the control and experimental groups participated in the research study for 14 days.
5-x) Clarify the level of human involvement
Clarify the level of human involvement (care providers or health professionals, also technical assistance) in the e-intervention or as co-intervention (detail number and expertise of professionals involved, if any, as well as "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered". It may be necessary to distinguish between the level of human involvement required for the trial, and the level of human involvement required for a routine application outside of a RCT setting (discuss under item 21 -generalizability).
Does your paper address subitem 5-x?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This program allows the students to learn in-depth while enhancing their learning through the communication section of the VR simulator.
subitem not at all important 1 2 3 4 5 essential subitem not at all important 1 2 3 4 5 essential 5-xi) Report any prompts/reminders used Report any prompts/reminders used: Clarify if there were prompts (letters, emails, phone calls, SMS) to use the application, what triggered them, frequency etc. It may be necessary to distinguish between the level of prompts/reminders required for the trial, and the level of prompts/reminders for a routine application outside of a RCT setting (discuss under item 21 -generalizability).
Does your paper address subitem 5-xi? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: No prompts (letters, emails, phone calls, SMS) to use the application were used.
5-xii) Describe any co-interventions (incl. training/support)
Describe any co-interventions (incl. training/support): Clearly state any interventions that are provided in addition to the targeted eHealth intervention, as ehealth intervention may not be designed as stand-alone intervention. This includes training sessions and support [1]. It may be necessary to distinguish between the level of training required for the trial, and the level of training for a routine application outside of a RCT setting (discuss under item 21 -generalizability.
Does your paper address subitem 5-xii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The VR group participants were required to take the same course as the control group, except the guided PowerPoint review part. Moreover, they practiced the VR simulators (including the VR version and the PC version) for one hour in class.
6a) Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed subitem not at all important 1 2 3 4 5 essential Does your paper address CONSORT subitem 6a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This study was approved by the ethics committee of the First Affiliated Hospital of Jinan University and registered in the Chinese Clinical Trial Registry (Reg No.: 2019-03-0262). Information was collected from all participants after obtaining written informed consent in accordance with the Declaration of Helsinki. All participants were required to complete the final assessment, which was performing tendon repair on synthetic models with 2 different knots: the "Kessler tendon suture with 2 interrupted tendon suture knots" and the "Bunnell tendon suture with figure 8 tendon suture" (KS and BS methods, respectively). 6b) Any changes to trial outcomes after the trial commenced, with reasons 6a-ii) Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored (logins, logfile analysis, etc.). Use/adoption metrics are important process outcomes that should be reported in any ehealth trial.
Does your paper address subitem 6a-ii?
Copy and paste relevant sections from manuscript text No usage was measured in this trial.
6a-iii) Describe whether, how, and when qualitative feedback from participants was obtained
Describe whether, how, and when qualitative feedback from participants was obtained (e.g., through emails, feedback forms, interviews, focus groups).
Does your paper address subitem 6a-iii?
Copy and paste relevant sections from manuscript text: Qualitative feedback from participants was not obtained.
7a) How sample size was determined NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed
subitem not at all important 1 2 3 4 5 essential
7b) When applicable, explanation of any interim analyses and stopping guidelines
Does your paper address CONSORT subitem 6b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: No changes to trial outcomes were made after the trial commenced.
7a-i) Describe whether and how expected attrition was taken into account when calculating the sample size
Describe whether and how expected attrition was taken into account when calculating the sample size.
Does your paper address subitem 7a-i?
Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Participants in the VR group (n = 61) learned the technique of tendon repair through the VR simulator method, whereas the control group (n = 60) used the traditional tendon repair teaching method.
8a) Method used to generate the random allocation sequence NPT: When applicable, how care providers were allocated to each trial group
8b) Type of randomisation; details of any restriction (such as blocking and block size)
9) Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
Does your paper address CONSORT subitem 7b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study: No interim analyses or stopping guidelines were used.
Does your paper address CONSORT subitem 8a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study A random allocation sequence was generated using a random number table. A sequence was used to allocate groups of participants to the VR and control groups.
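To illustrate the allocation step described in this answer, a short sketch of a two-arm allocation sequence follows. The authors generated their sequence from a random number table; the computational approach, the fixed 61/60 split, and the seed below are illustrative assumptions rather than the trial's actual procedure.

```python
# Illustrative sketch only: allocation of 121 participants into the VR and
# control arms. The trial used a random number table; this code mimics the
# reported group sizes (61 VR, 60 control) rather than the exact method.
import random

def allocation_sequence(n_vr, n_control, seed=None):
    """Return a shuffled list of group labels for a two-arm parallel trial."""
    rng = random.Random(seed)
    labels = ["VR"] * n_vr + ["control"] * n_control
    rng.shuffle(labels)
    return labels

seq = allocation_sequence(61, 60, seed=2019)          # seed chosen arbitrarily
print(seq[:10], seq.count("VR"), seq.count("control"))
```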
Does your paper address CONSORT subitem 8b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This study is a parallel-design randomized controlled trial comparing VR and control groups.
10) Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions 11a) If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how NPT: Whether or not administering co-interventions were blinded to group assignment subitem not at all important 1 2 3 4 5 essential Does your paper address CONSORT subitem 9? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study A random allocation sequence was generated using a random number table. A sequence was used to allocate groups of participants to the VR and control groups.
Does your paper address CONSORT subitem 10? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study A random allocation sequence was generated using a random number table. A sequence was used to allocate groups of participants to the VR and control groups.
11a-i) Specify who was blinded, and who wasn't
Specify who was blinded, and who wasn't. Usually, in web-based trials it is not possible to blind the participants [1, 3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis or those administering co-interventions (if any).
subitem not at all important 1 2 3 4 5 essential 11b) If relevant, description of the similarity of interventions (this item is usually not relevant for ehealth trials as it refers to similarity of a placebo or sham intervention to a active medication/intervention) Does your paper address subitem 11a-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study At the end of the training, examiners also did a marking exercise to scrutinize examiners' marking behavior. During the examination, medical students were asked not to tell the examiner which group they were assigned to (Figure 1).
11a-ii) Discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator" Informed consent procedures (4a-ii) can create biases and certain expectations -discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator".
Does your paper address subitem 11a-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study All participants provided written, informed, oral independently-witnessed consent to participate in the research study.
12a) Statistical methods used to compare groups for primary and secondary outcomes NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed Does your paper address CONSORT subitem 11b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The control group participants were required to participate in the full eight hours of lectures and a six-hour practical class in medical school for two weeks. The participants learned about traumatic orthopedic theory and the fundamentals of tendon repair during the lectures. They practiced tendon repair on the synthetic models under the professor's guidance. In the practice class, students were given a PowerPoint, which provided illustrations, photographs, and step-by-step instructions. They were instructed to review the training material for one hour.
The VR group participants were required to take the same course as the control group, except the guided PowerPoint review part. Moreover, they practiced the VR simulators (including the VR version and the PC version) for one hour in class. The medical students practiced under the guidance with detailed instructions. The VR simulator is focused on every participant's performance while performing tendon repair. The operation in the VR simulator is divided into practice and examination modes. Corresponding notes for each step during the practice mode were provided; however, no notes were provided for the examination mode (Appendix 1). The students were required to finish all the required learning in the practice mode before entering the examination mode for assessment. For the VR training section, while half of the students were practicing in the VR simulator, the rest of the students were practicing on the PC version in the training center. These students shifted the training modes after 30 minutes of VR training (see Table 1). All training was performed within the classes and both groups had exactly the same opportunity for practice time.
subitem not at all important 1 2 3 4 5 essential 12b) Methods for additional analyses, such as subgroup analyses and adjusted analyses Does your paper address CONSORT subitem 12a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Data were analyzed using the SPSS 23.0 (SPSS, Chicago, Illinois, USA) software package [35]. The baseline information, including age and GPA, was analyzed using the independent T test for parametric data [36]. Differences in the objective and semi-objective measurements between the two groups were analyzed using the Mann-Whitney analysis for nonparametric data [37]. The level of agreement between the semi-objective assessments made by the two experts was estimated by Cohen's K coefficient. P < 0.05 was considered significant [38].
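To make the analysis plan quoted in this answer more concrete, the sketch below mirrors it using open-source equivalents of the named SPSS procedures (independent t test, Mann-Whitney U test, Cohen's kappa). The data arrays are randomly generated placeholders, and only the group sizes are taken from the trial report; none of this is the authors' actual code or data.

```python
# Sketch of the reported analyses with SciPy/scikit-learn stand-ins for the
# SPSS procedures named above. All data below are placeholders.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
vr_age, ctrl_age = rng.normal(22, 1, 61), rng.normal(22, 1, 56)          # baseline, parametric
vr_grs, ctrl_grs = rng.integers(20, 36, 61), rng.integers(10, 30, 56)    # GRS totals, non-parametric
rater1 = rng.integers(1, 6, 117)                                         # expert 1 scores (placeholder)
rater2 = rater1.copy()                                                   # expert 2 scores (placeholder)

t_stat, p_age = stats.ttest_ind(vr_age, ctrl_age)                        # independent t test
u_stat, p_grs = stats.mannwhitneyu(vr_grs, ctrl_grs, alternative="two-sided")  # Mann-Whitney U
kappa = cohen_kappa_score(rater1, rater2)                                # inter-rater agreement
print(p_age, p_grs, kappa)                                               # p < 0.05 treated as significant
```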
12a-i) Imputation techniques to deal with attrition / missing values
Imputation techniques to deal with attrition / missing values: Not all participants will use the intervention/comparator as intended and attrition is typically high in ehealth trials. Specify how participants who did not use the application or dropped out from the trial were treated in the statistical analysis (a complete case analysis is strongly discouraged, and simple imputation techniques such as LOCF may also be problematic [4]).
Does your paper address subitem 12a-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Imputation techniques to deal with attrition / missing values:
X26) REB/IRB Approval and Ethical Considerations [recommended as
subheading under "Methods"] (not a CONSORT item) subitem not at all important 1 2 3 4 5 essential Does your paper address CONSORT subitem 12b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The level of agreement between the semi-objective assessments made by the two experts was estimated by Cohen's K coefficient. P < 0.05 was considered significant [38].
Does your paper address subitem X26-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
x26-ii) Outline informed consent procedures
Outline informed consent procedures e.g., if consent was obtained offline or online (how? Checkbox, etc.?), and what information was provided (see 4a-ii). See [6] for some items to be included in informed consent documents.
Does your paper address subitem X26-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study All participants provided written and oral informed consent, independently witnessed, to participate in the research study.
X26-iii) Safety and security procedures
Safety and security procedures, incl. privacy considerations, and any steps taken to reduce the likelihood or detection of harm (e.g., education and training, availability of a hotline)
Does your paper address subitem X26-iii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study All personal information was blocked from the trial registry.
13a) For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center 13b) For each group, losses and exclusions after randomisation, together with reasons Does your paper address CONSORT subitem 13a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Between August 1, 2019, and August 12, 2020, 121 potential participants were assessed for study participation in the Medical School of Jinan University. Four participants from the control group dropped out of the program for personal reasons. All participants were required to undergo a final assessment on synthetic models, and the overall score sheet was used to calculate the results. This study analyzed all participants using the assessment global rating scale described above. The global rating scale baseline is shown for assessing tendon repair differences in the control and VR groups (Table 2). A comparison of participants in both groups according to age, gender, grade point average (GPA), and pretest evaluation revealed no educationally relevant or significant differences. Follow-up ended on September 30, 2020.
Does your paper address CONSORT subitem 13b? (NOTE: Preferably, this is shown in a CONSORT flow diagram) * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Between August 1, 2019, and August 12, 2020, 121 potential participants were assessed for study participation in the Medical School of Jinan University. Four participants from the control group dropped out of the program for personal reasons. All participants were required to undergo a final assessment on synthetic models, and the overall score sheet was used to calculate the results. This study analyzed all participants using the assessment global rating scale described above. The global rating scale baseline is shown for assessing tendon repair differences in the control and VR groups (Table 2). A comparison of participants in both groups according to age, gender, grade point average (GPA), and pretest evaluation revealed no educationally relevant or significant differences. Follow-up ended on September 30, 2020.
subitem not at all important 1 2 3 4 5 essential 14a) Dates defining the periods of recruitment and follow-up
13b-i) Attrition diagram
Strongly recommended: An attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) or other figures or tables demonstrating usage/dose/engagement.
Does your paper address subitem 13b-i?
Copy and paste relevant sections from the manuscript or cite the figure number if applicable (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The posttraining global rating scale was used to assess tendon repair in the two groups. The table shows a comparison between the KS and BS methods. With respect to tissue, no significant difference was found using the KS method (p = 0.215). The control and VR groups were not significantly different using the BS method (p = 0.209) (Table 3).
Does your paper address CONSORT subitem 14a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Between August 1, 2019, and August 12, 2020, 121 potential participants were assessed for study participation in the Medical School of Jinan University. Four participants from the control group dropped out of the program for personal reasons. All participants were required to undergo a final assessment on synthetic models, and the overall score sheet was used to calculate the results. This study analyzed all participants using the assessment global rating scale described above. The global rating scale baseline is shown for assessing tendon repair differences in the control and VR groups (Table 2). A comparison of participants in both groups according to age, gender, grade point average (GPA), and pretest evaluation revealed no educationally relevant or significant differences. Follow-up ended on September 30, 2020.
subitem not at all important 1 2 3 4 5 essential 14b) Why the trial ended or was stopped (early)
14a-i) Indicate if critical "secular events" fell into the study period
Indicate if critical "secular events" fell into the study period, e.g., significant changes in Internet resources available or "changes in computer hardware or Internet delivery resources"
Does your paper address subitem 14a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study No significant changes in Internet resources available or "changes in computer hardware or Internet delivery resources"
Does your paper address CONSORT subitem 14b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Between August 1, 2019, and August 12, 2020, 121 potential participants were assessed for study participation in the Medical School of Jinan University. Four participants from the control group dropped out of the program for personal reasons. All participants were required to undergo a final assessment on synthetic models, and the overall score sheet was used to calculate the results. This study analyzed all participants using the assessment global rating scale described above. The global rating scale baseline is shown for assessing tendon repair differences in the control and VR groups (Table 2). A comparison of participants in both groups according to age, gender, grade point average (GPA), and pretest evaluation revealed no educationally relevant or significant differences. Follow-up ended on September 30, 2020.
Does your paper address CONSORT subitem 15? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Kessler suture with 2 interrupted suture knots: 8.00 (7-9) vs 8.00 (7-9), p = 0.132; Bunnell suture with figure-eight suture knots: 8.00 (7-9) vs 8.00 (7-9), p = 0.253. Data are number of medical students (%), median (IQR), or mean (SD).
15-i) Report demographics associated with digital divide issues
In ehealth trials it is particularly important to report demographics associated with digital divide issues, such as age, education, gender, social-economic status, computer/Internet/ehealth literacy of the participants, if known.
Does your paper address subitem 15-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study 8.00 (7-9), 0.132; Bunnell suture with figure-eight suture knots: 8.00 (7-9), 8.00 (7-9), 0.253. Data are number of medical students (%), median (IQR), or mean (SD).
16-i) Report multiple "denominators" and provide definitions
Report multiple "denominators" and provide definitions: Report N's (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.
subitem not at all important 1 2 3 4 5 essential 17a) For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval) Does your paper address subitem 16-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study No; multiple "denominators" and their definitions were not reported.
16-ii) Primary analysis should be intent-to-treat
Primary analysis should be intent-to-treat, secondary analyses could include comparing only "users", with the appropriate caveats that this is no longer a randomized sample (see 18-i).
Does your paper address subitem 16-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The overall performance under the KS method was significant (p < 0.001) after comparing the control and VR groups. Regarding overall performance, the VR group performed better than the control group. The VR group under the BS method also showed a significantly better result than the control group (p < 0.01).
Does your paper address CONSORT subitem 17a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
17a-i) Presentation of process outcomes such as metrics of use and intensity of use In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This does not refer only to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description of how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a).
Does your paper address subitem 17a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The overall performance under the KS method was significant (p < 0.001) after comparing the control and VR groups. Regarding overall performance, the VR group performed better than the control group. The VR group under the BS method also showed a significantly better result than the control group (p < 0.01).
17b) For binary outcomes, presentation of both absolute and relative effect sizes is recommended
Does your paper address CONSORT subitem 18? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study No other analyses, such as subgroup analyses or adjusted analyses (whether pre-specified or exploratory), were performed.
18-i) Subgroup analysis of comparing only users
A subgroup analysis of comparing only users is not uncommon in ehealth trials, but if done, it must be stressed that this is a self-selected sample and no longer an unbiased sample from a randomized trial (see 16-iii).
19) All important harms or unintended effects in each group
(for specific guidance see CONSORT for harms) subitem not at all important 1 2 3 4 5 essential Does your paper address subitem 18-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
Does your paper address CONSORT subitem 19? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study No important harms or unintended effects occurred in either group.
19-i) Include privacy breaches, technical problems
Include privacy breaches, technical problems. This does not only include physical "harm" to participants, but also incidents such as perceived or real privacy breaches [1], technical problems, and other unexpected/unintended incidents. "Unintended effects" also includes unintended positive effects [2].
Does your paper address subitem 19-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This does not only include physical "harm" to participants, but also incidents such as perceived or real privacy breaches.
Does your paper address subitem 19-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study GRS is a semi-objective scale.
subitem not at all important 1 2 3 4 5 essential 22-i) Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use) Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use).
Does your paper address subitem 22-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study In the field of orthopedics, researchers mainly focus on sophisticated surgical procedures, for example, (thoracic) pedicle screw placement and insertion (
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study To our knowledge, this is the first study to adopt VR simulation for tendon repair training. It has been reported that the challenge of adopting VR simulation into regular curricula was due to the limited efficacy of VR as a learning tool [48]. To clarify the effectiveness, we demonstrated that the VR simulator was an effective tool in the acquisition of tendon repair skills in our blinded randomized trial. Modern VR simulations have a common disadvantage, which is the high cost. Clarke et al. [48] reported that individual simulators cost up to six-figure sums. Our platform removed the cost for the students and was open to the public to maximize cost-effectiveness. To consider the VR simulator an educational tool, the results have to show a significant difference together with positive feedback. If the VR group participants had not improved in any aspect of surgical performance, the VR simulator would not be considered suitable as part of regular training.
20-i) Typical limitations in ehealth trials
Typical limitations in ehealth trials: Participants in ehealth trials are rarely blinded. Ehealth trials often look at a multiplicity of outcomes, increasing risk for a Type I error. Discuss biases due to non-use of the intervention/usability issues, biases through informed consent procedures, unexpected events.
Does your paper address subitem 20-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The follow-up period can only reflect the short-term effect of the VR simulator, which was a limitation of this study. The long-term effect on orthopedic specialists who practice on VR simulators could take years to evaluate.
21-i) Generalizability to other populations
Generalizability to other populations: In particular, discuss generalizability to a general Internet population, outside of a RCT setting, and general patient population, including applicability of the study results for other organizations
Does your paper address subitem 21-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The VR simulator can provide a realistic surgical scenario, thus allowing students to train for a particular skill continuously or to master any unfamiliar procedures. According to the results, students learning from the VR simulator had significantly better scores than those learning from the traditional method with respect to the tendon repair technique. This finding may indicate that students using a VR simulator will be able to follow the whole operation more carefully and master the knowledge of the procedure in the future.
21-ii) Discuss if there were elements in the RCT that would be different in a routine application setting
Discuss if there were elements in the RCT that would be different in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of a RCT setting.
OTHER INFORMATION
23) Registration number and name of trial registry 24) Where the full trial protocol can be accessed, if available Does your paper address subitem 21-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study During the coronavirus disease 2019 (COVID-19) pandemic, the use of technology in medical education has become a popular topic. Tendon repair is a procedure that requires senior professional surgeons; therefore, medical students and junior doctors may not get enough practice to be able to perform the suturing independently. A possible solution is for junior doctors to practice on the VR simulator, thereby becoming more familiar with the procedure and more confident when performing it. The VR simulator can maximize a medical student's efficiency in mastering this technique. It has been shown that practicing on the VR simulator in a simulation lab, rather than in the operating room, is a better practice method than traditional classroom study in terms of flexibility of location [20]. Medical students or residents can perform tendon repair on the VR simulator before performing a formal operation. Using the VR simulator serves the purpose of shortening the operating time, reducing operation errors, and alleviating patients' postprocedure pain [49]. However, the expense of textbooks or teaching assistance in the traditional method bears no straightforward comparison with the investment in equipment for VR simulators.
Does your paper address CONSORT subitem 23? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
Does your paper address CONSORT subitem 24? * Cite a Multimedia Appendix, other reference, or copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study The full trial protocol can be accessed at http://www.chictr.org.cn/index.aspx
Does your paper address CONSORT subitem 25? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study There were no sources of funding or other support.
X27-i) State the relation of the study team towards the system being evaluated
In addition to the usual declaration of interests (financial or otherwise), also state the relation of the study team towards the system being evaluated, i.e., state if the authors/evaluators are distinct from or identical with the developers/sponsors of the intervention.
About the CONSORT EHEALTH checklist
yes, major changes / yes, minor changes / no
Does your paper address subitem X27-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study No conflicts of interest were present.
As a result of using this checklist, did you make changes in your manuscript? * What were the most important changes you made as a result of using this checklist?
The title is the most important change I made.
How much time did you spend on going through the checklist INCLUDING making changes in your manuscript * I spent 3 hours to finish.
STOP - Save this form as PDF before you click submit. To generate a record that you filled in this form, we recommend generating a PDF of this page (on a Mac, simply select "print" and then select "print as PDF") before you submit it.
When you submit your (revised) paper to JMIR, please upload the PDF as supplementary file.
Don't worry if some text in the textboxes is cut off, as we still have the complete information in our database. Thank you!
Final step: Click submit !
Click submit so we have your answers in our database! As a result of using this checklist, do you think your manuscript has improved? * Would you like to become involved in the CONSORT EHEALTH group?
This would involve, for example, participating in a workshop and writing an "Explanation and Elaboration" document | 16,377.2 | 2021-01-30T00:00:00.000 | ["Medicine", "Engineering"] |
Resurgent Supersymmetry and String Theory
We study a realization of accidental supersymmetry in type IIB string theory as a proof-of-principle of the mechanism and as a prototype of the strong sector present in resurgent supersymmetry, a warped UV-completion of natural supersymmetry. We first introduce the mechanism of accidental supersymmetry as a way of producing a supersymmetric spectrum in the IR of a quasi-conformal field theory, and then go on to discuss the utility of the mechanism in the specific BSM model of resurgent supersymmetry. The realization of accidental SUSY that we study is IIB string theory on an orbifold of the Klebanov-Strassler solution.
Introduction
The strongly coupled regimes of gauge theories are home to many diverse phenomena in quantum field theory which are often missed in perturbative studies of those theories, such as confinement and the growth of extra dimensions. Strong coupling has been used as a tool for realizing several mechanisms in field theory, and such lines of inquiry have suggested new solutions to the hierarchy problem [1][2][3]. However, the recent discovery of a SM-like Higgs boson [4,5], together with the absence of any discoveries of new physics to stabilize the electroweak scale, motivates even more strongly the need to test the postulate of naturalness very thoroughly. Recently, there have been many models which add a minimal module of new physics capable of stabilizing the "little hierarchy" from the electroweak scale up to a new physics scale above the reach of the LHC. The supersymmetric version of this resolution of the "little hierarchy problem" has been dubbed "more minimal" or "natural" SUSY [6][7][8][9], and LHC search strategies capable of probing such a scenario have been studied extensively [10][11][12][13][14][15][16][17][18][19].
However, with such a low cutoff, such models very rapidly require a UV completion in order to assess their viability. Supersymmetric extensions are certainly possible, but can themselves come with UV tuning issues [20], motivating one to consider alternative UV completions. It seems only natural to consider UV completions of natural SUSY which themselves involve the ingredients of the mechanisms that solve the big hierarchy problem. Strong coupling is such an ingredient, and indeed, using it to solve the little hierarchy problem has been proposed [21] and studied in the context of the Randall-Sundrum (RS) framework [22] through the use of the AdS/CFT correspondence [23]. We will refer to this framework as resurgent supersymmetry.
Resurgent supersymmetry builds off of the idea that the Higgs boson could be a light composite of a nonsupersymmetric quasi-conformal field theory [24]. In a general QFT with a supersymmetric matter content but no supersymmetry, the flow of scalar masses to small values can sometimes lead to a supersymmetric theory in the IR, despite the fact that the flow there was nonsupersymmetric. We will refer to the mechanism at work in such a construction as accidental supersymmetry, defined generally as a feature of models in which supersymmetry is an accidental symmetry of the QFT in the IR. We emphasize that accidental SUSY is a property that can be possessed by a general quantum field theory, and is not specific to BSM model building. Although accidental SUSY itself is a broad-reaching subject with many possible uses, what we concern ourselves with primarily is its utility in BSM physics in the model of resurgent SUSY. It was argued in [21] that an accidentally supersymmetric strong sector could be utilized in BSM model building to keep a select few superpartners light, reproducing the bottom-up minimal natural SUSY framework [8].
However, by virtue of being a model in RS, the story of resurgent supersymmetry is necessarily not UV-complete, because higher-dimensional operators must be present in the 5d Lagrangian in order to reproduce the proper low-energy, 4d physics; the physics responsible for the UV completion must therefore appear not far above the scales of interest. This is in fact true of almost every model in a warped 5d setting, and so it is paramount to ascertain whether UV completions of any mechanism in RS even exist, resurgent supersymmetry being no exception. Beyond the comfort of having a UV completion in hand, studying even a part of the space of UV completions might offer insight into generic non-minimal low-energy phenomenology. In this paper, we describe details of the construction of an accidentally supersymmetric strongly coupled sector in IIB string theory, and offer a specific string-theoretic model which serves as a prototype of the strongly coupled sector in resurgent SUSY.
The outline of the paper is as follows: in section 2 of this work, we will construct the mechanism of accidental supersymmetry in a completely general setting, and then in section 3, move on to discuss details of an accidental supersymmetric sector suitable for the story of resurgent supersymmetry. Readers concerned with the details of field-theoretic model building may opt to read through section 2, whereas readers interested in BSM physics and its appearance in string theory may elect to instead begin in section 3.
In section 4, we shift our attention from details of accidental and resurgent SUSY in field theory to details of the AdS-dual string theory. We discuss necessary ingredients for the construction of accidentally supersymmetric models in IIB string theory, culminating in a proof-of-principle example of the mechanism; the model is IIB string theory on a Z 2 orbifold of the Klebanov-Strassler solution, also described as a warped product of R 1,3 with a deformed complex cone over the zeroth Hirzebruch surface F 0 . We outline the necessary computations in section 5, and expound upon relevant details as well as some background material in the appendix.
Because this model serves as a prototype of the strongly coupled sector in resurgent supersymmetry, this model also serves as evidence meriting further study of the mechanism of accidental SUSY in the context of RS. We do not attempt to make our stringy model fully realistic because of the great technical challenge in realizing precisely the MSSM in string theory (see, e.g., [25] and references therein), but merely attempt to identify features of our model which can act as proxies for a fully realistic construction. Although we choose to focus on type IIB string theory as our UV-complete framework, it would be interesting to pursue realizations of accidental SUSY in other UV-complete frameworks in the future.
Accidental SUSY
In this section, we describe the mechanism of accidental supersymmetry in generality, before moving on to discuss its role in the story of resurgent supersymmetry. Before proceeding, however, we clarify some notation. We work in a d-dimensional quasi-CFT, viewed as an effective field theory valid below some scale. A scalar primary operator in the quasi-CFT which is a singlet under all global symmetry groups (including U (1)s and discrete symmetries) is known as a global singlet operator (GSO). If the operator has scaling dimension ∆ < d, it is said to be a global singlet relevant (primary) operator (GSRO), following the notations of [24]. Likewise, an operator with scaling dimension ∆ = d is a global singlet marginal (primary) operator (GSMO).
Throughout this work, in order for us to maintain computational control of the theory, we need to be in a large CFT-color N limit, or more generally, a regime in which conformal perturbation theory is valid.
We will be classifying primary operators to leading order in 1/N (or more generally, the AdS perturbation expansion), but use the terms GSRO and GSMO to refer to the leading-order scaling dimensions. If the O(1/N) corrections to scaling dimensions make a marginal operator relevant or irrelevant, we say that the operator is 1/N-marginally relevant or 1/N-marginally irrelevant. Accidental supersymmetry describes field theories in which supersymmetry is an accidental symmetry in the IR of an RG flow. Accidental SUSY can come in a number of different varieties; it can be exact or approximate, and weak or strong. We now expound upon this terminology in more detail.
Exact strong accidental supersymmetry is a feature possessed by quantum field theories which have a nonsupersymmetric flow that ends on a supersymmetric fixed point, as illustrated on the left in figure 1. The flow into the UV of models of strong accidental SUSY could be either supersymmetric or nonsupersymmetric, as described in [21]. If SUSY is broken in the UV in a way that respects the global symmetries of the theory, then at some high scale Λ, but still below the scale of SUSY-breaking, one can parameterize the effects of SUSY-breaking by adding SUSY-breaking spurions with arbitrary coefficients ε_i multiplying every GSO to the Lagrangian, schematically δL(Λ) = Σ_i ε_i(Λ) O_i. As we run into the IR down to a scale µ, the Wilson coefficients run as ε_i(µ) ≃ ε_i(Λ) (µ/Λ)^(∆_i − d) + · · ·, where the ellipses refer to higher-order corrections in ε. We see that if all ∆_i > d, then as we continue flowing into the IR, SUSY is parametrically restored since our SUSY-breaking terms flow to 0. Therefore, to say that a model possesses strong accidental SUSY is equivalent to saying that the QFT does not possess GSROs or GSMOs.
Figure 1. Schematic IR RG flows associated with strong exact (left) and approximate (right) accidental supersymmetry. We do not impose any requirements on the UV. In the exact flow, the flow itself is nonsupersymmetric (despite having a supersymmetric matter content), but ends on a supersymmetric fixed point. In the approximate flow, we flow towards a supersymmetric fixed point, but exit the quasi-conformal regime before reaching the fixed point, leaving an approximately supersymmetric field theory of composites below Λ_comp.
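As a rough numerical illustration of this suppression (the numbers here are chosen only for orientation and are not taken from the references): an irrelevant SUSY-breaking deformation with ∆ − d = 1 is suppressed between the scales Λ and Λ_comp by
\[
\frac{\varepsilon(\Lambda_{\rm comp})}{\varepsilon(\Lambda)} \sim \left(\frac{\Lambda_{\rm comp}}{\Lambda}\right)^{\Delta-d} = 10^{-3}
\quad\text{for}\quad \Delta-d=1,\ \ \Lambda = 10^{3}\,\Lambda_{\rm comp}.
\]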
In weakly coupled QFTs, anomalous dimensions of operators are perturbatively small, and so if there are any scalars φ_a in the theory (possibly transforming nontrivially under global symmetry groups), then |φ|² is always a GSRO. Therefore, in general, one needs to be strongly coupled to eliminate GSROs, as anomalous dimensions can be O(1). However, strong coupling alone is not enough; one requires strong coupling all along the flow in order for SUSY-breaking effects to flow to zero. Therefore, most simply, the theory must sit near a conformal fixed point in order to exhibit accidental SUSY.
Despite the beauty of strong accidental SUSY, the requirement that the flow end on a strongly-coupled fixed point is not helpful for BSM model building, where there exists a mass gap. However, it is possible to have approximate strong accidental SUSY instead; such models would only be approximately conformal for some duration of the IR flow, and at some dynamically generated, hierarchically small "compositeness scale" Λ_comp ≪ Λ, the theory leaves the quasi-conformal regime. This could occur when supersymmetry-preserving but non-conformal operators O_j^SUSY are generated in the low-energy Lagrangian. Without SUSY-breaking in the UV, the theory would have looked approximately superconformal at high energies, then would have had a gap at µ ∼ Λ_comp, with a weakly-coupled description given in terms of SQCD below Λ_comp. However, once we include SUSY-breaking terms, the low-energy theory depends on whether SUSY-breaking GSROs or GSMOs exist and whether the theory has generated them. If there had been no GSROs or GSMOs, then we would have been suppressing SUSY-breaking effects along the flow, and below the compositeness scale, the resulting field theory could be described by an approximately accidentally supersymmetric theory of light "composites".
Exact and approximate weak accidental supersymmetry are defined in a similar fashion to strong accidental SUSY, but only insisting that there are no GSROs, allowing the possibility of GSMOs. In such models, any SUSY-breaking that was present in the UV could still be present in the IR, but could remain parametrically small, controlled by a perturbatively small parameter ε (such as the SUSY-breaking scale divided by the messenger scale, possibly dressed by loop factors) sitting in front of a GSMO in the Lagrangian. Such models are useful for BSM model building as they provide a way of transmitting a small amount of SUSY-breaking to the IR.
It is possible that a model possesses weak accidental SUSY in the planar limit, but that O(1/N) corrections spoil this effect, resulting in 1/N-marginally relevant GSOs. In this case, we write ∆ = d + γ, where the anomalous dimension γ is O(1/N) and negative. (Normally in field theory, the anomalous dimension refers to twice the difference between the true scaling dimension and the engineering dimension of an operator; in this context we mean a slightly different definition, namely the difference between the true scaling dimension and the leading-order-in-1/N scaling dimension, or in AdS, the size of the quantum correction to the field's tree-level mass.) In such cases, one must worry whether the effects of the operator remain perturbative as we flow into the IR. If we parameterize the UV coupling ε as λΛ^(d−∆), where λ is dimensionless and may be parametrically small, then at a scale µ this matches onto a description in terms of the effective IR coupling ε′ = λ′µ^(d−∆).
We would like this IR coupling to remain perturbative all the way down to µ = Λ_comp, i.e. λ′ ≪ 1. Performing the matching, we see that at leading order this is equivalent to λ (Λ/Λ_comp)^(|γ|) ≪ 1. In 1/N-marginally relevant realizations of resurgent SUSY, this will constrain how large a hierarchy there can be between the SUSY-breaking scale Λ and the cutoff scale Λ_comp of the (B)SM EFT.
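Spelled out, the leading-order matching behind this bound (dropping the higher-order corrections mentioned above) is
\[
\varepsilon = \lambda\,\Lambda^{d-\Delta} = \lambda'\,\mu^{d-\Delta}
\;\Longrightarrow\;
\lambda' = \lambda\left(\frac{\Lambda}{\mu}\right)^{d-\Delta} = \lambda\left(\frac{\Lambda}{\mu}\right)^{|\gamma|},
\qquad
\lambda'(\Lambda_{\rm comp}) \ll 1
\;\Longleftrightarrow\;
\lambda \ll \left(\frac{\Lambda_{\rm comp}}{\Lambda}\right)^{|\gamma|}.
\]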
We emphasize that this is a fairly generic concern in such models, because such models often come with global symmetries whose associated conserved currents j_µ have protected scaling dimension 3. These currents have scalar superpartners j_s in the IR SQFT with protected scaling dimension 2 (just as the gauge boson A^a_µ is sourced by some current j^a_µ, so too is its scalar superpartner, the auxiliary D-term, sourced by some scalar current j_s). In the case of a U(1) global symmetry, this scalar superpartner could be a deadly GSRO, but could be eliminated with a discrete symmetry. However, if the global symmetry is nonabelian, then j^a_s carries a global symmetry index a and is not a GSRO. Nevertheless, it can still be worrisome, because it can be used to build a double-trace GSMO at the planar level, j^a_s j^a_s, which could possibly receive negative O(1/N) corrections to its scaling dimension, making it 1/N-marginally relevant.
Resurgent SUSY
We now discuss the story of resurgent supersymmetry. We first outline the requirements of the model, then expound upon these points below. The framework has three sectors:
• The MSSM minus the third-generation quark and Higgs superfields, which we call the "elementary" sector
• An approximately superconformal strongly coupled sector exhibiting weak approximate accidental SUSY, subject to the constraints listed below, whose composites serve as third-generation quarks and Higgses
• A SUSY-breaking sector which breaks SUSY somewhere well above 10 TeV
The accidentally supersymmetric sector should satisfy the following checklist:
• It should not contain GSROs
• It should contain global symmetries that can be weakly gauged to serve as the SM gauge groups
• It should not be destabilized by the weakly-coupled (MSSM-like) sector, as studied and described in [21]
• It should exit the quasi-conformal regime at Λ_comp ∼ 10 TeV
We would like to couple the MSSM to an approximately superconformal sector in such a way that the third-generation quark and Higgs sectors are composites. The MSSM gauge groups are global symmetries of the approximately superconformal sector, which are then weakly gauged. We couple the SUSY-breaking sector to the elementary sector directly. The SM fermions remain light because of chiral symmetry, but the elementary sfermions obtain large masses. Finally, as in [21], we arrange for the gauginos to also be light by building in an R-symmetry that forbids gaugino mass terms. The R-symmetry is to be broken at a scale Λ_comp, giving the gauginos masses.
The SUSY-breaking sector also communicates with the approximately superconformal sector, but the approximately superconformal sector possesses weak approximate accidental supersymmetry, and so below the compositeness scale, the quasi-conformal sector (containing the Higgs, Higgsinos, and third-generation quarks and squarks) is approximately supersymmetric. In doing so, we have arranged for the spectrum to resemble that of natural SUSY [8] in the IR.
The above discussion has been in terms of four-dimensional physics; through the use of the AdS/CFT correspondence, we can also discuss higher-dimensional dual descriptions of the above framework. Such a description is naturally best formulated in the language of RS. The dual description of the above model is portrayed in figure 2. We break SUSY on the UV brane, but preserve it in the bulk and on the IR brane. The Higgs and third-generation quark superfields are IR-localized, whereas all other matter fields are UV-localized. Therefore, the low-energy theory contains light third-generation squarks and Higgses. The gauge superfields propagate in the bulk. Models of this sort have been explored, e.g. in [26].
For completeness, we repeat the checklist of criteria our model should satisfy to be accidentally supersymmetric, but on the gravity side:
• A small-coupling expansion
• The supergravity should not contain tachyons capable of transmitting SUSY-breaking, and the radion should be stabilized with the Goldberger-Wise mechanism or something comparable
• The RS bulk should contain gauge fields
• RS loop effects should drive weakly gauged sectors towards accidental SUSY as well
• There should be an IR brane at a position corresponding to an energy scale of 10 TeV
Finally, for dynamical 4d gravity, we should have a UV brane, on which we break SUSY.
Note that our previous discussions of accidental SUSY focused on the spectrum of the approximate SCFT; accidental SUSY is even broader-reaching because a "focusing effect" can push us towards an accidentally supersymmetric spectrum in the weakly gauged external sector as well [21,22]. Note also that it is not just one-loop or two-loop effects which are important; verifying stability requires the inclusion of all-loop diagrams, a feat which can be accomplished with the holographic RG (see, e.g., [21]).
Searching for Accidental SUSY in String Theory
In this section, we outline requirements for building a prototype of an accidentally supersymmetric strong sector by restricting its dual string theory, viewed as the UV-completion of the RS model described above. In section 5, we go on to find such a prototype; it is an orbifold of the Klebanov-Strassler string theory.
The goal of this paper is to provide a realization of an approximately superconformal sector which exhibits approximate weak accidental supersymmetry. Although such a model can be obtained readily in RS, it requires a UV-completion due to the presence of higher-dimensional operators in the Lagrangian, and so we pursue a realization of such a sector in IIB string theory instead. Furthermore, the technology that has been developed to study string theory and its supergravity approximation provides computational control over the model at hand, in a fashion which assures us that a full UV-complete description of the physics exists. We now discuss how to convert our gauge-description requirements into string-description requirements.
The goal will be to have a gauge-description model which preserves SUSY, and then break SUSY by hand at high energies and have it return as we flow to the compositeness scale. Such a model requires a strongly coupled quasi-SCFT, which is dual to a warped throat with a quasi-AdS geometry in the string description. In order for the low-energy theory to possess dynamical four-dimensional gravity, the warped dimension should not be noncompact like the AdS geometry, but rather one of six compactified dimensions comprising a manifold M_6 (while nevertheless having a warped region). In order for supersymmetry to be preserved in the bulk, M_6 should be a Calabi-Yau threefold. We would like to utilize the supergravity approximation to the full string theory, and so we work at small string coupling. This ensures that SUGRA modes remain light (and are therefore dual to operators with small scaling dimensions) but makes the string modes heavy, and therefore not dual to GSROs/GSMOs. The existence of a mass gap (making our accidental SUSY approximate) is dual to the finiteness of the throat. Finally, Lagrangian deformations of the SCFT are dual to classical linearized perturbations of a supergravity solution, and the scaling dimensions and representations of the primary operators we deform the Lagrangian by are determined by the radial scaling and representations of the perturbations of fields in supergravity.
However, right away we run into trouble, because there is a no-go theorem [27] stating that the only way to obtain a warped throat in type IIB string theory on a compact M_6 is through the presence of O3-planes or D7-branes. In the absence of local objects in the theory, the only solution to the equations of motion has vanishing G_3 and constant five-form flux and warp factor, giving an unwarped solution. Consequently, in order to have a warped solution in string theory, and therefore a quasi-CFT in the dual gauge theory, one needs to add local objects such as O3-planes or D7-branes which violate the assumptions of the no-go theorem. Therefore, one would hope that one could find warped models of accidental SUSY in F-theory compactifications [28], or in the orientifold limit of F-theory [29].
However, these constraints do not apply to warped, noncompact geometries. Although such models are perfectly satisfactory in their own right, they lack dynamical four-dimensional gravity, and therefore are unsatisfactory for attempts to recreate BSM physics models. Nevertheless, these geometries can act as noncompact "linearizations" of full compact solutions. The bulk of these compact manifolds acts as a "UV brane" for the EFTs living in the throat. Therefore, one can study the physics of a throat alone, and incorporate fluctuations sourced by physics in the bulk of the compact manifold in the effective description. This methodology is described and utilized in, e.g., [30,31] to study the potentials of D3-branes for the purposes of studying inflation in string theory. It is advantageous to use this formalism because it frees us from needing to discuss the specifics of SUSY-breaking, allowing us to be sure that we've captured all possible methods of transmission of SUSY-breaking into the IR.
By carrying out a study of noncompact Calabi-Yaus as outlined above, we are not guaranteed the existence of a compact F-theory solution (giving rise to dynamical 4d gravity) which reproduces the result of using the noncompact geometry. However, if such a solution does exist, we are ensured that we have captured the relevant physics to the extent that the noncompact "linearization" is valid. Furthermore, F-theory compactifications such as the ones described in [27,32] give hope that an appropriate compactification should exist to reproduce the model discussed below. Finally, the strong sector's global symmetry groups in our model below, which can be weakly gauged to serve as prototypes of the SM gauge groups, are based on isometries of the noncompact Calabi-Yau. However, when we compactify F-theory on a Calabi-Yau fourfold, there cannot possibly be isometry groups to weakly gauge, because in general, compact Calabi-Yaus do not possess continuous isometry groups. Therefore, we require symmetry groups from another source, such as brane constructions, in our F-theory construction. These issues are beyond the scope of this work, but must be considered when offering an F-theory description of accidental and resurgent SUSY, and so we offer additional discussion in section 6.
We are led, therefore, to consider a warped space of the form ds²_10 = h(r)^(−1/2) η_µν dx^µ dx^ν + h(r)^(1/2) (dr² + r² ds²_X5), where ds²_X5 is the metric on a five-dimensional horizon manifold X_5 and h(r) is the warp factor. In order for the noncompact space dr² + r² ds²_X5 to be Calabi-Yau (thereby preserving bulk supersymmetry), X_5 should be Sasaki-Einstein [33]. As a full F-theory construction complete with SUSY-breaking is quite a challenging feat, we settle for the more modest goal of looking for noncompact supergravity solutions with brane sources, bulk supersymmetry, and this metric ansatz which have no GSROs, deferring the question of realizations on compact manifolds until future work.
Supposing we had such a background to work with, classical perturbations of the various supergravity fields by non-normalizable solutions to the linearized equations of motion are dual to deformations of the Lagrangian of the dual SCFT. Of course, such solutions would be normalizable upon compactification of the manifold. These non-normalizable modes would include, for example, the elementary particles of the MSSM. The masses of the linear supergravity perturbations determine the scaling dimensions of the dual primary operators, and the quantum numbers of the global symmetry group follow as well. By the AdS/CFT correspondence, those objects which are dual to Lagrangian perturbations of the CFT are scalar fields on AdS_5. Therefore, one can classify all scalar single-trace primary operators, their scaling dimensions (to leading order in 1/N) and their global quantum numbers by studying the UV perturbation theory of 5d scalars about the background in question. Note that these supergravity field perturbations will be sourced by SUSY-breaking sources in the compact part of the manifold and therefore do not need to respect supersymmetry.
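The dictionary entries used implicitly here are the standard ones for a scalar on AdS_5 of radius L:
\[
m^2 L^2 = \Delta(\Delta-4), \qquad
\delta\phi(r) \;\sim\; c_{\rm source}\, r^{\Delta-4} \;+\; c_{\rm vev}\, r^{-\Delta} \quad (r\to\infty),
\]
with the non-normalizable mode corresponding to adding the dual operator to the Lagrangian and the normalizable mode to a vacuum expectation value.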
There is a famous example of a noncompact, finite warped throat in string theory: the Klebanov-Strassler (KS) solution [34], which describes IIB string theory on a warped deformed conifold. It is a perturbation away from the fixed point of the warped throat of Klebanov-Witten (KW) [35]. KW describes string theory on a warped product of R^{1,3} with the conifold, a real cone over the space T^{1,1}, arising from N D3-branes placed at the tip of the conifold with worldvolumes coinciding with R^{1,3}. The dual CFT is a strongly-coupled fixed point of the quiver diagram shown on the left in fig. 3. Discussions of T^{1,1} and the KW solution are given in the appendix.
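For orientation, the warp factor in the metric ansatz above then takes the standard D3-brane form (a well-known result quoted here only for context):
\[
h(r) = \frac{L^4}{r^4}, \qquad L^4 = \frac{27\pi}{4}\, g_s N\, \alpha'^2 \quad \text{for } X_5 = T^{1,1},
\]
so that the KW geometry is AdS_5 \times T^{1,1} with AdS radius L.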
The KS solution is obtained by modifying the KW solution by wrapping M D5-branes (with M ≪ N) around the two-cycle at the base of the conifold, and allowing the remaining three directions to be parallel to the D3-branes. This famously deforms the conifold and generates a logarithmic dependence of the total flux through T^{1,1} on the radial coordinate, and is dual to a gauge theory with gauge groups SU(N) × SU(N + M). Due to nonvanishing beta functions, this gauge theory cascades as we flow into the IR, undergoing repeated Seiberg dualities and returning to self-similar states, but with lower-rank gauge groups, finally ending deep in the IR when we've exhausted all possible Seiberg dualities. The finiteness of the throat is AdS/CFT dual to dimensional transmutation in the gauge theory, and is what will be responsible for the "approximate" nature of accidental SUSY in our model discussed below. KS contains a warped throat with a nontrivial supersymmetric field content in the IR, and an "IR brane" that arises via dimensional transmutation, making it a promising starting point for a study of accidental SUSY in string theory. However, compactifying and breaking SUSY generically in the UV will lead to a violent disruption of IR SUSY, in particular due to the presence of the GSRO |Tr(AB)|² (∆ = 3) in the spectrum of operators [32]. Consequently, KS does not exhibit accidental SUSY. Therefore, we will be interested in an orbifold of KS that removes this operator from the spectrum, as Tr(AB) is odd under our orbifold.
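Schematically, following the standard Klebanov-Tseytlin/KS analysis (quoted here for context rather than rederived), the logarithmic running of the flux and the cascade take the form
\[
N_{\rm eff}(r) \;\simeq\; N + \frac{3}{2\pi}\, g_s M^2 \ln\frac{r}{r_0},
\qquad
SU(N+M)\times SU(N) \;\to\; SU(N)\times SU(N-M) \;\to\;\cdots,
\]
with one Seiberg-duality step for each decrease of N_eff by M, so the cascade terminates after of order N/M steps and cuts the throat off at a finite radius — the dual of dimensional transmutation.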
A Z 2 orbifold of Klebanov-Strassler
There is a close cousin of KW which describes string theory on a warped product of R^{1,3} with the complex cone over F_0; this is just a Z_2 orbifold of the conifold [33]. The orbifold can be described on the supergravity side by taking the KW solution and modding out by z_i ∼ −z_i. In terms of T^{1,1}, this operation can be described as identifying ψ ∼ ψ + 2π. Note that this is nontrivial because the coordinate range of ψ is 0 ≤ ψ < 4π on the conifold. On the gauge theory side, this corresponds to orbifolding the gauge group together with A ∼ −A, removing the dangerous Tr(AB) from the spectrum. The resulting theory has four gauge groups and four types of matter field, as illustrated by the quiver diagram in figure 4. More details of the orbifold are given in the appendix.
When we carry out this orbifold, we insist that whatever F-theory construction we use respects two particular Z_2 symmetries: first, a Z_2 outer automorphism of KW (inherited by the orbifold theory) under which we exchange A and B and exchange the two gauge groups (referred to simply as Z_2 in table 1), and second, the Z_2 in the orbifold theory obtained by setting the gauge couplings of gauge groups 1 and 3 equal as well as the gauge couplings of gauge groups 2 and 4 equal.
Figure 4. The quiver diagram associated with the theory dual to the complex cone over F_0.
We insist on the first so that the U(1)_B scalar current
is not in the spectrum, and we insist on the second because it ensures that the theory is conformal, as well as enforcing that the deformed theory's RG flow is KS-like rather than chaotic.
This solution can also be deformed and exhibits KS-like running. In terms of the basis of one-forms described in the appendix, all g_i are invariant under this operation, and therefore so are the fluxes in KS. This theory has been studied before, but in a different context [36][37][38]. Also note that this is to be contrasted with the SUSY-breaking orbifold described in [32], as our orbifold preserves SUSY. As discussed in section 4, the idea is to categorize all non-normalizable perturbations of the supergravity solution, and thereby gain an understanding of the scaling dimensions of possible Lagrangian deformations of the dual gauge theory. Fortunately, this computation can be related to a computation of the spectrum of primary operators in KW, as we now discuss.
Our only goal is to obtain the effective AdS 5 masses of all non-normalizable perturbations of the string theory on the deformed complex cone over F 0 due to compactifications which behave as AdS 5 scalars, as these are the perturbations which will be dual to scalar perturbations of the dual gauge theory Lagrangian. Studying the perturbation theory around the deformed cone would in general be a true feat, but fortunately, since we're only interested in UV Lagrangian perturbations, dual to perturbations at large r, we can utilize the fact that the geometry becomes asymptotically that of AdS 5 × T 1,1 /Z 2 , and study perturbations on that geometry instead. Furthermore, since the action of the orbifold becomes much more transparent on the gauge side, it is generally sufficient to classify perturbations in KW and study the action of the orbifold after AdS/CFT matching. This procedure has been carried out extensively before for KW/KS [31,39,40]; we present the results below, and review some details of the derivation in the appendix. However, there are some subtleties related to topological effects which should be treated with care, which we briefly elaborate on and discuss in more detail in the appendix.
In the original KW theory, there is a vector-like "baryon number" U (1) symmetry where we assign A charge 1 and B charge −1. This U (1) is inherited by the orbifold theory, but due to the division of A into Q 1 and Q 3 and B into Q 2 and Q 4 , additional U (1)s are present in the daughter theory. We denote these U (1) B A , where Q 1 has charge 1 and Q 3 has charge −1, and U (1) B B , where Q 2 has charge 1 and Q 4 has charge −1. One might worry that additional U (1)s give rise to additional conserved currents, introducing GSROs not present in the parent theory. However, these additional U (1)s are both anomalous, and so they are not symmetries of the theory, and their associated currents have scaling dimensions which are not protected. Finally, in the dual string theory, the supergravity has no modes which could be dual to conserved currents. Therefore, we suspect that those currents of the U (1)s are in fact dual to massive twisted-sector string modes, and so are not GSROs. We discuss this issue further in the appendix.
Table 1. A matching of SUGRA and SCFT scalar modes with ∆ ≤ 4. The columns include: scalars in their respective superoperators, why their dimensions might be protected, whether they are even under the outer Z 2 -automorphism of KW, whether they are present in our orbifold theory, their scaling dimension to leading order in 1/N, what mode they are dual to in SUGRA, and their quantum numbers under SU (2) × SU (2) × U (1) R . Leg refers to the Legendre-transform of an operator, as discussed in the appendix.
We have elucidated the matching of all scalar SUGRA perturbations and scalar SCFT primaries with ∆ ≤ 4 in table 5. In order for an operator to be dangerous to us, it must be a GSRO and furthermore be even under the orbifold action A → −A, B → B. Furthermore, it must be even under the particular outer automorphism Z 2 symmetry discussed earlier. These operators have scaling dimensions which can be inferred from radial scaling of the supergravity perturbations, or can also be protected and therefore determined directly in the CFT. We also state the origin of the perturbation in the supergravity, following the conventions laid out in [31] and reviewed in the appendix. Finally, we express the quantum numbers of the operator under the CFT global symmetry group SU (2) × SU (2) × U (1) R 12 .
A dangerous GSRO would be a singlet relevant single-trace primary operator or a double-trace primary operator which is the square of a (non-singlet) single-trace primary operator 13 with ∆ < 2. We see that there are no such primary operators of either kind in the spectrum, and therefore we conclude that the complex cone over F 0 exhibits weak accidental SUSY. Drawing on this conclusion, we expect that the deformed complex cone over F 0 exhibits weak approximate accidental SUSY, and is therefore a proof-of-principle of the existence of a UV-complete prototype of a quasi-SCFT sector suitable for BSM model-building.
Conclusions
In this work, we have discussed the mechanism of accidental SUSY, describing QFTs which are nonsupersymmetric but have supersymmetry appearing "by accident" in the IR. Accidental SUSY can be exact or approximate, depending on whether we reach a supersymmetric fixed point or simply flow towards one for a while, and can be strong or weak, depending on whether global singlet marginal primary operators are absent or present in the spectrum, respectively. We discussed the model of [21], which we refer to as resurgent SUSY. This model utilizes a weak approximate accidentally supersymmetric sector in a BSM model in order to partially UV-complete natural SUSY. We discussed that sector in detail and imposed constraints on how such a sector must be realized in a full UV-completion.
We then moved on to discuss what one must do in a string construction to realize even a prototype of the aforementioned accidentally supersymmetric sector in a fashion compatible with resurgent SUSY. From there, we studied a SUSY-preserving Z 2 orbifold of the noncompact Klebanov-Strassler theory. We were able to describe the space of global-symmetry-respecting UV Lagrangian deformations of the dual gauge theory by classifying classical perturbations of the KW theory, and used the AdS/CFT correspondence to match those perturbations to primary operators in the gauge theory. We were therefore able to conclude that this orbifold theory admits weak approximate accidental supersymmetry, and is therefore a suitable prototype for the quasi-conformal sector in resurgent SUSY.
The theory discussed in this paper is over a noncompact Calabi-Yau, meaning that the theory does not have dynamical four-dimensional gravity. This is not a problem from the point of view of attempting to exhibit any UV-complete model of accidental SUSY, but one might hope to incorporate 4d gravity in more realistic string models which reproduce the MSSM or the natural SUSY spectrum. Due to a no-go theorem, a compactification to four dimensions is best described in the language of F-theory. In that language, the noncompact model described in this paper is an effective "linearization" of the warped throat present in the full compactification. By allowing for non-normalizable perturbations in supergravity, we systematically allow any and all operators respecting the Z 2 and continuous global symmetries described in B.2, knowing that all such operators cannot effectively transmit SUSY-breaking.
12 Note that in KW and its orbifold, the R-symmetry is exact. However, instantons in KS break this symmetry, and therefore could presumably induce dangerous operators. We treat operators in the UV, KW limit, and so we should not expect to be able to generate operators which violate the R-symmetry. Even if we were to discuss operator perturbations to KS, though, for m ≥ 2, a group larger than R-parity is preserved, and so instantons would not be able to generate any operators in the superpotential that would threaten to destabilize the hierarchy.
13 The double-trace primary operator will have an anomalous dimension, but it will be O(1/n) and so we ignore the correction.
In future work, we plan to pursue the question of constructing an F-theory model which reproduces the model in this paper in some local patch of the base of the Calabi-Yau fourfold. It is not immediately obvious that such a fourfold should exist, but the existence of F-theory models such as described in [27,32] gives us hope that it is feasible. Even if this particular noncompact construction cannot be realized in F-theory, it is nevertheless interesting to ask about the possibility of other F-theory models which do exhibit accidental supersymmetry. One would like the F-theory compactification which locally reproduces the model described in this paper to satisfy the following checklist:
• It should contain a warped throat which supports an adequately large hierarchy, meaning that the Euler number of the fourfold should be sufficiently large.
• It must respect the two Z 2 symmetries described in section B.2, which are necessary to protect against GSROs.
• Compact Calabi-Yaus do not possess continuous isometries, and so we must preserve a sufficiently large discrete subgroup of SU (2) × SU (2) × U (1) R to prevent those non-global singlet operators described in 5 from being generated.
• We must break SUSY in the bulk of the Calabi-Yau.
There are, of course, the usual worries about moduli stabilization; we assume that the F-theory fluxes stabilize the moduli of the compactification. It would be interesting to explore general string corrections to this model; in particular, it would be interesting to explore the breaking of the no-scale structure present in the supergravity.
Furthermore, it would be interesting to attempt to modify the model presented here in order to make it more realistic; one would like to add weak gauge groups by stretching D7s down the length of the throat, and add a composite Higgs which breaks some of the gauge groups. These are necessary because the F-theory compactification will prevent us from utilizing the continuous isometries in our noncompact case as proxies for the SM gauge group.
Finally, although the question of individual models of accidental SUSY are interesting in their own right, it would be very exciting to have a sense of how "generic" features such as warped throats or accidental SUSY are on the landscape, defined in relation to the number of vacua with high-scale SUSY-breaking and a finely-tuned electroweak scale. We hope that tests of the presence of warped throats or accidental SUSY can be developed and utilized to scan the landscape, in a similar spirit to the surveys of [41][42][43], as inspiration to pursue the idea of naturalness further at the 14 TeV LHC and beyond, as well as open new chapters in the field of string phenomenology.
Notations and conventions
We use the mostly-plus metric signature. $d^4x$ is shorthand for $dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3$. $\kappa$, $\alpha'$ and $g_s$ are related by $2\kappa^2 = (2\pi)^7 \alpha'^4 g_s^2$. We switch between setting the string length $l_s = 1$ ($\alpha' = \tfrac{1}{2}$) and setting the AdS radius of curvature $R = 1$, but attempt to indicate when we have done which. $g_s = e^{\phi} = \frac{1}{\mathrm{Im}\,\tau}$.
Gauge transformations
In KW/KS, we use the following conventions for superfield gauge transformations:
IIB Supergravity
We define the following combination of IIB supergravity fields: The IIB Einstein-frame action is [44]: where under a modular SL(2, Z) transformation, and every other field is invariant. The associated equations of motion and Bianchi identities are [45]: Finally, we must impose by hand that $\tilde F_5 = \star \tilde F_5$. Although the above equations of motion are consistent with self-duality, they do not imply it.
We often work with the metric and fiveform flux ansätze, in terms of which we can define the fields $\Phi_\pm = e^{4A} \pm \alpha$. The equations of motion above become equations for $\Phi_\pm$ and the threeform flux, where both $\nabla^2_6$ and $\partial_m$ are to be evaluated with respect to the $M_6$ metric (without warp factors). In KW, the solutions are $\Phi_- = 0$, $\Phi_+ = 2r^4$. We have introduced the projection operators $P_\pm$ which project threeforms onto their imaginary self-dual and imaginary anti-self-dual (ISD and IASD, respectively) components. These operators are defined using $\star_6$, which dualizes only with respect to $M_6$ and $g_{mn}$; the explicit definitions and the identities they satisfy are summarized below.
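A minimal reconstruction of those definitions and identities, assuming the standard ISD/IASD projector conventions (this completion is ours, not verbatim from the source):
$$ P_\pm = \frac{1}{2}\left(1 \mp i \star_6\right), \qquad P_\pm^2 = P_\pm, \qquad P_+ + P_- = 1, \qquad P_+ P_- = 0, $$
so that a threeform obeying $\star_6 G_3 = +i\,G_3$ (ISD) is left invariant by $P_+$ and annihilated by $P_-$, and vice versa for IASD forms.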
The conifold
The conifold is a noncompact Calabi-Yau threefold, and it can be described as the subspace of $\mathbb{C}^4$ satisfying $\sum_{i=1}^{4} z_i^2 = 0$. Fixing the radial distance from the origin, the horizon manifold $X_5$ is the five-dimensional Sasaki-Einstein space $T^{1,1} = (S^3 \times S^3)/S^1_{\rm diag}$. $T^{1,1}$ can be described by coordinates $(\psi, \theta_1, \theta_2, \phi_1, \phi_2)$ [46], and is topologically $S^2 \times S^3$. Here, $\theta_i$ runs from 0 to $\pi$, $\phi_i$ from 0 to $2\pi$, and $\psi$ from 0 to $4\pi$. In terms of these coordinates, the Sasaki-Einstein metric is
$$ ds^2_{T^{1,1}} = \frac{1}{9}\left(d\psi + \cos\theta_1\, d\phi_1 + \cos\theta_2\, d\phi_2\right)^2 + \frac{1}{6}\sum_{i=1}^{2}\left(d\theta_i^2 + \sin^2\theta_i\, d\phi_i^2\right). $$
The metric can be rewritten diagonally in terms of the globally-defined one-forms $g^1,\ldots,g^5$, related to the coordinates above by
$$ g^{1,3} = \frac{e^1 \mp e^3}{\sqrt{2}}, \qquad g^{2,4} = \frac{e^2 \mp e^4}{\sqrt{2}}, \qquad g^5 = e^5, $$
with
$$ e^1 = -\sin\theta_1\, d\phi_1,\quad e^2 = d\theta_1,\quad e^3 = \cos\psi\sin\theta_2\, d\phi_2 - \sin\psi\, d\theta_2,\quad e^4 = \sin\psi\sin\theta_2\, d\phi_2 + \cos\psi\, d\theta_2,\quad e^5 = d\psi + \cos\theta_1\, d\phi_1 + \cos\theta_2\, d\phi_2. $$
The metric in these coordinates is
$$ ds^2_{T^{1,1}} = \frac{1}{9}\left(g^5\right)^2 + \frac{1}{6}\sum_{i=1}^{4}\left(g^i\right)^2. $$
It is convenient to define the twoform $\omega_2 = \frac{1}{2}\left(g^1\wedge g^2 + g^3\wedge g^4\right)$ and the threeform $\omega_3 = g^5 \wedge \omega_2$. We use $\omega_5$ to denote the volume form; it is
$$ \omega_5 = \frac{1}{108}\, g^1\wedge g^2\wedge g^3\wedge g^4\wedge g^5. $$
Integrating this over $T^{1,1}$ gives the volume $V = \frac{16\pi^3}{27}$.
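As a quick consistency check of the quoted volume (a sketch, using the forms as reconstructed above and the fact that, up to orientation, $g^1\wedge g^2\wedge g^3\wedge g^4\wedge g^5 = \sin\theta_1\sin\theta_2\, d\theta_1\wedge d\phi_1\wedge d\theta_2\wedge d\phi_2\wedge d\psi$):
$$ V = \int_{T^{1,1}} \omega_5 = \frac{1}{108}\,(2)\,(2\pi)\,(2)\,(2\pi)\,(4\pi) = \frac{64\pi^3}{108} = \frac{16\pi^3}{27}. $$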
The Klebanov-Witten theory
The Klebanov-Witten solution to IIB supergravity is the warped product
$$ ds^2 = \frac{r^2}{R^2}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + \frac{R^2}{r^2}\,dr^2 + R^2\, ds^2_{T^{1,1}}, \qquad R^4 = \frac{27\pi}{4}\, g_s N \alpha'^2, $$
supported by $N$ units of $\tilde F_5$ flux through $T^{1,1}$, where $R$ is the AdS radius of curvature measured in units of the string length, $l_s = 1$. Note that $T^{1,1}$ is Sasaki-Einstein, implying the existence of 4d $\mathcal{N} = 1$ SUSY. The dual SCFT is described by gauge groups $SU(N) \times SU(N)$ along with bifundamental and antibifundamental fields $A_i$ and $B_j$, respectively, each transforming as a doublet under its own $SU(2)$ flavor symmetry group. At a strongly-coupled conformal fixed point, the scaling dimension of the matter fields is $\Delta = \frac{3}{4}$, and the theory has a superpotential
$$ W = \lambda\, \epsilon^{ij}\epsilon^{kl}\, \mathrm{Tr}\left(A_i B_k A_j B_l\right). $$
At weak coupling, the Kähler potential is canonical (built from the gauge-invariant combinations of $A_i$, $B_j$ and the vector superfields).
The Klebanov-Strassler theory
The IIB supergravity solution that describes KS is governed by the following equations: (A.34) where the solution has been specified in terms of the following auxiliary functions: and again, $l_s = 1$. The radial coordinate $\rho$ is related to $r$ by $r^3 = \frac{27}{32}\,\epsilon^2 e^{\rho}$, with $\epsilon^{2/3}$ being a mass parameter. This theory is dual to a gauge theory which cascades as we flow into the IR, undergoing repeated Seiberg dualities until finally ending when we have reached $\mu \sim \epsilon^{2/3}$. The superpotential is the same as in KW.
B Spectroscopy of IIB Supergravity on the Complex Cone Over F 0
We discuss the supergravity perturbation theory in subsection B.1, then turn to a categorization of operators in the dual gauge theory in subsection B.2. These are matched in the main text in table 5.
B.1 The Supergravity Side
We consider a supergravity perturbation theory similar to that described in [31], allowing all fields to be systematically expanded in a formally small parameter ε about their background values; e.g. $\tilde F_5 = \tilde F_5^{(0)} + \varepsilon\,\tilde F_5^{(1)} + \cdots$. In this case we want to allow all supergravity fields to be expanded, but insist that the resulting modes be AdS 5 -scalar perturbations. Consequently, we ignore all fermion equations of motion. Furthermore, we would like the fluxes associated with the various gauge fields not to break 4d Poincaré invariance. With respect to the KW background solution, the relevant linearized perturbation equations reduce to equations for the dilaton-axion, the complex threeform flux, and the fiveform flux together with the warp factor; we treat these in turn below. In addition, there is the Einstein equation describing symmetric tensor perturbations of $T^{1,1}$; however, having found the spectrum of scalar operators in the CFT, we can deduce that any protected CFT operator not matched to one of the above supergravity fields must be dual to a symmetric tensor perturbation, and therefore we do not need to solve the Einstein equation.
The process of solving the perturbation equations is greatly simplified by knowing the radial scaling of the scalar harmonics on $T^{1,1}$, which are known to be [39] $r^{\delta}$ with $\delta(\delta + 4) = H(j, l, r)$, where $(j, l, r)$ describe the representation of the harmonic with respect to the isometry group of $T^{1,1}$, which is $SU(2) \times SU(2) \times U(1)$. $H$ is defined as
$$ H(j, l, r) = 6\left[\, j(j+1) + l(l+1) - \frac{r^2}{8} \,\right]. $$
The lowest values of δ are shown in table 2. Note that for every nonvanishing value of r, there is another representation (j, l, −r) with the same scaling dimension.
Table 2. The scaling dimensions and quantum numbers of the first few scalar harmonics on $T^{1,1}$ (first recovered row: ∆ = 4 for (j, l, r) = (0, 0, 0)).
The $\tau^{(1)}$ perturbation equation is independent of the rest and is satisfied by any harmonic scalar on $T^{1,1}$. We use coordinates $x$ for AdS 5 and $y$ for $T^{1,1}$; then we expand $\tau^{(1)}(x, y) = \sum_{jlr} \phi_{jlr}(x)\, Y_{jlr}(y)$. The 10d Laplacian becomes $\square_{\rm AdS} - \square_{T^{1,1}}$, where $\square_{T^{1,1}} Y_{jlr}(y) = H(j, l, r)\, Y_{jlr}(y)$. Therefore, the AdS equation of motion for each term $\tau_{jlr}$ in the expansion becomes
$$ \square_{\rm AdS}\, \tau_{jlr}(x, y) = H(j, l, r)\, \tau_{jlr}(x, y). \qquad (B.5) $$
These have mass-squareds beginning at $m^2 = H(0, 0, 0) = 0$ in AdS 5 , and so, with the exception of the constant mode, the rest cannot transmit SUSY-breaking from the UV as they are dual to operators with ∆ > 4. The constant mode is dual to an operator with ∆ = 0 or 4, where these two choices are related by a Legendre transform [47]; the ∆ = 0 mode is a modulus which is the sum of the inverse-squared gauge couplings $1/g_1^2 + 1/g_2^2$ [35].
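As a quick numerical illustration of this tower (a sketch in Python; it assumes the standard AdS 5 relation $\Delta(\Delta - 4) = m^2 R^2$ with $R = 1$ and uses the eigenvalue $H(j, l, r)$ quoted above; all names are ours):

```python
import math

def H(j, l, r):
    # Scalar Laplacian eigenvalue on T^{1,1} for the (j, l, r) harmonic.
    return 6.0 * (j * (j + 1) + l * (l + 1) - r**2 / 8.0)

def dimension(m2):
    # AdS_5/CFT_4 relation for a scalar: Delta(Delta - 4) = m^2 R^2 (R = 1),
    # taking the larger root.
    return 2.0 + math.sqrt(4.0 + m2)

for (j, l, r) in [(0, 0, 0), (0.5, 0.5, 1), (1, 1, 2), (1, 1, 0)]:
    m2 = H(j, l, r)
    print(f"(j,l,r)=({j},{l},{r}):  m^2 = {m2:.2f},  Delta = {dimension(m2):.2f}")
```

The (0, 0, 0) mode reproduces ∆ = 4 (or ∆ = 0 after the Legendre transform), while the (1/2, 1/2, 1) and (1, 1, 2) modes land at ∆ = 11/2 and 7, consistent with a protected tower of the schematic form Tr(F²(AB)^k) with ∆ = 4 + 3k/2.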
By the Bianchi identity, $G_3$ is a closed threeform; therefore it is either exact or a representative of a cohomology class. We consider these in turn. As $\tau^{(0)}$ is constant, we can write $G_3$ as $dA_2$, where $A_2$ is the complex twoform $C_2 - \frac{i}{g_s}B_2$. Perturbations of $A_2$ must be AdS 5 -scalars; therefore $A_2$ can be decomposed in terms of twoform harmonics on $T^{1,1}$. We write this decomposition as $A_2 = \sum_i f_i(x)\,\Omega_2^i(y)$, where $\Omega_2^i$ are all harmonic two-forms on $T^{1,1}$, satisfying $\star_5\, d\Omega_2 = i\delta\,\Omega_2$, where $\star_5$ is the Hodge star operator with respect to $T^{1,1}$. The δ are (−i times) the eigenvalues of the Laplace-Beltrami operator. However, $f_i$ must be a function of r only and not of the 4d Minkowski coordinates; otherwise we would have a nontrivial flux along $\mathbb{R}^{1,3}$ in the CFT, breaking the Poincaré invariance of the CFT vacuum. We use the ansatz $f_i(x) = r^{\alpha_i}$, allowing for potentially multiple values of α for a given i. As we can solve the perturbation equation term-by-term, we must solve the resulting radial equation for each harmonic separately. The solution to this equation was found in [31]; the claim is that our ansatz solves the perturbation equations when α = δ − 4 or α = −δ (for δ ≠ 0). In these cases, the scaling dimension of the dual operator is ∆ = max(4 − δ, δ). The eigenvalues of the twoform harmonics were worked out in [39]; the answers can again be expressed in terms of the quantum numbers of the isometry group (j, l, r). There are six eigenvalues of the Laplace-Beltrami operator for each (j, l, r), and they are
$$ \delta_{(j,l,r+2)} = 1 \pm \sqrt{H(j,l,r)+4}, \qquad (B.8) $$
$$ \delta_{(j,l,r-2)} = -1 \pm \sqrt{H(j,l,r)+4}, \qquad (B.9) $$
$$ \delta_{(j,l,r)} = \pm \sqrt{H(j,l,r)+4}, \qquad (B.10) $$
where the allowed values of (j, l, r) and the definition of H(j, l, r) are the same as in table 2, but now the physical value of r of the perturbation may not be r, but rather r ± 2, as indicated in the subscript of δ. This occurs because the twoform harmonics can be built out of scalar harmonics, but some of the solutions depend on the holomorphic threeform Ω on the conifold, which carries an R-charge of 2.
The de Rham cohomology representative twoform $\omega_2$ of $T^{1,1}$ is an acceptable harmonic perturbation of the twoform gauge fields, corresponding to vanishing $G_3$. $\omega_2$ is therefore another modulus of the dual CFT; in this case it is the difference of the inverse-squared gauge couplings $1/g_1^2 - 1/g_2^2$. Again, this is the ∆ = 0 choice related to the ∆ = 4 choice by a Legendre transform.
The second singular cohomology of $T^{1,1}/\mathbb{Z}_2$ contains a $\mathbb{Z}_2$ torsion subgroup, related by Poincaré duality to the presence of a torsion threecycle in homology. However, although the torsion threecycle plays an important role in AdS/CFT, the associated twoform is not present in the de Rham cohomology; $H^2_{\rm dR}(T^{1,1}/\mathbb{Z}_2, \mathbb{R}) = \mathbb{R}$. Consequently, there is no $A_2$-perturbation associated with the torsion twoform.
Finally, one can ask about $G_3$-flux in $H^3(T^{1,1})$. However, the cohomology representative $\omega_3 = g^5 \wedge \omega_2$ of $T^{1,1}$ does not satisfy $d\left(r^4 P_- \omega_3\right) = 0$, and so it does not constitute an allowable perturbation of the solution. We turn to $\Phi_-$, which at this order satisfies the sourceless equation $\nabla^2_6 \Phi_- = 0$. Perturbations of the fiveform flux which are in the cohomology of $T^{1,1}$ clearly satisfy the equations of motion, as $d\omega_5 = 0$ and $d\star\omega_5 = 0$; however, perturbing the fiveform flux in KW simply changes the number of D3-branes in the solution, dual to changing the rank of the gauge group.
B.2 The Gauge Side
In this section, we study the gauge dual of the supersymmetry-preserving $\mathbb{Z}_2$-orbifold of Klebanov-Strassler. We denote $n = \frac{N}{2}$ and $m = \frac{M}{2}$, implicitly assuming that N and M are even. Recall that the orbifold action in the supergravity was $z_i \to -z_i$, where $z \sim AB$. A and B are bifundamentals under the gauge group, and can therefore be written as $N \times (N + M)$ and $(N + M) \times N$ matrices, respectively. The action of the orbifold can therefore be embedded on the gauge side as follows: • $SU(N + M)$ is orbifolded by $\mathrm{diag}(I_{n+m}, -I_{n+m})$. The resulting groups are two copies of $SU(n + m)$, and these are called groups $G_1$ and $G_3$, respectively.
• SU (N ) is orbifolded by diag(I n , −I n ). The resulting groups are two copies of SU (n), and these are called groups G 2 and G 4 , respectively.
• The gauginos are embedded the same way as the gauge bosons so as to preserve supersymmetry.
• The two superfields $A_i$ are odd under the orbifold. The resulting superfields are embedded in $A_i$ as:
• The two superfields $B_j$ are even under the orbifold. The resulting superfields are embedded in $B_j$ as:
Under the action of the orbifold, the superpotential can be written in terms of the daughter fields, where $\varepsilon_{12} = 1$. The representations under the various groups are listed in table 6. The $U(1)_A$ is a spurious symmetry, with the superpotential coupling λ acting as the spurion. Furthermore, the $U(1)_R$ symmetry is anomalous; the exact symmetry is $\mathbb{Z}_{2m}$. We also record the charges of the holomorphic intrinsic scales $\Lambda_i$, which we introduce shortly.
There are also two additional baryon numbers B A and B B we can define in the orbifolded theory. These baryon numbers rotate the daughters of A and the daughters of B into each other, respectively. However, these symmetries are both anomalous with respect to the gauge groups; both U (1) B A and U (1) B B are broken by instantons down to the intersection of Z 2n and Z 2(n+m) . Notice that this would be true even in the conformal (m → 0) limit. As this is the limit we match to on the supergravity side, we study it further.
At m = 0 we observe that there are baryons associated with these symmetries. $B_A$ and $B_B$ are chiral operators, and so they have protected scaling dimension $\frac{3}{4}n$. They transform in the spin-$\frac{n}{2}$ rep of their respective SU(2) flavor groups. They have baryon charge n under their respective U(1)s, and so under their respective residual exact symmetries each transforms as $B \to -B$. We expect that, since D3-branes wrapping the threecycle in homology in KW are dual to baryons in the CFT, this $\mathbb{Z}_2$ is related to the nonvanishing torsion threecycle in homology, which generates a $\mathbb{Z}_2$ [33]. We expect the currents $J_{B_A}$ and $J_{B_B}$ associated with these not to have protected scaling dimension 2, due to instanton-induced O(1) anomalous dimensions.
There is a residual Z 2 symmetry of the quiver that exchanges groups 1 and 3 and also groups 2 and 4, along with an associated swapping of matter fields. We opt to impose that symmetry on all of our perturbations, as deviations will lead to different physics in the IR as compared to KS. This will place a restriction on the allowed sorts of supersymmetry-breaking in the compactification. Crucially, this forces the equality of gauge couplings $g_1$ and $g_3$, as well as $g_2$ and $g_4$.
Furthermore, there is a Z 2 outer automorphism of KW that exchanges the two gauge groups as well as A and B. The orbifold inherits this automorphism when m = 0; in fact, it combines with the Z 2 of the previous paragraph to form a full D 4 symmetry group [33], though we will not need the full D 4 for the purposes of this paper. The Z 2 needs to be respected by SUSY-breaking in the compactification, as this prevents us from deforming the Lagrangian with the U (1) B scalar current with protected scaling dimension ∆ = 2.
The NSVZ exact beta function for SQCD with N colors and F vector-like flavors is [48]
$$ \beta(g) = -\frac{g^3}{16\pi^2}\;\frac{3N - F\,(1-\gamma)}{1 - \frac{N g^2}{8\pi^2}}\,, $$
where γ is the anomalous dimension of a quark field (here, "anomalous dimension" is meant in the conventional sense, as opposed to the deviation due to 1/N-corrections as used in section 2), related to the scaling dimension ∆ of a field with engineering dimension d by $\Delta = d + \frac{1}{2}\gamma$. As with KS, there is a UV fixed point when m = 0. There, the anomalous dimension of the quark superfields is $\gamma = -\frac{1}{2}$ [33,36], and there are two marginal couplings corresponding to a two-parameter family of fixed points. Here, we choose to sit near the fixed point, so $\gamma_i = -\frac{1}{2} + O\!\left(\frac{m^2}{n^2}\right)$. The vanishing of the first-order term arises because of a symmetry $m \to -m$, $n \to n + m$ of the
theory. For groups 1 and 3, we have n + m colors, and we effectively have 2n vector-like flavors. For groups 2 and 4, we have n colors and 2(n + m) vector-like flavors. To leading order in $\frac{m}{n}$, then, we have $b^e_1 = b^e_3 = -3m$ and $b^e_2 = b^e_4 = 3m$. Thus, gauge groups 1 and 3 are UV-free, meaning that as we flow into the IR, they confine at some scale $|\Lambda_1|$, $|\Lambda_3|$ respectively. Groups 2 and 4, on the other hand, are IR-free, so their gauge couplings flow towards zero as we flow into the IR. As with SQCD, we introduce the holomorphic confinement scales
$$ \Lambda_i^{b_i} = \mu^{b_i}\, e^{-8\pi^2/g_i^2(\mu)\, +\, i\theta_i}, $$
where $\theta_i$ is the theta-angle of the i-th group and $b_i = 3N - F$ is the one-loop beta function coefficient. In our model, we have $b_1 = b_3 = n + 3m$ and $b_2 = b_4 = n - 2m$.
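As a consistency check of these coefficients (a sketch using only the color and flavor assignments just stated; the overall sign convention for $b^e$ follows the text above):
$$ b_1 = b_3 = 3(n+m) - 2n = n + 3m, \qquad b_2 = b_4 = 3n - 2(n+m) = n - 2m, $$
and at the fixed point, where $\gamma = -\tfrac{1}{2}$, the NSVZ numerators are
$$ 3(n+m) - 2n\left(1 + \tfrac{1}{2}\right) = 3m, \qquad 3n - 2(n+m)\left(1 + \tfrac{1}{2}\right) = -3m, $$
so groups 1 and 3 flow to strong coupling in the IR while groups 2 and 4 become free, matching the quoted $|b^e_i| = 3m$.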
If we impose for our UV-free gauge groups $g_{1,3} = g_0$ at $\mu = \mu_0$, we reach a Landau pole as we flow into the IR at µ = |Λ|, where the scale is set by the one-loop running. The parent KS theory has an RG cascade, where one performs a Seiberg duality on the confining gauge group and continues flowing into the IR [49]. The orbifolded theory we've constructed inherits this RG cascade when $g_1 = g_3$ and $g_2 = g_4$. The dual theory is identical to the original theory, with n + m replaced by n − m, the $U(1)_B$ and $U(1)_A$ charges rescaled, and the coupling λ inverted. The case where the gauge couplings do not start out equal results in a dual theory which is not self-similar [36][37][38], but this is not relevant for our purposes. We list the field content of the dual in table 7, where $G'_{1,3}$ are $SU(n - m)$ and B, A, etc. are again U(1)s. The charges under the U(1)s were determined by anomaly matching. The $SU(2)_1$, $SU(2)_2$ and $U(1)_B$ anomalies should match exactly. From the point of view of groups 1 and 3, groups 2 and 4 are flavor symmetries, and therefore we should match those anomalies as well. However, the $U(1)_{B_A}$, $U(1)_{B_B}$, $U(1)_A$ and $U(1)_R$ symmetries are anomalous 20 ; therefore, we only require the anomaly coefficients to match up to actual symmetry transformations 21 [50]. The $U(1)_R$ is broken to $\mathbb{Z}_{2m}$, and $U(1)_A$ is broken to the intersection of $\mathbb{Z}_{4(n+m)}$ and $\mathbb{Z}_{4n}$. That intersection is $\mathbb{Z}_4$ when n + m and n are coprime, but is $\mathbb{Z}_{4k}$ for some natural number k when n + m and n are not coprime. Finally, note that we don't need to match anomalies with gauge groups 1 or 3, because the anomaly matching trick would require the addition of spectators charged under those groups, which changes the details of the confinement. Up to the above caveats, all anomaly coefficients match, lending credence to the idea of a duality cascade.
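As a small numerical check of the statement above about the residual discrete symmetry (a sketch; the function name is ours, and it relies only on the elementary fact that $\mathbb{Z}_a \cap \mathbb{Z}_b = \mathbb{Z}_{\gcd(a,b)}$ when both are embedded in the same U(1)):

```python
from math import gcd

def residual_order(n, m):
    # Order of Z_{4(n+m)} ∩ Z_{4n} inside U(1); equals 4 * gcd(n, m).
    return gcd(4 * (n + m), 4 * n)

print(residual_order(5, 3))   # n + m = 8 and n = 5 are coprime -> 4
print(residual_order(6, 3))   # gcd(9, 6) = 3 -> 4 * 3 = 12
```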
In the dual theory, there is a superpotential inherited from the original theory, which is self-similar after integrating out the mesons: Now, groups 2 and 4 are UV-free and groups 1 and 3 are IR-free, and so the cascade continues until we cannot dualize any more, at which point the story plays out in a similar fashion to KS, with the deformation of the complex cone and the introduction of an ADS superpotential.
Such a duality cascade, complete with chiral symmetry breaking in the IR, yields a fantastically rich infrared, with many opportunities for model-building.
As with supergravity, we can classify operators by setting m = 0 and studying their quantum numbers in the high-energy theory. The allowed operators fall into representations of the superconformal symmetry algebra $\mathfrak{su}(2,2|1)$ plus the global symmetry algebra $\mathfrak{su}(2) \oplus \mathfrak{su}(2) \oplus \mathfrak{u}(1)_R$. Note also that we ignore the global baryon number B, as all combinations of the fields which are gauge-invariant are automatically B-singlets. Chiral primary operators formed from various gauge and matter fields can be determined at weak coupling 22 and are subject to the following constraints:
• They must be gauge-invariant.
• The F-term equations of motion kill various potential flavor-singlet operators.
• The D-term equations of motion tell us that the adjoint operators $A_i e^{-V_2}\bar A_i - \bar B_j e^{V_2} B_j$ and $\bar A_i e^{V_1} A_i - B_j e^{-V_1}\bar B_j$ vanish in the supersymmetric vacuum, and only the associated singlets can be used to build operators. However, other than these operators themselves, any other operator we could build from them will necessarily be double-trace, and therefore irrelevant to our discussion.
20 Of course, $U(1)_A$ isn't even a symmetry in the first place; regardless, it's a useful check to match anomaly coefficients.
21 In order to utilize the results of [50], one needs to rescale U(1) charges to be all integers.
22 We are concerned with the spectrum at strong coupling; many of the operators we consider have protected scaling dimension and therefore their fixed-point scaling dimension can be determined. This approach misses possible mixing between various primaries and descendants, but these are irrelevant for the purposes of spectroscopy.
• The super-equations of motion reduce the number of chiral primaries. They are
$$ \bar D^2\!\left(e^{-V_2}\bar A\, e^{V_1}\right) = 0, \qquad \bar D^2\!\left(e^{-V_1}\bar B\, e^{V_2}\right) = 0. \qquad (B.23) $$
• One must consider that various "commutator" operators may not, in fact, be chiral primaries due to superspace identities.
• The one-θ components of the W are real; therefore they equal the one-θ components of the $\bar W$:
$$ D^\alpha W_\alpha = \bar D_{\dot\alpha} \bar W^{\dot\alpha}. \qquad (B.25) $$
Superfields have protected scaling dimensions if they fall into one of the following categories:
• They are chiral; $\bar D_{\dot\alpha} X = 0$.
The classification of the operators in KW was carried out in [35,39,40]; we list the results for the chiral primaries with scalar components which have scaling dimension ∆ ≤ 4, using the convention (j, l, r) for the global quantum numbers to match the supergravity solutions. Note that non-real operators have hermitian conjugates with (j, l, −r) and the same ∆, which are excluded from the list below for brevity. This list can be found in table 5, matched to their dual supergravity modes.
"Physics"
] |
Cretaceous–Quaternary seismic stratigraphy of the Tanga offshore Basin in Tanzania and its petroleum potential
In this study, the available 2D seismic lines have been interpreted to understand the basin development and petroleum potential of the Late Cretaceous–Quaternary stratigraphy of the Tanga offshore Basin in Tanzania. Conventional seismic interpretation has delineated eight sedimentary fill geometries, together with the fault properties, stratal termination patterns and unconformities characterizing the studied stratigraphy. The Late Cretaceous was found to be characterized by tectonic quiescence and uniform subsidence, whereas slope-induced gravity flows triggered by Miocene block movements were the major mechanism of sediment supply into the basin. The Quaternary was dominated by an extensional regime that created a deep N-S to NNE-SSW trending graben. The graben accommodated thick Pleistocene and Holocene successions deposited when the rate of tectonic uplift surpassed the rate of sea level rise. This led to the deposition of lowstand system tracts characterized by debris flow deposits, slope fan turbidites, channel fill turbidites and overbank wedge deposits, known for their excellent petroleum reservoir qualities, especially where charged by the Karoo black shales. Subsequent tectonic quiescence and transgression led to the emplacement of deep marine deposits with characteristic seismic reflection patterns that indicate the occurrence of Quaternary shale sealing rocks in the study area. The occurrence of all the necessary petroleum play systems confirms the hydrocarbon generation, accumulation and preservation potential of the Tanga Basin.
Introduction
An increase in the demand for hydrocarbon resources to fuel industrialization across the globe has led to the overexploitation of currently identified hydrocarbon reserves (IEA 2017). This has necessitated the need to investigate new unexplored/frontier basins to identify future reserves (Busygin et al. 2010). However, most of these frontier basins lack the extensive geological and geophysical information needed to properly characterize them (Jacques et al. 2003, 2004; Mickus et al. 2009; Bastia et al. 2010; Blaich et al. 2010; Lentini et al. 2010; Uruski 2010; Ali et al. 2012; Becker et al. 2012; Dehler and Welford 2012; Pángaro and Ramos 2012; Houghton et al. 2014). This challenge is more obvious in African countries, where data confidentiality conditions imposed by foreign companies restrict local researchers from accessing key information to study potential areas for hydrocarbon exploration. In the case of Tanzania, the lack of extensive seismic data covering her sedimentary basins has limited hydrocarbon exploration achievements in both onshore and offshore geological settings (Bosellini 1986; Wescott and Diggens 1998; Dapeng 2001; Roberts et al. 2012; Zhixin et al. 2015; Mkuu 2018). The lack of detailed information on the petroleum system of the basins means that not much is known about their development, lithology, stratigraphy and petroleum potential (Zhixin et al. 2015; Mkuu 2018; Mvile et al. 2020). To contribute to the ongoing exploration activities in Tanzania, the available 2D seismic data have been used in this study to investigate the Cretaceous-Quaternary stratigraphy of the Tanga Basin for the presence of key petroleum system elements. The Tanga Basin is a coastal basin situated along portions of the East African coast, with both onshore and offshore components (Fig. 1). The Tanga onshore Basin covers most of the Tanga region in north-eastern Tanzania (Fig. 1). The eastern limit of the basin is marked by a more or less N-S trending shoreline of the Indian Ocean, while the NNE-SSW trending Tanga Fault marks the western limit of the basin. The Duruma Basin in Kenya and the Ruvu Basin in Tanzania form the northern and southern limits of the Tanga onshore Basin, respectively (Delvaux 2001; Wopfner 2002). The Tanga Fault separates the Neo Proterozoic basement rocks of the Mozambique Belt from the overlying sedimentary successions (Kapilima 2003; Mvile et al. 2020). The Tanga offshore Basin covers the offshore area off the coast of the Tanga region. The basin is dissected by several structural elements including Quaternary faults (Fig. 1).
Development of the Tanga Basin results from several extensional tectonic events that culminated in the breakup of the Gondwana Supercontinent (Zongying et al. 2013). These tectonic events influenced sedimentation in different basins in the continental margin settings of Somalia, Ethiopia, Kenya, Tanzania, Mozambique and Madagascar (Zongying et al. 2013). The basins that resulted from the Gondwana breakup contain Mesozoic and Cenozoic clastic reservoirs, and drift and marginal marine shales as potential cap rocks (Brownfield 2016). Despite the presence of several research works reporting the tectono-sedimentary development of the offshore basins of East Africa, the Cretaceous-Quaternary successions of the Tanga offshore Basin have been poorly studied and their petroleum potential is not well known (Zongying et al. 2013; Brownfield 2016; Mvile et al. 2020). This work was therefore aimed at improving the understanding of the Late Cretaceous-Quaternary tectono-sedimentary development, and at assessing the petroleum potential of the study area based on 2D qualitative seismic interpretation.
Fig. 1 (continued): ...Kapilima (2003). Points P and Q mark the ends of a composite seismic line (red zigzag line) that was generated to assist age assignment to the studied successions. The WGS-84 UTM Zone 37S coordinate system has been used in this map.
Tectonic development
Sedimentary development of the offshore basins of East Africa has been influenced mostly by tectonics and partly by climate, sea level fluctuations, basin topography and syn-depositional interaction of down-slope gravity flows and along-slope bottom currents (Kent et al. 1971; Wopfner 2002; Kapilima 2003; Sansom 2018). Major tectonic events include the Permo-Triassic Karoo rifting, which created several inland basins, Jurassic rifting, and the East African Rift system during Cenozoic-recent times (Kapilima 2003; Fig. 2). Structural evolution of the East African Rift system (EARs) followed old structural grains associated with fragmentation of the Gondwana Supercontinent (Franke et al. 2015). The EARs has two inland arms forming the western and eastern branches of the system. However, age-equivalent rift features have been mapped off the coast of East Africa, where up to 30 m wide, N-S trending compartmentalized rift basins exist (Foster et al. 1997; Franke et al. 2015). Wide variations in the initiation ages of the EARs are reported in different places (Franke et al. 2015; Macgregor 2015). The oldest, initial block movements were reported to occur in the Late Eocene-Oligocene, but several of the initial offshore block movements began during Miocene-Pliocene time (Franke et al. 2015; Macgregor 2015). Rapid rift subsidence and sedimentation in the offshore setting happened during the Pliocene and Pleistocene (Franke et al. 2015). Most of the extensional rift features in the offshore EARs were formed during Pleistocene to Recent times (Franke et al. 2015). While the occurrence of the highlighted tectonic elements has been presented in a regional context, this study presents the exact locations of the preserved Cretaceous-Quaternary tectonic features in the Tanga offshore Basin.
Fig. 2 Chronostratigraphic scheme of the Tanga Basin showing major extensional tectonic events, sea level fluctuations, depositional environments and unconformities characterizing the study area. Stratigraphic column and respective features have been modified after Seward (1922), Kent et al. (1971), TPDC (1992), Kapilima (2003), Mahanjane et al. (2014), Franke et al. (2015) and Magohe (2019).
Stratigraphy of the study area
There is no formal stratigraphic subdivision that has been established for the Tanga Basin (Magohe, 2019). Informally, the basin stratigraphy is hereby subdivided into three major depositional intervals with varying characteristics (Fig. 2). These are the Permian-Jurassic Karoo successions, Jurassic-Cretaceous marine deposits and Cretaceous-Quaternary successions. Part of the Cretaceous-Quaternary successions off the coast of East Africa was deposited during the Late Cretaceous transgression that reached its maximum during the Eocene (Key et al., 2008;Franke et al., 2015;Fig. 2). The stratigraphy of the Tanga Basin is characterized by several unconformities (Kent et al. 1971;TPDC 1992;Magohe 2019;Fig. 2).
Karoo successions
Deposition of the Karoo sediments began with the emplacement of poorly sorted, pebbly-conglomeratic, arkosic and feldspathic continental sandstones outcropping near the Tanzania-Kenya border (Seward 1922; Kapilima 2003). The basal conglomeratic sandstone is overlain by localized high-density turbidites which resulted from episodic gravity failures. Low-density turbidites alternating with thin, deformed, black carbonaceous shales, deposited during intervening quiet periods, follow upward. The carbonaceous black shales, with abundant plant remains, have a high organic content (Seward 1922; Kreuser et al. 1990; Magohe 2019) and are exposed on the dry beds of the River Kakindu in Gombero village, Tanga. The upper part of the Karoo interval in the Tanga Basin is dominated by low-density turbidites characterized by alternating sandstone layers and sandy shales forming tail deposits of the system (Seward 1922). Localized, very coarse, poorly sorted sandstones also form part of the upper Karoo interval (Seward 1922). Karoo age-equivalent evaporites, alternating with organic-rich claystones, have been documented in the Mandawa Basin in the southern part of the Tanzania coastal basin (Kapilima 2003).
Jurassic-Cretaceous successions
Karoo sedimentation took place under continental environments (Fig. 2), with localized marine incursions along the coast (Kent et al. 1971; Kapilima 2003). The Karoo regime was followed by shallow marine deposition during the Early Jurassic (Fig. 2). The entire coastal area was overwhelmed by marine conditions during the Middle-Late Jurassic, leading to the development of a stable carbonate platform in the study area. The Tanga limestones and their age-equivalent deep marine shales and sandstones (Fig. 2) were deposited during this time period. The Cretaceous sediments consist of sandstones, shales and limestones, with the Cretaceous sandstones dominating most of the reservoirs in the Tanzania coastal basins (Mbede 1991; Kapilima 2003).
Late Cretaceous-Quaternary successions
The Late Cretaceous-Quaternary seismic stratigraphy has been studied in order to understand the development of the basin and its petroleum potential. The interval consists of debris flow deposits and turbidites that occurred as part of the lowstand slope fan and channel fill deposits. The debris flows were deposited mostly during the block movements that triggered episodic gravity flows. These intervals are overlain by deep marine deposits interpreted to be hemipelagic shales deposited during the periods of relative sea level rise. Quaternary carbonates are also present along the shores of the Indian Ocean in Tanga beaches (Fig. 2).
Dataset and methodology
Eight (8) 2D seismic sections were interpreted using the Petrel application software to improve the understanding of the sedimentary system of the offshore Tanga Basin. Three of the seismic lines were acquired along E-W trending profiles, two each along NE-SW and N-S profile lines, while one section was established along a NW-SE trending profile (Fig. 1). The Ras Machuisi North 1 well, situated near the southernmost edge of the Tanga block (Fig. 1), which penetrated down to the Late Cretaceous interval (Fig. 3), was used to assign ages to the five major seismic reflectors mapped from the seismic sections. The reflectors were found to be the Late Cretaceous, Mid-Late Eocene, Miocene, Quaternary and Sea Bottom markers (Fig. 3). A seismic-to-well tie technique (Fig. 4), comprising sonic log calibration, generation of a synthetic seismogram and tying of time-calibrated seismic reflectors to depth-measured geologic formations, was employed to assign ages to the delineated seismic reflectors. The sonic log used to accomplish this task was calculated using the Gardner equation. The synthetic generation process encountered a slight mismatch due to the absence of well logs in the shallow part of the hole; this likely introduced some uncertainty in the correct wavelet and possibly processing errors. Specific characteristic seismic expressions allowed extensive lateral tracing of the major reflectors. Figure 3 is a composite seismic line generated to assist age control of the studied succession in the study area. This was done because it was not possible to access a wellbore that intercepts the seismic line(s) in the area of interest. Ages of other reflectors and respective sedimentary packages were assigned tentatively based on their seismic stratigraphic positions relative to the five known markers. Seismic geometries of sedimentary bodies, palaeodepositional basin shapes, stratal relationships and termination patterns (for example angular erosional truncations, onlap and downlap) were used to identify unconformities which bound different depositional regimes. The geometries of sedimentary fills, stratal termination patterns, internal seismic expressions and basin shapes were interpreted based on the works of Mitchum et al. (1977), Vail (1987), Prosser (1993), Nøttvedt et al. (1995), Ravnås and Bondevik (1997), Posamentier et al. (2000), Posamentier and Kolla (2003), Armitage et al. (2012), Braathen et al. (2012), Kyrkjebø et al. (2004) and Kiswaka and Felix (2020). The unconformities presented herein were identified based on the original definitions by Mitchum et al. (1977) and the recent works of Kyrkjebø et al. (2004) and Kiswaka and Felix (2020). The use of the term unconformity follows previous explanations of the seismic expression of the stratigraphy off the coast of East Africa (Franke et al. 2015; Kiswaka 2015; Fonnesu et al. 2020). Surface and depth contour maps were used to establish structural orientation and to identify thick sedimentary successions that may have petroleum potential (e.g. Ali et al. 2019a, b; Ali 2020); similar techniques have also been employed in this work.
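For illustration, a minimal sketch of the sonic-log estimation and synthetic-seismogram step described above (assuming Gardner's relation ρ = a·V^b with the commonly quoted coefficients a ≈ 0.31, b ≈ 0.25 for V in m/s and ρ in g/cc, a zero-phase Ricker wavelet, and hypothetical array values; the actual workflow was carried out in Petrel):

```python
import numpy as np

def velocity_from_density(rho, a=0.31, b=0.25):
    """Invert Gardner's relation rho = a * V**b to estimate velocity (m/s)
    from a density log (g/cc). Coefficients are the commonly quoted defaults."""
    return (rho / a) ** (1.0 / b)

def reflectivity(velocity, density):
    """Normal-incidence reflection coefficients from acoustic impedance Z = rho * V."""
    z = density * velocity
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker(freq, dt, length=0.128):
    """Zero-phase Ricker wavelet of dominant frequency `freq` (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    arg = (np.pi * freq * t) ** 2
    return (1.0 - 2.0 * arg) * np.exp(-arg)

# Hypothetical density log (g/cc), sampled at a uniform two-way-time step.
dt = 0.002                                               # s
rho = np.array([2.10, 2.15, 2.20, 2.35, 2.30, 2.45, 2.50])
vel = velocity_from_density(rho)                         # pseudo-sonic from Gardner
rc = reflectivity(vel, rho)                              # reflection-coefficient series
synthetic = np.convolve(rc, ricker(30.0, dt), mode="same")  # synthetic trace
```

In practice the tie is then refined by stretching/squeezing the synthetic against the measured seismic at the well location, which is the step where the missing shallow logs introduce the wavelet uncertainty noted above.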
Sedimentary fills and seismic reflectors
Based on the observed seismic reflections, their termination patterns and ages, basinal fill geometries and perceived palaeodepositional basin shapes, the following Cretaceous-Quaternary tectono-sedimentary developments were identified in the Tanga offshore Basin.
Sedimentary fills
Eight different sedimentary fills have been identified in the Tanga offshore Basin (Fig. 5). The fills, together with their respective termination patterns and the properties of their internal reflectors, were used to delineate the Late Cretaceous-Quaternary tectonic events and seismic facies characterizing the stratigraphy of the studied interval.
Fill type A (Fig. 5a) consists of relatively constant-thickness sedimentary packages that have been discontinued by a fault. The packages are characterized by weak, discontinuous internal reflectors. This fill type indicates that deposition took place during tectonic quiescence (see Kiswaka and Felix (2020)); the faulting indicates post-fill tectonic deformation. Parts of the Late Cretaceous deposits of the Tanga offshore Basin are characterized by this fill type (Fig. 6).
Fill type B (Fig. 5b) consists of gently dipping strata that onlap onto a base reflector, with a characteristic chaotic internal reflection pattern deposited during massive input of clastic sediments (Braathen et al. 2012). The chaotic internal configuration suggests that debris flow deposits dominate the fill system, and the onlapped surface marks an unconformity (Mitchum et al. 1977; Posamentier et al. 2000). The basal surface (unconformity) in this fill type is the Mid-Late Eocene reflector, while the onlapping strata are Miocene in age (Figs. 7 and 8). This suggests rapid basinward sedimentation during the Miocene and that the Mid-Late Eocene reflector is an unconformity that resulted from erosion by Miocene gravity currents.
Fill type C is characterized by a wedge-shaped sedimentary package expanding toward the bounding fault (Fig. 5c). Internally, the fill type consists of depressions filled by localized, uniformly thick strata bounded by strong positive amplitude reflectors. Part of fill type C is discontinued by near vertical faults. The wedge-shaped sedimentary package suggests a post-rift infill of remnant rift topography (Nøttvedt et al. 1995), and the depressions are interpreted to show a palaeochannel system that followed the fault system at a time when fault movement had stopped. Migration of the channels toward the right side of the cartoon is similar to the seismic expression of channel evolution reported by Armitage et al. (2012). This fill type is shown by some of the Miocene seismic intervals in the study area, and the bounding fault is thus named the Miocene fault (Fig. 9). The part of fill type C deformed by near vertical faults is cut by faults referred to as Quaternary faults, because they can be traced up to the Sea Bottom (Fig. 10).
Fill type D (Fig. 5d) is also a wedge-shaped geometry containing two sections (lower and upper parts) but with different arrangements of internal strata. The lower part is characterized by gently dipping internal strata that thicken toward the bounding fault. The upper part contains internal strata that are characterized by a more or less constant thickness. This fill type, which is shown in some of the Miocene intervals of the Tanga offshore Basin (Fig. 6), is overlain by a relatively continuous reflector that blankets the wedge-shaped sub-basin. Sedimentary packages above the blanket are more or less parallel. The presence of two sections within a wedge suggests that the depocenter was filled in two stages linked to fault activity. The lower pattern indicates that Miocene sedimentation took place during active faulting, i.e. syn-rift deposition (Nøttvedt et al. 1995; Ravnås and Bondevik 1997; Ravnås and Steel 1998; Kiswaka and Felix 2020). The constant thickness of the upper unit implies that its deposition took place after the fault movement had stopped, i.e. post-rift sedimentation (see Kiswaka and Felix 2020). The continuous reflector blanketing the wedge-shaped sub-basin marks complete infill of a remnant rift topography above which a new depositional configuration was established (Prosser 1993).
Fill type E (Fig. 5e) is composed of three major elements. Its base is marked by near-parallel reflectors (first element), followed by moderately dipping clinoforms (second element) that have both chaotic and uniform internal seismic reflection patterns. The chaotic configurations are seen on the left side of the cartoon, while a clear sedimentary pattern is seen on the right side. The clinoforms onlap onto the near-horizontal basal surface, and their top seismic boundary is both irregular and regular. This top boundary is an onlap surface for the immediately superjacent strata (lower part of the third element) that followed deposition of the clinoforms. These onlap terminations step upslope until the top boundary of the clinoforms is blanketed by near-parallel strata characterized by both weak and strong reflections. The prograding clinoforms suggest that deposition took place under a low sea level environment at a time when there was a high influx of clastic sediments into the basin, and they mark a lowstand system tract (LST) (Vail 1987). The internal chaotic configurations of the clinoforms suggest debris flow deposits, while the clear sedimentary strata imply the presence of turbidites (Figs. 11 and 12) in the distal part of the flow system. Generally, the LST was deposited farther into the basin while the overlying successions (third element) were deposited within more proximal settings (Posamentier et al. 1991). The irregular and regular parts of the top seismic boundary of the clinoforms show that this boundary is unconformable in the proximal/shallow part of the basin and conformable in the deeper basinal areas, respectively. This boundary is a "sequence boundary" (SB) and it marks a transition from LST to highstand system tract (HST) (Vail 1987; Posamentier and Vail 1988; Mitchum et al. 1977). Onlapping of the immediately superjacent strata onto the SB shows an early stage of transgression. Further blanketing of the SB implies continued transgression and a gradual decrease in coarse deposits (Posamentier et al.). Fill type E was observed in some intervals of the Quaternary successions in the study area (e.g. Figure 11).
Fill type F (Fig. 5f) is composed of lower, middle and upper subunits. The lower part downlaps onto the bottom boundary and is characterized by a distinct layering pattern which presents clear seismic reflections. The middle part is characterized by sinuous reflectors that seem to migrate upslope. Troughs marked by these sinuous reflectors are filled by deposits with a transparent seismic expression. The upper part of this fill type is composed of sedimentary strata that onlap onto the wavy successions, and the downlap termination pattern suggests basinward deposition (Vail 1987). Comparable successions are called lowstand slope fan deposits (Vail 1987), and that interpretation is adopted herein. Sedimentary successions in lowstand slope fan deposits have high sand to mud ratios (Posamentier and Vail 1988). Clear layering with strong positive reflections indicates the presence of high-density turbidites in the lower part of the unit. Sinuous features similar to those of the middle part were identified in the Oligocene interval offshore southern Tanzania (Sansom 2018). Sansom (2018) interpreted these arrangements as upslope migrating sediment waves and considered them to be part of channel levee deposits. However, the features also resemble laterally isolated clinoforms and are hereby interpreted to mark LST deposits emplaced during limited clastic input. The transparent trough fills represent muddy deposits laid down during quiet periods that followed the periodic gravity flows which emplaced the prograding clinoforms. Strata onlapping onto the sinuous interval mark the beginning of a transgressive regime that led to deposition of the HST. Fill type F is seen in some of the Quaternary deposits at a relatively higher stratigraphic level than fill type E (Fig. 12), suggesting that two transgressive-regressive cycles controlled Quaternary sedimentation in the study area.
It is worth noting that on seismic sections both fill types E and F overlie laterally extensive sedimentary deposits with chaotic to transparent internal configurations (Fig. 12). This type of seismic expression is usually associated with debris flow deposits (Posamentier et al. 2000) that predate fill types E and F.
Fill type G (Fig. 5g) contains localized migrating depressions accommodating strata bounded by strong positive seismic reflectors. At the flanks, the reflector geometries suggest wedge-shaped deposits that gently dip and thin away from the depressions; the strong reflections bounding the depression fills suggest coarse sand deposits. Stratigraphically, fill type G overlies fill type F in the Quaternary succession of the Tanga offshore Basin (Fig. 12).
Fill type H (Fig. 5h) has two subunits: the lower strata, which onlap onto a base reflector, and the clinoforms of the upper subunit, which follow. The clinoforms toplap onto the top surface, interpreted to mark the Sea Bottom. These onlap and toplap surfaces suggest the presence of two unconformities in the youngest part of the Quaternary geology studied in this work. This fill type is seen in the topmost interval of the Quaternary section (Fig. 13). On seismic, fill types E-H are discontinued by several high-angle dipping chaotic linear features. These features mark Quaternary faults that deformed the sedimentary system. Overall, fill types E-H show that the Late Cretaceous-Quaternary stratigraphy of the Tanga offshore Basin is marked by at least seven unconformities (Fig. 2).
Reflectors and general seismic expression
Late Cretaceous
The Late Cretaceous interval is marked by an undulated, strong positive reflection horizon that is discontinued by moderate-low angle dipping Miocene faults (Figs. 6 and 9). This reflector underlies sedimentary packages having relatively constant thickness with a chaotic internal seismic expression (e.g. Figure 6). Part of the superjacent successions is discontinued by near vertical Quaternary faults (Fig. 6). The undulating nature of the Late Cretaceous reflector indicates that it marks an erosional surface. Packages with uniform thickness suggest dominance of tectonic quiescence during the Late Cretaceous, when sedimentation kept pace with basin subsidence.
SBm Eq is an equivalent reflector to the Sea Bottom (a dashed line has been used because the reflector could not be followed with certainty due to limited seismic quality). Pr Eq is an equivalent reflector to the bounding reflector of a thick sedimentary succession that overlies the Miocene syn-rift basin fill (post-rift blanket shown in Fig. 6). Red arrows show erosional truncations.
Mid-Late Eocene
The Mid-Late Eocene interval is marked by an extensive, undulated, weak-medium amplitude reflector onto which younger strata onlap (Figs. 7, 8 and 9). The undulations are characterized by up to 15 km wide structural highs and up to 7 km wide depressions (Figs. 7 and 9). Some of these structural highs and lows could be reflected by the overlying sedimentary successions. The reflector is irregular in some places (e.g. Figure 7) and regular in other areas (Fig. 8). Superjacent sedimentary successions are gently dipping to the south and thin progressively toward the north in places where the Mid-Late Eocene reflector is regular (Fig. 8). The reflector is mostly irregular in intervals where the Miocene faults are present (e.g. Figure 9). The varied properties of the reflector suggest lateral variation and localization of the factors that controlled sedimentation during the Eocene. The Mid-Late Eocene reflector is interpreted to mark an unconformity since it is an onlap surface and also because of these irregularities, which are frequently observed along unconformities (see Mitchum (1977) for definitions). The undulations may have resulted from possible episodic gravity flows, probably triggered by initial block movements that predate Miocene faulting.
Fig. 11 Red arrow marks the extent of the depression/graben with the Quaternary reflector as its bottom boundary.
Miocene
The Miocene reflector is laterally extensive and is marked by an undulating, strong positive reflection that is discontinued by moderate-low angle dipping Miocene faults (Figs. 6 and 9). This reflector marks the base of wedge-shaped sedimentary fills expanding toward the bounding faults (Figs. 6 and 9). The wedges have different internal properties (see descriptions of fill types C and D). Part of the superjacent successions is discontinued by near vertical Quaternary faults (e.g. Figure 6). The wedge-shaped deposits suggest that deposition during the Miocene in the Tanga offshore Basin was largely influenced by tectonic activity, which created several depocenters that were filled differently based on the interplay between sediment supply and tectonic subsidence. Therefore, the Miocene successions document both syn-rift sedimentation and post-rift infill of remnant rift topography.
Quaternary
The Quaternary reflector is also laterally extensive but highly discontinued by near vertical faults. Most of these faults are deep rooted along E to ENE, W to WNW, N and S dipping directions (e.g. Figures 8 and 10). Faults which dip in opposing directions flank and bound a depression on both sides, thus creating accommodation for the Quaternary successions (e.g. Figure 10). The prograding clinoforms and their bounding packages (fill types E and F) occur within the Quaternary interval (Figs. 5, 11 and 12). These clinoforms, which indicate basinward deposition (Vail 1987), are overlain by sedimentary fills having characteristic weak, near-parallel reflectors with a transparent internal configuration. This seismic expression suggests the occurrence of hemipelagic shale deposits (Fonnesu et al. 2020). Channel-shaped depressions and their respective overbank packages are visible further up in the Quaternary stratigraphy. The channel systems (Fig. 12) are interpreted to contain coarse sand deposits on account of their strong reflections. The youngest part of the Quaternary geology contains upslope stepping onlaps blanketed by basinward prograding packages truncated by recent sea bottom sediments (Fig. 13). In the deeper parts of the basin, the youngest part of the Quaternary system consists of sedimentary strata bounded by strong negative reflections (Fig. 7), which suggests the presence of fine grained deposits, possibly deep water shale.
...and their respective levees (colored dashed lines) are seen further up in this seismic image. White arrow shows an onlap surface interpreted to be a sequence boundary (SB).
Sea bottom
The Sea Bottom is marked by a high-amplitude, extensive reflector offset by several Quaternary faults (Fig. 10). The Quaternary channel systems (fill type G) coincide with depressions seen on the Sea Bottom reflector. These depressions are bounded by the Quaternary faults that dissect the Sea Bottom, suggesting that the palaeochannel systems followed weak zones generated during faulting.
Discussion
Sediment supply, basin subsidence, tectonics and climate are the four major factors that influence the development of stratal patterns and facies distribution in a sedimentary basin (Vail 1987). The observed sediment distributions and the lateral and vertical variations in seismic reflection patterns show that sediment supply was an integral component in the evolution of the Tanga Basin. The occurrence of limestone and salt deposits in the stratigraphy of the Tanzania coastal basins (Kapilima 2003; Hudson and Nicholas 2014; Didas 2016) indicates that deposition was partly influenced by climate variations. However, the sediment supply and climatic influences over the examined successions would require further study, owing to the limitations of the available data in analyzing these parameters, and are therefore left out of this work. The 2D seismic lines analyzed in this study were employed to establish the interplay between tectonics and sea level fluctuations, especially as this influences the development of stratal patterns and facies distribution. Based on sedimentary fill geometries and fault properties, it has been shown that Late Cretaceous sedimentation took place during tectonic quiescence. This is in agreement with previous works reporting the tectonic development of the offshore basins of East Africa (e.g. Franke et al. 2015). The Late Cretaceous erosional surface reported here is thought to be an equivalent of the Albian unconformity (the Late Cretaceous unconformity in Fig. 2) reported elsewhere in the offshore settings of East Africa (Mahanjane et al. 2014; Franke et al. 2015; Kiswaka 2015). Block movements which occurred during the Miocene period were followed by faulting that created the Miocene depocenters. The associated fault movement, corresponding to the Miocene tectonic event (Fig. 2), created sediment sources and triggered gravity flows that eroded part of the Late Eocene interval, leading to the deposition of thick sedimentary successions in the deeper basinal areas (Fig. 14). These successions are thought to be similar to the age-equivalent mass transport complexes reported by Sansom (2018) in Mnazi Bay, further south of the study area. The Mnazi Bay Miocene complexes contain hybrid turbidite-contourite deposits formed by the interaction of downslope gravity flows and along-shore bottom currents (Sansom 2018). These deposits are characterized by clean deep water sandstone reservoirs with a high net-to-gross ratio (e.g. Fonnesu et al. 2020). Major gas discoveries off the coast of East Africa have been made in hybrid turbidite-contourite deposits, examples of which are the Coral and Mamba gas fields in Mozambique (Fonnesu et al. 2020). The presence of these deposits in the Tanga Basin implies the occurrence of favorable hydrocarbon reservoirs within the study area. Future research will focus on the assessment of the possible presence of mature source rocks that may have charged the Miocene reservoirs of the Tanga Basin. The Miocene deposits of the Tanga Basin onlap onto the Mid-Late Eocene reflector in the deeper basinal areas, thus leading to the identification of the Mid-Late Eocene unconformity in the study area. The Miocene epoch is reported to have been dominated by an extensional regime (Franke et al. 2015). The Quaternary geology of the offshore Tanga Basin has been strongly influenced by the East African Rift System (EARS). This rifting created an approximately 25 km wide and approximately 2.5 s two-way travel time (TWT) deep graben during the Quaternary (Fig. 15). The graben, which trends N-S to NNE-SSW (Fig. 15), is located between Pemba and the Indian Ocean shoreline.
This graben accommodates thick Quaternary sedimentary deposits (Fig. 16). A tentative age assignment suggests that the graben formed during the Pleistocene epoch. Franke et al. (2015) reported the Pliocene and Pleistocene epochs as the periods of rapid rift subsidence and sedimentation in the offshore settings of East Africa, but the seismic reflections indicate that the same occurred during the Pleistocene and Holocene, as reflected by two intervals with prograding clinoforms in the Quaternary stratigraphy (Fig. 11).
Two intervals with lowstand system tract (LST) deposits are seen in the Quaternary part of the studied interval (Fig. 11). Posamentier and Vail (1988) and Posamentier et al. (1991) described two conditions under which the prograding clinoform patterns characterizing the LST are deposited in a basin. These conditions are met when (1) the rate of sea level fall is greater than the total rate of basin subsidence, or the rate of tectonic uplift surpasses the rate of sea level rise, or (2) the depocenter has shifted basinward due to sea level fall. This work and previous studies (e.g. Franke et al. 2015) report rapid rift subsidence in the offshore basins of East Africa during the Pleistocene and Holocene epochs, marking the occurrence of two tectonic events during the Quaternary (Fig. 2). These observations suggest that the observed LST deposits in the Quaternary section were laid down at times when the total basin subsidence/rate of tectonic uplift was significantly greater than the rate of sea level rise. The LST deposits identified herein overlie intervals that are interpreted to represent debris flow deposits, beneath deep water deposits (Figs. 11 and 12). The debrites occur because, as sea level falls, slope instability causes gravity failures that lead to the dominance of debris flow processes at that time (Hunt and Tucker 1992; Posamentier et al. 2000). Deposition of the Quaternary deep water successions is interpreted to have occurred during intervening periods of tectonic quiescence, when the rate of sea level rise was relatively greater than the total basin subsidence. These fluctuations suggest that at least two transgressive-regressive cycles occurred during the Quaternary period, and they conform with the Holocene and Pleistocene tectonic events (Fig. 2).
Fig. 15 Quaternary surface map showing a deep graben formed during the Quaternary. The eastern limit of the graben is Pemba, the southern limit is Zanzibar, the northern limit is the Quaternary fault, and the western limit is the Indian Ocean shoreline. A N-S to NNE-SSW trend of the basin can be seen.
Previous works have reported the presence of Mesozoic and Cenozoic clastic deposits, post-rift Cretaceous regressive-transgressive marine sandstone, slope-turbidite channel sandstones, Upper Cretaceous carbonates, and Maastrichtian and Paleocene turbidite deposits (slope channel sandstones and basin floor fans) as the potential reservoirs off the coast of Tanzania (e.g. Brownfield 2016). This work reports the presence of potential Miocene reservoirs with a high net-to-gross ratio, as well as delineating two Quaternary intervals with LST deposits indicative of potential lowstand slope fans (Fig. 11). The Quaternary interval studied contains channel sand deposits and overbank turbidites (Fig. 5g). Lowstand slope fan deposits are reported to be key reservoir intervals (Vail 1987). Channel sand deposits and overbank turbidites have also been delineated as potential petroleum reservoirs (Vail 1987). The deep water deposits overlying the LST (Figs. 11 and 12) are interpreted to form potential cap rocks in the area. In this study, the Karoo carbonaceous black shales, which have a high organic content (e.g. Seward 1922), are believed to be potential source rocks that possibly charged the Miocene and Quaternary reservoir intervals in the Tanga Basin.
Conclusion
Two-dimensional (2D) seismic images have been analyzed in order to evaluate the Late Cretaceous-Quaternary geology of the Tanga offshore Basin, which is characterized by several unconformities. Different depocenter geometries, sedimentary fills, stratal termination patterns and internal seismic reflection attributes indicate that deposition within the basin was mostly influenced by sediment influx, tectonics and sea level variations. The major sediment influx was due to episodic gravity flows triggered by block movements that are linked to periods of extensional tectonics. An example of this is manifested by the Miocene sedimentary successions, which have a general basinward thickening trend. These Miocene successions have been interpreted to be characterized by hybrid turbidite-contourite deposits that contain sandstone reservoirs. An interplay between tectonics and sea level variations caused two major transgressive-regressive cycles that culminated in the emplacement of lowstand slope fan deposits, channel sand deposits and overbank turbidites, which form potential petroleum reservoir rocks in the Quaternary stratigraphy. These reservoir intervals are capped by deep water shales forming potential seal rocks in the area. Both the Miocene and the Quaternary reservoir rocks are interpreted to have been charged by the Karoo black shales. The Quaternary successions are accommodated within a more or less N-S to NNE-SSW trending graben created during the Pleistocene and Holocene tectonic events. The Quaternary graben is bounded by the near-vertical Quaternary faults, which are characterized by multiple dip directions.
Fig. 16 An isopach map showing the thickness variation of sediments between the Quaternary and Sea Bottom reflectors. Thick sedimentary successions have a N-S to NNE-SSW orientation conforming to the deep Quaternary graben in Fig. 15.
"Geology"
] |
Generator Assessment of Hydro Power Station Adequacy after Reconstruction from Classical to HIS SF6 Substation
Reliability analysis of substations and generator adequacy assessment of power plants are very important elements in the design and maintenance process. This paper presents a generator adequacy assessment of a classical "H" scheme for an open conventional substation, which is often used, and of a new HIS (High Integrated Switchgear) with SF6 gas insulation. Generator adequacy indices of both the classical and the HIS switchgear were compared, and the results showed a high level of reliability and availability of the presented HIS substation. The input data were the annual reports of the Croatian TSO (Transmission System Operator, HOPS) and the operational event statistics of the Croatian national electricity company (HEP Inc.). For the HIS substation, the input reliability data were taken from the relevant international literature, since only a few HIS substations are installed in Croatia. The generator is modelled with a three-state Markov state-space model, and Monte Carlo simulations were used for the generator assessment analysis. The adequacy indices LOLP and EDNS were obtained using DIgSILENT software.
INTRODUCTION
During the analysis of composite transmission system network reliability, it is generally not enough to view switchgear solely as network junction points and thereby omit the influence of individual switchgear components on the reliability of the system as a whole. Every switchgear configuration has a significant impact on the occurrence of power line outages and consequently on the reliability of the entire system [1], [2]. This especially concerns multiple power line failures, which depend considerably on switchgear failures [3], [4]. This paper analyzes switchgear reliability, the loss of one consumer, and generator adequacy assessment using Monte Carlo simulation [5]. The basic definitions related to modelling failures of a switchgear are as follows: a. Switchgear components: transformers, circuit breakers, disconnectors and busbars; surge arresters and current and load measuring transformers are usually omitted. The protection system is generally not taken into account; b. Switchgear availability: the probability of the presence of a corresponding electrical connection between switchgear busbars from the HV to the LV side; c. Passive failure: the unavailability is confined to the faulty component (it does not cause a tripping of the protection). It should be noted that the definitions of active and passive failures vary from author to author; f. Maintenance: a planned activity aiming at improving the condition of a component. It can be postponed if required by switchgear conditions. Generator adequacy assessment is a part of dependability evaluation (reliability, availability, maintainability, and safety) [6]. The analysis is performed for events of the first level of coincidence, i.e. the forced failure or switching-off of one component. The paper deals with the adequacy assessment of generators connected to a substation using computer software. The load in the substation is modelled with a load duration curve, which is then represented by different load states considered in the generator assessment analysis using Monte Carlo simulation [7], [8]. The paper includes the generator assessment analysis using the sequential Monte Carlo simulation module in the DIgSILENT PowerFactory software [9].
MARKOV STATE SPACE MODEL FOR BUSBARS AND TRANSFORMERS WITHIN THE SWITCHGEAR
Busbars and transformers, as renewable components, can with regard to reliability of supply be in one of two states: they are either available or unavailable. The two-state component model is the most frequently used model, since it gives the best description of the continuous operation of a component. It is presented in Figure 1.
The frequency of being in state 1 or state 2 is defined as follows: f_1 = f_2 = P_1·λ = P_2·μ = λμ/(λ + μ), where λ is the failure rate, μ is the repair rate, and P_1 = μ/(λ + μ) and P_2 = λ/(λ + μ) are the stationary probabilities of the available and unavailable states, respectively.
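As a minimal numerical sketch of this two-state model (illustrative only; the rate values below are hypothetical and are not taken from the HOPS/HEP statistics):

```python
# Minimal sketch of the two-state (available/unavailable) Markov component model.
# lam = failure rate (failures per year), mu = repair rate (repairs per year).
def two_state_model(lam: float, mu: float):
    p_up = mu / (lam + mu)      # stationary availability (state 1)
    p_down = lam / (lam + mu)   # stationary unavailability (state 2)
    freq = p_up * lam           # frequency of encountering either state (per year)
    return p_up, p_down, freq

# Hypothetical example: a transformer failing 0.02 times per year, mean repair time 100 h.
lam = 0.02            # failures per year
mu = 8760 / 100       # repairs per year (1 / mean repair time expressed in years)
p_up, p_down, freq = two_state_model(lam, mu)
print(f"A = {p_up:.6f}, U = {p_down:.6f}, f = {freq:.6f} occurrences/year")
```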
MODEL OF A SYSTEM WITH TWO COMPONENTS
The model presented in Figure 2 shows the states of two different components: each component can be either ready to operate or not ready to operate (including the coincidence of a transformer and a busbar failure). Since it was shown earlier that the corresponding expressions give the stationary availability and unavailability of a single component, the stationary probabilities of the states of the two-component system can be expressed in terms of the individual component availabilities and unavailabilities. According to the model of a system with two different components, the frequency of an individual state can be determined either as the product of the state probability and the sum of the intensities of leaving that state, or as the product of the probability of the state being left and the sum of the intensities of entering the state. The frequencies are equal regardless of whether they are observed from the perspective of exits or of entrances. The frequencies of the states follow accordingly.
MODEL OF COMPONENT WITH MAINTENANCE
Switchgear components count as renewable components, and their maintenance (planned repair) is carried out periodically. Maintenance of switchgear components increases their reliability and availability, because the tendency of the failure intensity function to grow is reduced and kept at a sufficiently low constant value. Figure 3 shows the Markov state-space model of a component with maintenance. It is presumed that planned repair of a component will not be performed when the component is not functional, and that upon finishing the repair the component is again ready for operation. Based on the earlier considerations, the stationary system solutions, i.e. the stationary probabilities of the states, are interpreted as follows: state "1" denotes component availability, state "2" denotes failure-induced component unavailability, and state "3" denotes component unavailability due to planned repair. The frequencies of the failure and repair states follow accordingly.
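A numerical sketch of how the stationary probabilities and state frequencies of such a three-state model can be obtained by solving the balance equations is given below. The transition-rate values are hypothetical placeholders; the structure (maintenance entered only from the available state, no maintenance initiated during a failure) follows the model described above.

```python
import numpy as np

# States: 0 = available, 1 = failed (forced outage), 2 = in planned maintenance.
lam, mu = 0.5, 50.0       # failure and repair rates (per year), hypothetical values
lam_m, mu_m = 2.0, 100.0  # maintenance initiation and completion rates (per year)

# Generator (transition-rate) matrix Q: off-diagonals are transition rates,
# each diagonal entry is minus the sum of the other entries in its row.
Q = np.array([
    [-(lam + lam_m), lam,  lam_m],
    [ mu,           -mu,   0.0  ],   # no maintenance is started while failed
    [ mu_m,          0.0, -mu_m ],
])

# Stationary distribution: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

freq_failure = pi[0] * lam      # frequency of entering the failure state
freq_maint = pi[0] * lam_m      # frequency of entering the maintenance state
print("P(available, failed, maintenance) =", np.round(pi, 5))
print("failure frequency =", round(freq_failure, 4), "per year")
```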
FAILURE COINCIDENCE MODEL WITH PLANNED MAINTENANCE
Preventive maintenance is conducted in order to keep the frequency of component failures at the lowest possible level. However, when maintenance coincides with failures of other components in the system, the number of system failures may increase. Thus, if possible, preventive maintenance and repairs should be conducted when they would not have negative effects on the system. It is generally considered that once a repair has been initiated, it has to be finished. Figure 4 shows a case of failure coincidence with planned maintenance. If the possibility of a transition from state "3" into state "4" is removed, the requirement that the repair cannot be initiated in state "3", i.e. during the failure state of the other component, is enforced. However, if state "4" does not represent a system failure state, that transition is allowed and the failure and repair processes are independent. The probabilities of being in the individual states, in the case where the transition from state "3" into state "4" is allowed because state "4" does not represent a system failure state, follow directly from the state equations. However, if state "4" also means system failure, the transition from state "3" into state "4" is not allowed, which means that the frequency of repair of the first component in the third and fourth system equations has a zero value. In that case, the values change accordingly. Since the frequencies of component repair and maintenance are usually significantly higher than the respective frequencies of entering those states, the products of very small values can be disregarded, which yields approximate solutions. The frequency of the system failure state (the failure and repair coincidence state, which also means failure of the system) and the mean time of failure coincidence with planned maintenance and repair follow from these probabilities. Figure 5 shows the model of the component with the derated state (reduced capacity). It is assumed that the transition of the component to the derated state from the forced outage is not allowed, because it is considered that during its stay in forced outage the repair is performed, so that the component is ready for operation at its rated capacity.
GENERATOR MODEL WITH A DERATED STATE
The mean times of the generator states follow from the corresponding state probabilities and transition rates; the mean residence time in each state is the reciprocal of the total rate of leaving that state.
DESCRIPTION OF A HYDRO POWER PLANT SUBSTATION 7.1. Basic Characteristics of the Previous Switchgear
The switchgear is placed at the level of 183.5 m, in the narrow area between the engine room facility and the building entrance at the end of the diversion channel [12]. The switchgear is in the "H" scheme with five feeders: two block transformer feeders, each equipped with a circuit breaker, disconnectors, current measuring transformers and surge arresters; two outgoing feeders equipped with outgoing disconnectors only; and a section feeder with a diagonally placed disconnector for the connection between the blocks, i.e. the transmission lines. It is dimensioned for a short-circuit power of 3500 MVA. The switchgear is connected to the Nedeljanec substation by a 110 kV double transmission line (Al/Fe 240/40 mm²). Considering that the outgoing feeders do not have circuit breakers installed, the distance protection in the hydro power plant operates by switching off the circuit breaker in the block transformer feeder and the circuit breaker in the outgoing feeder of the Nedeljanec substation, which in this way represents the distant busbars of this switchgear. The switchgear equipment is nearly 30 years old, except for the new SF6 circuit breakers produced by ABB and installed four years ago in place of the old pneumatic circuit breakers of the 3P 123 type, produced by Končar d.d. All the equipment is installed on iron stands made from steel pipes and profiles, coated with anticorrosive paint and fixed to a concrete base with anchor bolts. The connection of the devices is made with wires (Al/Fe 240/40 mm²), i.e. AlMg 70 mm pipes. The busbar system is carried out with wires attached above the devices using double tension insulator strings, each with seven glass insulators of type U 160 BS. The busbars are attached to "T" portals made from welded iron profiles. The classical 110 kV switchgear is designed for outdoor use and is shown in Figure 6.
Description of the new SF6 High Integrated Switchgear
The new metal-enclosed SF6 gas-insulated switchgear, highly integrated for outdoor installation (HIS), will be installed at the new location on a laminated concrete foundation. The switchgear will be constructed for outdoor installation, with appropriate treatment of the external surfaces, which enables installation in open spaces exposed to atmospheric conditions and solar radiation. The switchgear, constructed in the "H" scheme, will consist of three-pole enclosed busbars, two busbar measuring feeders, one section/coupling feeder, two transformer feeders and two outgoing feeders. The switchgear will be connected to the network and the transformers by overhead connectors Al 300 mm². A block transformer is connected to the current measuring transformers by means of Cu pipes 32 × 2 mm. The designed switchgear satisfies the following minimum requirements: modular construction of factory-completed mounting groups, with the possibility of subsequent extension by additional fields, sections, busbars, circuit breakers, disconnectors and other components, without unnecessary dismounting of the equipment's main parts. The switchgear should ensure the highest possible safety for workers and others in the area surrounding the switchgear, both under normal operating conditions and during failures (short circuits). In accordance with the conditions at the installation location, the switchgear will be sized for the insulation degree 123 Si 230/550 and will have the rated values and technical characteristics shown in Table 1. Figure 9 presents the 110 kV SIEMENS SF6 switchgear.
RESULTS OF THE GENERATOR ADEQUACY SIMULATION
The simulation was performed in the DIgSILENT program package with a power flow calculation [12] for a period of one year. The load on the 110 kV side of the Nedeljanec power transmission line is represented by the annual load duration curve presented in Figure 10. In addition, 110 kV busbar outages were considered within the Nedeljanec load. The input data for the HIS were taken from [12], and for the existing switchgear from the HOPS statistics of plant events [13]. The input data for the generators in the hydro power plant are given in Table 2. After the generator assessment analysis was performed using Monte Carlo simulation with 100 000 runs, the generator adequacy indices were obtained, as shown in Figure 11 (total available capacity, available dispatch capacity, grid total demand and total reserve generation). The accompanying annual statistics were: 2010: 5535, 2418, 793, 14, 8760; 2011: 5884, 2024, 850, 2, 8760; 2012: 5834, 1980, 946, 0, 8760; 2013: 6490, 1481, 651, 138, 8760; average: 6030.8. The difference in the adequacy indices of the new SF6 gas-insulated High Integrated Switchgear relative to the existing standard outdoor switchgear is obvious. This can be one of the advantages and an additional criterion for choosing the HIS and justifying the reconstruction.
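For orientation, a heavily simplified, non-sequential Monte Carlo sketch of how adequacy indices such as LOLP and EDNS are estimated is shown below. It does not reproduce the DIgSILENT sequential simulation or the actual plant and load data; the generator states, probabilities, and the load distribution are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-state generator model: full output, derated, forced outage.
capacities = np.array([40.0, 25.0, 0.0])       # MW available in each state
state_probs = np.array([0.90, 0.07, 0.03])     # stationary state probabilities
n_generators = 2

# Hypothetical discretised load duration curve (MW) and its probabilities.
load_levels = np.array([70.0, 55.0, 40.0, 30.0])
load_probs = np.array([0.10, 0.30, 0.40, 0.20])

n_samples = 100_000
states = rng.choice(len(capacities), size=(n_samples, n_generators), p=state_probs)
available = capacities[states].sum(axis=1)      # total available capacity per sample
load = rng.choice(load_levels, size=n_samples, p=load_probs)

shortfall = np.maximum(load - available, 0.0)
lolp = np.mean(shortfall > 0)                   # Loss Of Load Probability
edns = np.mean(shortfall)                       # Expected Demand Not Supplied (MW)
print(f"LOLP = {lolp:.4f}, EDNS = {edns:.3f} MW")
```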
CONCLUSION
This paper presents the generator adequacy assessment of generators connected to the 110 kV/10 kV switchgear in the hydro power plant, both for the present switchgear for outdoor use and for the new SF6 HIS (High Integrated Switchgear). The Markov state-space method was used, together with the generator assessment Monte Carlo simulation in the DIgSILENT software. The Monte Carlo generator assessment also considers the three-state generator model (in operation, derated and in outage). All outage data were taken from the statistics of plant events for the four-year period from 2009 to 2013, from HEP d.d. The information on the outages of the equipment and switchgear elements has been statistically processed. For the new switchgear in the "H" scheme, realized in the HIS SF6 technology, the outage data were taken from statistics in the relevant literature. All relevant generator adequacy assessment indices of the switchgear have been computed. The generator adequacy indices of the HIS are far better than those of the standard switchgear for outdoor use, which, aside from other advantages, can be crucial in the transition to High Integrated Switchgears.
"Engineering",
"Environmental Science"
] |
The Nutritional Content of Meal Images in Free-Living Conditions—Automatic Assessment with goFOODTM
A healthy diet can help to prevent or manage many important conditions and diseases, particularly obesity, malnutrition, and diabetes. Recent advancements in artificial intelligence and smartphone technologies have enabled applications to conduct automatic nutritional assessment from meal images, providing a convenient, efficient, and accurate method for continuous diet evaluation. We now extend the goFOODTM automatic system to perform food segmentation, recognition, and volume estimation, as well as calorie and macro-nutrient estimation, from single images captured by a smartphone. In order to assess our system’s performance, we conducted a feasibility study with 50 participants from Switzerland. We recorded their meals for one day, and then dietitians carried out a 24 h recall. We retrospectively analysed the collected images to assess the nutritional content of the meals. By comparing our results with the dietitians’ estimations, we demonstrated that the newly introduced system has comparable energy and macronutrient estimation performance to the previous method; however, it only requires a single image instead of two. The system can be applied in real-life scenarios, and it can be easily used to assess dietary intake. This system could help individuals gain a better understanding of their dietary consumption. Additionally, it could serve as a valuable resource for dietitians, and could contribute to nutritional research.
Introduction
Maintaining a healthy diet is crucial for overall well-being and is fundamental in preventing and managing various health conditions and diseases, such as cancer and diabetes.However, nutrition-related health issues are often undermined or overlooked, despite their significant impact on global health [1,2].Additionally, it was reported that approximately 2.3 billion adults were affected by malnutrition alone, encompassing both under-and over-nutrition, in 2021 [3].The consequences of these diet-related issues extend beyond individual health and have implications for global health.The burden they place on healthcare systems is substantial, straining resources while affecting the overall quality of healthcare services available, whereby nutrition-related conditions often extend hospital stays [4].Therefore, if we are to effectively address health conditions related to diet and promote overall well-being, it is crucial to monitor dietary intake accurately.
While it is generally acknowledged that it is important to monitor and assess diet [5,6], there are multiple practical limitations to its implementation.Individuals may rely on various approaches to assess their diet, such as weighing their own meals or seeking assistance from dietitians or other healthcare professionals [7].The latter approach primarily relies on the individuals' food records, food frequency questionnaires (FFQs), or 24 h recalls [7].Such methods often fail to provide quantitative or visual representations of the food intake and rely solely on the individuals' descriptions, thus leading to subjective estimations and difficulties in accurately assessing portion sizes, thereby resulting in inaccurate evaluations and ineffective dietary management [8][9][10].Moreover, the lack of nutritional literacy further exacerbates this challenge [11], which arises from individuals' limited familiarity with foods and the units of measurement for their nutritional content, and it can potentially lead to less precise estimations.While reliance on professional experts for dietary assessments may be advocated, this can be time-consuming, burdensome, and prone to errors as it depends on available records and the individuals' ability to accurately maintain detailed accounts of food consumption and to provide precise descriptions of food items.In addition, access to dietitians and healthcare services is not universal, particularly in countries or remote areas with limited healthcare infrastructure [12,13].
The need for more accessible, accurate, and less time-consuming dietary assessment is then evident [14].Recent advancements in artificial intelligence (AI) and smartphone technology have provided new opportunities in this field.With the rise of mobile health and its widespread adoption, mobile apps have emerged as a promising solution for automating the process of dietary assessment [15,16].These apps offer a comprehensive approach by incorporating food segmentation, food recognition, as well as estimations of portion size and nutritional content, and thus they may reduce the burden on individuals [17][18][19][20].Numerous studies have compared smartphone app-based methods to conventional approaches and have emphasised positive outcomes, such as increased user satisfaction and preference for mobile-based approaches [21,22].Nevertheless, it is important to note that most existing apps still heavily rely on manual or semi-automatic data entry, particularly for estimating portion sizes.Apps that have attempted to incorporate automatic portion estimation often necessitate phones equipped with depth sensors or the recording of lengthy videos or multiple frames from different angles, and thus introduce potential discrepancies in results based on different conditions [23].Notably, errors in volume estimation have been predominantly observed in dietary apps, with reported errors reaching as high as 85% [24].Nonetheless, the recent advancements in food image and video analysis have demonstrated the potential for fully automating the pipeline of dietary assessment [25,26].
In our previous publication, we demonstrated the effectiveness of goFOOD TM , an AI-based mobile application for automatic dietary assessment [27].However, a significant limitation of the previous system was the requirement for users to capture two images or a short video, and this could be perceived as a burden.It was shown that people would rather take a single image while compromising on the accuracy of the final results instead of two images, which could provide results closer to the ground truth (GT) [28].To overcome such challenges, we propose an enhanced system utilising a single image captured with a smartphone, and we compared this to an adaptation of the previous version [27].While our previous study demonstrated the adequacy of AI-based dietary assessment in a controlled hospital setting [29], we now sought to assess its applicability in free living conditions.
Materials and Methods
In this manuscript, we conducted a study where we recruited participants from Switzerland to record short videos of the foods and beverages they consumed over the course of one day using the goFOOD TM Lite application, which is described in detail in Section 2.1. The participants were asked to complete a feedback questionnaire regarding their satisfaction with the goFOOD TM Lite application. A dietitian contacted the participants the following day to collect the 24 h recall data. The collected images and 24 h recall were used to retrospectively assess the goFOOD TM system accuracy in estimating the energy and macronutrient intake of each participant. We further note here the difference between the goFOOD TM Lite application and the goFOOD TM system: the former is used in this study solely for data collection, while the latter receives as input one or two images (extracted from the video recorded with goFOOD TM Lite). Even though the goFOOD TM system needs only one or two images to perform automatic dietary assessment, we still asked the users to capture a short video in case the images turned out blurry.
goFOOD TM Lite Application
goFOOD TM Lite is an Android application used for recording meals during the initial phase of our feasibility study.This application allows participants to record meals, including food items, drinks, and packaged products, without receiving feedback on nutritional content.For food and drinks, recordings were logged in the form of a short video (∼10 s), while for packaged products, an image of the product's barcode had to be recorded.Particularly for barcode and food logging, users have to choose the label of either snack, breakfast, lunch, or dinner.To enable reliable retrospective analyses, users have to take videos both before and after consumption.In the case of barcodes, the user also has to indicate the percentage of the packaged product that was consumed.Users could also consult their previous recordings in the history log screen.Figure 1 shows the application's homepage, meal recording, and barcode recording screens.
Feasibility Study
To further optimise, extend, and improve the goFOOD TM system [27], we recruited 50 adult participants across Switzerland who were proficient in German or English and could use an Android smartphone.Individuals adhering to specific diets, studying nutrition/dietetics, or dealing with neurocognitive or terminal-stage diseases, were excluded from the study in order to maintain homogeneity and avoid potential biases.The participants were recruited through various methods, including our website, social media advertisements, and university mailing lists.Once participants had been screened and expressed their willingness to participate in the study, they were required to fill out a demographics questionnaire and sign a consent form.Participants then had to attend an online session to be instructed on how to use the goFOOD TM Lite application, which was supported by information on the study.We then provided the participants with an Android smartphone with the app installed, along with a reference card that would be used to analyse the collected data retrospectively.
For a single day, the participants had to record their food and drink items before and after consumption, while ensuring that the reference card was placed next to the meal items.
To retrospectively analyse the data collected and evaluate our system, we extracted two images from the videos at 90°and 75°, as based on the smartphone orientation.In the case of the consumption of a packaged product, the users had the option of capturing a photo of the product's barcode.
In addition, users had to fill out a feedback questionnaire regarding the goFOOD TM Lite app.We utilised a feedback questionnaire based on the validated System Usability Scale questionnaire [30], as well as incorporated additional questions tailored for the usage of goFOOD TM Lite.Specifically, the user had to grade from a scale of 1-5 (very bad, bad, neutral, good, very good, respectively) their satisfaction regarding the following elements: (1) recording/logging, (2) time needed, (3) usability/user interface, (4) functionality/performance, and (5) user satisfaction/usage.The participants were also asked if they would be willing to use the app in their everyday life and if they would recommend it to friends.Finally, they were asked about what features they liked and disliked, and if they would prefer for the app to offer food content estimation based on the images.Both the demographics and the feedback questionnaires were created and handed out to the participants via the REDCap web application [31].
The following day, a dietitian contacted the participants and asked them about all of the foods and beverages they had consumed over the previous 24 h with the Multiple-Pass Method [32].The dietitians used the nut.s nutritional software [33] and the United States Department of Agriculture [34] and Swiss nutrient database [35] for analysing the nutrients.
Database
The images collected from the feasibility study comprise the Swiss Real Life 2022 dataset (SwissReLi2022).In particular, SwissReLi2022 contains 444 images with 587 food items.We annotated the images using a semi-automatic tool and only used those that were taken before consumption, keeping one for each meal.Labels were assigned based on the food categories supported by our system.For ease of understanding and categorisation, we split the different foods into 18 coarse, 34 middle, and 301 fine categories (e.g., meat/red meat/meatball).This categorisation was based on our previous dataset [27], and from the food categories collected from the images during the feasibility study.To evaluate the full system, as described in Section 2.4.1, we filtered out the erroneous images, e.g., those from the excluded participants or duplicates, and ended up with 548 recordings, including 55 barcode images.Of these, 47, 61, 64, 81, and 295 corresponded to breakfast, lunch, dinner, snack, and drinks (of which 195 were water), respectively.
As previously mentioned, participants were instructed to capture images of their meals both before and after their meals.While this was true for most cases, a few participants did not record their post-consumption.Moreover, when these post-consumption recordings were available, we observed empty plates or dishes, obviating the necessity to calculate calories for these images.Consequently, only the pre-consumption recordings were taken into account.
For the segmentation task, we compiled a database consisting of previously collected images [29,36,37] and v2.1 of the publicly available MyFoodRepo dataset [38].The segmentation dataset contained approximately 57,000 images with 107,000 segments.We used the SwissReLi2022 dataset to evaluate the segmentation module and 120 pictures from the MADiMa dataset, as captured from different angles [37].
For the classification task, we used web-crawled data and images from MyFoodRepo [38].We ended up with approximately 200,000 images, which were divided into the categories supported by our system as previously mentioned.To test the recognition module, we used the 587 annotated items from the SwissReLi2022 and 234 items from the MADiMa dataset [37].
System Pipeline
The goFOOD TM system is designed to offer a complete automated pipeline for dietary assessment.It consists of five modules, as illustrated in Figure 2: (1) food segmentation, (2) food recognition, (3) food volume estimation, (4) barcode scanning, and (5) nutrient estimation.In the previous implementation of the goFOOD TM system, two images were required as input, while the new adapted method needs only a single image as its input.The images, which are extracted from the video, are sequentially processed by a segmentation and recognition network.Afterwards, the module for estimating the food volume generates a 3-D model and makes an accurate prediction of the volume of each item.Finally, the module for nutrient estimation provides information on the kilocalories (kcal) and macronutrient content of the entire meal (as well as for barcode food products), based on existing databases.In Figure 2, we show both the current and the previous goFOOD TM system pipeline with one or two images, respectively.
Food Segmentation
Given the substantial variations in food datasets in terms of classes, we chose to train the segmentation network exclusively on food-versus-background data.This approach allowed us to merge diverse datasets for food-versus-background segmentation.Subsequently, the recognition task was handled by a separate model, eliminating the need to find a dataset that includes both segmentation masks and specific classes.As detailed in Section 3.2.1,our decision proved to be more effective compared to the alternative of combining segmentation and recognition into a single model.By focusing on binary segmentation first and then employing a separate recognition module, we achieved superior results in our overall system.
The first module of the pipeline comprises a Convolutional Neural Network (CNN) to segment the food items within the picture.We used a state-of-the-art segmentation network, namely Mask R-CNN [39], which was pre-trained on the COCO dataset [40].As the backbone, we adopted the popular ResNet-50 [41], which was pre-trained on ImageNet [42].We set the batch size to 8, and used the Adam optimiser with a learning rate of 10 −4 and a weight decay of 5 × 10 −4 for six epochs.
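A minimal sketch of this configuration using the torchvision API is shown below. It reflects the stated choices (a COCO-pre-trained Mask R-CNN with a ResNet-50 backbone, reconfigured here for binary food-versus-background segmentation, and Adam with a learning rate of 1e-4 and weight decay of 5e-4), but the data loading is omitted and the authors' actual training code may differ.

```python
import torch
import torchvision

# COCO-pre-trained Mask R-CNN with a ResNet-50 FPN backbone, reconfigured for
# binary (food vs. background) instance segmentation: 2 classes incl. background.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = \
    torchvision.models.detection.faster_rcnn.FastRCNNPredictor(in_features, num_classes=2)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = \
    torchvision.models.detection.mask_rcnn.MaskRCNNPredictor(in_features_mask, 256, num_classes=2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)

def train_one_epoch(model, loader, optimizer, device="cuda"):
    """One epoch over a detection-style loader yielding (images, targets) pairs."""
    model.train()
    for images, targets in loader:          # targets contain boxes, labels, masks
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)     # dict of classification/box/mask losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```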
Food Recognition
Each segmented item from the previous task was fed into a recognition network that predicted categories at three different levels of granularity: coarse, middle, and fine.We selected RegNetY-16GF as the classification network [43], i.e., a CNN trained through neural architecture search to find the optimal parameters (e.g., block width and network depth).
To address the label noise present in the dataset for training and to ensure accurate predictions, we adopted a noise-robust training approach.We utilised DivideMix [44], which has proven to be effective in handling label noise-even in the context of food images [22].In particular, DivideMix trains two networks for a few epochs as a warmup to prevent overfitting to the label noise.Then, the two networks separate the training set into a clean and a noisy subset.The labels of the clean subset are further refined based on their probability of being clean and the network's predicted labels.On the contrary, the noisy subset's labels are substituted by the average of both networks' predictions.The MixUp [45] technique then interpolates the samples in the training set so that the model learns to make linear predictions on the linear interpolations of the images.
Input images were resized to 256 × 256, randomly horizontally flipped, and then normalised.We followed the DivideMix [44] training pipeline and set the number of warm-up epochs to three and subsequent training for another five; this was achieved using the Adam optimiser with a batch size of 32 and a learning rate of 10 −4 .
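A rough sketch of the stated preprocessing, together with the MixUp interpolation step used inside DivideMix, is given below; the ImageNet normalisation constants are an assumption, and the full DivideMix co-training logic is not reproduced here.

```python
import torch
from torchvision import transforms

# Preprocessing as described: resize to 256x256, random horizontal flip, normalise.
train_tf = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def mixup(images: torch.Tensor, labels: torch.Tensor, alpha: float = 4.0):
    """MixUp: linearly interpolate image pairs and their one-hot label vectors."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]
    return mixed_images, mixed_labels
```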
Food Volume Estimation
The module for the estimation of the food volume necessitated translating 2-D food items into a 3-D space.To facilitate this process, it was essential to position a reference card with known dimensions within the field of view, as shown in Figure 2. In our study, we utilised two approaches to generate depth maps, which served as the initial step in converting food items into 3-D.The first approach involved a neural-based approach using a single image.In contrast, the second approach utilised a geometry-based approach using image pairs, as outlined in [27].Next, the known distance between the reference card and the camera enabled the translation of the depth map to the actual world distances.We implemented a filtering process to remove noise from the depth map, and thus ensured that the information on depth was accurate and reliable.The segmentation mask, obtained from the segmentation module, was applied to the depth map; this allowed us to associate each depth value with the corresponding food item, effectively separating the different objects in the scene.Based on the segmented depth map, each food item was re-projected to a 3-D point cloud, cleaned through outlier removal whenever necessary, and scaled to real-world coordinates based on the reference card.We computed the volume of each food item within this transformed space, thus enabling accurate estimation of the food volume.In cases where the reference card was absent or undetected, we used the volume of a standard serving of the food item, as both volume estimation methods rely primarily on the presence of the reference card.Moreover, in order to remove noise within the 3-D point clouds, we assumed that no food item could have a volume of over 2.5 cups.
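The final volume computation can be sketched as follows, assuming the depth map has already been scaled to metric units and the depth of the card/table plane is known; the camera intrinsics and the synthetic example are placeholders rather than the authors' implementation.

```python
import numpy as np

def food_volume_ml(depth_m, mask, fx, fy, table_depth_m):
    """Approximate the volume (millilitres) of one segmented food item.

    depth_m       : HxW depth map in metres (camera looking straight down)
    mask          : HxW boolean segmentation mask of the food item
    fx, fy        : camera focal lengths in pixels (placeholder intrinsics)
    table_depth_m : depth of the table/card plane in metres
    """
    z = depth_m[mask]
    height = np.clip(table_depth_m - z, 0.0, None)    # food height above the plane
    pixel_area = (z / fx) * (z / fy)                  # footprint of a pixel at depth z (m^2)
    volume_m3 = np.sum(pixel_area * height)
    return volume_m3 * 1e6                            # m^3 -> millilitres

# Hypothetical usage: a synthetic flat-topped item whose surface sits 2 cm above the table.
depth = np.full((480, 640), 0.40)
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 280:360] = True
depth[mask] = 0.38
print(round(food_volume_ml(depth, mask, fx=600.0, fy=600.0, table_depth_m=0.40), 1), "ml")
```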
Neural-Based Approach
The main contribution of the enhanced version of goFOOD TM is the usage of single images, for depth estimation, taken at 90°.For this purpose, we employed a model to estimate depth, namely Zoe [46], which combines multiple depth modules within an encoder-decoder architecture.Zoe was a suitable candidate for our depth estimation module since it was trained on diverse indoor datasets and demonstrated robust generalisation performance.To further improve the prediction accuracy, we also introduced certain rules to prevent significant misestimations originating from the model itself or due to poor quality images.In particular, after predicting the depth map and converting it into real-world distance values, we made two reasonable assumptions to remove possible outliers.Firstly, we specified that the bottom of the food item plane should not be excessively distant from the card, with a fixed maximum of 5 mm.Secondly, we set a constraint for the top part of the food item, based on our data collected, within the 3D model, and thus ensured that it did not extend more than 5 cm above the card plane.
Geometry-Based Approach
The earlier goFOOD TM version used a stereo matching pipeline, which entailed capturing two images of the meal at two angles (90°and 75°) [27].In this work, we slightly refined some components to overcome a key limitation of the previous food volume estimation module-the need for food to be placed on a plate.In practice, we first detected key points from the reference card and the generated segmentation masks for each image.The stereo image pair was rectified, thus aligning points in one image with the corresponding row in the second image while correcting distortions.Following the rectification process, a disparity map was created by horizontally comparing the differences between the pixels from the first and second images in order to infer depth information about the scene.Subsequently, the disparity was converted into a depth map.
Nutrient Estimation
Once we had obtained the volume for each food item, we automatically fetched its nutritional content from nutrient content databases.Despite dietitians utilising the USDA and Swiss food databases, there was a lack of clarity regarding the specific database employed for each food item.As our major evaluation criterion relies on volume rather than on weight, which is not provided by the USDA database directly, we utilised Nutritionix [47] and AquaCalc [48], which both rely on the USDA database.While Nutritionix contains most food item information and volume-to-weight ratios, in the event there was a missing food item, we used AquaCalc.
The final output was the nutritional content of the full meal.The selection of the database was primarily aimed at improving data standardisation.Additionally, the nutritional information of packaged products was extracted from the Open Food Facts database [49].
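Illustratively, once a food category and volume are available, the nutrient lookup reduces to a volume-to-weight conversion followed by a per-100 g scaling; the density and nutrient values below are rough placeholders and are not taken from Nutritionix or AquaCalc.

```python
# Placeholder volume-to-weight factors (g/ml) and per-100 g nutrient values.
FOOD_DB = {
    "boiled rice": {"density": 0.85, "kcal": 130, "carb": 28.0, "protein": 2.7, "fat": 0.3},
    "meatball":    {"density": 1.00, "kcal": 230, "carb": 6.0,  "protein": 16.0, "fat": 16.0},
}

def nutrients_for(item: str, volume_ml: float) -> dict:
    """Convert an estimated volume to weight, then scale the per-100 g values."""
    entry = FOOD_DB[item]
    weight_g = volume_ml * entry["density"]
    factor = weight_g / 100.0
    return {k: round(entry[k] * factor, 1) for k in ("kcal", "carb", "protein", "fat")}

print(nutrients_for("boiled rice", volume_ml=180.0))
```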
Feasibility Study
Of the 50 participants, 35 were female, and most of them were of a white ethnicity (90%).The volunteers were generally young, with an average age of 29.2 years and a standard deviation of 11.4 years; moreover, the majority were students (27/50).Table 1 shows the demographics of the 50 recruited participants.One participant was excluded from the analysis since their 24 h recall was not completed.Of the 50 participants enrolled in the study, 8 did not complete the final feedback questionnaire.Based on the responses, 29/42 of the participants (69%) agreed that the logging and recording were good or very good, while 35 (83.3%) found the app easy to use and self-explanatory.Regarding the time needed to complete a recording, 23/42 (54.8%) agreed it was good or very good, while 10 (23.8%) expressed no opinion.However, when the participants were asked, in the "user satisfaction/usage" question, if they would use the app, 19 (45.2%) remained neutral.On the other hand, 22/42 (52.4%) participants said that they would be willing to use the goFOOD TM Lite application for tracking their food intake in their everyday life, while 30 (71.4%) answered that they would recommend it to their friends.Lastly, almost all the participants (41/42) answered positively when asked if they would like the application to offer automatic food content estimation, and 28 said they would be willing to participate in the upcoming validation study.The participants' answers to the feedback questionnaires are visualised in Figure 3.
Food Segmentation and Recognition Evaluation
Initially, we evaluated the Mask R-CNN instance segmentation network trained on the binary segmentation task (food-vs.-background).We additionally trained a Mask R-CNN that predicted not only the food items' positions, but also their coarse class (meat products, cereals/potatoes, liquids, dessert, and fruits/vegetables/nuts).The binary segmentation network achieved an intersection over union (IoU) of 74%, thus outperforming the other model (which scored 67% IoU).We argue that the reason for this is that the binary model is less prone to overfitting, since it only has to discriminate between two macro-classes.Figure 4 shows the color image (a), the GT mask (b), and the mask predicted by the Mask R-CNN (c) for one meal.
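For reference, the IoU metric reported here can be computed from binary masks as in the following generic sketch (not the authors' evaluation code):

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union of two boolean segmentation masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(intersection) / float(union) if union > 0 else 1.0
```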
We also evaluated the recognition CNN using the GT annotated food items as input.As metrics, we used the top-1 and top-3 accuracy for the coarse and middle categories, and the top-1 and top-5 accuracy for the fine food categories.We compared the ResNet-50 model and the RegNetY-16GF with or without the DivideMix training procedure to contrast the label noise.The results are shown in Table 2. RegNetY-16GF with the DivideMix procedure clearly outperformed the other approaches for the fine-grained recognition task, which is the most critical since the nutritional values are based on the fine categories.
System Results
Within our study, we designed a comprehensive pipeline for dietary assessment that encompassed multiple stages and algorithms, as previously mentioned, to estimate dietary intake.To establish a reliable benchmark, we relied on the expertise of professional dietitians and their 24 h recall data, which we considered as the baseline for comparison in the absence of GT weights and volume.However, it is important to note that misestimations from dietitians can be quite high.For instance, as shown in a previous study, the mean absolute error for the carbohydrate (CHO) estimations of dietitians reached 15 g [50].The dietitians performed a dietary intake assessment for each participant for the whole day, but not for every meal.Therefore, we evaluated the performance of our system for each participant individually.
Quantifying the accuracy and variability of the system's estimations was vital in understanding the overall performance.We compared the results obtained when using the two approaches (as shown in Table 3): using a single image for our new method, and using two images for the previous method.We further assume that if the GT food category was among the top five predicted categories, the user would select it and the GT class would be chosen.We assessed the percentage error in terms of four key dietary components: kcal, CHO, protein, and fat.We considered the evaluation of kcal as the most significant, given its applicability for everyday use.Since our analysis considered the entire day's intake rather than individual meals, there was no necessity to separate between results for packaged and non-packaged products within this study.For the newly proposed method that uses a single image, the mean absolute percentage error in kcal estimation per participant was 27.41%.Furthermore, we observed a percentage error of 31.27% for the CHO, 39.17% for the protein, and 43.24% for the fat estimation compared to the dietitians' estimations, which used the 24 h recall method.The previous method that used two images, gave a percentage of error of 31.32% for the kcal, 37.84% for the CHO, 42.41% for the protein, and 51.75% for the fat intake estimations.
Even though the new method achieves a slightly lower empirical mean absolute percentage error, the results did not exhibit a statistically significant difference under an independent t-test.Nevertheless, the new method only requires a single image as input, whereas the previous method uses two images from specific angles.Our lowest misestimation percentages with the new method for kcal, CHO, protein, and fat were 2.16%, 0.34%, 3.46%, and 0.02%, respectively.Figure 5 shows the Bland-Altman plot for the 49 participants regarding their energy intake (kcal) for one day.The estimations of the neural-and the geometry-based approach appear in blue and green, respectively.The plot shows that although both methods tended to overestimate the kcal compared to the dietitians, the geometry-based approach had a higher variance.
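The reported error metric and the Bland-Altman limits of agreement can be computed as in the sketch below; the per-participant arrays are hypothetical placeholders, not the study data.

```python
import numpy as np

def mape(estimates: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute percentage error relative to the dietitians' 24 h recall."""
    return float(np.mean(np.abs(estimates - reference) / reference) * 100.0)

def bland_altman(estimates: np.ndarray, reference: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = estimates - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical daily kcal estimates for three participants.
system_kcal = np.array([2100.0, 1650.0, 2500.0])
recall_kcal = np.array([1900.0, 1700.0, 2200.0])
print("MAPE (%):", round(mape(system_kcal, recall_kcal), 2))
print("Bland-Altman (bias, lower, upper):", bland_altman(system_kcal, recall_kcal))
```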
Discussion
We conducted a feasibility study in Switzerland to assess the user satisfaction with the goFOOD TM Lite application, as well as collected data for a retrospective analysis of individual dietary habits.The participants found the app easy to use, and most of them would recommend it to their friends.It is worth noting that 41 of 42 participants said they would like to receive nutrient information for their meals.However, a significant proportion of participants were female and students of white ethnicity.We note that to ensure more robust and comprehensive conclusions, it would be essential to incorporate a broader and more diverse population.
We compared the performance of two methods in order to evaluate their suitability: a neural-based depth estimation method that uses a single image as input, and a geometrybased approach that uses two images.The newly introduced method utilised a single image as input and achieved similar results to our previous approach regarding energy and macronutrient intake.However, the new method can potentially enhance user satisfaction and adherence to the system by requiring only a single image capture.User preference plays a pivotal role in the adoption and success of any system.Our previous study demonstrated that users prioritise systems that minimise effort, even if this forfeits accuracy [28].
During this study, we encountered several challenges that affected the overall performance and reliability of the system.The first challenge was related to meal recording.The acquisition of two images from a video, based on the smartphone's position, can result in occasional blurry pictures due to user movements and variations in lighting conditions.
Using different phones with varying camera resolutions led to additional variability in image quality.Moreover, discrepancies were observed in individuals' compliance in recording their meals, with both meal omissions, multiple entries, or neglecting the use of the reference card.The person with the highest error for the kcal intake (64.53%) recorded only four meals, of which one depicted a glass of water, and in two, the reference card was missing.These issues could potentially introduce inconsistencies and inaccuracies in our results.
Another challenge stems from the fact that the system does not account for user-specific information. For both approaches, the system exhibited the highest misestimation in the case of fat. This can be attributed mainly to the amount and type of oil and butter present in the food, which varies for each participant and cannot be visually extracted from images. For example, the participant with the highest error in kcal estimation used 50 g of olive oil, which leads to an additional 400 kcal and 45 g of fat. It is important to note that not only do ingredients affect the nutritional content of foods, but different cooking methods also play a role in this matter [51]. It is crucial to acknowledge that even though our system supports mixed and layered food items (e.g., rice soup and lasagna), the way meals are prepared and presented might affect their complexity and, hence, the precision of our system. In the future, approaches such as incorporating recipes or manual entries could be introduced into the pipeline to tackle these issues.
Furthermore, the absence of GT exacerbates the complexity of this study.Without reliable reference measurements for food volumes, it is challenging to provide an objective assessment of the accuracy and validity of our estimation methods.Although 24 h recalls are commonly used, they can introduce major errors in energy and macronutrient information, as they rely on a person's memory and subjective estimation of portion sizes [52].Additionally, in the volume estimation module, the formulation of the depth map incorporates inherent assumptions as part of the established methodology.Estimating accurate depth maps and evaluating them is challenging without a depth sensor.
To address the challenges encountered in this study, as well as to further improve and optimise the system to meet the end user expectations, a second phase will take place.In this next phase, the involved participants will be asked to use the goFOOD TM system in real-time for one week, as well as record their food intake with an FFQ and participate in two unannounced 24 h recalls.Users will directly capture pictures from the goFOOD TM app, thus eliminating the use of video recording methods.The users will see the results of the segmentation, recognition, and volume estimation modules, and will be able to change them if needed.The users will also have the option to manually input the amount of hidden ingredients, like oil or butter, used for a meal.Through app nudges, we aim to encourage users not to neglect their meals and actively utilise the reference card.In future investigations, integrating a food weighing method or submitting captured images to a qualified dietitian for meticulous assessment emerges as the most promising alternative to dietitians performing 24 h recall.Finally, we must acknowledge the need to refine our system to ensure its effectiveness across various food types and under multiple conditions, such as varying distances and viewing angles.The depth model we used was not specifically trained on closely viewed items; therefore, its effectiveness in accurately estimating volumes for such items may require additional optimisation.These adjustments will contribute to the enhanced accuracy and dependability of the collected data, as well as ultimately improving the performance of our system.
Conclusions
This manuscript presents a fully automatic system that estimates energy and macronutrients from a single meal image. The newly introduced neural-based method demonstrates performance comparable to the previous geometry-based approach while being more user-friendly, as it requires only a single image. This indicates that our system has the potential to facilitate the monitoring of individuals' dietary habits while reducing the costs associated with dietary assessment. However, further improvements are needed to ensure that the system is effective for closely viewed items, as well as to address the challenges related to depth map formulation, image acquisition, and user compliance. For future work, we plan to conduct the second phase of this study, which will involve participants using the goFOOD™ system directly on their smartphones for a longer period. Thus, we will obtain real-life data, which will be used to validate and refine the system's accuracy and reliability.
Figure 1. Screenshots of the goFOOD™ Lite application.
Figure 2. The goFOOD™ system pipeline. The previous version of our system required two images from different angles as input, while the new method requires only a single image.
Figure 4. A sample image from the SwissReLi2022 dataset. (a) The colour image. (b) The GT segmentation mask. (c) The mask predicted by the model. (d) The GT and predicted categories. (e) The depth predicted by the model.
Figure 5. Bland-Altman plot of the neural-based approach (blue) and the geometry-based approach (green) in terms of energy (kcal) versus the dietitians' estimations, which used the 24 h recall method. The dashed blue and green lines indicate the 95% confidence intervals, while the continuous line is the mean difference. The dashed red line indicates zero difference between the dietitian and the goFOOD™ approaches.
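For readers reproducing such an agreement analysis, a minimal Bland-Altman construction is sketched below; the energy values are placeholders rather than study data, and the dashed limits are the conventional mean difference ± 1.96 SD of the differences.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(reference, estimate, label, color):
    """Plot estimate-vs-reference agreement in Bland-Altman form."""
    reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
    mean = (reference + estimate) / 2.0
    diff = estimate - reference
    md, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff, s=15, color=color, label=label)
    plt.axhline(md, color=color)                      # mean difference
    for limit in (md - 1.96 * sd, md + 1.96 * sd):    # 95% limits of agreement
        plt.axhline(limit, color=color, linestyle="--")

# Placeholder energy estimates (kcal); real data would come from the study.
dietitian = [520, 610, 480, 700, 830]
neural    = [500, 660, 450, 760, 800]
bland_altman(dietitian, neural, "neural-based", "tab:blue")
plt.axhline(0, color="red", linestyle="--")           # zero-difference line
plt.xlabel("Mean of methods (kcal)"); plt.ylabel("Difference (kcal)")
plt.legend(); plt.show()
```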
Table 2. Comparison between the % accuracy of the different recognition models.
Table 3. Mean absolute percentage error (%) between the two systems and the dietitians' estimation for the energy and nutrient intake. The standard deviation is shown in parentheses. | 7,561.4 | 2023-09-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Medicine",
"Computer Science"
] |
The pressure-enhanced superconducting phase of Sr$_x$-Bi$_2$Se$_3$ probed by hard point contact spectroscopy
The superconducting systems emerging from topological insulators upon metal ion intercalation or application of high pressure are ideal for investigating possible topological superconductivity. In this context, Sr-intercalated Bi$_2$Se$_3$ is especially interesting because it displays pressure-induced re-entrant superconductivity, where the high-pressure phase shows an almost two times higher $T_c$ than the ambient superconducting phase ($T_c \sim$ 2.9 K). Interestingly, unlike the ambient phase, the pressure-induced superconducting phase shows strong indications of unconventional superconductivity. However, since the pressure-induced phase remains inaccessible to spectroscopic techniques, a detailed study of this phase has remained an unattained goal. Here we show that the high-pressure phase can be realized under a mesoscopic point contact, where transport spectroscopy can be used to probe the spectroscopic properties of the pressure-induced phase. We find that point contact junctions on the high-pressure phase show an unusual response to magnetic field, supporting the possibility of unconventional superconductivity.
In superconductors, due to particle-hole symmetry, the positive and negative energy eigenstates of the Bogoliubov-de Gennes Hamiltonian come in pairs [1,2]. In the superconducting ground state, the negative-energy eigenstates are fully occupied. Therefore, as in the case of insulators, depending on the dimension and the symmetries of the system, various topological numbers (e.g., the Chern number) for the occupied states can be defined [3][4][5][6]. If non-zero topological numbers exist for a superconductor, it can be classified as a "topological" superconductor [7][8][9][10]. By this definition, when certain unconventional superconductors display nodes in the order parameter symmetry, the nodes themselves might have non-zero topological numbers, thereby making the superconductors "weakly" topological.
On the other hand, in strong topological superconductors, the non-zero topological numbers can exist along with a fully gapped bulk superconducting gap. Hence, characterizing the topological nature of strong topological superconductors is a challenging task. However, due to topological restrictions, the surface of such superconductors hosts gapless modes which can be detected by surface-sensitive spectroscopic techniques [11][12][13][14][15]. Potentially, point-contact Andreev reflection can be a powerful technique to probe transport through such topological surface states in a topological superconductor [13,[16][17][18]. One popular route to possibly achieving topological superconductivity is doping charge carriers through metal intercalation in topological insulators like Bi$_2$Se$_3$ [19][20][21][22]. ARPES experiments have confirmed that at the doping level required for superconductivity (∼ 2 × 10$^{20}$ cm$^{-3}$) in charge-doped Bi$_2$Se$_3$ systems, there is still significant separation in momentum space between the topological surface states and the bulk states [22]. Hence, it is expected that when the bulk superconducting phase leads to proximity-induced superconductivity on the surface, due to the inherent topological nature of the surface states, the proximity-induced phase should become a 2D topological superconductor [10,14,15,23]. Another potentially interesting way of inducing superconductivity in a topological insulator is through applying pressure [24][25][26][27]. A pressure-induced superconducting phase was indeed found in undoped Bi$_2$Se$_3$ [27]. A more interesting pressure-induced superconducting phase was seen to appear in Sr-intercalated Bi$_2$Se$_3$, which shows ambient superconductivity below $T_c$ = 2.9 K [28]. In this case, superconductivity was first seen to disappear upon applying pressure and to re-emerge at higher pressure [26]. The high-pressure re-entrant superconducting phase was found to be interesting owing to a significantly higher $T_c$ compared to the $T_c$ of the ambient superconducting phase of Sr-Bi$_2$Se$_3$. More importantly, the pressure-induced re-emerged phase showed strong signatures of unconventional superconductivity, indicating a high possibility of the pressure-induced superconducting phase of Sr-Bi$_2$Se$_3$ being topological in nature. However, because it is technologically extremely challenging to perform spectroscopic investigation of the re-entrant phase, the exact nature of superconductivity in this phase has remained poorly understood. In this paper, we discuss a unique way of realizing such a superconducting phase by applying uniaxial pressure under a point contact, where the superconducting phase can be investigated through mesoscopic transport spectroscopy.
We have performed experiments on high-quality single crystals of Sr$_{0.1}$Bi$_2$Se$_3$. The bulk magnetization (Figure 1(a)) and transport measurements (Figure 1(b)) revealed a critical temperature $T_c \sim$ 2.9 K below which the system superconducts. The high quality of the crystals was further confirmed by atomic-resolution scanning tunnelling microscopy and spectroscopy. As shown in Figure 1(c), the atomic lattice is seen with very low defect density. Tunnelling spectroscopy revealed a fully formed superconducting gap that evolves systematically with increasing temperature; near 2 K the spectra become too broad for the gap to be clearly seen. The gap also evolves systematically with magnetic field before being almost completely suppressed at 15 kG (Figure 1). After that, we continued pressing the tip harder onto the crystal surface. During the process, we found a signature of superconductivity at a temperature higher than the $T_c$ of pristine Sr-Bi$_2$Se$_3$. As seen in Figure 3(a), upon applying large pressure under the point contact, superconductivity at a higher temperature is achieved, with an onset near 8 K, which corresponds to the known pressure-induced re-entrant superconducting phase of Sr-Bi$_2$Se$_3$ [26]. As seen in the point contact R − T data, the transition is broad, and at a relatively lower temperature (around 6 K) another transition-like feature is seen.
These features could be attributed to multiple electrical contacts with different contact geometries being formed. Each contact may experience a different pressure due to the difference in effective contact area. Comparing the measured $T_c$ with the published literature [26], we estimate the approximate pressure experienced by the superconducting region under the point contact to be 9 GPa.
In order to gain further understanding of the pressure-enhanced superconducting phase, we carried out detailed temperature- and magnetic-field-dependent experiments. As seen in Figure 3(f), for all the micro-constrictions the critical current decreases with field at a slow rate. For the constriction with the highest critical current (red dots in Figure 3(f)), the critical current shows a slight increase at lower fields and then starts decreasing slowly. At a field of 6 kG, the critical current has dropped to only half of its zero-field value.
The overall superconductivity-related spectral features completely disappear at 7.5 kG.
In order to find out whether the unusual magnetic field dependence is also seen in the transport experiments, we have analyzed the R vs. T data of the thermal-limit point contact obtained at different magnetic fields. The field-dependent R − T curves are shown in Figure 4(a). We have tracked the shift of the higher-temperature transition with magnetic field to construct the H − T phase diagram. As shown in Figure 4, the resulting phase diagram is consistent with that reported for the pressure-induced phase [26].
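As a rough illustration of how such an H-T boundary can be assembled from field-dependent R-T sweeps, the sketch below extracts a transition temperature from each curve with a simple resistance-threshold criterion (here 50% of the normal-state resistance); the criterion, field values and synthetic curves are illustrative assumptions, not the data or analysis used in this work.

```python
import numpy as np

def transition_temperature(T, R, fraction=0.5):
    """Temperature at which R first drops below `fraction` of the normal-state value,
    scanning from high to low temperature (linear interpolation between points)."""
    T, R = np.asarray(T, float), np.asarray(R, float)
    order = np.argsort(T)[::-1]            # reorder from high T to low T
    T, R = T[order], R[order]
    threshold = fraction * R[0]            # normal-state resistance taken at the highest T
    below = np.where(R < threshold)[0]
    if below.size == 0:
        return np.nan                      # no transition within the measured window
    i = below[0]
    # interpolate between the first point below threshold and its predecessor
    return np.interp(threshold, [R[i], R[i - 1]], [T[i], T[i - 1]])

# Placeholder sweeps: Tc suppressed with field (synthetic data, illustration only).
fields_kG = [0, 2, 4, 6]
T_grid = np.linspace(2, 10, 200)
curves = {H: (T_grid, 0.5 * (1 + np.tanh((T_grid - (8 - 0.5 * H)) / 0.2))) for H in fields_kG}
phase_boundary = [(H, transition_temperature(*curves[H])) for H in fields_kG]
print(phase_boundary)   # list of (H, Tc) points for the H-T diagram
```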
It should be noted that despite multiple attempts, a ballistic point contact could not be realized in this phase as during our efforts to reduce the contact diameter through controlled withdrawal of the tip, the effective pressure also decreased thereby causing a sudden disappearance of the pressure-induced phase. | 1,624 | 2020-06-17T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Anti-fungal activity, mechanism studies on α-Phellandrene and Nonanal against Penicillium cyclopium
Background Essential oils from plants have been reported to have widespread antimicrobial activity against various bacterial and fungal pathogens; their constituents include α-Phellandrene, Nonanal and other volatile substances. However, the biological activities of α-Phellandrene and Nonanal have been reported in only a few publications. Further investigations are necessary to determine the antimicrobial activity of these compounds, especially when applied individually, and to establish the possible mechanism of action of the most active compound. Results The results show that α-Phellandrene and Nonanal exert a dose-dependent inhibition on the mycelial growth of Penicillium cyclopium. The minimum inhibitory concentration (MIC) and minimum fungicidal concentration (MFC) are 1.7 and 1.8 mL/L for α-Phellandrene, and 0.3 and 0.4 mL/L for Nonanal, respectively. The volatile compounds altered the morphology of P. cyclopium hyphae by causing loss of cytoplasmic material and distortion of the mycelia. The membrane permeability of P. cyclopium increased with increasing concentrations of the two volatile compounds, as evidenced by cell constituent release, extracellular conductivity and induced efflux of K+. Moreover, the two volatile compounds induced a decrease in pH and in the total lipid content of P. cyclopium, which suggested that cell membrane integrity had been compromised. Conclusions The results demonstrated that α-Phellandrene and Nonanal could significantly inhibit the mycelial growth of P. cyclopium by severely disrupting the integrity of the fungal cell membrane, leading to the leakage of cell constituents and potassium ions, a decrease in the total lipid content and extracellular pH, and an increase in membrane permeability. Our present study suggests that α-Phellandrene and Nonanal might serve as biological fungicides for the control of P. cyclopium in postharvest tomato fruits.
Background
Many plant species, including tomato, synthesize and store numerous volatile terpenoid compounds during normal leaf development (Buttery et al. 1988;Paré and Tumlinson 1999). Tomato is a constitutive emitter of low amounts of mono-and sesquiterpenes under non-stressed conditions, but these emissions become greatly enhanced under stress (Jansen et al. 2009;Maes and Debergh 2003). The volatile blends from Solanum lycopersicum leaves detected with SPME GC-MS were mainly terpenoids (i.e., α-Phellandrene), fatty acid derivatives (i.e., Nonanal) and aromatic compounds (Zhang et al. 2008).
The lipophilicity of essential oils enables them to preferentially partition from an aqueous phase into the membrane structures of fungi, resulting in membrane expansion, increased membrane fluidity and permeability, disturbance of membrane-embedded proteins, inhibition of respiration, alteration of ion transport processes and induced leakage of ions and other cellular contents (Burt 2004; Fadli et al. 2012; Khan et al. 2010; Oonmettaaree et al. 2006).
The biological activities of α-Phellandrene and Nonanal have been reported in only a few publications. Further investigations are necessary to determine the antimicrobial activity of these compounds, especially when applied individually, and to establish the possible mechanism of action of the most active compound against resistant pathogenic fungi. This study aims to analyze the effects of α-Phellandrene and Nonanal on the mycelial growth of Penicillium cyclopium. The effects of different concentrations of α-Phellandrene and Nonanal on surface morphology, cell membrane permeability, and release of cellular material were investigated to elucidate their antifungal mechanisms.
Pathogens
Penicillium cyclopium was provided by the Department of Biotechnology and Food Engineering, Xiangtan, China; the fungus was isolated from infected tomato (S. lycopersicum) fruit. The fungal spore concentration was adjusted to 5 × 10^5 spores/mL using a haemocytometer before each test. 100 μL of fungal suspension (10^5 spores/mL) was added to a conical flask with 40 mL potato dextrose broth (PDB) and incubated on a rotary shaker at 28 ± 2 °C and 120 rpm for 4 days.
Measurement of mycelial growth
Effects of α-Phellandrene and Nonanal on the mycelial growth of P. cyclopium were evaluated in vitro by the agar dilution method (Yahyazadeh et al. 2008). PDA (20 mL) was poured into sterilized Petri dishes (90 mm diameter), and measured amounts of α-Phellandrene and Nonanal were added to the PDA medium (plus 0.05% Tween-80) to give final concentrations of 0, 0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75 and 2.00 mL/L for α-Phellandrene, and 0, 50, 100, 150, 200, 250, 300, 350 and 400 μL/L for Nonanal. Discs (6 mm diameter) of P. cyclopium inoculum were cut with a paper punch from the center of an actively growing P. cyclopium culture grown on fresh antibiotic-free PDA plates at 28 ± 2 °C for 4 days, and one disc was then placed at the center of each new Petri plate. The culture plates were then incubated at 28 ± 2 °C for 48 h. As controls, PDA dishes were supplemented with the same amount of filter-sterilized alcohol (99.5%) instead of α-Phellandrene and Nonanal. Each treatment was performed in triplicate. The lowest concentration that completely inhibited the growth of the fungus after 24 h of incubation was considered the minimum inhibitory concentration (MIC). The minimum fungicidal concentration (MFC) was regarded as the lowest concentration at which no growth of the pathogen was observed after a 72 h incubation period at 28 ± 2 °C on a fresh PDA plate, indicating more than 99.5% killing of the original inoculum (Talibi et al. 2012).
Different amounts of the volatile substances were added to 50 mL PDB liquid medium to give final concentrations of 0.00, 225.00, 450.00, 900.00, 1350.00, 1575.00 and 1800.00 μL/L for α-Phellandrene, and 0.00, 50.00, 100.00, 200.00, 300.00, 350.00 and 400.00 μL/L for Nonanal. One hundred microlitres of P. cyclopium conidial spore suspension (10^5 cfu/mL) was poured into each conical flask, which was incubated at 28 ± 2 °C on a rotary shaker at 120 rpm for 4 days. Fungal growth was estimated gravimetrically by weighing the biomass after drying at 80 °C to a constant weight. The percentage of mycelial growth inhibition (PGI) was calculated according to the following formula: PGI (%) = [(d_c − d_t)/d_c] × 100, where d_c (g) is the net dry weight of the control fungi and d_t (g) is the net dry weight of the treated fungi (Helal et al. 2006).
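A minimal calculation of PGI from dry-weight measurements is sketched below; the weights and concentrations are placeholders, and the function simply encodes the formula above.

```python
def pgi(control_dry_weight_g: float, treated_dry_weight_g: float) -> float:
    """Percentage of mycelial growth inhibition from net dry weights."""
    return 100.0 * (control_dry_weight_g - treated_dry_weight_g) / control_dry_weight_g

# Placeholder dry weights (g) for a dilution series of one volatile compound.
control = 0.52
treated = {0.0: 0.52, 225.0: 0.44, 450.0: 0.35, 900.0: 0.21, 1350.0: 0.09, 1575.0: 0.02}
for conc_uL_per_L, dt in treated.items():
    print(f"{conc_uL_per_L:7.1f} uL/L -> PGI = {pgi(control, dt):5.1f} %")
```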
Release of cytoplasmic material absorbing at 260 nm
The release of cytoplasmic material absorbing at 260 nm was measured following the method of Yahyazadeh et al. (2008) and Paul et al. (2011), with some modifications. Viable cells of P. cyclopium in their exponential (logarithmic) phase were collected by vacuum filtration, washed three times with phosphate-buffered saline (pH 7.0), and re-suspended in 20 mL of the above buffer solution. The suspensions were then incubated at 28 ± 2 °C in the presence of α-Phellandrene or Nonanal at three different concentrations (0, MIC and MFC) for 0, 30, 60 and 120 min. After incubation, cells were centrifuged at 12,000g for 2 min, and the absorbance (260 nm) of the supernatant (1 mL) was determined with a UV-2450 UV/Vis spectrophotometer (Shimadzu Corporation).
Measurement of extracellular conductivity and extracellular pH
The extracellular conductivity of P. cyclopium cells was measured using a DDS-W conductivity meter (Shanghai Precision Scientific Instrument Co., Ltd., Shanghai, China) according to the method described previously (Shao et al. 2013). The extracellular pH of P. cyclopium cells was determined using a Delta-320 pH meter (Mettler-Toledo Co., Ltd., Shanghai, China). Initially, 100 μL of fungal suspension (10^5 spores/mL) was added to 50 mL PDB and incubated at 28 ± 2 °C in a thermostatic cultivation shaker for 4 days. The cultures were collected by vacuum filtration, washed 2-3 times with sterilized double-distilled water, and re-suspended in 20 mL of sterilized double-distilled water. After incubation with α-Phellandrene or Nonanal at MIC or MFC for 0, 30, 60 and 120 min, the extracellular conductivity and extracellular pH of P. cyclopium were determined. Control flasks without α-Phellandrene or Nonanal were also tested using an equal amount of alcohol instead.
Determination of total lipid content
The total lipid content of P. cyclopium cells treated with α-Phellandrene and Nonanal at three concentrations (0, MIC, MFC) was determined using the phosphovanillin method (Helal et al. 2007). The 3-day-old mycelia from 50 mL PDB were collected by vacuum filtration and dried with a vacuum freeze drier for 6 h. About 0.05 g of dry mycelia was homogenized with liquid nitrogen and extracted with 4.0 mL of a methanol-chloroform-water mixture (2:1:0.8, v/v/v) in a clean dry test tube with vigorous shaking for 30 min. The tubes were centrifuged at 4000 rpm for 10 min. The lower phase containing lipids was thoroughly mixed with 0.2 mL saline solution and centrifuged at 4000×g for 10 min. Then, an aliquot of 0.2 mL of the chloroform-lipid mixture was transferred to a new tube, 0.2 mL of concentrated sulfuric acid (H2SO4) was added, and the tube was heated for 10 min in a boiling water bath. Three millilitres of phosphovanillin reagent were added, the tube was shaken vigorously, and it was incubated at room temperature for 10 min. The absorbance at 520 nm was used to calculate the total lipid content from a standard calibration curve. Cholesterol served as the standard.
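The conversion from absorbance at 520 nm to lipid mass rests on the cholesterol calibration curve. A minimal version of that step is sketched below; the standards, the sample reading and the aliquot fraction are invented for illustration and are not values from this study.

```python
import numpy as np

# Hypothetical cholesterol standards: mass (mg) in the assay vs. absorbance at 520 nm.
standard_mg  = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
standard_abs = np.array([0.02, 0.21, 0.43, 0.62, 0.84])

# Least-squares line A = slope * m + intercept fitted to the standards.
slope, intercept = np.polyfit(standard_mg, standard_abs, deg=1)

def lipid_mg_from_absorbance(a520: float) -> float:
    """Invert the calibration line to get lipid mass (mg) in the assayed aliquot."""
    return (a520 - intercept) / slope

sample_abs = 0.37            # hypothetical treated-mycelium reading
dry_mass_g = 0.05            # dry mycelium used for the extraction
aliquot_fraction = 0.05      # fraction of the total extract assayed (assumed)
total_lipid_mg_per_g = lipid_mg_from_absorbance(sample_abs) / aliquot_fraction / dry_mass_g
print(f"~{total_lipid_mg_per_g:.0f} mg lipid per g dry weight")
```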
Statistical analysis
All of the experiments were repeated three times. The results are expressed as mean ± standard deviation and were compared using one-way analysis of variance (ANOVA) for multiple comparisons. All data were processed with the SPSS statistical software package, release 16.0 (SPSS Inc., Chicago, IL, USA).
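A minimal version of the multiple-comparison step, using one-way ANOVA as described, is sketched below with SciPy; the replicate values are placeholders rather than measured data.

```python
from scipy import stats

# Placeholder triplicate measurements (e.g., extracellular conductivity, uS/cm).
control = [120.1, 121.0, 120.4]
mic     = [135.2, 136.4, 135.9]
mfc     = [150.0, 151.1, 149.8]

f_stat, p_value = stats.f_oneway(control, mic, mfc)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one treatment mean differs significantly (P < 0.05).")
```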
Release of cell constituents
The release of cell constituents when P. cyclopium was treated with MIC and MFC of α-Phellandrene and Nonanal for 0, 30, 60 and 120 min is shown in Fig. 1. The optical density values (OD260) produced by α-Phellandrene were higher than those produced by Nonanal under the same treatment conditions. When the P. cyclopium cells were treated with the MIC or MFC of α-Phellandrene or Nonanal, the release of cell constituents increased. Suspensions of P. cyclopium with α-Phellandrene at MIC (1.7 mL/L, v/v) reached an optical density value of 0.554 at 120 min, which is higher (P < 0.05) than that of the control (0.249) but lower than the value of 0.668 at MFC (1.8 mL/L, v/v). Nevertheless, the OD260 values of P. cyclopium suspensions with α-Phellandrene treatment were almost identical before 60 min of exposure and then increased after 60 min of incubation. On the other hand, the OD260 values of P. cyclopium suspensions with Nonanal at MIC (0.3 mL/L, v/v) and MFC (0.4 mL/L, v/v) were higher than those of the control at the same exposure times. After 30 min of exposure, the OD260 values at MIC and MFC were almost the same.
Scanning electron microscopy (SEM)
The effect of Nonanal and α-Phellandrene on the morphology of P. cyclopium was examined using SEM (Fig. 2). The conidia of P. cyclopium grown on PDA plates for 4 days had normal, plump and homogenous morphology (Fig. 2b, d, f, h, j), and the hyphae of control fungus growing on PDA were normal, linearly tubular, regular, and homogeneous (Fig. 2a, c, e, g, i). The growth of P. cyclopium on PDA with α-Phellandrene or Nonanal at MIC or MFC treatment for 2 days demonstrated that all fungal mycelia and conidia showed considerable changes in morphology. P. cyclopium with Nonanal or α-Phellandrene at MIC concentrations showed slightly depressed conidia, partly distorted and shrunken mycelia (Fig. 2c,d,g,h). In contrast, the conidia of P. cyclopium treated with Nonanal and α-Phellandrene at MFC concentrations appeared severely collapsed and depressed possibly because of the lack of cytoplasm (Fig. 2f, j). Moreover, severely shrunken and distorted hyphae were also observed with Nonanal and α-Phellandrene at MFC concentrations (Fig. 2e, i).
Extracellular conductivity
Exposure of P. cyclopium cells to different concentrations of α-Phellandrene and Nonanal for 0-120 min caused varying levels of extracellular conductivity (Fig. 3). In general, conductivity increased with exposure time and with the concentrations of α-Phellandrene and Nonanal. α-Phellandrene at its MIC (1.7 mL/L) and MFC (1.8 mL/L) significantly affected the extracellular conductivity of P. cyclopium cells, whereas Nonanal at MIC (0.3 mL/L) and MFC (0.4 mL/L) had only a slight effect as the treatment time was extended. At 30 min of exposure, the extracellular conductivity of P. cyclopium suspensions with α-Phellandrene at MIC or MFC remained at almost the same levels, but the values were significantly higher (P < 0.05) than the control. By contrast, the conductivity of P. cyclopium suspensions with Nonanal at MFC at 30 min of exposure was 150.3 μS/cm, which was significantly higher than that at MIC (135.8 μS/cm) and that of the control (120.5 μS/cm). After 60 min of exposure, the conductivity of P. cyclopium suspensions treated with Nonanal continued to increase steadily. In contrast, the conductivity of P. cyclopium suspensions in the presence of α-Phellandrene increased markedly after 60 min of exposure at MFC. At 120 min, the conductivity of P. cyclopium suspensions with α-Phellandrene reached 179.6 μS/cm at MIC and 193.7 μS/cm at MFC, which were significantly higher (P < 0.05) than those treated with Nonanal at MIC and MFC (145.3 and 178.2 μS/cm, respectively).

Table 1. Effect of α-Phellandrene and Nonanal on the mycelial growth of Penicillium cyclopium. a-e Significant differences at P < 0.05 level. f Values are presented as mean ± SD.
Extracellular pH
The extracellular pH of P. cyclopium cells exposed to α-Phellandrene and Nonanal decreased in comparison with the controls (Fig. 4). A gradual decrease in extracellular pH was also observed in the control. After 30 min of exposure to α-Phellandrene at MIC and MFC, a sharp reduction in the extracellular pH of the P. cyclopium suspensions occurred, whereas the extracellular pH values of the P. cyclopium suspensions incubated with Nonanal decreased gradually. The extracellular pH values of the P. cyclopium suspensions after incubation for 120 min with α-Phellandrene at MIC and MFC were 4.72 and 4.33, respectively, which were significantly lower than that of the control (5.3). No significant difference (P < 0.05) was found between MIC and MFC after 30 min of exposure. The extracellular pH values of P. cyclopium suspensions after incubation with Nonanal at MIC and MFC were 5.06 and 4.81, respectively, and the latter was significantly lower than that of the control (5.25) (P < 0.05).
Potassium ion efflux
Potassium ions (K⁺) leaked from P. cyclopium cells incubated with α-Phellandrene and Nonanal (Fig. 5). The MFC and MIC of α-Phellandrene and Nonanal significantly induced the release of K⁺, and the K⁺ concentrations after 30 min were 1.520 and 1.330 μg/mL, respectively. When the incubation time increased to 120 min, the released K⁺ concentration continued to increase. Incubation with the MFC of α-Phellandrene and Nonanal resulted in more K⁺ release than from the P. cyclopium cells treated with the MIC. After 120 min of incubation, the K⁺ released at the MFC of α-Phellandrene and Nonanal reached 2.910 and 2.235 μg/mL, respectively.
Total lipid content
α-Phellandrene and Nonanal affected the total lipid content of P. cyclopium cells (Fig. 6). Briefly, α-Phellandrene and Nonanal significantly decreased the total lipid content, especially α-Phellandrene (P < 0.05). The total lipid contents of the P. cyclopium cells were 100.3 ± 2.4 and 117.4 ± 2.1 mg/g dry weight after incubation for 120 min with α-Phellandrene at MFC and MIC, respectively. These values are significantly lower (P < 0.05) than that of the control (158.1 ± 2.3 mg/g dry weight). Nonanal at MFC and MIC also markedly reduced the total lipid content of P. cyclopium cells, although its effect was weaker than that of α-Phellandrene. The total lipid contents of the P. cyclopium cells were 130.5 ± 2.4 and 136.2 ± 3.2 mg/g dry weight after incubation with Nonanal at MFC and MIC, respectively, for 120 min.
Discussion
α-Phellandrene and Nonanal exhibited strong antifungal activity against P. cyclopium. The inhibitory effect was positively correlated with the concentration of α-Phellandrene and Nonanal. These results are consistent with previous studies describing the antifungal activity of these volatile compounds (Fernando et al. 2005; Rodriguez-Burbano et al. 2010; Pandey et al. 2014; Sharma et al. 2014). At a relatively low concentration (0.1 mL/L), Nonanal reduced the mycelial growth of P. cyclopium by half, making it a promising antifungal substance. In addition, the inhibitory effect of Nonanal on P. cyclopium was more efficient than that of α-Phellandrene, consistent with reports that aldehyde compounds are more effective than alcohols and olefins in controlling postharvest pathogens (Droby et al. 2008). Among aldehyde constituents, cinnamaldehyde showed the highest activity, followed by citral, and then perillaldehyde, octanal and Nonanal (Inouye et al. 2001). The potential mechanisms underlying the antimicrobial activity of aldehydes and terpenes are not fully understood, but a number of possible mechanisms have been proposed. Gram-positive bacteria are known to be more susceptible to essential oils than Gram-negative bacteria (Farag et al. 1989; Smith-Palmer et al. 1998). The weak antibacterial activity against Gram-negative bacteria has been ascribed to the presence of an outer membrane (Tassou and Nychas 1995; Mann et al. 2000), whose hydrophilic polysaccharide chains act as a barrier to hydrophobic essential oils. In the current experiment, the in vitro antifungal activity led us to hypothesize that the antifungal activity of α-Phellandrene and Nonanal against P. cyclopium could be closely correlated with the physiology of the hyphae. SEM analysis showed that the volatile compounds could alter the morphology of P. cyclopium hyphae, disrupting membrane integrity (Yahyazadeh et al. 2008; Tyagi and Malik 2011).
These changes generally occur because of an increase in the permeability of the cells, and such changes commonly result in the leakage of small molecular substances and ions and in the formation of lesions (Bajpai et al. 2013). Leakage through the cytoplasmic membrane was analyzed by determining the release of cell materials, including nucleic acids, metabolites and ions, which absorb at 260 nm in the suspensions (Oonmetta-aree et al. 2006). After the addition of α-Phellandrene and Nonanal, the release of cell constituents visibly increased with increasing volatile compound concentration. The maximum release of cell constituents was observed in P. cyclopium treated with α-Phellandrene at MFC. The methyl ester is able to penetrate the hydrophobic regions of the membranes, and the carboxyl groups pass through the cell membrane, lowering the internal pH and denaturing proteins inside the cell (Marquis et al. 2003). From the results of the present study combined with previous studies, we can conclude that the two volatile compounds apparently induced the leakage of intracellular protons. These findings suggest that irreversible damage to the cytoplasmic membranes of P. cyclopium occurred and that the ions inside the cells leaked out, ultimately leading to apoptosis of the fungus in the presence of the volatile compounds.
In addition to cell wall and plasma membrane alteration and disruption, exposure of the hyphae of P. cyclopium to α-Phellandrene or Nonanal resulted in K⁺ leakage. Our results are in agreement with those reported by Helal et al. (2006). The phenomenon could be explained by the release of ions based on their size and/or by the formation of holes or lesions in the lipid bilayer of the plasma membrane (Prashar et al. 2003). The fatty acid composition of microbial cell membranes affects their ability to survive in various environments (Ghfir et al. 1994). The decrease in lipid content suggests that membrane stability decreased while the permeability to water-soluble materials increased (Helal et al. 2007). Fumigation of P. cyclopium with C. citratus essential oil induced alterations in both the lipid content and the fatty acid methyl ester composition of the cells (Helal et al. 2006). In the present study, the addition of α-Phellandrene and Nonanal significantly decreased the lipid content of P. cyclopium. These results show that the two volatile compounds have the ability to penetrate the lipid structures of the cells and disrupt cell membrane integrity.
In conclusion, this study showed that α-Phellandrene and Nonanal could significantly inhibit the mycelial growth of P. cyclopium cells, disrupt their cell membrane integrity and result in the leakage of cell components. Our present study suggested that α-Phellandrene and Nonanal might be used as fungicides to fight against postharvest fungal diseases.
Conclusions
α-Phellandrene and Nonanal significantly inhibit the mycelial growth of P. cyclopium. They disrupt the integrity of the fungal cell membrane, leading to the leakage of cell constituents and potassium ions, a decrease in the total lipid content and extracellular pH, and an increase in membrane permeability. α-Phellandrene and Nonanal might be used as biological fungicides for the control of P. cyclopium in postharvest tomato fruits in the future.
"Biology"
] |
Vector Field Models for Nematic Disclinations
In this paper, a model for defects in nematic liquid crystals that was introduced in Zhang et al. (Physica D Nonlinear Phenom 417:132828, 2021) is studied. In the literature, the setting of many models for defects is the function space SBV (special functions of bounded variation). However, the model considered herein regularizes the director field to be in a Sobolev space by introducing a second vector field tracking the defect. A relaxation result in the case of fixed parameters is proved along with some partial compactness results as the defect width vanishes.
Introduction
The purpose of this paper is to initiate the rigorous mathematical analysis of a model of the dynamics of disclination line defects in nematics proposed in [ZANV21]. Here, we focus on energetic aspects of the model. Combined with the ideas presented in [AD14] and the demonstrations provided in [PAD15, ZZA+17, ZANV21], which include static fields of straight ±1/2 disclinations, their annihilation, the dissociation of closely bound pairs of straight disclinations, as well as static fields of disclination loops, the model can be considered as a thermodynamically consistent generalization of the Ericksen-Leslie (EL) model to account for the dynamics of disclination lines, with total energy that remains bounded in finite bodies in the presence of these line defects.
The practical applications of liquid crystalline phases abound, from liquid crystal displays for electronic devices and cell membranes in biology (use in the mechanically 'soft' phase), to a vast variety of liquid crystal polymers including the mechanically 'hard' Kevlar, for body armor, and in tires.Equally, topological defects abound in liquid crystalline media, fundamentally due to microscopic structural symmetries related to the head-tail symmetry of the director.Due to this symmetry, a vector field assigned to a director field can undergo continuous changes in orientation around a non-unique surface terminating at a unique disclination line, where the jump in orientation across the surface is quantized to be π radians (see, e.g., De Gennes and Prost [DGP95, Sec.4.2.1]).The line field representing the director and the corresponding vector field are both discontinuous at the disclination line, and the energy cost of such a line discontinuity can be sustained by the material.It is this fundamental insight, going back to the kinematics of singularities in linear elasticity theory due to Volterra and Weingarten (see, e.g., [Ach19] for a contemporary review), that forms the core idea of our model and, in fact, has been recently used to define an algorithm to detect line defects in molecular configurations of nematics produced by Molecular Dynamics simulations [SAW22], extending the notion of a director down to the level of a single nematic molecule.
The understanding of the energetics and dynamics of topological defects and their interaction form an important part of the theoretical study of liquid crystalline media, and we are particularly interested in the universality of this behavior across the material behaviors of liquid crystals and crystalline solids.A primary justification of the type of model we consider is that it is inter-operable with a study of dislocation defects in crystalline solids with simply a change in interpretation of the field variables involved (this cannot be said of the Landau-DeGennes model [SV12, SS87, S Ž02] which, nevertheless, employs a more adapted order parameter, the Q tensor, to describe the head-tail symmetry of the nematic director, albeit at the cost of including a biaxial phase within the description as well).A distinguishing feature of our model is that, even in a 'smooth' setting, defect cores can be identified as a locally calculable field variable, arguably a desirable feature as discussed in [BS05].
Our model introduces an extra second-order tensor field, B, beyond the EL director field, k (strictly speaking k is a vector field representation of the director line field).This new field is to be physically thought of as a locally integrable realization, at the mesoscale, of the 'singular' part of the director gradient field (Dk) in the presence of line defects, singular when viewed at the macroscale.Thus, at the mesoscale, both the director gradient field and the new field are integrable 1 -with this clear, we nevertheless refer to B as the 'singular part of the director distortion.' Notably, the field B is not a gradient, and this allows it to encode information on the topological charge of line defects through its curl.The evolution of the director field k continues to be obtained from the balance of angular momentum, as shown by Leslie [Les92], and the evolution of B follows from a conceptually simple conservation law for the topological charge of the line defects, which is tautological before the introduction of constitutive assumption for the disclination velocity, the latter deduced from consistency with the second law of thermodynamics.The introduction of dynamics based on such a conservation law, rooted in the kinematics of defect lines, is a conceptual departure from what is done for dynamics with the Landau-DeGennes Q tensor model (see, [SV12,Mac92]), or Ericksen's variable degree of orientation model [Eri91].In doing so, the model also makes connections to the dynamics of dislocation line defects in elastic solids [ZAWB15,AZA20], as well as their statics [AA20].
At the length scales where individual line defects are resolved, partial differential equations-based dynamical models arising from continuum mechanical considerations involve Newtonian and thermodynamic driving forces that include nonlinear combinations of entities representing director distortions and the disclination density fields. This requires a minimum amount of regularity in these fields, and hence it is essential to have a formulation that utilizes at least locally integrable functions; our model is designed to be consistent with this requirement.¹ (Of course, this does not preclude the question of studying limiting situations of such models when such functions tend to singular limits, modeling fields that have discontinuities, and singularities in the macroscopic limit.)

¹ The length scales ξ (core width) and εξ (the layer width) that appear subsequently in Sec. 1.1 and are at the heart of such a physical regularization can be precisely defined in configurations of nematic molecules arising in Molecular Dynamics simulations, as shown in ongoing work [SAW22].

Figure 1. A representation of the domain Ω and the layer (of width εξ, core size ξ) where the discontinuity is supported.
1.1. Main Results. We investigate the behavior of local minimizers for the previously discussed model for liquid crystals with disclination defects. Let Ω ⊂ R^N, where N = 2 or 3, be the domain occupied by a nematic liquid crystal. We consider the energy (1.1) for the director field k ∈ W^{1,2}(Ω; R^2) and the singular part of the director distortion B ∈ H_curl(Ω; R^{2×2}), where W : [0, ∞) → [0, ∞) is a nonconvex double-well continuous potential with wells at 0 and 2.
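For orientation, a schematic of the type of energy under consideration, written only in terms of the ingredients named in the text (a penalized unit-vector constraint on k, the elastic term |Dk − B|², the double-well potential evaluated on the rescaled modulus of B, and a term controlling curl B), is

\[
E_{\varepsilon,\xi}[k,B] \;=\; \int_\Omega \Big[\, c_1\,(1-|k|^2)^2
  \;+\; c_2\,|Dk - B|^2
  \;+\; c_3\, W\!\big(\varepsilon\xi\,|B|\big)
  \;+\; c_4\,|\operatorname{curl} B|^2 \,\Big]\, dx ,
\]

where the weights c₁, …, c₄(ε, ξ) are placeholders and are not the specific coefficients appearing in (1.1).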
Remark 1.1. The heuristics behind the energy (1.1) above for the prediction of ±1/2 line defects are as follows: the nonconvex potential W assigns vanishing energy cost when |B| = 0 or |B| = 2/(εξ). This, along with the elastic energy term |Dk − B|², assigns approximately vanishing elastic cost for pointwise values of the director gradient of the type Dk ≈ [n − (−n)]/(εξ) ⊗ l, where 0 < ε ≤ 1 and n, l are unit vectors. Here, n corresponds to the director field k, and l represents the direction along which the jump of n occurs.
In Fig. 1, this scenario is presented for a rectangular transition layer for the director field.
If the transition layer in Fig. 1 did not terminate inside the domain, the energy cost would be minimal for a (diffuse) jump in the orientation of k by π radians across the layer. However, for a layer terminating inside the domain, curl B is non-zero near the termination (or core), and if the width of the layer in the vertical direction were 0 (i.e., ε = 0), curl B would be singular (the classical defect solution results from the choice ξ = 0, when layer and core widths vanish). When the curl does not vanish, the density Dk cannot annihilate B (regardless of whether ξ = 0 or not). To see this, write Dk − B =: e; the Euler-Lagrange equation of a functional with just the energy density |Dk − B|², for admissible variations in k with B a specified field, is div e = 0, together with the kinematic relation curl e = −curl B.
As an example, for curl B a (mollified) Dirac supported at the layer termination, this produces the approximate elastic energy density field, given here by |e|², of some canonical line defects in 2 dimensions (the screw dislocation in solids, and the wedge disclination in nematics with the unit vector constraint imposed, either exactly or approximately) [ZZA+17, ZANV21, Nab87, HL82, Fra58, DGP95]. Since e = Dk outside the layer, we recover the relevant director field (using the penalized unit vector constraint represented by the first term in (1.1) and a specified value of k at one point of the domain). Within the layer, but outside the core, the director field k flips orientation by π radians, with a somewhat more involved distribution in the core.
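As a concrete instance of the canonical defects just mentioned, and purely for orientation, the classical planar +1/2 wedge disclination in the one-constant approximation can be written in polar coordinates (r, φ) as a vector field rotating at half the rate of the polar angle,

\[
k(r,\varphi)=\big(\cos(\varphi/2),\,\sin(\varphi/2)\big),\qquad
|Dk|^2=\frac{1}{4r^2},\qquad
\int_{\{r_0<r<R\}}|Dk|^2\,dx=\frac{\pi}{2}\,\log\frac{R}{r_0},
\]

which is the textbook solution (e.g., de Gennes-Prost) with its logarithmically divergent elastic energy as r₀ → 0; note that this vector field is single-valued only up to the sign flip across a cut, which is exactly the jump that the layer field B absorbs in the present model.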
As we are interested in minimizers of the energy (1.1), we first consider the relaxation at a fixed scale (see Theorem 3.1 for complete details). The energetic relaxation provides a functional to which the direct method of the calculus of variations is amenable, and is the first step to understanding the structure of minimizers.
Theorem. Let Ω ⊂ R^N, N = 2 or 3, be an open, bounded set with Lipschitz boundary. For ε, ξ > 0 fixed, the lower semicontinuous relaxation of the energy (1.1) is obtained by replacing the potential term with its quasiconvex envelope Q(W(|·|)). Here, Qf denotes the quasiconvex envelope of f. Remark 1.2. We note that Q(W(|·|)) ≤ W(|·|) everywhere. Further, we claim that each p ∈ B(0, 2) ⊂ R^{N×N} is given as a convex combination of two elements of 2S^{N×N−1} differing by a rank-one matrix; indeed, we can find λ₊, λ₋ ≥ 0 realizing such a decomposition. For p ∈ B(0, 2), we then apply the rank-one convexity of the quasiconvex envelope [Dac08] and the fact that the envelope always lies below the original function to find that Q(W(|·|))(p) = 0. We further conjecture that Q(W(|·|))(p) = W**(p) due to the radial symmetry; however, characterization of the quasiconvex envelope poses challenges even in simple cases (see, e.g., [LDR95b] for one of the few nontrivial examples of a calculation of the envelope).
To motivate the constraints we will place on the field B, we introduce a simple example. We now restrict our attention to dimension N = 2 and consider the limit ε → 0 with ξ > 0 fixed. Considering any k ∈ C¹(Ω; S¹), we may simply set B := Dk, so that the elastic term vanishes identically. Such a competitor (though defect free) carries low energy and shows that further constraints on the field B are required to gain physically meaningful insight in the limit as ε → 0.
We consider the particular case of B supported in a layer as in Figure 1, a physically relevant geometric configuration (see, e.g., [ZANV21]).
To be precise, let Ω := (−1, 1)² be the domain of a liquid crystal in the plane. We assume the defect is at the origin and that the surface of discontinuity is contained in a layer L_{ε,ξ} with parameters ε, ξ > 0. In physical terms, ξ is the core length of the crystalline defect and ε is a parameter determining the thickness of the defect layer L_{ε,ξ} (see Figure 1). In this paper, we are primarily concerned with ±1/2 disclinations, which must satisfy the constraint (1.4). This is a model constraint requiring a disclination to exist in the domain. By Stokes' theorem, (1.4) is consistent with a layer field of the form B = [n − (−n)]/(εξ) ⊗ l in a layer of width εξ with normal in the direction l and n a unit vector corresponding to the director field k (see Fig. 1), as described in Remark 1.1.
After a change of variables analogous to typical dimension reduction problems [LDR95a], we prove a compactness theorem for the rescaled fields, which is precisely stated in Theorem 4.1. Next, we state a theorem that follows from Theorem 4.1 and which emphasizes the coupling of the physical quantities in the asymptotic limit.
Further, suppose B_ε satisfies the geometric constraint (1.3) and corresponds to a ±1/2 disclination by satisfying (1.4). Then, up to a subsequence (not relabeled), a limiting compatibility relation between the director and the defect field is satisfied for all s ∈ (−ξ, 1), where α ∈ L²(L_{1,ξ}) is the limit of the rescaled fields {B̃_ε := εB(x₁, εx₂)}_ε. Furthermore, ∇k, the part of Dk absolutely continuous with respect to the Lebesgue measure, has higher regularity, in the sense made precise in Corollary 4.3. We can also make a connection to the recent preprint [GMPS21], where an SBV model for ±1/2 disclinations is proposed and the constraint that [k] = 2 along the jump set is imposed. The energy we use can be viewed as an attempt to also relax the one used in [GMPS21], by being a Sobolev model allowing for a more general class of jumps in the SBV limit.
There are many open questions stemming from this work, which we highlight in Section 5. Foremost is the integral representation of a precise limiting energy for the case ε → 0 with ξ > 0 fixed. It is also possible to consider the case ξ → 0 at various rates compared to ε → 0. However, the limit ξ → 0 will be complicated by the need to rescale the energy by log ξ, which leads to a delicate Ginzburg-Landau type problem (see [JS02], [AP14]).
Mathematical Preliminaries
Let W^{1,2}(Ω; R²) denote the usual Sobolev space, and let H_curl(Ω; R^{2×2}) denote the space of L² matrix-valued tensors whose row-wise distributional curl is also in L². Under this setting, we consider the energy (1.1) with a nonconvex continuous potential W : [0, ∞) → [0, ∞) satisfying the coercivity and growth properties (2.1) and (2.2) for some C > 0. The mathematical framework we use to study the convergence of the functional (1.1) is encapsulated by the notion of Γ-convergence, which we recall next. Definition 2.1. Given a metric space (X, d), let F_n : X → [0, ∞] be a sequence of functionals for n ∈ N. We say that F_n Γ-converge to F_0 : X → [0, ∞] with respect to the metric d if the following two conditions hold: (1) (Liminf Inequality) For every u ∈ X and for every sequence {u_n} ⊂ X such that u_n → u with respect to the metric d, we have F_0(u) ≤ lim inf_{n→∞} F_n(u_n). (2) (Recovery Sequence) For every u ∈ X, there exists {u_n} ⊂ X such that u_n → u with respect to the metric d, and the sequence recovers the energy, i.e., lim sup_{n→∞} F_n(u_n) ≤ F_0(u). For the relaxation of the energy (1.1), which is by definition the Γ-limit of the constant sequence of functionals F_n := E_{ε,ξ}, we will rely on the now classical notion of quasiconvexity introduced by Morrey [Mor52]. Analogous to the characterization of convex functions via Jensen's inequality, quasiconvex functions satisfy a Jensen-type inequality for gradient fields. Specifically, a Borel measurable, locally bounded function g : R^{N×N} → R is quasiconvex if, for every A ∈ R^{N×N}, every bounded open set D ⊂ R^N, and every φ ∈ W^{1,∞}_0(D; R^N), one has g(A) ≤ (1/|D|) ∫_D g(A + Dφ(x)) dx. The quasiconvex envelope of a function f is given by the greatest quasiconvex function beneath f, i.e., it is defined pointwise by Qf(x₀) := sup{g(x₀) : g is quasiconvex and g ≤ f}. (2.3) We refer the reader to [Dac08] for further details on such functions.
As mentioned in the discussion preceding Theorem 1.3, we wish to model discontinuities across 2-d surfaces - when viewed at the macroscale - in a vector field representation of the director field containing a line defect, while incorporating the fact that at the microscopic scale such a jump across the surface must necessarily be spread over a region roughly of the order of the spacing between adjacent mesogens (cf. [SAW22]). Specifically, we impose the condition that B vanishes outside of the layer, which is equivalent to the conditions (2.4)-(2.5), where t is the tangent vector at the boundary point. The condition (2.5) comes for free, as B ∈ H_curl(Ω; R^{2×2}) has a well-defined tangential trace matching the condition (2.4) [BF13]. Finally, we recall that a function u belongs to BV(Ω) if its distributional gradient is given by a finite Radon measure. Informally, in the case that u has only surface discontinuities, u belongs to SBV(Ω), and for the sake of Theorem 1.3 it suffices to know that if u ∈ W^{1,1}(Ω \ K) ∩ L^∞(Ω), where K is a closed set with finite surface measure, i.e., H^{N−1}(K) < ∞, then u ∈ SBV(Ω). For further details, we refer to [AFP00, Proposition 4.4].
Relaxation for fixed ε, ξ
We study properties of the energy (1.1) with ε, ξ > 0 fixed. A priori, it is not clear that minimizers to the problem exist, nor is it clear what the value of the infimum is. In order to apply the direct method of the calculus of variations, we must consider the lower semicontinuous envelope of the functional. Specifically, we obtain an integral representation for the relaxation of the energy (1.1) in dimensions N = 2 or 3. This dimension constraint enables us to use the Helmholtz decomposition and the corresponding Sobolev spaces, as detailed in [BF13]. Here, Qf denotes the quasiconvex envelope of f as in (2.3).
Theorem 3.1. Let Ω ⊂ R^N, with N = 2 or 3, be an open, bounded set with Lipschitz continuous boundary, and let E be defined in (1.1). The relaxed energy Ē_{ε,ξ} is the lower semicontinuous envelope of E_{ε,ξ} with respect to the convergence k_n → k in L² and B_n ⇀ B weakly in L², and it has the integral representation (3.1). We note that this result can equivalently be phrased as: E_{ε,ξ} Γ-converges to Ē_{ε,ξ}. To prove Theorem 3.1, we will introduce an intermediate functional, related to (3.1) through the Helmholtz decomposition of B. We denote by C the space of divergence-free fields with square-integrable curl and vanishing normal trace, as in (3.2), and define the functional I[k̃, z, p] by (3.3) if (k̃, z, p) ∈ X, and +∞ otherwise. First, we investigate compactness of the functional I in the following lemma.
Lemma 3.2 (Compactness of I).
Let Ω ⊂ R^N, with N = 2 or 3, be an open, bounded set with Lipschitz continuous boundary. Consider a sequence {(k̃_n, z_n, p_n)} such that sup_{n∈N} I[k̃_n, z_n, p_n] ≤ C. Then there is (k̃, z, p) ∈ X such that, up to a subsequence (not relabeled), (k̃_n, z_n, p_n) converges weakly in W^{1,2} and strongly in L² to (k̃, z, p). Proof. The key is the inequality (3.5) from [BF13], which bounds the W^{1,2} norm of a divergence-free field with vanishing normal trace in terms of its L² norm and the L² norm of its curl. From the definition of C in (3.2), we have that div p_n = 0. As I in (3.3) controls ||curl p_n||²_{L²(Ω)}, we apply the above inequality to conclude that ||p_n||_{W^{1,2}(Ω; R^{N×N})} ≤ C < ∞. Convergence as in the statement of the lemma follows from weak compactness. By (2.1) and (3.3), we have sup_n ||∇z_n||_{L²(Ω)} ≤ C < ∞, and the desired convergence follows from Poincaré's inequality because ∫_Ω z_n dx = 0. Finally, the uniform bound on the energy (3.3) implies a uniform bound on ||∇k̃_n||_{L²(Ω)}. Combining this with control of ||z_n||_{L²(Ω; R^N)} and ||k̃_n||_{L²(Ω; R^N)} shows that sup_n ||k̃_n||_{W^{1,2}(Ω; R^N)} ≤ C < ∞.
To conclude strong convergence in L², we apply the Rellich-Kondrachov compactness theorem.
We will prove the relaxation of the functionals I using techniques of Γ-convergence (see Definition 2.1), i.e., that I Γ-converges to a limit functional Ī, which is finite for (k̃, z, p) ∈ X and +∞ otherwise.
We note that this functional is the same as the original functional with W replaced by Q(W(|·|)). In order to prove that this is indeed the correct limiting energy, we first show that the lim inf inequality of Γ-convergence is satisfied (see Definition 2.1(1)).
Lemma 3.3 (Liminf of I).
Let Ω ⊂ R^N, with N = 2 or 3, be an open, bounded set with Lipschitz continuous boundary, and assume that W satisfies (2.1) and (2.2). Then the liminf inequality (3.7) of Definition 2.1(1) holds for I and Ī along every sequence converging as in Lemma 3.2. Proof. We define the function h(s, η) := W(|s + η|) and Qh(s, ·) to be the greatest quasiconvex function below h(s, ·) as in (2.3). We claim that the identities (3.8) and (3.9) relating Qh and Q(W(|·|)) hold. The first equality is easy to see; for the second, we note that there is a translational symmetry of h, which gives the equality (3.10). Combining (3.7), (3.9), and (3.10), the lemma is proven.
In order to complete the integral representation for the relaxation of I, we show the existence of a recovery sequence (see Def 2.1(2)).
Lemma 3.4 (Recovery Sequence for I). If (k̃, z, p) ∈ X, then there exists a sequence {(k̃_n, z_n, p_n)} converging to (k̃, z, p) along which the energies converge, i.e., (3.12) holds. Proof. As k̃ and p are admissible in the original energy, we take p_n := p and k̃_n := k̃. Now define the integrand g(x, η) built from W and the fixed fields. Note that, as W is continuous and p is measurable, g is a Carathéodory function with polynomial growth in η. By standard relaxation results [Dac08], we can find a sequence {z_n} ⊂ W^{1,2}(Ω; R^N) realizing the relaxed energy of g, where Qg(x, η) is defined as in (2.3). Again, by a similar argument to that for (3.8), the pointwise identification holds for almost every x ∈ Ω. By the above relation, (3.13), and (3.14), we obtain (3.12), as the other terms in I (3.3) are fixed.
Combining the last two lemmas, we finish the proof of Theorem 3.1.
Proof of Theorem 3.1. As ε and ξ are fixed, we let ε = ξ = 1 without loss of generality. Note that for any B ∈ L²(Ω; R^{N×N}) we can apply the Helmholtz decomposition (see [BF13]) row-wise to find B = p + ∇z for some divergence-free p ∈ C (see (3.2)) and z ∈ W^{1,2}(Ω; R^N). As the curl vanishes on gradients, and p ∈ C and ∇(z − k) are orthogonal in L²(Ω; R^{N×N}) by an integration by parts (since div p = 0 and p · ν = 0, see (3.2)), the energy E_{1,1} can be rewritten in terms of the intermediate functional I. Using these relations, Lemma 3.3 and Lemma 3.4 translate to lim inf and lim sup relations for E_{1,1}, thereby proving that E_{1,1} Γ-converges to Ē_{1,1} and concluding the proof.
Constrained Minimizers
In the following, we study low energy sequences for the energy E ε,ξ in dimension N = 2 as ε → 0, with ξ > 0 fixed, when the fields B satisfy geometric (1.3) and defect (1.4) constraints.
Here the disclination layer L_{ε,ξ} becomes thin in the limit. As is typical in dimension reduction problems, we perform the change of variables (4.1), stretching the layer in the vertical direction and working with the rescaled fields k̃(y₁, y₂) := k(y₁, εy₂) and B̃ := εB(y₁, εy₂). We remark that we have rescaled the field B as well because the quadratic coercivity of W only gives control over εB. With L_ξ := (−ξ, 1) × (−ξ/2, ξ/2), we write the energy as the sum of a bulk contribution and a layer contribution, cf. (4.4). Finally, we define the limit layer L⁰_ξ as in (4.5). In the following, we take an arbitrary subsequence ε_n → 0 and investigate sequences with uniformly bounded energy, with an eye towards ultimately understanding the most physically relevant effective energies.

4.1. Compactness. Under the hypotheses (1.3) and (1.4), we consider any sequence with uniformly bounded energy and, using (4.1) and (4.3), write it as the sum of the non-negative bulk and layer energies, cf. (4.7). We prove the following theorem.
Theorem 4.1. Let k_n, B̃_n have uniformly bounded energy as in (4.7). Then the following hold:
• In the bulk, k_n converges (up to a subsequence) to a limit k ∈ SBV(Ω; S¹), and the jump set of k is contained in L⁰_ξ.
• In the layer, we can generate a rescaled version of k_n, which is denoted by k̃_n.
We will have the following convergences (made precise in the proof):
• the rescaled fields converge weakly in L² in the layer;
• further, for almost every x₁, the rescaled jump [k̃](x₁) defined in (4.8) is well defined.
The following compatibility condition between k̃ and α holds. We note that even though k and B were independent at the outset, these fields become coupled through k̃ in the limit. In Proposition 4.2, we will show that this relation can be expressed directly, without the intermediate field k̃.
Proof.
Step 1: Bulk Energy. To control the energy in the bulk, we observe that the bulk part of (4.7) dominates the Dirichlet energy of k_n away from the layer. In particular, for any smooth open set U compactly contained in Ω \ L⁰_ξ (recall (4.5)), we have sup_n ||k_n||_{W^{1,2}(U)} < ∞. Thus, up to a subsequence (not relabeled), k_n converges weakly in W^{1,2}(U) and strongly in L²(U) to a limit k. Because of the unit norm regularization, we have that |k| = 1 almost everywhere. Furthermore, since k ∈ W^{1,2}(Ω \ L⁰_ξ; R²), an integration by parts argument [AFP00, Proposition 4.4] shows that k ∈ SBV(Ω; R²), where the jump set of k is contained in L⁰_ξ up to a set of H¹-measure zero.
Step 2: Layer Energy. In this portion of the energy, we work with the rescaled fields on L_ξ. Using the quadratic coercivity of W in (2.1), we obtain the uniform bound (4.12). This implies that, up to a subsequence (not relabeled), the rescaled fields converge weakly in L², with limits B ∈ L²(L_ξ; R^{2×2}) and α ∈ L²(L_ξ; R²). Furthermore, using the quadratic bounds on k and (4.12), we deduce that k̃_n ⇀ k̃ weakly in L²(L_ξ; R²), (4.16) for some k̃ ∈ L²(L_ξ; R²). In order to further characterize B, we analyze it component-wise for i = 1, 2 using (4.17). To be precise, testing against φ ∈ C_c^∞(L_ξ) and applying (4.16) after integrating by parts, we conclude the identification (4.19). Now we can get information on curl_{ε_n} B̃_n by integrating by parts; we do this component-wise for i = 1, 2. For any φ ∈ C^∞(L_ξ), not necessarily compactly supported, the boundary contributions involve the one-dimensional Hausdorff (surface) measure H¹ in R². Using the tangential relations in (2.5) and the weak convergence of B̃_n, the resulting expression simplifies. Taking φ ≡ 1 leads to a relation between the limits; allowing φ ∈ C^∞((−ξ, 1)), so that ∂₂φ = 0, leads to the equation (4.20). This is well defined as an L² function since ∂₂k̃_i ∈ L²(L_ξ). Using (4.19) and the fact that φ depends only on x₁, we can simplify the relation (4.20) further. By our convergences, we also have that (4.4) passes to the limit; this is made precise in (4.25).

4.2. Coincidence of traces. We show that the compatibility relation from Theorem 4.1 relates the jump of k directly to the limit defect field α.
Proposition 4.2. Suppose that $k$ and $\tilde k$ arise as in Theorem 4.1. Then the compatibility relation (4.26) is satisfied, where $[\![k]\!]$ denotes the BV jump of $k$, oriented as the trace from $\{x_2 > 0\}$ minus the trace from $\{x_2 < 0\}$.
Proof. We begin by noting the structure of the function $k$ established in Theorem 4.1. From this it follows that, for $i = 1, 2$ and $\varphi \in C^\infty_c(\Omega)$, the distributional derivative of $k_i$ splits into its absolutely continuous and jump parts. Consequently, to prove the proposition, it suffices to show the corresponding identity for all $\varphi \in C^\infty_c(\Omega)$. Let $\{k_n\}$ be the sequence from Theorem 4.1. From here, we drop the superscript $i$ but continue to operate component-wise. Directly by the uniform $L^2(\Omega; \mathbb{R}^2)$ bound on $k_n$ and strong convergence away from $(-\xi, 1) \times \{0\}$, we obtain (4.27). Performing an integration by parts, we also find (4.28). By (4.9), $\nabla k_n\, \chi_{\Omega \setminus L_{\varepsilon_n,\xi}} \rightharpoonup \nabla k$ in $L^2(\Omega)$. The first term on the right-hand side of (4.28) thus converges, giving (4.29). For the second term, we recall that $\tilde k_n(y_1, y_2) = k_n(y_1, \varepsilon_n y_2)$ and perform a change of variables in the layer. Given the regularity of $\varphi$, the map $\varphi_{\varepsilon_n}(y) := \varphi(y_1, \varepsilon_n y_2)$ converges strongly in $L^2(L_\xi)$ to $\varphi(y_1, 0)$. By (4.12) and (4.13), it follows that $\partial_2 \tilde k_n \rightharpoonup \partial_2 \tilde k$ in $L^2(L_\xi)$. Passing to the limit in (4.30) as $n \to \infty$, applying Fubini's theorem, and recalling the definition (4.8) of $[\tilde k]$, we find (4.31). Finally, to obtain (4.26), we pass to the limit in (4.28) using (4.27), (4.29), and (4.31), thereby concluding the proposition.
Corollary 4.3. Let $k$ be as in Theorem 4.1. Then $\operatorname{curl} \nabla k$ is a finite Radon measure.
Proof. We treat $\operatorname{curl} \nabla k$ row-wise.
We start from the relation $\operatorname{curl}(Dk_i) = 0$ in the sense of distributions. Using the decomposition of derivatives for SBV functions, we can write this relation, for any test function $\varphi$, as a sum of an absolutely continuous and a jump contribution, where the brackets denote the duality pairing of distributions. On the left-hand side, by definition, $\langle \operatorname{curl}(\nabla k_i\,\mathcal{L}^2), \varphi \rangle = \langle \operatorname{curl}\nabla k_i, \varphi \rangle$.
On the right-hand side, we also unwrap the duality and use the area formula to find (4.34). Thus, combining the previous equations, we conclude that $\langle \operatorname{curl}\nabla k_i, \varphi \rangle$ is controlled by the jump contribution computed in (4.34). Therefore, $\operatorname{curl}\nabla k_i$ is a finite Radon measure, and furthermore the integration by parts argument identifies it with the contribution of the jump of $k_i$ across $L^0_\xi$.
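In coordinates, the starting identity can be summarized using the standard SBV decomposition of $Dk_i$ (a sketch; the orientation of $\nu$ follows the convention of Proposition 4.2):
\[
0 = \operatorname{curl}(Dk_i)
= \operatorname{curl}\bigl(\nabla k_i\,\mathcal{L}^2\bigr)
+ \operatorname{curl}\bigl([\![k_i]\!]\,\nu\,\mathcal{H}^1 \llcorner J_{k_i}\bigr)
\quad \text{in } \mathcal{D}'(\Omega),
\]
so that $\operatorname{curl}\nabla k_i = -\operatorname{curl}\bigl([\![k_i]\!]\,\nu\,\mathcal{H}^1 \llcorner J_{k_i}\bigr)$, and the corollary amounts to checking, via the area formula as in (4.34), that this distribution has finite total variation.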
Conjectures about the Limiting Energy
Though Theorems 3.1 and 1.3 address the behavior of the energy $E_{\varepsilon,\xi}$, they leave the Γ-limits in the singularly perturbed regimes ($\varepsilon, \xi \to 0$) unresolved. In the constrained setting, the principal challenge in characterizing the limiting energy is to understand the coupled term of the layer energy. If we rewrite this term using a rescaled Helmholtz decomposition $\tilde B_n = \nabla_{\varepsilon_n} z_n + p_n$, where $p_n$ is rescaled divergence free, then, using the orthogonality of $p_n$ with respect to $\nabla_{\varepsilon_n} z_n$, the coupled term simplifies. So we see that if this term has bounded energy, then $p_n \to 0$ strongly and the limit of $\tilde B_n$ comes purely from the rescaled gradient term. Using this, one can heuristically argue for the structure of the limiting energy as follows. Considering the terms which depend only on $\tilde B_n$, we can view the relaxation in the double-well function as similar to the dimension reduction in which one fixes the so-called bending vector $\tfrac{1}{\varepsilon}\partial_2 z_n - \partial_2 k$. Such a relaxation has been considered in the 3D–2D case in [BFM09], and gives a cross convex-quasiconvex envelope (see also [FKP94]). Using the convergences given in Theorem 4.1, one could imagine leveraging the limiting structure of $\tilde B$ and the weak lower semicontinuity of cross convex-quasiconvex envelopes (denoted here by $Q^*$) with respect to rescaled gradients. Since the envelope (4.6) we are considering should not depend on $\alpha$ and $\tilde k$, we may optimize over these fields so that a lower bound is achieved. In particular, recalling the definition (4.8) and Proposition 4.2, we see through Jensen's inequality and the definition of cross quasiconvexity that a possible lower bound is (5.1). In this setting, this energy is quite similar to a Modica–Mortola functional for the vertical jump of the director, with a transition layer on the order of the core length $\xi$. This is the type of picture predicted by the numerical experiments in [ZANV21].
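For comparison, recall the classical Modica–Mortola result, stated here only to fix the analogy (the correspondence with the present layer energy is heuristic): for a double-well potential $W_0 \ge 0$ vanishing exactly at $a$ and $b$,
\[
F_\varepsilon(u) = \int_\Omega \varepsilon\,|\nabla u|^2 + \frac{1}{\varepsilon}\,W_0(u)\,dx
\;\xrightarrow{\;\Gamma\;}\;
c_{W_0}\,\operatorname{Per}\bigl(\{u = b\};\Omega\bigr),
\qquad
c_{W_0} = 2\int_a^b \sqrt{W_0(s)}\,ds,
\]
with transitions concentrating on interfaces of thickness of order $\varepsilon$; in the conjectured picture, the role of the interface is played by the jump of the director across $L^0_\xi$, with the transition-layer width set by the core length $\xi$.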
However, if the correct energy is as in (5.1), recalling Remark 1.2, we see that the disclination layer $(-\xi, 1) \times \{0\}$ allows for any jump of the unit-length director field. To counter this, most likely the energy should be modified so as to obtain strong convergence (in at least $L^1$) of $\tilde B_n$, so that the quasiconvexification of $W(|\cdot|)$ does not occur. In principle, this could be done via the inclusion of a lower-order term $\int_\Omega \varepsilon^3 |\operatorname{div} B|^2\,dx$ within the energy. Given the smaller order in $\varepsilon$, it would most likely be negligible in the limit $\varepsilon \to 0$. However, at the fixed $\varepsilon > 0$ level, it will require that $B = 0$ in the sense of traces on the boundary of the layer. This has the effect that $\nabla k$ will still see some small energy contribution within the layer.
"Mathematics"
] |
Regulation of virulence in Chromobacterium violaceum and strategies to combat it
Chromobacterium violaceum is a rod-shaped, Gram-negative, facultatively anaerobic bacterium with a cosmopolitan distribution. Only about 160 C. violaceum infections have been reported globally, but once established, the infection can cause deadly septicemia and involve the lungs, liver, brain, spleen, and lymphatic system, potentially leading to death. C. violaceum produces and uses violacein to kill bacteria that compete with it in an ecological niche. Violacein is a hydrophobic bisindole that is delivered through the aqueous environment by an efficient transport route termed outer membrane vesicles (OMVs). OMVs are small, spherical segments detached from the outer membrane of Gram-negative bacteria. OMV secretion in C. violaceum is controlled by the CviI/CviR quorum sensing system, which enables cell-to-cell communication and regulates various virulence factors such as biofilm formation and violacein biosynthesis. Another virulence factor, the bacterial type 3 secretion system (T3SS), is divided into two types: Cpi-1 and Cpi-2. Cpi-1's needle and rod proteins are recognized by NAIP receptors in humans and mice, activating the NLRC4 inflammasome cascade, which effectively clears spleen infections via pyroptosis and, in the liver, via cytotoxicity mediated by IL-18-driven natural killer (NK) cells. In this paper, we attempt to interrelate quorum-controlled biofilm formation, violacein production, violacein delivery by OMVs, T3SS effector protein production, and the host immunological response against Cpi-1 of the T3SS. We suggest a research path with a natural bioactive molecule such as palmitic acid, which can act both as an anti-quorum agent that reduces the expression of virulence factors and as an immunomodulatory agent that augments innate immune defense through hyperactivation of the NLRC4 inflammasome, thereby dramatically purging C. violaceum infections.
Introduction
Chromobacterium violaceum is a beta-proteobacterium that forms violet-colored colonies with a smooth surface. C. violaceum is a facultative anaerobe and a non-sporing bacillus (Kumar, 2012). The cells are motile, with a single polar flagellum, one to four lateral flagella, and pili all over the bacterium, which are responsible for motility (Ravichandran et al., 2018). Trehalose, gluconate, glucose, and N-acetylglucosamine are fermentable by C. violaceum. C. violaceum is positive for oxidase, which converts oxygen to water in the presence of cytochrome c and protons, and also for catalase, which converts hydrogen peroxide to water and oxygen (Antony et al., 2013). Saprophytic C. violaceum can be found in soil and water across the planet, with a predominant presence in tropical and subtropical regions (Batista and da Silva Neto, 2017). Malaysia, Brazil, Japan, Sri Lanka, Taiwan, the United States, Singapore, Argentina, Nigeria, Vietnam, Australia, Canada, Cuba, and India have reported numerous C. violaceum infection cases. The bacteria exist as natural microbiota of water and soil, but the most common way for them to enter the bloodstream and cause systemic infections is through wounds or cuts in the skin, where the bacterium invades from a polluted surface or water (Alisjahbana et al., 2021). Various case studies involving C. violaceum-related complications, such as bacterial hemophagocytic syndrome, brain abscess, chronic cellulitis, conjunctivitis, chronic granulomatosis, diarrhoea, endocarditis, internal jugular vein thrombophlebitis, meningitis, neutropenic sepsis, orbital cellulitis, osteomyelitis, pneumonia, puerperal sepsis, retropharyngeal infection, septic spondylitis and urinary tract infections, are mostly reported in immunocompromised individuals (Lin et al., 2016). The genetic material of the bacterium is a single circular chromosome of 4,751.8 kb with a GC content of roughly 64.83%. The C. violaceum genome has a broad but incomplete array of open reading frames (ORFs) that code for mammalian pathogenicity-associated proteins. This could be the cause of the high lethality rate combined with infrequent pathogenicity in humans. C. violaceum cells measure about 0.6 to 0.9 μm by 1.5 to 3.0 μm. Temperatures of 30-37 degrees Celsius in both aerobic and anaerobic environments are optimal for promoting growth in vitro, although anaerobic conditions reduce synthesis of the pigment violacein. The appropriate pH is 4; however, a pH of 3 or below is decidedly inimical to the bacteria (Castro et al., 2015). Apart from the familiar antibiotic pigment violacein, C. violaceum is capable of producing further antibiotics such as aerocyanidin and aerocavin, which are highly effective against both Gram-positive and Gram-negative organisms, and aztreonam, which is effective against Gram-negative bacteria (Parker et al., 1988). In addition to humans, it can infect other mammal species such as pigs, sheep, dogs, buffaloes, and monkeys (Liu et al., 2012). Ciprofloxacin is the most effective antibiotic against C. violaceum, followed by norfloxacin and pefloxacin (Nayyar et al., 2019).
The first episode of C. violaceum infection in India was observed in an 11-month-old boy who died of septicemia within 48 h. The second instance was a 2-year-old male infant who died within barely 48 h, before the antibiotic susceptibility test was completed. Both of these fatal cases illustrate the severity of C. violaceum infection. It can also cause moderate infections that are not life-threatening, as in the third case, which involved a 12-year-old female with a urinary tract infection. The infected region is characterized by ulcerated lesions that exude a bluish purulent fluid and are surrounded by inflammation. More than 10% of C. violaceum infections occur in patients suffering from chronic granulomatous disease. Hepatitis, or liver inflammation, is perhaps the most common human manifestation of C. violaceum infection (Alisjahbana et al., 2021).
Since multidrug resistance (MDR) is ubiquitous amongst harmful microbes, treating infectious microbes like C. violaceum has become a difficult task. Anti-quorum sensing and immunomodulation can be combined to combat bacterial infections, such as those caused by C. violaceum. The aim of this review is to shed light on a dual-mode therapy, in which a single drug molecule is used both to halt the virulence of infectious microbes and to modulate the host immune system so that immune cells clear the infection efficiently. Thus, the development of antibiotic resistance can be lessened to some extent. C. violaceum is a model organism for anti-quorum experiments as it produces the violacein pigment. We therefore focused on C. violaceum alone and propose a possible mechanism to fight it, which can be further extended to other pathogenic microorganisms.
Clinical significance
C. violaceum infections are clinically significant as they can lead to fatal outcomes if not diagnosed and treated promptly. The diagnosis of C. violaceum infection is based on the isolation and identification of the bacterium from clinical specimens, such as blood, pus, urine, or cerebrospinal fluid. The treatment of C. violaceum infection requires a combination of antibiotics, surgical drainage, and supportive care. The mortality rate of C. violaceum infection is high, ranging from 20 to 60%, depending on the severity and location of the infection. As discussed earlier, more than 160 cases have been reported globally in humans. Considering the infection pattern, from 1927 to 2000, a span of nearly 70 years, only 64 cases were recorded; in the decade from 2001 to 2010 there were 42 cases, which further escalated to 49 cases in the last decade (Figure 1A). The antibiotic resistance pattern of C. violaceum suggests that the infection rate might increase further in the coming years under irregular treatment strategies. The antibiotic sensitivity/resistance pattern is depicted in Table 1. Among the 37 antibiotics screened in our laboratory and reported in previous literature, C. violaceum was found to be resistant to 15 antibiotics and to show intermediate resistance to 6 antibiotics, which explains the difficulty in devising treatment strategies. Moreover, C. violaceum is of high research interest because of its ability to serve as a quorum sensing model organism for various other severe pathogenic bacterial species, owing to its violacein pigment production. Looking at the publication count in databases such as Scopus and PubMed, the number of articles on C. violaceum has increased drastically over the last two decades (Figure 1B). Thus, targeting this bacterium to find novel therapies can be beneficial in many ways for treating C. violaceum and other clinically significant pathogenic species.
Quorum sensing regulations
Quorum sensing allows bacteria to communicate and coordinate their behavior according to their population density. It involves the synthesis and detection of chemical signals called autoinducers (AIs). Autoinducers can diffuse through the cell membrane and bind to their corresponding receptors (Rutherford and Bassler, 2012). Gram-negative bacteria use acyl homoserine lactones (AHLs) as autoinducers, which are derived from S-adenosylmethionine (SAM). AHLs vary in the structure and length of their acyl chain, which determines their activity and binding specificity. AHLs can regulate various functions such as bioluminescence, virulence factor production, antimicrobial resistance (AMR) and biofilm formation. Quorum sensing involves two types of receptors: cytoplasmic transcription factors and membrane-bound histidine sensor kinases (Papenfort and Bassler, 2016).
The first identified quorum sensing pathway in Gram-negative bacteria is the LuxI-LuxR system in Vibrio fischeri, a marine bacterium that displays luminescence. The LuxI protein is an AHL synthase that converts S-adenosylmethionine (SAM) into N-(3-oxohexanoyl)-L-homoserine lactone (3OC6-HSL), which is the autoinducer for this system. The LuxR protein is a transcriptional regulator that binds 3OC6-HSL and activates the expression of the lux operon, which encodes the enzymes for bioluminescence. The LuxI-LuxR system is an example of autoinduction, where detection of the autoinducer stimulates its own production, creating a positive feedback loop (Lupp and Ruby, 2005).
Another example of a quorum sensing pathway in Gram-negative bacteria is the LasI-LasR system in Pseudomonas aeruginosa, an opportunistic pathogen that is clinically important. The LasI protein is an AHL synthase that converts SAM into N-(3-oxododecanoyl)-L-homoserine lactone (3OC12-HSL), which is the autoinducer for this system. The LasR protein is a transcriptional regulator that binds 3OC12-HSL and activates or represses the expression of various genes, including those involved in virulence and antibiotic resistance. The LasI-LasR system is an example of signal integration, where multiple autoinducers and receptors work together to coordinate bacterial behavior. For instance, P. aeruginosa also produces another AHL, called N-butanoyl-L-homoserine lactone (C4-HSL), through the RhlI-RhlR system, which interacts with the LasI-LasR system to fine-tune the expression of quorum sensing-regulated genes (Miranda et al., 2022). The RhlI/RhlR quorum system regulates rhamnolipid synthesis and virulence protein expression in the infected host cytoplasm, and the Rhl system also regulates biofilm formation and development. The third QS system, called the Pseudomonas quinolone signal (PQS) system, is associated with the biosynthesis of the 2-heptyl-3-hydroxy-4-quinolone signal by the pqsABCDE operon. The PQS signaling molecule, transported via outer membrane vesicles, promotes iron sequestration. The PQS system also governs the formation of rhamnolipid and the associated biofilm. The recently identified IQS quorum sensing system produces 2-(2-hydroxyphenyl)-thiazole-4-carbaldehyde as its signal molecule.
However, the molecular mechanism behind its signaling and the genes regulated by the IQS system have not yet been discovered. Thus, the interlinked quorum systems of P. aeruginosa, namely Las, Rhl, PQS and IQS, pose the greatest of threats in human infection cases (Vadakkan et al., 2024).
Quorum sensing in Chromobacterium violaceum
C. violaceum incorporates a large number of virulence mechanisms, one of which is the quorum-sensing system (Ciprandi et al., 2013). Quorum sensing is a type of cell-to-cell communication that helps bacteria communicate with one another (Castro-Gomes et al., 2014). The basic quorum sensing mechanism in C. violaceum is represented in Figure 2. Autoinducers are tiny chemical molecules responsible for the transmission of quorum signals between cells (Høiby et al., 2010). Quorum sensing in C. violaceum is mediated by two genes, cviI and cviR. They are analogous to the luxI and luxR homologues of V. fischeri and respond with high affinity to AHL (acyl homoserine lactone). CviI is an AHL synthase gene that governs the biosynthesis of N-decanoyl-L-homoserine lactone (C10-HSL), a signaling molecule. CviR is a transcriptional regulator protein that governs gene expression upon binding the CviI product. These two genes are adjacent neighbours; however, transcription takes place on different DNA strands with overlapping regions up to 73 bp in length. When coupled to CviR, AHL molecules with chain lengths of C4 to C8 stimulate vioA transcription, responsible for the synthesis of violacein. On the other hand, AHLs with chain lengths of C10 to C14 act as inactive antagonists (Høiby et al., 2010). The AHL molecule attaches to its cognate receptor, and as the bacterial population increases this complex regulates the expression of the target genes (Stauff and Bassler, 2011). Below the threshold population, AHL remains at a low concentration, delaying the formation of signal-receptor complexes. At the same time, the unbound, unstable CviR dissociates and is therefore unable to bind to its palindromic binding site CTGNCCNNNNGGNCAG (Castro-Gomes et al., 2014). When compared to TraR, the receptor protein for 3-oxo-C8-HSL in Agrobacterium tumefaciens, the CviR domains are fairly similar, but their organization is clearly distinct. Unlike TraR, CviR bound to chlorolactone (CL), an inhibitor of CviR, has a cross-subunit architecture wherein each monomer's DNA-binding domain is stationed underneath the ligand-binding domain of the opposite monomer, showcasing LBD-DBD interactions. This results in a 60-degree separation between the two DNA-binding helices. The spacing required for efficient operator binding is 30 Å, so the reduced DNA-binding affinity of the CviR:CL complex is readily understood. The full-length CviR structure was solved in complex with its antagonist, so this cross-subunit arrangement may result from that interaction. Thus the proposed hypothesis, that CviR inhibition by CL bound to the autoinducer cavity induces a closed conformation that is unable to bind DNA, is acknowledged (Brumbach et al., 2007).
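As a small illustration of the operator site just described, the palindromic CviR box can be searched for programmatically. The sketch below is plain Ruby; the promoter fragment is a made-up placeholder, not a real C. violaceum sequence, and it simply translates the N wildcards of CTGNCCNNNNGGNCAG into a regular expression and scans both strands.

# Scan a DNA sequence for the CviR operator CTGNCCNNNNGGNCAG,
# where N stands for any nucleotide.
MOTIF = "CTGNCCNNNNGGNCAG"
motif_re = Regexp.new(MOTIF.gsub("N", "[ACGT]"))

def reverse_complement(seq)
  seq.reverse.tr("ACGT", "TGCA")
end

# Hypothetical promoter fragment used only to exercise the code.
promoter = "AAATTCTGACCTTGAGGACAGTTTTGCAACTGTCCCGTAGGTCAGAA"

[promoter, reverse_complement(promoter)].each_with_index do |strand, i|
  strand.scan(motif_re) do
    puts "match on strand #{i}: #{Regexp.last_match(0)} at #{Regexp.last_match.begin(0)}"
  end
end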
Biofilm formation and signal transduction mechanism regulated by quorum sensing

Biofilms
Depending entirely on the bacterial population, biofilms are formed and regulated in the same population-dependent manner by C. violaceum's hmsHNFR genes (Becker et al., 2009). Biofilm images from compound light and scanning electron microscopy (SEM), portrayed in Figure 3, are data from our lab. The synthesis of virulence factors in the multilayer biofilm aids infection and makes the bacteria immensely harmful. Biofilms safeguard microorganisms from a range of environmental and artificial stresses, such as pH, temperature, antibiotics and antimicrobials. Biofilms greatly help establish bacterial adhesion and allow survival in harsh habitats. As a result, destroying the biofilm will be a pivotal stage in combating the infection.
Violacein
Violacein, a purple pigment, is a hydrophobic moiety with antimicrobial characteristics that gives C. violaceum its typical violet-colored colonies. Its synthesis is controlled by the quorum-sensing machinery of C. violaceum and is therefore population-dependent. Violacein production is governed by a set of genes, including vioA, vioB, vioC, vioD and vioE, all of which are transcribed in almost the same way and encoded within a 7.3 kb DNA segment (McClean et al., 1997). The vioABCDE operon governs the biosynthesis of violacein from the amino acid tryptophan. The CviI synthase enzyme contributes to the conversion of fatty acids or S-adenosylmethionine to AHLs, which then form a complex with CviR and stimulate the vioABCDE operon (McClean et al., 1997). In C. violaceum, the regulatory domain of CviR regulates vioA and other specific promoters (Swem et al., 2009; Høiby et al., 2010).
In the first step of violacein synthesis, the enzyme VioA (flavin-dependent tryptophan 2-monooxygenase), an orthologue of StaO and RebO, oxidizes L-tryptophan to generate indole-3-pyruvic acid imine (IPA imine) and reduces the cofactor FAD to FADH. The IPA imine is then dimerized by the heme-containing oxidase VioB (and its counterparts StaD and RebD) to form an unstable imine dimer. Then, in a series of closely related steps, the imine dimer can lead to the production of a variety of final compounds. Acting as a catalytic chaperone with a fold related to lipoprotein transporters, the enzyme VioE converts this unstable molecule into protodeoxyviolaceinic acid (PDVA) without the need for any cofactors or metals. VioD, a flavin-dependent oxygenase, hydroxylates the C-5 position of the indole ring, yielding protoviolaceinic acid, which is then transformed into violacein by VioC, which hydroxylates the C-2 position of the second indole ring, followed by oxidative decarboxylation (Hall and Mah, 2017). When only VioC is present, protodeoxyviolacein is converted to deoxyviolacein (Antony et al., 2013). In a spontaneous reaction, the imine dimer can produce chromopyrrolic acid, which is converted to rebeccamycin, another kind of antibiotic, by the enzymes RebP, RebC, RebG and RebM, and to staurosporine, yet another antibiotic, by the enzymes StaP, StaC, StaG, StaN, StaMA and StaMB (Devescovi et al., 2017). Violacein biosynthesis is represented in a flow diagram (Figure 4).
Violacein activated essential pathways related to the immune and inflammatory response in toll-like receptor (TLR)-transfected HEK cell lines. The hTLR8 receptor-mediated signaling pathway is activated, but not that of hTLR7. In silico analysis depicted an interaction of violacein with hTLR8 similar to that of imidazoquinoline compounds. CU-CPTga, an antagonist of hTLR8, was shown to counteract the immunostimulatory effects of violacein (Venegas et al., 2019).
Outer membrane vesicles
Outer membrane vesicles are nano-sized spherical components of the bacterial outer membrane that can range in size from 20 to 200 nm, depending on the strain and environmental conditions (Reimer et al., 2021). OMVs are released from the bacterial outer membrane; thus, they contain proteins native to the outer membrane as well as periplasmic chemicals that are prevalent in the space between the two membranes (Rollauer et al., 2015). These contents are designated as OMV cargo, since OMVs serve as a bacterial strain's transport system. They are multipurpose cargo carriers that benefit bacteria in a multitude of ways beyond transport. OMVs effectively remove hazardous toxins from bacterial cells when they are exposed to environmental or artificial stress (Jan, 2017). C. violaceum uses its OMVs to thwart its rivals and kill them in an ecological niche using OMV-derived violacein, one of its potent antibiotics. OMVs can easily be incorporated into other organisms due to their rapid permeabilization of lipid bilayers (Wettstadt, 2020). The outer membrane of competing Gram-negative bacteria, on the other hand, opposes the entry of violacein and prevents it from reaching the inner membrane. Because violacein is a hydrophobic molecule, C. violaceum transports it to competing bacteria through an aqueous medium by using OMVs (Choi et al., 2020). The C. violaceum CviI/CviR quorum system tailors OMV secretion depending on the cell population (Batista et al., 2020).

FIGURE 3 Chromobacterium violaceum biofilm formation is regulated by quorum sensing: (A) compound microscopy; (B) scanning electron microscopy. The images were obtained in our laboratory using a compound light microscope and a Carl Zeiss Evo/18 scanning electron microscope.
Other virulence factors
In addition, the QS system regulates a multitude of virulence factors, comprising chitinase, which breaks down chitin as a carbon source (Devlin and Behnsen, 2023) and targets immune system components such as mucins and surface glycans; collagenase; cytolytic toxins (hemolysin and leukotoxins), which are detrimental to host cellular functions; the exopolysaccharides of bacterial biofilms; flagellar proteins; lipases; metalloproteases; swarming motility; exoprotease synthesis; and the T2SS and T3SS (Miller et al., 1988; de Oca-Mejía et al., 2015). C. violaceum does indeed have type IV pili machinery, which is crucial for twitching motility, bacterial aggregation and host adhesion, in addition to a single flagellum, and as a result escalates pathogenicity. The type IV pili machinery includes known genes such as pilB, pilC and pilD as well as some other unknown genes (Galán et al., 2014). The functions of these genes were identified based on a comparative study with the P. aeruginosa type IV pili genes, as they are nearly similar in assembly and characteristics. C. violaceum's type VI secretion system (T6SS), which is regulated by quorum-sensing mechanisms, is crucial for competition amongst bacterial populations. In the T6SS, there are roughly 14 core components. VgrG is one of the proteins that create holes in host cells or competing bacteria. Six vgrG genes are dispersed between vgrG islands and T6SS clusters in C. violaceum. The T6SS is necessary for inter-bacterial competition but not for host infection.
CviR, but not CviI, is an important QS protein that regulates the T6SS. VgrG3 is the most important of the six vgrG genes for regulating inter-bacterial competition (Previato-Mello et al., 2017). Other findings confirm that OhrR, a sensor of organic hydroperoxides belonging to the MarR family of winged helix-turn-helix transcriptional regulators, is likewise significant for pathogenicity in mice (Miki et al., 2011). Virulent strains contain abundant superoxide dismutase and catalase enzymes, which protect them from phagocytic attack and render them exceptionally virulent compared to avirulent strains (Du et al., 2016). Other proteins involved in C. violaceum pathogenicity include hemolysin, which helps lyse blood cells in systemic infections; outer membrane protein; collagenase, which can cleave multiple sites in the triple-helical structure of collagen and destroys denatured collagen; flagellar protein, which aids motility and pathogenicity; and metallopeptidases, which cleave peptide bonds and assist degradation (Miller et al., 1988).
T3SS system in Chromobacterium violaceum
Genomic sequencing of the strain C. violaceum ATCC 12472 was used to ascertain its pathogenicity. The results emphasize the existence of numerous pathogenic components responsible for C. violaceum infections in humans. Among them is the type 3 secretion system (T3SS), a multiprotein needle-like system that is exceptionally important for introducing the bacterium's effector proteins into the host and thereby causing the damage resulting from infection (Liu et al., 2022). The genomic organization of the type 3 effector proteins is described in Figure 5. Further investigation into the type 3 secretion system divulged that there are two primary types of T3SS, namely the C. violaceum pathogenicity island Cpi-1 and the C. violaceum pathogenicity island Cpi-2. Cpi-1 and Cpi-2 are homologous to the Salmonella pathogenicity islands Spi-1 and Spi-2, which encompass genes involved in harmonizing two kinds of T3SS proteins (Galán et al., 2014). Typically, these two islands are found at adjacent locations in the C. violaceum genome. The Cpi-1 genes lie in two distinct clusters, one coding for the needle complex and the other making up the rest of the island, whereas the Cpi-2 genes are all grouped together in a single region. The primary drivers of virulence are cpi-1 and cpi-1a, as brought to light by deletion studies of the cpi-1, cpi-1a, and cpi-2 secretion systems. The Cpi-1/Cpi-1a-encoded T3SS mediates the translocation of the products of genes encoding other T3SS effectors. Although the functions of most T3SS-specific proteins are unknown, minimal research has been carried out to define the functions of Chromobacterium outer protein E (CopE) and of CivB, a putative chaperone specific for CopE, which are regulated by CilA [14]. Five putative regulators, namely CilA, CivF, ArmR, SrB and SrC, are located within the pathogenicity islands Cpi-1 and Cpi-2; mutagenesis and expression analyses further showed that CilA is the master transcriptional activator for a significant number of the genes found in cpi-1 and cpi-1a, highlighting that it is a key regulator of T3SS genes (Batista and da Silva Neto, 2017).
The cpi-1 and cpi-1a islands encode the T3SS and are indispensable for hepatocyte cytotoxicity and cell death. Cpi-1/1a encodes 16 effector proteins that are translocated into hepatocytes, but their roles are not clearly understood, leaving researchers with emerging research prospects (Alves de Brito et al., 2004). CopE, one of the effector proteins, functions as a guanine nucleotide exchange factor (GEF) in HeLa cells, activating the Rho-family GTPases Rac1 and cell division control protein 42 homolog (Cdc42), allowing for actin rearrangement and subsequent invasion of non-phagocytic epithelial cells, which are responsible for C. violaceum pathogenicity in mice (Alves de Brito et al., 2004). The precise and intricate function of Cpi-2 is uncertain; it is probably involved in C. violaceum's persistence after being engulfed by macrophages, like that of Salmonella, given that most of the activities of the T3SS in these two organisms are identical. Once phagocytosed, C. violaceum successfully breaks out of the phagosome to permeate the cytosol of epithelial cells, and the mechanism involves CipC, a Cpi-1-translocated protein (Alves de Brito et al., 2004). The multiprotein complex of the T3SS traverses the outer and inner bacterial membranes and uses ATP as an energy source to release effector proteins to the cell's exterior. Through a translocation system, the T3SS has the capacity to transport those proteins straight into the cytoplasm of the host eukaryotic cells. The C. violaceum Cpi-1 island is located downstream of Cpi-2 and constitutes 26 genes, from putative ORF 2615 to 2642. The island terminates in a tRNA-Leu gene area, precisely 183 bp downstream of gene 2642. Cpi-1 spans 31,004 base pairs with a G + C percentage of 67, which is extremely close to C. violaceum's total GC content of 64.8 percent (Reimer et al., 2021). Cpi-1a is sandwiched between genes 2416 and gst 2424, 200 kb upstream of cpi-1, and spans 4,190 bases with a GC content of 66 percent. Upstream of cpi-1, there are 39 putative open reading frames, ranging from 2,574 to 2,614, which are well delineated by a reduction in GC content (54 percent) and are known as cpi-2; this island contains 40,291 bp of genetic information. The cpi-1 island codes for the Inv-Spa transcriptional regulator and basal components, as well as the sic-sip Spi-1-like translocator operons.
Immunological response to Chromobacterium violaceum's type 3 secretion system
Cpi-1a corresponds to the prg-org operon of Spi-1, which encodes needle-like T3SS components that act as ligands activating human immune responses. CilA was found to be the master regulator of cpi-1 and cpi-1a expression, as identified by transcriptional profiling with DNA microarrays. CipB, a Cpi-1/Cpi-1a-regulated molecule, is engaged in translocator-mediated pore construction in the host cell membrane, which is a vital element in C. violaceum cytotoxicity (Miki et al., 2010; Zhao et al., 2011). The T3SS can induce similar pathways in a few additional organisms, notably Pseudomonas, Salmonella, Shigella and Legionella (Coburn et al., 2007). In humans and mice, the NAIP (NLR family apoptosis inhibitory protein) receptors recognize the C. violaceum T3SS Cpi-1 needle and rod proteins and activate the NLRC4 (NOD-like receptor) inflammasome. Inflammasomes are cytoplasmic complexes that recognize bacterial infection and contribute to eradicating pathogens. Unlike other inflammasomes, such as NLRP1 and NLRP3, which have a wide range of activators, NLRC4 inflammasomes are operated by a limited number of activator molecules, most of which originate from bacteria, such as flagellin, the T3SS, and a few T4SS components (Yang et al., 2013).
The human NAIP protein recognizes the CprI needle subunit of the C. violaceum T3SS, in a manner analogous to mouse NAIP2 and NAIP5, and activates the NLRC4 inflammasome in human macrophages, effectively combating C. violaceum infection (Vladimer et al., 2013; Zheng et al., 2020). In NLRC4, the signaling caspase activation and recruitment domain (CARD) is found in the N-terminal region, the NACHT/NOD domain is placed in the central part, and the leucine-rich repeat (LRR) is located in the C-terminal region; therefore NLRC4 inflammasomes share a similar architecture with NLRP1 and NLRP3 inflammasomes. NACHT refers to a group of subdomains of the NLRC4 inflammasome that include the nucleotide-binding domain (NBD) and distinct helical domains. The interplay of the NACHT domain's NBD and WHD keeps the NLRC4 inflammasome in a closed state (Duncan and Canna, 2018). The assembly of the NLRC4 inflammasome is similar to that of the apoptosome, an oligomer produced by apoptotic peptidase activating factor (APAF)-1. When NAIP binds to its ligand, three BIR domains are exposed, and its interaction with NLRC4 relieves the LRR's auto-inhibition, resulting in pyroptosis (cell scorching), a type of programmed cell death (Vance, 2015). The full-length NLRC4 inflammasome is inert, but NLRC4 lacking the C-terminal LRR is active, allowing caspase 1 and the pyroptosis pathway to be activated for efficient eradication of bacterial infection (Sundaram and Kanneganti, 2021). In contrast to caspase 1, the caspase 11 pathway is not essential for C. violaceum clearance (Maltez et al., 2015). When the bacterial effector protein attaches to the human or mouse NAIP receptor protein, the NLRC4 inflammasome is activated and, together with further quiescent NLRC4 proteins, forms a self-propagating oligomer with a disc-like structure. Through CARD-CARD interactions, the adaptor protein ASC is recruited to NLRC4 and caspase 1 then binds to ASC, leading to caspase 1 activation and pyroptosis. Caspase 1 cleaves and activates the pore-forming Gasdermin D, which causes inflammatory cell death of the infected host cell through pyroptosis by compromising membrane integrity. The PYD domain is absent in NLRC4, unlike NLRP3, and the complex is thus unaffected by K+ efflux. However, the CARD domain can interact with procaspase 1 directly, causing pyroptosis and making the ASC molecule redundant in the NLRC4 inflammasome pathway. When the zymogen procaspase 1 is cleaved, it becomes activated into the protease caspase 1, which cleaves pro-IL-1β and pro-IL-18 into their active forms; these are then liberated outside the cell through the pore formed by Gasdermin D. Pyroptosis on its own is adequate for bacterial clearance in the spleen, whereas both pyroptosis and IL-18-driven NK cell-mediated clearance are required in the liver. In the liver, IL-1β is not as protective as IL-18. Perforin-mediated cytotoxicity is carried out by NK cells in the liver, and interferon is not required. Because of the diverse cell types in the spleen and liver, the different mechanisms are understandable. The inflammasome response mediated by innate immunity is sufficient for defense against C. violaceum; hence an adaptive immune response is not required (Maltez et al., 2015; Sundaram and Kanneganti, 2021). The immunological response is depicted with flow diagrams in Figure 6.
Can palmitate act as an anti-quorum agent and immunomodulator simultaneously?

Some of the currently used medications against C. violaceum include common antibiotics such as gentamycin, erythromycin and ciprofloxacin, but the problem is the increasing antibiotic resistance that follows repeated, irregular usage of antibiotics. Natural commodities are a renewable resource that can be used for a multitude of purposes (Veeresham, 2012). Half of the pharmaceuticals licensed for various ailments in the previous 20 years have a natural product backbone (Dias et al., 2012). The natural environment encompasses a broad range of bioactive metabolites that can act as antioxidants. Traditional medicine, which is primarily based on plant products, is now used by three-quarters of the world's population. These products can function as antimicrobial drugs, quorum sensing inhibitors, anti-inflammatory agents, and immunomodulators. The various strategies for interfering with quorum sensing systems are collectively termed quorum quenching, by which bacteria are rendered unable to communicate with one another. Quorum quenching can affect various processes that bacteria employ to establish infection, such as biofilm formation, toxin production, spore formation, and outer membrane vesicle (OMV) release (Vadakkan et al., 2018).
Only 10 percent of all plant species have been investigated to date, so conducting research in this segment will improve the likelihood of enhancing the living standards of humans (Borges et al., 2016; Wylie and Merrell, 2022). The leaf extracts of Ocimum sanctum, the fruit extracts of Passiflora edulis, and the pseudo-stem extracts of Musa paradisiaca have all proved to be versatile quorum-sensing inhibitors against C. violaceum. The P. edulis bioactive molecule hexadecanoic acid, 2-hydroxy-1-(hydroxymethyl)ethyl ester interfered with the CviI/CviR quorum sensing system of C. violaceum and negatively regulated the quorum system (Musthafa et al., 2010; Venkatramanan et al., 2020). In C. violaceum, pigment and biofilm synthesis were reduced by hydroalcoholic extracts of Tribulus terrestris roots, with the highest effect at 2.5 mg/mL. The main compound was ß-1,5-O-dibenzoyl ribofuranose, which interfered with the signaling activity of the AHL molecules that regulate quorum sensing, rather than with AHL production (Vadakkan et al., 2019). The root extract of Desmodium gangeticum showed anti-quorum sensing activity against C. violaceum by inhibiting the production of violacein when applied at 300 μg/mL, thrice a day, without any effect on bacterial viability. The main quorum quenching compound was identified as cis-3,5-dimethylocta-1,6-diene, with significant bacterial silencing activity (Vadakkan et al., 2022). In a study by Vargas et al. (2021), palmitic acid and phytol were found to have efficient binding affinity for the CviR receptor in silico, suggesting phytol as an effective molecule for inhibiting quorum sensing. These studies provide evidence for the potential of anti-quorum sensing compounds to overcome C. violaceum infection in various animal models.

FIGURE 6 Chromobacterium violaceum and its effector proteins infiltrate the cytoplasm of the host. The initial step of inflammasome activation occurs when the type 3 secretion system's CprI needle and rod proteins are released and bind to human NAIP. Then, activated NLRC4 cleaves and triggers caspase 1 in response to the exposed BIR domain. When NLRC4 and caspase 1 engage, the CARD domain attracts the ASC adaptor, which together form the inflammasome complex. Pro-IL-1β and pro-IL-18 are then converted to their active forms by the complex. By cleaving Gasdermin D, the inflammasome also activates it, establishing the plasma membrane pores via which the interleukins and bacteria leave the cells. Other immune cells, such as neutrophils and natural killer cells, are drawn to the area by the interleukins, which act as a signal. These cells eliminate the bacterial load in addition to the infected cell.
The functions and efficiency of the immune system are influenced by a variety of exogenous and endogenous substances known as immunomodulators. Alkaloids, diterpenoids, flavonoids, glycosides, lactones, and polysaccharides derived from plants can act as immunostimulants, immunoadjuvants, and immunosuppressants by, respectively, enhancing the efficacy of immune system components or mediators, improving the efficacy of vaccines or drugs, and downregulating the immune system (Di Sotto et al., 2020). Cytotoxic synthetic medications can also be used for this purpose, but they are accompanied by a number of adverse side effects for the host, as well as being prohibitively expensive for commercial reasons. Palmitic acid is a natural bioactive compound found predominantly in plant species. Using palmitate, Lie et al. described non-pathogenic NLRC4 inflammasome activation: palmitate promoted apoptosis in astrocytes via the NLRC4 inflammasome; caspase is activated, and an inflammasome complex containing CARD and ASC is recruited, resulting in the production of IL-1β in astrocytes (Wen et al., 2021). Palmitate treatment of HepG2 human hepatoma cells significantly increased the production of pro-inflammatory cytokines such as IL-1β, IL-18, TNF-α and MCP-1. The observed active forms of the above-mentioned cytokines were assumed to result from an activated NLRC4 inflammasome, which was further supported by elevated mRNA expression of the NLRC4 protein. This higher expression was dose-dependent, i.e., at a lower dose of palmitate the NLRC4 inflammasome is expressed less than at a higher dose. To elicit an increased NLRC4-mediated defense against C. violaceum, palmitate can be used at a lower dose (Luo et al., 2012). Palmitic acid at a concentration of 1 mM is claimed to suppress violacein synthesis in C. violaceum by around 50% while disrupting growth by only about 5% (Pérez-López et al., 2018). In vivo studies showed that palmitic acid-containing essential oils delayed the death of C. violaceum-infected mice. This supports the notion that palmitic acid can be employed in a dual-mode treatment for C. violaceum infections, possibly through its anti-quorum and NLRC4-upregulating abilities, which would be efficient in clearing the infected cells. The idea proposed here is to use palmitate as a dual therapeutic strategy against C. violaceum and similar infections. The concentration needed to attain both effects should be standardized using experimental procedures.
Future perspectives and conclusion
It has been reported that, when the NLRC4 inflammasome is overexpressed in macrophages facing a small load of S. typhimurium, it is very efficient in clearing the pathogen, but when the bacterial load increases, overexpression of the NLRC4 inflammasome causes cell toxicity that is detrimental to the host (Wen et al., 2021). For any microbial infection, we can select a particular natural compound, or a combination of them, to restrict the microorganism's quorum sensing, rendering it less virulent and thereby enabling our immune system to eradicate it before any harm is done. Furthermore, if that combination of natural products is also effective in immunomodulation, i.e., in boosting inflammasome-mediated pyroptosis, the natural product's efficacy will be greatly amplified. If we only reduce pathogenicity by blocking signals in an anti-quorum sensing treatment, the bacteria still exist in the host, with the possibility of regaining virulence due to host susceptibility to other infections or to mutation. Natural product-based therapeutics will be immensely beneficial if we can effectively prevent virulence development by blocking QS and also promote NLRC4 inflammasome-mediated lysis of infected cells. Palmitate is a compound reported to have both anti-quorum activity and NLRC4 inflammasome-activating properties (Liu and Chan, 2014; Pérez-López et al., 2018). Hence compounds like it can be effectively utilized to do both jobs simultaneously for better treatment, which would be a novel strategy to fight the development of antimicrobial resistance. The idea is represented by a flow diagram in Figure 7. The advantage of this proposed anti-quorum-mediated therapeutic technique is that it can be an alternative to antimicrobial therapy and may bypass resistance development. Anti-quorum sensing reduces the expression of virulence factors without affecting bacterial survival, thus avoiding the strong Darwinian selective pressure exerted by antibiotics. These therapeutics can render any pathogenic bacteria less virulent and hence reduce the severity of infections. The take-home point we are trying to convey is that we need to identify a single phytochemical compound with multiple therapeutic actions against a given infectious pathogen. The proposed palmitate is one such phytochemical with this potential, because palmitate can trigger the NLRC4 inflammasome pathway, which will heighten the pathogen clearance mechanism, along with its anti-quorum propensity against C. violaceum. The limitation of this proposed idea lies in identifying a specific phytochemical against each individual pathogenic organism.
FIGURE 1 (A) Number of human infection cases of C. violaceum. (B) Number of articles related to C. violaceum in Scopus and PubMed.
FIGURE 2
The canonical positive feedback loop employed by C. violaceum. The autoinducer C10-HSL is secreted by the CviI synthase; when it reaches a threshold level it is detected by neighboring bacteria's CviR, a DNA-binding transcriptional regulator that controls quorum-sensing-regulated systems such as violacein production, biofilms and outer membrane vesicles.
FIGURE 4
In the first step of violacein synthesis, the enzyme VioA oxidizes L-tryptophan to generate indole-3-pyruvic acid imine. The IPA imine is then dimerized by the heme-containing oxidase VioB (and its counterparts StaD and RebD) to form an unstable imine dimer. VioE converts this unstable imine dimer into protodeoxyviolaceinic acid. VioD, a flavin-dependent oxygenase, hydroxylates the C-5 position of the indole ring, yielding protoviolaceinic acid, which is then transformed into violacein by VioC. When only VioC is present, protodeoxyviolacein is converted to deoxyviolacein. In a spontaneous reaction, the imine dimer can produce chromopyrrolic acid, which is converted to rebeccamycin, another kind of antibiotic, by the enzymes RebP, RebC, RebG and RebM, and to staurosporine, another antibiotic, by the enzymes StaP, StaC, StaG, StaN, StaMA and StaMB.
TABLE 1
Antibiotic resistance/sensitivity pattern of C. violaceum (Dal'Molin et al., 2009; Kothari et al., 2017; and data obtained from our laboratory).
Two vesiculation pathways, violacein biosynthesis and the VacJ/Yrb system, act in opposite directions to modulate OMV secretion. Both vesiculation channels are QS-dependent, meaning they are triggered whenever the population density is high; however, their effects on vesiculation are opposite. Deletion of the vioABCDE operon resulted in a twofold reduction in vesiculation, demonstrating that violacein stimulates OMV biogenesis for delivery purposes. Deletion of vacJ and yrbE, on the other hand, resulted in an overabundance of vesicles. By regulating the vioABCDE operon and yrbFEDCB/vacJ, cviI and cviR regulate OMV synthesis. Other elements that control these vesiculation pathways include the bacterial envelope's stress response protein and the responsive protein binding the peptidoglycan layer to the outer membrane (Batista et al., 2020). Antibiotic resistance can be established by OMVs far more readily than by resistance mechanisms established at the genetic level, because OMVs can operate as decoys by adhering to antibiotics and preventing them from reaching bacterial populations.
C. violaceum infection in different animal models has been efficiently treated with different anti-quorum strategies. The survival of planarian flatworms was enhanced by lactonase-mediated quorum quenching (QQ) against C. violaceum infection. Lactonase degraded the acyl-homoserine lactone (AHL) molecules that mediate quorum sensing (QS) in C. violaceum, thereby disrupting bacterial communication and virulence. The planarian Schmidtea mediterranea succumbed to C. violaceum infection at a high dose of 4 × 10^9 CFU/mL; however, QQ by lactonase significantly reduced the bacterial toxicity and increased planarian survival to 100 percent at the same load of C. violaceum, as reported by Mion et al. (2021). Kanekar and Devasya (2022) reported an increased survival rate of C. violaceum-infected nematodes when they were subjected to the anti-quorum sensing molecule linalool. The nematodes, C. elegans, were pre-infected with the pathogen C. violaceum and then administered various concentrations of linalool, from 40 to 80 μg/mL. The findings indicate that linalool disrupted the quorum sensing mechanism of C. violaceum and attenuated its virulence factor secretion, thereby increasing the viability of C. elegans. The survival of mice infected with C. violaceum was significantly enhanced by oral administration of essential seed oils from sunflower, chia and amaranth, according to a study by Macrina et al., through anti-quorum sensing activity interfering with bacterial communication and virulence. The mice treated with sunflower essential oil (EO) had a median survival of 18 h, followed by 16 h for chia EO and 14 h for amaranth EO. In contrast, the PBS control group had a median survival of only 10 h (Pérez-López et al., 2018).
"Biology",
"Medicine",
"Environmental Science"
] |
Evaluation of the HDLRuby Hardware Description Language by implementing an 8-bit RISC Processor
HDLRuby is a new hardware description language (HDL) based on Ruby, created to improve the productivity of HW designers. This paper presents a study of the implementation from scratch with HDLRuby of an 8-bit RISC processor called MEI8. This implementation required only little effort, and its code is less than half the length of the equivalent VHDL code. The resulting processor was mapped onto a Virtex7 FPGA, where it ran at 100 MHz, and was estimated to run at 28 MHz when implemented as a 0.5 μm IC.
Introduction
Register-transfer level (RTL) has long been the de facto model used for describing HW. With the design productivity of RTL stagnating, huge efforts have been spent on creating new methods for synthesizing HW from increasingly abstract representations. Yet these methods are still limited, and in the end designers still rely heavily on RTL synthesis. Hence, time-to-market requirements have pushed the adoption of processor-centric devices even though their energy and power efficiency is much poorer than that of pure HW (1,2).
When observing the academic and industrial works aimed at improving HW design, it can be noticed that the main goal has been to use ever more SW-like models for describing HW. This culminates with High Level Synthesis (HLS) (3), which tries to synthesize HW directly from SW code. The idea of getting closer to SW is indeed attractive, but it has proved difficult to synthesize efficient HW using SW-oriented models of computation (4).
Conversely, even though the design productivity of SW has increased a lot, its model of computation has remained mostly unchanged, based on the sequential execution of imperative instructions. Therefore, it can be argued that the choice of model of computation may not be the relevant factor for improving productivity. In this context, we proposed HDLRuby (5,6), an RTL-based HDL built upon the Ruby programming language (7) for including paradigms that are independent of the model of computation and efficient in SW design. Namely, we focused on the following: object-oriented programming, genericity, metaprogramming and reflection. It must be noted that the proposed approach is orthogonal to the traditional ways of abstracting HW, and that HDLRuby can be extended to support HLS-like algorithms.
The goal of this paper is to evaluate the benefits of using HDLRuby for designing a full-fledged circuit from scratch. For that purpose, the paper presents the implementation from scratch of a full processor with HDLRuby, compares its complexity with the corresponding VHDL implementation, evaluates the effort that was required to obtain the final result, checks its validity on FPGA and IC targets, and estimates the final design productivity in gates. The processor is called MEI8 and is an 8-bit RISC Harvard processor including 8 general-purpose registers, 37 different instructions and 2 external interrupt ports. In the present implementation, the instruction memory is an on-chip 256-byte ROM. More details about this processor, and the corresponding source code, are available online (8).
The rest of the paper is organized as follows: section 2 presents some related works, section 3 presents the HDLRuby language, and section 4 details the study, gives its results and discusses their significance. Finally, section 5 concludes the paper.
Related Works
A main improvement in HW design was the adoption, about 25 years ago, of the RTL model of computation (9). While successful, RTL design is still much more time-consuming than SW design. For this reason, tremendous efforts have been spent on further improving the design productivity of HW. Among these efforts we can cite: the early works on behavioral synthesis (10), which tried to synthesize HW from clock-free sequential code; component-based design (11,12), which generates wrappers for low-level HW/SW components allowing easy composition of complex systems; the introduction of SW-based HDLs like SystemC (13) or SpecC (14); and the recent efforts for generating HW directly from SW code, like HLS approaches (3) or HW synthesis from Matlab models (15). Most of these approaches have in common that they try to synthesize HW using a model of computation closer to SW than to HW, which has proved difficult in practice (4). By contrast, with HDLRuby we do not try to change the model of computation; instead we focus on improving the quality of the RTL code.
A few approaches are closer to ours. Some works have tried to introduce object orientation into HW design. For instance, SystemVerilog (16) and SystemC (13) include classes and high-level control constructs, but these are limited to data types or non-synthesizable code. SystemC is also remarkable for being implemented on top of the C++ SW programming language. More recently, P. Tomson (17) presented the draft of an HDL based on the Ruby language. However, this latter language did not evolve past the proof of concept.
There is little work on evaluating the productivity of HDLs. The usual approach is to provide metrics like the number of gates produced per designer per day (1,2). Such metrics are however product-dependent and give reliable results only when applied to a wide range of designs. For estimating a single design, as is the case in this paper, more relevant metrics can be found in SW design, e.g., source lines of code (SLOC), cyclomatic complexity (18), or code churn (19). Section 4.1 gives more details about the metrics considered for this paper.
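As an illustration of the simplest of these metrics, the sketch below (plain Ruby; the file names and comment conventions are invented for illustration and are not the counting rules actually used in Section 4.1) counts source lines of code by ignoring blank lines and single-line comments.

# Count SLOC for a Ruby/HDLRuby source and a VHDL source,
# skipping blank lines and single-line comments.
def sloc(path, comment_prefix)
  File.readlines(path).count do |line|
    stripped = line.strip
    !stripped.empty? && !stripped.start_with?(comment_prefix)
  end
end

# Hypothetical file names, for illustration only.
puts "HDLRuby: #{sloc('mei8.rb',  '#')} lines"
puts "VHDL:    #{sloc('mei8.vhd', '--')} lines"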
The Core of HDLRuby
HDLRuby (5) is a HW description language based on the Ruby (7) programming language. A preliminary version of this language was also presented in 2018 (6). The goal is to increase the productivity of HW designers by adapting to HW successful SW paradigms that are not, or only partially, used in existing HDLs like VHDL or Verilog HDL, while ensuring that the language remains synthesizable register-transfer level (RTL). The main paradigms imported into HDLRuby are the following:
• object-oriented programming: the elements of a description are considered as objects, i.e., collections of attributes and methods (algorithms), that interact through messages.
• generic programming: elements of a description can be parameterized and reused in different contexts.
• reflection: the elements of a description can examine and modify their own structure and behavior.
• metaprogramming: code can be treated as data and be generated during execution; in our case, portions of code can be used as parameters for other elements of the description.
For that purpose, HDLRuby has been designed as a two-level language: a high-level generative language, used by the designer, whose execution produces a low-level set of data structures representing RTL constructs. The high-level generative layer is implemented on top of the Ruby programming language. In terms of syntax, HDLRuby includes all the syntactic constructs of the Ruby language, with additional ones for handling HW-oriented descriptions. These new constructs include, for instance, HW-specific literals (e.g., the "Z" state), and new constructs for describing signals, processes, instances and modules.
Synthesizability of the language is ensured by keeping an RTL model of computation, while the productivity-oriented features act only at the generative level, similarly to the implementation of object orientation and genericity in the C++ language. In detail, the SW paradigms were adapted to HW as follows: • object-oriented programming and reflection: all the elements of a HW description are Ruby objects, and therefore include standard as well as reflection-oriented methods. However, these methods are programs that generate RTL code, not programs to be executed by the final circuit.
• generic programming and metaprogramming: they are inherited by construction from the underlying RTL code generation engine implemented in Ruby (a minimal plain-Ruby illustration of this generative principle is given below).
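The following is not HDLRuby code but a deliberately simplified plain-Ruby sketch of the two-level idea: executing high-level, generic Ruby code produces a low-level set of data structures representing RTL constructs. All names (RTLModule, shift_chain) are hypothetical.

    # Plain-Ruby illustration of a generative layer producing RTL-like data structures.
    class RTLModule
      attr_reader :signals, :assigns
      def initialize
        @signals = []   # [name, width] pairs
        @assigns = []   # [lhs, rhs] connections
      end
      def signal(name, width)
        @signals << [name, width]
        name
      end
      def assign(lhs, rhs)
        @assigns << [lhs, rhs]
      end
    end

    # Generic, parameterized generator: the Ruby loop is "executed away" at generation time.
    def shift_chain(mod, width, depth)
      regs = (0...depth).map { |i| mod.signal("r#{i}", width) }
      regs.each_cons(2) { |a, b| mod.assign(b, a) }
      mod
    end

    m = shift_chain(RTLModule.new, 8, 4)
    p m.assigns   # => [["r1", "r0"], ["r2", "r1"], ["r3", "r2"]]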
Since the previous publication (6), HDLRuby has been improved and the syntax slightly changed. The left part of Fig. 1. gives an up-to-date example of a HDLRuby description of a shift register whose structure is defined from a generic argument that can be a bit width, a range or an explicit data type. In the figure, the first line declares the HW module named sreg with typ as generic parameter. The second and third lines analyze this parameter and convert it to a vector type in case it is not already a type. This statement makes use of the reflection-oriented Ruby method is_a? that checks the class of an object. Lines 5-7 declare the input and output signals of the register, namely clk for the clock, rst for the reset, d for the data input and q for the data output. The type of d and q is the type of one element of the register and is obtained by typ.base. If the data type does not support sub-elements, a compile error will be raised. Line 9 declares the storage of the register named buf with typ as data type. It can be seen from these initial declarations that HDLRuby is object-oriented and reflection-centric: the data type of the elements of a vector type is obtained directly from it through the base method, and the declaration of a signal is done similarly through the respective input, output and inner methods of the relevant type element. For the case of clk and rst the type is implicitly set to bit (i.e., single bit). Lines 11-18 describe the process handling the update of the register. Line 11 indicates that the process's statements are non-blocking (par) and activated on the rising edge of clk (clk.posedge). The next line checks if there is a reset (hif and helse are keywords describing HW 'if' and 'else'). In case of reset, buf is set to 0; otherwise, each of its elements is linked in a chain using the range objects of Ruby: [-1..1] and [-2..0] represent respectively the range from the second (1) to the last (-1) element and the range from the first (0) to the second-to-last (-2) element of buf. The equivalent VHDL or Verilog HDL would look similar; however, a generic type argument would not be supported: the only possible genericity would be the width of the register. To see the difference, the right part of Fig. 1 gives a few examples of instantiation of this register. The first instance is an 8-bit shift register, the second is a shifting buffer of 16 characters, and the last is a shift register containing a floating-point value. Those three instances would each require a different HW description with a traditional HDL, i.e., about three times more code. A pseudo-HDLRuby paraphrase of this description is sketched below.
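Since Fig. 1 itself is not reproduced in this text, the following pseudo-HDLRuby sketch merely paraphrases the description above (generic parameter typ, input/output/inner declarations, a par process, hif/helse, and range-based shifting). It uses only constructs named in the text; the exact syntax of the released HDLRuby, and the name of the type class used with is_a?, may differ.

    # Pseudo-HDLRuby sketch of the generic shift register described above (not the original figure).
    system :sreg do |typ|
      typ = bit[typ] unless typ.is_a?(Type)   # "Type" is a hypothetical name for HDLRuby's type class
      input :clk, :rst                        # clock and reset (implicitly 1-bit)
      typ.base.input  :d                      # data input: one element of typ
      typ.base.output :q                      # data output
      typ.inner :buf                          # storage of the register

      par(clk.posedge) do                     # non-blocking process on the rising edge of clk
        hif(rst) { buf <= 0 }
        helse do
          buf[-1..1] <= buf[-2..0]            # chain the elements using Ruby ranges
          buf[0]     <= d
          q          <= buf[-1]
        end
      end
    end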
HW Design Patterns
Several kinds of circuits can be described as sets of finite state machines (FSM), decoders, and arithmetic and logic units (ALU). Components like FSMs or decoders may look quite generic but are in practice target-specific and difficult to include in an HDL without losing generality. As a matter of fact, explicit constructs for such components are not present in standard HDLs like VHDL or Verilog HDL. The approach of HDLRuby is to keep a very general language core and to provide libraries of template components that can be parameterized and grafted into general RTL descriptions. Such libraries are possible thanks to the metaprogramming capability of the language (6). We present here the FSM and the decoder templates that have been used in the description of the MEI8 processor.
a. The FSM Template This template makes it possible to describe synchronous, asynchronous, mixed, and single- or double-edge FSMs by simply specifying the states, the corresponding actions and a few optional configuration parameters. Fig. 2. gives an example of a globally asynchronous FSM with only a few states that are synchronous. This figure is a simplified version of the MEI8 main FSM where interrupts, IO bus accesses and the handling of specific instructions have been removed. In the figure, the first line is the header of the FSM and indicates that by default its output signals are generated asynchronously (:async), that the state transitions are performed on the rising edge of signal clk, and that the reset is done on signal rst. Line 2 gives the default actions and line 4 gives the actions to perform in case of reset (here, setting the program counter and the instruction register to 0). The first actual state is described from line 6 and is named :re. Here, state is used for defining a state. This state, asynchronous by default, also has a synchronous part, added at line 7 through sync. For this state, no transition is specified and therefore the FSM will go by default to the next declared state, i.e., :fe. State :fe is only synchronous and is therefore added through sync. This state does not have any transition specified either and therefore goes to the following :ex state. An explicit transition can be specified using goto, as in line 16 where the next state is set to :fe. It is also possible to set multiple alternative next states depending on a condition, as is done at line 13 where, depending on whether signal branch is 1 or 0, the next state will be :br or :fe (goto is implemented like a multiplexer so that the number of possible target states is not limited).
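As with Fig. 1, the figure itself is not reproduced here; the sketch below paraphrases the FSM just described in pseudo-HDLRuby. The method names (fsm, default, reset, state, sync, goto) follow the text, but the action bodies are placeholders of ours and the exact syntax of the released FSM library may differ.

    # Pseudo-HDLRuby sketch of the simplified MEI8 main FSM described above.
    fsm(clk.posedge, rst, :async) do   # asynchronous outputs by default, clocked on clk, reset on rst
      default { wr <= 0 }              # default actions (placeholder)
      reset   { pc <= 0; ir <= 0 }     # reset actions: clear program counter and instruction register
      state(:re) do                    # asynchronous state :re ...
        sync { pc <= pc + 1 }          # ... with a synchronous part added through sync (placeholder)
      end                              # no goto: falls through to the next declared state (:fe)
      sync(:fe) { ir <= dbus }         # purely synchronous state (placeholder action)
      state(:ex) do
        goto(branch, :br, :fe)         # conditional transition: :br if branch is 1, :fe otherwise
      end
      state(:br) { goto(:fe) }         # explicit transition back to :fe
    end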
For comparison, Fig. 3. gives the equivalent VHDL code. The code is significantly longer and more complex (e.g., it includes two processes and two case statements).
The Decoder Template
This template allows a decoding circuit to be described by providing a list of decoding formats and the corresponding actions. Fig. 4. gives an example of a decoder with three different formats. This figure is also an example taken from the description of the MEI8 processor; more precisely, it is a part of its instruction decoder. In the figure, before the decoder is described, line 1 sets accumulator a (index 0 in the register file) to be the default destination register of the ALU by assigning its index (0) to signal dst. Line 2 is the header of the decoder and indicates that signal ir (the instruction register) is to be decoded. The remaining lines describe the behavior of the decoder as a list of entries, the first one having the highest priority. The circuit of this example being an instruction decoder, the action of each entry is mainly to set up the links between the arithmetic and logic unit (alu) and the registers. The first entry of the decoder describes the case where all the bits of ir are 0. It corresponds to the nop instruction (no operation) and sets signal wr to 0, indicating that the destination register should not be written to. The next entry describes the other cases where the two upper bits of ir are equal to 00. It corresponds to register moves (copies between general purpose registers), or to the assignment of 0 to the destination register. For this entry, ir is decomposed into two fields, one three-bit field x and one three-bit field y. Note that a field name is always one character long, its occurrences in the entry indicating the bits used for the field. If x and y are equal, the ALU of the processor is set to produce a 0; otherwise, the ALU is set to transfer the value of register number x. Finally, the destination register index dst is set to y. The last entry of the example describes the case where the two upper bits of ir are equal to 01. It corresponds to the standard arithmetic and logic operations. For this entry, ir is decomposed into field o (3-bit), which indicates the operation code, and field y (3-bit), which gives the number of the second source register. Line 14 sets up the links to the ALU circuit with, respectively, the operation (field o), the first source register (accumulator a) and the second source register (whose index is obtained from field y).
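To make the bit-field decomposition concrete, the fragment below is plain Ruby (not HDLRuby) that mimics the three entries described above for an 8-bit ir. The exact bit positions of the x, y and o fields and the symbolic return values are our assumptions, used only for illustration.

    # Plain-Ruby mimic of the three decoder entries described above.
    def decode(ir)
      case ir >> 6                      # the two upper bits select the format
      when 0b00
        x = (ir >> 3) & 0b111           # field x (assumed bits 5..3)
        y = ir & 0b111                  # field y (assumed bits 2..0)
        return [:nop] if ir.zero?       # all bits 0: nop
        x == y ? [:clear, y] : [:mov, x, y]
      when 0b01
        o = (ir >> 3) & 0b111           # field o: operation code
        y = ir & 0b111                  # field y: second source register
        [:alu, o, y]
      end
    end

    p decode(0b00_000_000)  # => [:nop]
    p decode(0b00_010_010)  # => [:clear, 2]
    p decode(0b01_011_100)  # => [:alu, 3, 4]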
Equivalent code in Verilog HDL or VHDL would require several if and case statements and additional signal declarations for assigning the fields of the ir register, and the connections to the ALU circuit would also require extra signals and statements, since in such HDLs function call-like connection to instances is not supported. The sample code is omitted for the sake of conciseness, but a VHDL version can be found in the MEI8 code repository (8).
Methodology
In its current state, the HDLRuby toolchain can automatically compile and convert an HDLRuby description into Verilog HDL (20) or VHDL (21). For our experiments the toolchain has been used to produce VHDL code compatible with both FPGA and IC RTL synthesis. The resulting MEI8 cores have been tested by executing a program including all the instructions of the processor and routines for handling interrupts (both interrupts were raised during the test) and system calls.
In order to estimate the potential of HDLRuby for improving design productivity, we compared the code describing the MEI8 processor with the corresponding VHDL code using several code metrics. However, while the HDLRuby code has been written from scratch, the VHDL code is based on the code generated by the HDLRuby design tool. Please refer to section 4.3 for more details about this choice. Several metrics exist for estimating the quality of software code (22). While there is a lack of metrics for estimating the quality of HDL descriptions, the similarity in structure between HW and SW descriptions makes it possible to use existing SW metrics for comparing HDLRuby with VHDL. In this paper we considered the following metrics:
• Lines of Code: the number of lines of code (LOC, also called SLOC, for Source Lines of Code).
• Variables: the number of variables and signals.
• Assignments: the number of signal assignments and connections.
• Operations: the number of operations (arithmetic and logic operations, bit selection, moves, and casts).
• Controls: number of control statements.
• Cyclomatic complexity: the number of independent decisions in the code, i.e., the number of independent alternatives in 'if' and 'case' statements. In addition to these SW-oriented metrics we added the following HW-oriented metrics: • Processes: the number of explicit processes.
• Bit literals: the number of bit vector literals.
• Bits in bit literals: the total number of bits in bit vector literals.
There is a lack of research about metrics for estimating the quality of HW code. However, the number of processes can be a source of errors since processes make it more difficult to track the state of a signal. The number of literals is usually not considered an issue when estimating the quality of SW. Yet, in HW descriptions there are often many bit vector literals, which are much more error-prone than the literals used in SW. Since the probability of an error increases with the size of the literals, the total number of bits in bit literals is also used as a complexity estimator.
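As a rough illustration of how the last two metrics can be obtained mechanically, the fragment below scans a VHDL source file for bit-vector literals. The file name and the regular expression are assumptions of ours, not the actual tooling used for Table 1.

    # Illustrative counting of LOC, bit literals and bits in bit literals for a VHDL file.
    src = File.read("mei8.vhd")                      # hypothetical file name
    loc          = src.lines.count { |l| !l.strip.empty? }
    bit_literals = src.scan(/"[01]+"/)               # VHDL bit-vector literals such as "0101"
    puts "LOC:                  #{loc}"
    puts "Bit literals:         #{bit_literals.size}"
    puts "Bits in bit literals: #{bit_literals.sum { |l| l.length - 2 }}"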
In addition to the code complexity, we estimated the effort required for implementing the HDLRuby code from scratch. This estimation has been done using code churn-based metrics. More precisely, we counted the lines of code added and deleted for each commit to the repository and extracted from these data the following metrics (19):
• Number of commits: the total number of commits to the code repository.
In order to summarize the total effort required for designing the processor with HDLRuby, the following metrics have also been added:
• Written LOC: the total number of LOC written when designing the HDLRuby description of MEI8.
• Written rate: the ratio between the written LOC and the final LOC.
These code churn-based metrics have not been used for the VHDL code because it has been written based on the already designed HDLRuby code. Table 1. compares the quality metrics for the HDLRuby and the VHDL code respectively, the lower part being dedicated to the HW-specific metrics. In the table, "Ratio" is the ratio between the VHDL metric and the corresponding HDLRuby one. On average over all the SW-oriented metrics, the VHDL code is 2.33 times more complex than the HDLRuby code, with a standard deviation of 0.77. For the HW metrics, the number of processes is consistent with the SW metrics, whereas the bit literal metrics are even more strongly in favor of the HDLRuby description. Globally, the complexity of the VHDL description is more than twice that of HDLRuby. Fig. 5. gives the number of lines added and removed for each commit to the code repository of the HDLRuby description of the processor. From these raw data, Table 2. gives the resulting churn metrics. In the table, the small number of commits and the small 3:1 ratio between the total written LOC and the final LOC tend to show that the design effort was indeed small, even relative to the small LOC of the HDLRuby implementation. In total, the estimated design time of the processor is about 20 hours (only limited time per day could be assigned to this design).
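For reference, the churn counting described above (lines added and deleted per commit) can be reproduced along the following lines. The scripts actually used for Fig. 5 and Table 2 are not part of this text, so this is only a sketch, assumed to be run at the root of a git repository.

    # Sketch: per-commit churn extracted from `git log --numstat`.
    added = deleted = commits = 0
    `git log --numstat --pretty=format:@commit`.each_line do |line|
      if line.start_with?("@commit")
        commits += 1
      elsif line =~ /\A(\d+)\t(\d+)\t/               # "added<TAB>deleted<TAB>path"
        added   += $1.to_i
        deleted += $2.to_i
      end
    end
    puts "#{commits} commits, +#{added} / -#{deleted} LOC"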
Results
HDLRuby has been designed to be synthesizable RTL, so that the increase in productivity should have little impact on the performance of the result. To evaluate whether this assumption holds, the processor has been mapped onto the Xilinx Virtex-7 FPGA VC707 Evaluation Kit board (23) (28nm technology) using the Vivado (24) tool chain, and implemented down to DRC check using the Alliance (25) tool chain targeting a 0.5µm CMOS technology with an average cell area of 1837 µm² (the default for the tool chain). Table 3. gives the synthesis reports for the FPGA, and Table 4. gives the reports for the IC mapping. As can be seen in the tables, the resulting processor is very small. This was expected, since it is an 8-bit RISC processor. But it is also fast enough to run at 100 MHz for the FPGA implementation and 28 MHz for the post-synthesis simulation of the IC implementation. Since the processor executes one instruction per two cycles, with an extra cycle required for branches and other extra cycles when accessing the external memory, its average performance is between 40 and 50 MIPS (Million Instructions Per Second) for the FPGA implementation and between 12 and 14 MIPS for the IC implementation.
Finally, the productivity in terms of gates per designer per day can be estimated using the number of gates of the IC of Table 4. and the design time in days (assuming 8-hour working days) as follows: 9199 ÷ (20 ÷ 8) ≈ 3680 gates per day, i.e., on the order of a few thousand gates per day. While coarse, this value is significantly higher than the average one for RTL design, i.e., about 800 gates per designer per day (1).
Fig. 5. The code churn in LOC for each commit.
Discussion
How to estimate the productivity of a language is a difficult research topic and, as far as we know, has never been addressed in the context of HW design (note that we are not talking here about the performance of the synthesis tools). In this paper we used estimators that could be questioned. For instance, LOC is often said to be a poor estimator of code quality (26). Hence, several other estimators have been used. It can also be objected that SW estimators are not relevant for HW design, so we also used a few HW-oriented ones. Still, such estimators have not been formally studied, so they are to be taken with caution. Another difficulty in estimating the productivity of HDLRuby is that some of its features (e.g., reflection) are mostly overlooked by code quality estimators. The evaluation of the design effort has been made using code churn. To our knowledge this is the first time that such metrics have been used in HW design, so that even though the 3:1 ratio of written over final LOC suggests a lower-than-average effort, it is hard to draw a definite conclusion from this result. By contrast, evaluating designer productivity in number of gates per day is common practice, and our result, indicating a significant increase (3680 against about 800 on average (1)), is promising. However, this estimate depends on the target circuit. For a processor, the productivity can be low, since such a circuit lacks the regularity that would greatly benefit from generic programming. Moreover, the choice of making a design from scratch precluded the use of IP components, whereas this is usually the case in recent designs.
Ideally, the HDLRuby and the VHDL code should have been written in parallel by experienced HW designers. Such a setup was difficult in practice for our research group, where only one person was available for the task. Moreover, the same person reimplementing an already existing circuit would have been influenced by the existing code and design choices. That is why a less time-consuming compromise has been selected: since the design in HDLRuby would anyway bias the other implementations, it has been decided to use the code generated by the HDLRuby tool chain as the basis for writing the VHDL code. The core of the work was then to improve the compactness and the style of the VHDL code. Furthermore, it has been decided to count the comment lines in the HDLRuby code as lines of code, while the corresponding VHDL code was left without any comments, in order to avoid an artificial increase of the code size. The largest drawback of this approach is that it was not possible to estimate the design time of the processor using VHDL. Another drawback is that the resulting VHDL code may in fact be shorter than VHDL code written from scratch, due to a heavy usage of compact logic expressions generated by the HDLRuby engine that are usually avoided by designers in favor of control flow-like constructs.
Conclusions
This paper presented the implementation from scratch of MEI8, an 8-bit RISC processor, using the HDLRuby language. Then, it compared the resulting code with an equivalent VHDL implementation and gave an evaluation of the design effort required for making the HDLRuby implementation. Finally, it gave details about the resulting circuit in FPGA and IC versions and used these figures to obtain an estimate of the average productivity in number of gates produced per day. The comparison showed that the HDLRuby code's length was less than half that of the VHDL code. Moreover, with HDLRuby, the required design effort proved to be low, while the productivity was about four times higher than with a standard RTL approach.
While this study shows some of the potential of HDLRuby, the limitations of the estimators used invite evaluations with several other kinds of circuits. Moreover, not all the features of HDLRuby have been evaluated in this paper; in particular, the extensive generic programming and reflection features of HDLRuby have not been fully addressed. Preliminary evaluations of these features have been made (27), but more thorough work is required.
Regarding HDLRuby, we plan to take advantage of the plasticity of the language for providing libraries for supporting IP, the dynamic partial reconfiguration capabilities of FPGA-based devices, the description of SW executed on processor cores, and the description of high-level communication protocols. | 6,301.4 | 2020-01-20T00:00:00.000 | [
"Computer Science"
] |
Evidence of Neutralizing and Non-Neutralizing Anti-Glucosaminidase Antibodies in Patients With S. Aureus Osteomyelitis and Their Association With Clinical Outcome Following Surgery in a Clinical Pilot
Staphylococcus aureus osteomyelitis remains a very challenging condition; recent clinical studies have shown infection control rates following surgery/antibiotics to be ~60%. Additionally, prior efforts to produce an effective S. aureus vaccine have failed, in part due to lack of knowledge of protective immunity. Previously, we demonstrated that anti-glucosaminidase (Gmd) antibodies are protective in animal models but found that only 6.7% of culture-confirmed S. aureus osteomyelitis patients in the AO Clinical Priority Program (AO-CPP) Registry had basal serum levels (>10 ng/ml) of anti-Gmd at the time of surgery (baseline). We identified a small subset of patients with high levels of anti-Gmd antibodies and adverse outcomes following surgery, not explained by Ig class switching to non-functional isotypes. Here, we aimed to test the hypothesis that clinical cure following surgery is associated with anti-Gmd neutralizing antibodies in serum. Therefore, we first optimized an in vitro assay that quantifies recombinant Gmd lysis of the M. luteus cell wall and used it to demonstrate the 50% neutralizing concentration (NC50) of a humanized anti-Gmd mAb (TPH-101) to be ~15.6 μg/ml. We also demonstrated that human serum deficient in anti-Gmd antibodies can be complemented by TPH-101 to achieve the same dose-dependent Gmd neutralizing activity as purified TPH-101. Finally, we assessed the anti-Gmd physical titer and neutralizing activity in sera from 11 patients in the AO-CPP Registry, who were characterized into four groups post-hoc. Group 1 patients (n=3) had high anti-Gmd physical and neutralizing titers at baseline that decreased with clinical cure of the infection over time. Group 2 patients (n=3) had undetectable anti-Gmd antibodies throughout the study and adverse outcomes. Group 3 (n=3) had high titers +/− neutralizing anti-Gmd at baseline with adverse outcomes. Group 4 (n=2) had low titers of non-neutralizing anti-Gmd at baseline with delayed high titers and adverse outcomes. Collectively, these findings demonstrate that both neutralizing and non-neutralizing anti-Gmd antibodies exist in S. aureus osteomyelitis patients and that screening for these antibodies could have a value for identifying patients in need of passive immunization prior to surgery. Future prospective studies to test the prognostic value of anti-Gmd antibodies to assess the potential of passive immunization with TPH-101 are warranted.
INTRODUCTION
Osteomyelitis is the bane of orthopedic surgery, and there is a great need for novel interventions. Most severe cases involve Staphylococcus aureus (Darouiche, 2004), primarily methicillin-resistant S. aureus (MRSA) in some regions (Kaplan, 2014), and multidrug-resistant strains are emerging (Assis et al., 2017). Thus, there is a great need for non-antibiotic immune-based approaches to treat these deep infections, as loss of the few remaining antibiotics due to drug resistance is a serious public health threat (Miller et al., 2019). Sadly, infection rates following total joint replacement and trauma surgery have remained largely unchanged over the last 50 years. This is not due to lapses in technique, as adherence to rigorous prophylactic and surgical protocols [e.g., Surgical Care Improvement Project (SCIP) (Stulberg et al., 2010)] failed to reduce infection rates for elective surgery below 1%-2% (Cram et al., 2012). Based on this, the field has concluded that host factors play an essential role in orthopedic infections (Ricciardi et al., 2020).
Regrettably, 19 S. aureus immunizations have been evaluated in Food and Drug Administration (FDA) registration trials, and all failed to demonstrate efficacy (Proctor, 2015;Miller et al., 2019). Acknowledged reasons for these failures include the inability to predict the protective role of staphylococcal immune responses in humans based on animal data (Proctor, 2015;Miller et al., 2019). Thus, we aimed to develop an immunotherapy based on osteomyelitis epidemiology data and monoclonal antibodies (mAb) that have dual-acting mechanisms of action: (1) direct inhibition of critical S. aureus enzymes and (2) immunomodulatory activity to stimulate the host response and bacterial clearance (Varrone et al., 2011;Varrone et al., 2014). Based on results in a murine tibial osteomyelitis model that recapitulates several features of implant-associated osteomyelitis (Li et al., 2008), we identified the glucosaminidase (Gmd) protein subunit of S. aureus autolysin (Atl) as our lead target for passive immunization (Varrone et al., 2011;Gedbjerg et al., 2013;Varrone et al., 2014;Yokogawa et al., 2018). Of note, other groups also identified Atl as an immunodominant and protective antigen in various animal models (Holtfreter et al., 2010;Brady et al., 2011;Gotz et al., 2014). Atl is also known to be critical for cell wall biosynthesis and degradation during binary fission (Sugai et al., 1995;Yamada et al., 1996) and functions as an adhesin (Heilmann et al., 2005) and a biofilm enzyme (Brady et al., 2006), and facilitates host cellular internalization/immune evasion (Hirschhausen et al., 2010). Of the various surface proteins we investigated, only deletion of Atl results in a defective cell division phenotype in vitro (Masters et al., 2021). Most importantly, it has been shown that anti-Gmd passive immunization synergizes with vancomycin therapy in rabbit and murine models of infection (Brady et al., 2011;Yokogawa et al., 2018;Kalali et al., 2018). Moreover, our clinical studies of patients with osteomyelitis from prosthetic joint infection (PJI), trauma, and diabetic foot ulcers have found anti-Gmd antibodies in patients that recover from these serious infections (Gedbjerg et al., 2013;Oh et al., 2018). Hence, anti-Gmd antibodies might be a long sought-after biomarker of protective immunity against S. aureus (Miller et al., 2019).
In our initial screening for candidates, we utilized an in vitro Micrococcus luteus cell wall digestion assay to identify anti-Gmd mAb that inhibits recombinant enzyme activity (Gedbjerg et al., 2013). The results showed that mAb can be either neutralizing or non-neutralizing and that most neutralizing mAb bind to the R3 domain of Gmd (Varrone et al., 2011;Varrone et al., 2014). Based on this initial in vitro and in vivo research, we derived a mouse IgG1 anti-Gmd mAb (1C11) with high affinity and 1:1 stoichiometric neutralizing activity (Gedbjerg et al., 2013;Varrone et al., 2014). We also showed that 1C11 mediates S. aureus megacluster formation and opsonophagocytosis in vitro (Varrone et al., 2011;Varrone et al., 2014) and had favorable safety and pharmacokinetics in a sheep model of passive immunization (Lee et al., 2020).
We also completed several clinical studies to assess endogenous human anti-Gmd antibodies in osteomyelitis patients and healthy controls (Gedbjerg et al., 2013;Oh et al., 2018;Muthukrishnan et al., 2021;Owen et al., 2021). These studies included the analysis of sera collected in a unique biospecimen registry of 297 patients with culture-confirmed S. aureus osteomyelitis [AOTrauma CPP Bone Infection Registry (Kates et al., 2019)]. The results demonstrated that anti-Gmd antibody levels ranged from undetectable (<1 ng/ml) to 300 μg/ml, and the mean concentration was 21.7 μg/ml (Lee et al., 2020). We also addressed critical questions regarding the relationships between the endogenous anti-Gmd antibodies in these patients and their clinical outcome following standard of care surgery and postoperative treatment. The results showed that all patients had measurable humoral immunity against some S. aureus antigens, but only 20 (6.7%; p<0.0001) had basal levels of anti-Gmd antibodies (>10 ng/ml) in their serum at the time of surgery (baseline). Of the 297 registry patients, 194 (65.3%) completed the 1-year follow-up and were divided into groups based on their anti-Gmd antibody level at baseline, namely, low (<1 ng/ml, n=54; 27.8%), intermediate (<10 ng/ml, n=122; 62.9%), and high (>10 ng/ml, n=18; 9.3%), and the infection control rates were 40.7%, 50.0%, and 66.7%, respectively. The incidence of adverse outcomes in these groups was 33.3%, 16.4%, and 11.1%, respectively. While high anti-Gmd titers were not the only deciding factor in infection control, as 21 out of 194 patients (10.8%) had low titers and achieved a favorable outcome at 1-year post-surgery, by assessing anti-Gmd level as a continuous variable, we found that for every 10-fold increase in concentration, there was a 60% reduction in adverse event risk (p=0.04). Furthermore, patients with low anti-Gmd titer demonstrated a highly significant 2.7-fold increased risk of adverse outcomes (p=0.008). However, a few of these patients had high titers of anti-Gmd antibodies at baseline and had adverse outcomes following surgery, which was not due to IgG4 class switching to non-functional immunoglobulin. Therefore, to further understand this endogenous anti-Gmd immune response, here we describe an optimized in vitro assay to quantify the autolysis-neutralizing activity of anti-Gmd antibodies and the presence of neutralizing and non-neutralizing anti-Gmd antibodies in the AOTrauma CPP Bone Infection Registry.
Human Subjects
All human subject research was performed with informed consent under Institutional Review Board (IRB)-approved protocols (HM20009308, 20006017, and NCT01677000). Specific serum samples from the AO Trauma Clinical Priority Program (CPP) Bone Infection Registry were selected for study based on their known anti-Gmd physical titer and the patient's clinical outcome.
TPH-101 mAb
A humanized IgG1 anti-Gmd mAb derived from 1C11 was generated by transiently transfecting the heavy- and light-chain immunoglobulin genes into ExpiCHO cells as previously described (Brannan et al., 2019), and the secreted mAb was purified from the culture supernatant via protein-A affinity chromatography (Supplementary Figure S1). These quality control studies confirmed the purity of the mAb to be >99% and its specificity for native Gmd. The specificity of the Gmd protein and the TPH-101 antibody was further confirmed by Western blot using bacterial culture supernatant (Supplementary Figure S1) and Gmd protein (Supplementary Figure S3).
Optimization of Cell Wall Digestion Assay
Heat-killed Micrococcus luteus (ATCC No. 4698; Sigma-Aldrich, Catalog # M3770-5G) was used as a substrate for recombinant His-Gmd at a final concentration of 0.075% (750 μg/ml) in phosphate-buffered saline (PBS) as we previously described (Gedbjerg et al., 2013). Triton X-100 (Sigma, Catalog # T8787-250ML) was used as a substrate-solubilizing agent. Briefly, 50 μl of 200 μg/ml Gmd was diluted twofold in a 96-well plate, 50 μl of 0.15% M. luteus containing various concentrations of the cell wall solubilizing agent Triton X-100 was added, the plate was incubated at 37°C, and OD 600 was measured after 5, 60, and 120 min of incubation. The percentage of lysis was calculated by subtracting the OD 600 of M. luteus treated with the various concentrations of Gmd from the OD 600 of M. luteus treated with Triton X-100 alone, dividing the difference by the OD 600 of M. luteus treated with Triton X-100 alone, and expressing the result as a percentage.
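Written as a formula (our notation), the lysis calculation above is:

    \[ \%\,\text{lysis} = \frac{OD_{600}^{\text{Triton only}} - OD_{600}^{\text{Gmd}}}{OD_{600}^{\text{Triton only}}} \times 100 \]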
Optimization of Neutralization of Gmd by TPH-101
Purified TPH-101 (1 mg/ml) was serially diluted in PBS, and an equal volume of 40 μg/ml Gmd was added in a 96-well plate and incubated at 37°C for 15 min. After incubation, an equal volume of 0.15% heat-killed M. luteus treated with 1% Triton X-100 was added. OD 600 was measured after 30 and 60 min of incubation at 37°C. The percentage of neutralization was calculated by subtracting the OD 600 of M. luteus treated with Gmd (10 μg/ml) from the OD 600 of M. luteus treated with Gmd neutralized by the various concentrations of TPH-101 antibody, and dividing the difference by the difference between the OD 600 of M. luteus alone (bacteria only) and the OD 600 of M. luteus treated with Gmd (10 μg/ml). The IBT-produced antibody c21D10 (IgG1 isotype; Catalog # 0200-003, Lot # 1811002) (6.643 mg/ml) was used as a negative isotype control. The neutralizing concentration (NC 50 ) value was determined using a sigmoidal four-parameter logistic (4PL) least-squares fit, with X as concentration, to quantify the 50% neutralizing concentration (NC 50 ) of TPH-101 for the 30- and 60-min incubations.
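In formula form (our notation), the neutralization percentage above is:

    \[ \%\,\text{neutralization} = \frac{OD_{600}^{\text{Gmd+mAb}} - OD_{600}^{\text{Gmd}}}{OD_{600}^{\text{bacteria only}} - OD_{600}^{\text{Gmd}}} \times 100 \]

The sigmoidal 4PL model mentioned above has the standard form \( y = d + (a - d) / \bigl(1 + (x/c)^{b}\bigr) \), in which the parameter c corresponds to the concentration giving a half-maximal response (here the NC 50 ); the exact parameterization used by the fitting software may differ.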
Determination of Gmd Neutralizing Human Serum Titers
Human serum samples were heat inactivated for 30 min at 56°C. Equal volumes of heat-inactivated human serum (neat) and Gmd (40 μg/ml) were incubated at 37°C for 15 min. After incubation, an equal volume of 0.15% M. luteus treated with 1% Triton X-100 was added. OD 600 was measured after incubation at 37°C for 30 and 60 min, and the percentage of neutralization was calculated as above.
Spiking of Human Serum Samples
Human serum samples without physical titers against Gmd (by ELISA) from the AO Clinical Priority Program (AO-CPP) cohorts were diluted 1:20, spiked with 1 mg/ml of TPH-101, and incubated at 37°C with an equal volume of Gmd (40 μg/ml) for 15 min. After incubation, the substrate was added, and OD 600 was measured after 30 and 60 min. TPH-101 was used as the positive control, c21D10 as the negative control, and non-spiked serum as the non-neutralizing control.
Optimization of M. luteus Cell Wall Lysis Assay by Recombinant Gmd
Prior to assessing the Gmd autolysis-neutralizing activity of the patient sera (hereafter referred to as "neutralizing" activity), we aimed to optimize the M. luteus cell wall digestion assay by solubilizing the substrate in varying concentrations of a non-ionic detergent (Triton X-100). Figure 1 shows the results of a representative experiment in which the concentration of the enzyme, the detergent, and the incubation time were varied to identify the condition that achieved the greatest percentage of lysis. Based on these results, we established 10 μg/ml Gmd and 0.5% Triton X-100 with a 30- and 60-min incubation period as ideal for M. luteus digestion.
Neutralizing Efficiency of Humanized Anti-Gmd mAb TPH-101
To determine the concentration of purified TPH-101 required to neutralize 50% (NC 50 ) of Gmd enzyme activity in the M. luteus cell wall lysis assay, we performed a dose-response study using the optimized in vitro conditions described in Figure 1. The results confirmed the high efficiency of TPH-101 vs. an irrelevant control IgG (c21D10), and demonstrated that the humanized anti-Gmd mAb has an NC 50 of ~15.5 μg/ml (Figure 2). To confirm that this anti-Gmd neutralizing activity of TPH-101 was functional against live bacteria, we repeated this assay on cultured M. luteus and assessed cytolysis via a colony-forming unit (CFU) assay, which demonstrated a similar NC 50 of ~12.5 μg/ml (Figure 3). Briefly, CFU were assayed by adding 100 μl of treated M. luteus to 900 μl of PBS, serially diluting 10-fold across six points (10^-1 to 10^-6), plating 100 μl on tryptic soy agar (TSA) plates, and counting colonies after incubation at 37°C for 48 h.
To exclude the possibility that a factor in the serum interferes with the assay, we evaluated the efficiency of TPH-101 complementation of human sera deficient in anti-Gmd antibodies. We performed Gmd inhibition assays with purified TPH-101 and with TPH-101 added to sera from patients that had no detectable titers of anti-Gmd antibodies (Figure 4). The results showed ~100% complementation efficiency, as no differences in the percentage of neutralization were observed at any antibody concentration. These data indicated that those sera truly lack anti-Gmd neutralizing activity.
Characterizing Physical and Neutralizing Anti-Gmd Antibodies in a Select Cohort of Osteomyelitis Patients With Known Clinical Outcome
Although we have previously described the association of anti-Gmd antibody physical titers with the clinical outcome of the patients in the AOTrauma CPP Bone Infection Registry, the Gmd neutralizing titers were unknown. Therefore, we used the M. luteus cell wall digestion assay to quantify the Gmd neutralization activity in a small subset of patients that (1) had high physical titers of anti-Gmd antibodies at baseline (>10 ng/ml), (2) never had detectable anti-Gmd titers throughout their treatment, or (3) developed high titers of anti-Gmd at some point during their treatment. The results are presented with the clinical outcomes in Table 1 and contain two interesting observations. The first is that only 5 out of the 11 patients studied developed anti-Gmd neutralizing antibodies, which correlated with anti-Gmd physical titers >9,000 mean fluorescence intensity (MFI), which equates to >10 ng/ml in serum. The second observation was made by associating the patients' antibody response with their clinical outcome over the course of treatment, which revealed that these patients can be characterized into four groups. Group 1 patients (n=3) had high physical titers and neutralizing anti-Gmd at baseline that decreased with clinical cure of the infection over time. Group 2 patients (n=3) had undetectable anti-Gmd antibodies throughout the study and adverse outcomes. Group 3 (n=3) had high titers +/− neutralizing anti-Gmd at baseline with adverse outcomes. Group 4 (n=2) had low titers of non-neutralizing anti-Gmd at baseline with delayed high titers and adverse outcomes. Collectively, these findings demonstrate that both neutralizing and non-neutralizing anti-Gmd antibodies exist in S. aureus osteomyelitis patients and that screening for these types of antibodies could have value for identifying patients in need of passive immunization prior to surgery. We were also interested to know if anti-Gmd antibody physical titers correlate with Gmd neutralizing activity in the patient sera. Thus, we performed a linear regression analysis on the five sera samples that contained Gmd neutralizing activity, and our negative findings are presented in Supplementary Figure S2.
DISCUSSION
Development of an effective immunotherapy against S. aureus would be transformative for orthopedic surgery and many other infections caused by this pathogen. Here, we have focused on the hypothesis that an ideal mAb would act both directly, via antimicrobial effects through inhibition of a critical S. aureus target, and through immunomodulatory activity to enhance the host response and bacterial clearance. From non-biased antigen discovery, in vitro, animal model, and clinical research, we identified Gmd as a validated target for immunotherapy (Varrone et al., 2011;Gedbjerg et al., 2013;Varrone et al., 2014;Oh et al., 2018). Based on this, we developed a lead anti-Gmd mAb (1C11), selected from over 36 candidates for its superior in vitro characteristics (Varrone et al., 2011;Gedbjerg et al., 2013;Varrone et al., 2014;Nishitani et al., 2020) and its safety and efficacy in animal models (Varrone et al., 2014;Yokogawa et al., 2018;Lee et al., 2020). As might be anticipated, we found that this antibody, which interferes with an enzyme expressed on the surface of the bacteria that is critical for cell wall biosynthesis, synergizes with the standard of care antibiotic therapy (vancomycin) in a one-stage exchange model of MRSA via distinct mechanisms of action. Vancomycin decreased the bacterial burden on the implant, while the anti-Gmd mAb inhibited Staphylococcus abscess communities (Yokogawa et al., 2018). We also showed the feasibility of anti-Gmd mAb passive immunization by demonstrating safety and favorable pharmacokinetics following a clinically relevant dose in sheep (Lee et al., 2020). Results from clinical research to define native humoral immunity against S. aureus in osteomyelitis patients also support the hypothesis that passive immunization with anti-Gmd mAb may be an effective treatment (Gedbjerg et al., 2013;Oh et al., 2018;Owen et al., 2021). Most notable are the results from the AOTrauma CPP Bone Infection Registry, which showed that only 6.7% of patients with life-threatening S. aureus osteomyelitis have basal levels of anti-Gmd antibodies (>10 ng/ml) in their serum at the time of surgery, and that for every 10-fold increase in anti-Gmd antibody concentration in sera, there is a 60% reduction in adverse event risk. Furthermore, low anti-Gmd titer patients have a highly significant 2.7-fold increased risk of adverse outcomes within 1 year of surgery. However, in contrast to our hypothesis of passive immunization with anti-Gmd mAb, we found that a few patients had high titers of anti-Gmd antibodies at baseline and had adverse outcomes following surgery, which was not due to IgG4 class switching to non-functional immunoglobulin. Thus, we aimed to determine if this was due to non-neutralizing antibodies. By optimizing the M. luteus cell wall digestion assay and validating the neutralizing activity of TPH-101 in human sera (Figures 1-4), we show here the potential of a companion diagnostic with the sensitivity and specificity necessary to assess neutralizing and non-neutralizing anti-Gmd antibodies in sera from patients. Current research is directed towards formal validation of this assay as a clinical diagnostic.
FIGURE 2 | Quantification of the neutralizing activity of humanized anti-Gmd mAb (TPH-101) in vitro. The indicated concentration (μg/ml) of purified anti-Gmd TPH-101 mAb or irrelevant control mAb (c21D10) was added to 10 μg/ml of recombinant Gmd prior to incubation with 0.075% heat-killed M. luteus cell wall extract in the presence of 0.5% Triton X-100 at 37°C for 30 min (A) or 60 min (B), and the percentage of lysis was determined as described in Figure 1. These data were reanalyzed using a sigmoidal 4PL least-squares fit (X is concentration) to quantify the 50% neutralizing concentration (NC 50 ) of TPH-101, which is 14.1 μg/ml for the 30-min incubation (A) and 17.0 μg/ml for the 60-min incubation (B), respectively. Dotted red and green lines are ± SD.
FIGURE 3 | Quantification of the neutralizing activity of humanized anti-Gmd mAb (TPH-101) via M. luteus killing assay. The indicated concentration (μg/ml) of purified anti-Gmd TPH-101 mAb or irrelevant control mAb (c21D10) was added to 10 μg/ml of recombinant Gmd prior to incubation with live M. luteus in the presence of 0.5% Triton X-100 at 37°C for 30 min (A) or 60 min (B). The percentage of neutralization was determined as described in Materials and Methods. A sigmoidal 4PL least-squares fit (X is concentration) was used to quantify the 50% neutralizing concentration (NC 50 ) of TPH-101, which is 12.70 μg/ml for the 30-min incubation (A) and 12.25 μg/ml for the 60-min incubation (B), respectively. Dotted red and green lines are ± SD.
Effective humoral immunity against an infectious agent posits that high titers of neutralizing antibodies are induced, and that these antibodies disappear over time after the pathogen is cleared from the host. Indeed, this is the humoral response that we observed in Group 1 patients cured of their S. aureus osteomyelitis, and it illustrates what effective anti-Gmd mAb therapy would look like (Table 1). Additionally, our finding that some S. aureus osteomyelitis patients never develop neutralizing anti-Gmd antibodies (Group 2) or develop them too late in the disease process (Group 4) indirectly supports our hypothesis of anti-Gmd mAb therapy. It was also interesting to see that some patients who develop high titers of non-neutralizing antibodies also succumb to serious adverse events from S. aureus infection (Group 3) and that anti-Gmd physical titer does not correlate with Gmd neutralizing activity (Supplementary Figure S2). Taken together, these results provide the first evidence that only neutralizing antibodies are helpful in fighting off S. aureus bone infection and that patients who are unable to mount this specific humoral response may benefit from passive immunization with mAbs like TPH-101.
As a small clinical pilot, this study has several major limitations that need to be noted, starting with the minimal number of patients studied, which is too small to support formal conclusions beyond the observation that both neutralizing and non-neutralizing anti-Gmd antibodies exist in S. aureus osteomyelitis patients. It is also important to note that some clinical outcomes do not have a straightforward interpretation. For example, patient 8 had high titers of neutralizing anti-Gmd antibodies at baseline and had a knee fusion that we scored as an "Adverse" outcome based on our prospective criterion. However, this successful infection control, potentially aided by the patient's anti-Gmd antibodies, may have been the best possible outcome given the patient's global health (76 years old with Class III obesity) and the damaged bone and soft tissue at the time of surgery. Finally, while the M. luteus cell wall digestion assay proved very useful for these research studies, we do not suggest that it can be translated into a clinical diagnostic due to the technical demands of the assay. Thus, efforts to develop a lateral flow assay to assess anti-Gmd as a biomarker are warranted.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Virginia Commonwealth University HRPP. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by University of Rochester.
AUTHOR CONTRIBUTIONS
All authors were directly involved in designing the experiments, data analysis, and drafting the manuscript. SS, GM, TK, and JO performed experiments. SK is the principal investigator of the IRB-approved clinical research. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by research grants from the National Institutes of Health (R44 AI155309, P30 AR69655 and P50 AR72000, and NCATS 1UL1TR002649) and the AOTrauma Clinical Priority Program.
ACKNOWLEDGMENTS
We thank Grant Liao and Roger Ortines for running SEC-HPLC. We also thank Sergey Shulenin for production and purification of TPH-101 antibody. Finally, we thank Thomas Kort for production and purification of Gmd protein.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcimb.2022.876898/ full#supplementary-material Supplementary Figure S1 | Quality controls of purified TPH-101 anti-Gmd mAb. ExpiCHO cells were transiently transfected with TPH-101 heavy and light chain genes, and the secreted mAb was purified from culture supernatant via protein-A affinity chromatography as previously described (Brannan et al., 2019). The total yield of TPH-101 mAb was 400 ug/ml, and the purity was determined to be >99% via Coomassie-stained denatured SDS-PAGE and SEC-HPLC chromatograph. No degradation of the mAb was observed. The specificity of TPH-101 binding to Gmd was confirmed via western blot of total S. aureus USA300 protein extract as previously described (Varrone et al., 2014).
Supplementary Figure S2 | Lack of correlation between anti-Gmd antibody physical IgG titer and Gmd neutralizing activity in human sera. A linear regression analysis of anti-Gmd physical IgG titer determined by Luminex vs. Gmd neutralizing activity determined by M. luteus cell wall digestion was performed on the five patient sera that contained Gmd neutralizing activity described in Table 1. No significant association was found by the Spearman's rank correlation coefficient. | 6,259.4 | 2022-07-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
Association of CASR, CALCR, and ORAI1 Genes Polymorphisms With the Calcium Urolithiasis Development in Russian Population
Kidney stone disease is an urgent medical and social problem. Genetic factors play an important role in the disease development. This study aims to establish an association between polymorphisms in genes coding for proteins involved in calcium metabolism and the development of calcium urolithiasis in Russian population. In this case-control study, we investigated 50 patients with calcium urolithiasis (experimental group) and 50 persons lacking signs of kidney stone disease (control group). For molecular genetic analysis we used a previously developed gene panel consisting of 33 polymorphisms in 15 genes involved in calcium metabolism: VDR, CASR, CALCR, OPN, MGP, PLAU, AQP1, DGKH, SLC34A1, CLDN14, TRPV6, KLOTHO, ORAI1, ALPL, and RGS14. High-throughput target sequencing was utilized to study the loci of interest. Odds ratios and 95% confidence intervals were used to estimate the association between each SNP and risk of urolithiasis development. Multifactor dimensionality reduction analysis was also carried out to analyze the gene-gene interaction. We found statistically significant (unadjusted p-value < 0.05) associations between calcium urolithiasis and the polymorphisms in the following genes: CASR rs1042636 (OR = 3.18 for allele A), CALCR rs1801197 (OR = 6.84 for allele A), and ORAI1 rs6486795 (OR = 2.25 for allele C). The maximum OR was shown for AA genotypes in loci rs1042636 (CASR) and rs1801197 (CALCR) (OR = 4.71, OR = 11.8, respectively). After adjustment by Benjamini-Hochberg FDR we found only CALCR (rs1801197) was significantly associated with the risk of calcium urolithiasis development. There was no relationship between recurrent course of the disease and family history of urolithiasis in investigated patients. Thus we found a statistically significant association of polymorphism rs1801197 (gene CALCR) with calcium urolithiasis in Russian population.
INTRODUCTION
Kidney stone disease (KSD) has been known to be one of the most excruciating chronic diseases. It is estimated to affect nearly 5% of women and 12% of men during their lifetime, and is considered to be the third most frequent urological disorder (Worcester and Coe, 2010). Multiple studies have revealed that genetics alter the risk of KSD development alongside the environmental factors. It is believed that the vast majority of cases are, in fact, multifactorial.
Deciphering the molecular substrate of the etiopathogenesis of urolithiasis is of utmost importance for developing diagnostic tools and therapy strategies. In most cases, KSD is caused by the formation of calcium concrements, providing grounds for research into calcium metabolism impairments in those affected by the disease. Numerous works have been published that elucidate hidden associations between polymorphisms in genes of calcium metabolism and the development of KSD (Filippova et al., 2020). To our knowledge, very few investigations have been performed to look into these associations in the Russian population (Apolihin et al., 2015; Apolikhin et al., 2016, 2017). According to various reports, the development of calcium urolithiasis has been attributed to polymorphisms in several genes: VDR, CASR, CALCR, OPN, MGP, PLAU, AQP1, SLC34A1, CLDN14, KLOTHO, and ORAI1 (Filippova et al., 2020).
In this article, we examine possible connections between polymorphisms in genes of calcium metabolism and the risk of KSD development in the Russian population.
Patients Characteristics
In this case-control study, the experimental group comprised 50 patients with KSD, and the control group consisted of 50 healthy individuals aged 1 to 70. All patients suffered from calcium oxalate urolithiasis (as verified by spectral assay of the concrements). The distribution of patients by gender in both groups was as follows: 13 male patients (26%) and 37 female patients (74%). The mean age at onset of the disease in the group of patients with urolithiasis was 29.6 years (median 24 years). The average age of the subjects at the time of participation in the study in both groups was 38.5 years (median 34 years). Family history of KSD was collected from all patients. No family history of KSD was found for any person in the control group.
The study was approved by the ethics committee of Sechenov University. All participants signed informed consent prior to entering the research program.
Peripheral blood was used as a source of genomic DNA. The DNA was extracted with DNeasy Blood & Tissue Kit (Qiagen) on QIAcube automated extraction platform (Qiagen) according to the manufacturer's instructions.
Obtained DNA was PCR-amplified with a primer panel specifically developed for this study. The panel featured 68 primers divided into 2 pools to optimize amplification and minimize possible artifacts. Target PCR was conducted with AmpliSens reagents on QuantStudio 5 real-time PCR system (Thermo Fisher Scientific).
NGS libraries were prepared according to an in-house protocol. T4 Polynucleotide Kinase and T4 DNA Ligase (both New England Biolabs) were utilized in compliance with the manufacturer's directions with slight modifications to maximize the output.
PCR products and NGS libraries at any stage of library preparation were purified with Sera-Mag SpeedBeads (General Electric) according to the manufacturer's protocol to ensure recovery of the fragments of an optimal length. DNA concentrations were measured with Qubit 2.0 Fluorometer (Thermo Fisher Scientific) using Qubit TM dsDNA High Sensitivity Assay Kit (Thermo Fisher Scientific). The quality of the final libraries was assessed with on-chip capillary electrophoresis on Agilent 2100 Bioanalyzer (Agilent Technologies) with Agilent High Sensitivity DNA Kit (Agilent Technologies).
The libraries were sequenced on Ion S5 (Thermo Fisher Scientific) high-throughput sequencing platform with Ion 520 & Ion 530 Kit-Chef (Thermo Fisher Scientific) reagents on Ion 530 Chip (Thermo Fisher Scientific).
Statistical Analysis
The final matrix of 28 SNPs, obtained after removing SNPs in high linkage disequilibrium, was used for association studies with PLINK v1.90b6.9 (Steiß et al., 2012; Chang et al., 2015). Statistical analyses were conducted with the standard functions of the R environment and packages (R Core Team, 2019). Differences in allelic and genotypic distributions were estimated by Fisher's exact test with Lancaster's mid-p adjustment (Lancaster, 1961). Hardy-Weinberg equilibrium (HWE) in controls was tested using the chi-squared test with continuity correction (Graffelman, 2015). To estimate the association between each SNP and KSD risk, odds ratios (OR) and 95% confidence intervals (CIs) were calculated using exact methods (median-unbiased estimation (mid-p), maximum likelihood estimation (Fisher), and small sample adjustment) with the Epitools package (Tomas, 2017). Differences with a p-value less than 0.05 were considered statistically significant.
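As an illustration of this workflow, the R sketch below shows how allele-level association, OR estimation, and the HWE check in controls can be computed; the allele and genotype counts are hypothetical placeholders rather than values from this study, and the epitools/HardyWeinberg calls are one possible realization of the cited methods.

```r
## Minimal sketch (hypothetical counts): allelic association, OR estimation, and HWE check
library(epitools)        # oddsratio() with mid-p / Fisher estimation
library(HardyWeinberg)   # HWChisq() chi-squared HWE test with continuity correction

# Hypothetical 2x2 allele-count table: rows = group, columns = allele
allele_tab <- matrix(c(38, 62,    # cases:    risk allele, reference allele
                       14, 86),   # controls: risk allele, reference allele
                     nrow = 2, byrow = TRUE,
                     dimnames = list(group = c("case", "control"),
                                     allele = c("A", "G")))

fisher.test(allele_tab)                    # exact test of the allelic distribution
oddsratio(allele_tab, method = "midp")     # median-unbiased (mid-p) OR with 95% CI
oddsratio(allele_tab, method = "fisher")   # conditional MLE (Fisher) OR with 95% CI

# HWE in controls: chi-squared test with continuity correction on genotype counts
HWChisq(c(AA = 60, AB = 32, BB = 8), cc = 0.5)
```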
For detecting multilocus genotype combinations that may predict disease risk, the multifactor dimensionality reduction (MDR) approach was applied using the MDR 3.0.2 (build 2) software package (Hahn et al., 2003). MDR is a non-parametric data mining method that assumes no genetic model and has been supported by numerous studies of gene-gene and gene-environment interactions (Ritchie et al., 2001; Cho et al., 2004; Andrew et al., 2006; Brassat et al., 2006). Cross-validation and 1000-fold permutation testing were used to find optimal models for defining disease risk.
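To make the MDR idea concrete, the following R sketch implements a simplified two-locus version of the approach on simulated, hypothetical genotypes (it is not the MDR 3.0.2 software used in the study): each two-locus genotype cell is labelled high-risk when its case:control ratio exceeds the overall ratio, and classification accuracy is estimated by 10-fold cross-validation.

```r
## Simplified two-locus MDR-style sketch on hypothetical data
set.seed(1)
n <- 100
dat <- data.frame(
  status = rep(c(1, 0), each = n / 2),                      # 1 = case, 0 = control
  snp1   = sample(c("AA", "AG", "GG"), n, replace = TRUE),  # hypothetical genotypes
  snp2   = sample(c("AA", "AG", "GG"), n, replace = TRUE)
)

mdr_cv <- function(d, k = 10) {
  folds <- sample(rep(seq_len(k), length.out = nrow(d)))
  acc <- numeric(k)
  for (i in seq_len(k)) {
    train <- d[folds != i, ]; test <- d[folds == i, ]
    thr   <- sum(train$status == 1) / sum(train$status == 0)   # overall case:control ratio
    combo <- interaction(train$snp1, train$snp2)
    ratio <- tapply(train$status, combo,
                    function(s) (sum(s == 1) + 0.5) / (sum(s == 0) + 0.5))
    high  <- names(ratio)[ratio > thr]                          # high-risk genotype cells
    pred  <- as.integer(interaction(test$snp1, test$snp2) %in% high)
    acc[i] <- mean(pred == test$status)                         # testing accuracy per fold
  }
  mean(acc)
}
mdr_cv(dat)   # permutation testing of this statistic could be added on top
```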
RESULTS
Data analysis revealed a statistically significant association between the development of calcium KSD and polymorphisms in the following genes: CASR (rs1042636, OR = 3.18), CALCR (rs1801197, OR = 6.84), and ORAI1 (rs6486795, OR = 2.25). The association between SNP rs219780 of the CLDN14 gene and urolithiasis was characterized by a borderline p-value (OR = 2.03; p = 0.05). After adjustment by the Benjamini-Hochberg procedure, only CALCR (rs1801197) remained significantly associated with the risk of calcium urolithiasis development (Table 1). For the other studied loci of the gene panel, no statistically significant differences in allele frequencies were found between the experimental and control groups.
More than half of the patients in the experimental group (26 patients, 52%) had a family history of KSD. In the group of patients with KSD, 26 persons (52%) suffered from recurring urolithiasis. Among patients with recurring urolithiasis, 14 people (53.9%) had a family history of KSD. Among those with non-recurring urolithiasis, 12 patients (50%) had a family history of the disease. Thus, no relationship was found between the recurrent course of the disease and the family history of the patients (Pearson's chi-squared test with Yates' continuity correction, χ 2 = 0, p-value = 1).
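The reported χ2 = 0 can be reproduced directly from the counts given above (14 of 26 recurring and 12 of 24 non-recurring patients with a family history); a minimal R check is shown below.

```r
## 2x2 table implied by the reported counts:
## rows = family history (yes/no), columns = disease course (recurring/non-recurring)
recur_tab <- matrix(c(14, 12,
                      12, 12),
                    nrow = 2, byrow = TRUE,
                    dimnames = list(history = c("yes", "no"),
                                    course  = c("recurring", "non-recurring")))

chisq.test(recur_tab, correct = TRUE)  # Yates' continuity correction; gives X-squared = 0, p = 1
```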
Comparative characteristics of the genotype frequencies of the gene loci CASR (rs1042636), CALCR (rs1801197), ORAI1 (rs6486795), and CLDN14 (rs219780), which affected the risk of KSD in our study, are shown in Table 2. Genotype distributions for all loci were compatible with HWE in controls. For the loci of the CASR and CALCR genes, a statistically significant difference was shown between the experimental and control groups, both in the frequency of the alleles and in the frequency of genotypes (p < 0.05). Table 3 shows the significance of the dominant and recessive models for the studied polymorphisms of the CASR (rs1042636), CALCR (rs1801197), ORAI1 (rs6486795), and CLDN14 (rs219780) genes regarding the development of calcium urolithiasis in the Russian population.
Under the recessive model of inheritance, carriers of the AA genotype of CASR (rs1042636) and the AA genotype of CALCR (rs1801197) had a 4.71-fold and an 11.8-fold increased risk of KSD, respectively. Moreover, carriers of the CC/CT genotypes of rs6486795 (ORAI1) had a 2.54-fold increased risk of KSD compared to carriers of the TT genotype. After adjustment by the Benjamini-Hochberg procedure, no statistically significant differences between KSD patients and controls were found for rs219780 (CLDN14) and rs1042636 (CASR).
Gene-gene interaction analysis using the MDR approach showed that a two-locus model consisting of rs1042636 (CASR) and rs1801197 (CALCR) might have a non-linear association with susceptibility to KSD development. This model had an overall testing accuracy of 78%, a cross-validation consistency of 9/10, and a 1000-fold permutation p-value of 0.003 (Table 4). Figure 1 summarizes the two-way gene interaction, showing that the high-risk genotype combination [AA + AA] of rs1042636 (CASR) and rs1801197 (CALCR) was associated with an increased KSD risk (OR = 2.59, 95% CI = 1.78-3.86).
DISCUSSION
We detected an association between polymorphisms of the CASR (rs1042636), CALCR (rs1801197), and ORAI1 (rs6486795) genes and the development of calcium urolithiasis in the Russian population. However, after adjustment by the Benjamini-Hochberg FDR procedure, only CALCR (rs1801197) remained significantly associated with the risk of calcium urolithiasis development.
It is known that the products of these genes are involved in calcium metabolism. The CASR gene encodes a calcium-sensing receptor which senses changes in calcium concentration in the organism and controls parathyroid hormone secretion. Activation of parathyroid hormone synthesis stimulates calcium release from bone tissue into the bloodstream and decreases phosphate and calcium reabsorption in the proximal renal tubules (Vezzoli et al., 2013). The CALCR gene is responsible for calcitonin receptor synthesis. CALCR interacts with the hormone calcitonin, which is a functional antagonist of parathyroid hormone and inhibits the activity of osteoclasts in bone tissue. This in turn decreases calcium release into the bloodstream and also regulates phosphate and calcium reabsorption in the renal tubules (Shakhssalim et al., 2014). The ORAI1 gene encodes calcium release-activated calcium modulator type 1. This protein is required for transmembrane calcium transport and is usually activated upon the depletion of internal calcium stores (Chou et al., 2011).
The association between CASR, CALCR, and ORAI1 gene polymorphisms and urolithiasis development has been shown in a number of studies conducted in Italian, Indian, Iranian, and Chinese populations by different researchers. The data obtained in this study are generally consistent with the data of the world literature (Corbetta et al., 2006; Scillitani et al., 2007; Shakhssalim et al., 2010; Chou et al., 2011; Vezzoli et al., 2011, 2015; Guha et al., 2015; Apolikhin et al., 2016; Qin et al., 2019). Some differences in the results of these investigations can most likely be explained by the specific genetic characteristics of the Russian population, as well as by differences in how the experimental groups were formed by different researchers. An association between the rs1801197 polymorphism of the CALCR gene and urolithiasis was shown in the study of Qin et al. (2019). As a result of the meta-analysis (494 patients and 536 healthy individuals) performed by these authors, allele A of the locus rs1801197 was significantly associated with the risk of calcium urolithiasis development (OR for allele A was 1.987). According to our data, in the Russian population the OR for the A allele of the rs1801197 locus was 6.84 (p < 0.0001).
The relationship between the locus rs1042636 of the CASR gene and KSD was studied in populations of Italy, India, and Iran (Corbetta et al., 2006; Scillitani et al., 2007; Vezzoli et al., 2011). Vezzoli et al. investigated an association between the polymorphism rs1042636 (Arg990Gly) of the CASR gene and the risk of KSD development in Italian patients with primary hyperparathyroidism (OR for allele G (Gly) was 3.3) (Vezzoli et al., 2015). Guha et al. showed the influence of the rs1042636 (Arg990Gly) polymorphism of the CASR gene on the development of urolithiasis in an Indian population (OR for allele G (Gly) was 2.21) (Guha et al., 2015).
The data on the role of the rs1042636 (Arg990Gly) polymorphism of the CASR gene in urolithiasis development obtained by Shakhssalim et al. in the Iranian population are in good agreement with the results of our study (Shakhssalim et al., 2010). In that study, the authors showed that patients with the AA genotype (Arg/Arg) at the rs1042636 locus had significantly higher serum ionized calcium compared to patients with the Arg/Gly or Gly/Gly genotypes (OR for the Arg allele was 8.06).
The frequency of the rs1042636G allele according to dbSNP data 1 in Europe varies from 7 to 10%, which corresponds to the data obtained in this study (the allele rs1042636G frequency in the control group in the current investigation was 12%). According to the results of our study, in Russian population the rare G allele of the locus rs1042636 may have a protective effect in relation to the KSD development. Thus, to date, in different populations different alleles of the rs1042636 locus of the CASR gene demonstrate an association with the risk of the urolithiasis development.
A number of studies in different countries were devoted to the investigation of the association between ORAI1 gene polymorphisms and KSD development (Chou et al., 2011;Apolikhin et al., 2016). Thus, a study conducted in the Russian population by Apolikhin et al. revealed an association between the G allele of the ORAI1 rs7135617 locus and an increased risk of a recurrence-free urolithiasis development (OR = 1.049). However, in the mentioned study the role of other polymorphisms of the ORAI1 gene in the KSD was not investigated (Apolikhin et al., 2016).
In a study performed in a Thai population, Chou et al. examined the effect of five polymorphisms of the ORAI1 gene (rs12313273, rs6486795, rs7135617, rs12320939, and rs712853) on the risk of calcium urolithiasis development. As a result of their investigation, a higher risk of KSD development was established for carriers of the rs12313273 and rs6486795 polymorphisms. For the C allele of the rs12313273 polymorphism, the odds ratio turned out to be the most significant (OR = 2.10). At the same time, the maximum risk of nephrolithiasis development was demonstrated for the combination of C/T/C alleles at the rs12313273/rs7135617/rs6486795 polymorphic loci (OR = 2.54) (Chou et al., 2011).
In the current study, all three above-mentioned polymorphisms (rs12313273, rs6486795, and rs7135617) of the ORAI1 gene were tested. Our results suggest an association of the C allele of the rs6486795 locus (OR = 2.25) with KSD development. The difference in the frequency of the alternative C allele of the rs12313273 locus between the experimental and control groups was pronounced, but did not reach statistical significance (25% versus 15%, χ2 = 3.125, p = 0.078). This is possibly due to the limited size of the studied groups. When a comprehensive assessment of the cumulative effect of the rs12313273, rs6486795, and rs7135617 polymorphisms of the ORAI1 gene on the risk of urolithiasis development was applied, no significant cumulative effect was detected.
Thus, the data presented in the current study suggest an association between the rs6486795 polymorphism of the ORAI1 gene and the risk of calcium urolithiasis development in Russia. The results of our investigation do not contradict the data obtained by the above-mentioned authors.
The analysis of the dominant and recessive inheritance models for the CASR (rs1042636), CALCR (rs1801197), and ORAI1 (rs6486795) polymorphisms is important for assessing the risk of calcium urolithiasis development, and therefore for the prevention of KSD. The recessive model for the CASR (rs1042636) and CALCR (rs1801197) polymorphisms, which was confirmed for these loci in the current study, allows us to predict a higher risk of urolithiasis development in patients homozygous for the risk alleles of these genes (rs1042636A in CASR and rs1801197A in CALCR).
Studying gene-gene interactions and the combined impact of gene polymorphisms is no less important for determining the risk of KSD development. In our study, a relationship between the loci rs1042636 of the CASR gene and rs1801197 of the CALCR gene was shown. This phenomenon requires further investigation.
Analysis of the association between the rs219780 polymorphism of the CLDN14 gene and calcium urolithiasis in the Russian population showed a borderline p-value. Further study of this association is needed to confirm the effect of rs219780 on the risk of KSD development.
CONCLUSION
Thus, we showed a strong association between the rs1801197 polymorphism of the CALCR gene and the risk of calcium urolithiasis development in the Russian population. Further investigation of the risk loci is necessary in order to assess the molecular pathogenesis of calcium urolithiasis and will help to identify additional genetic factors of KSD development for better diagnostics of this complex disease.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of Sechenov University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
ML and TF: development of the concept and design of the study, collection of the samples, counseling of the patients, data analysis, statistics analysis, supervision, writing the text, and approval of the final version of the article. DS: collecting the samples for the study. KK, AS, AMa, and DK: molecular genetic testing and bioinformatic analysis. VK: statistics analysis and visualization. SN: participation in the article text preparation and analysis of the genetic results. AMo: review and editing. All authors contributed to the article and approved the submitted version. | 4,618.6 | 2021-05-12T00:00:00.000 | [
"Biology",
"Medicine"
] |
Bioactive diterpenoids impact the composition of the root-associated microbiome in maize (Zea mays)
Plants deploy both primary and species-specific, specialized metabolites to communicate with other organisms and adapt to environmental challenges, including interactions with soil-dwelling microbial communities. However, the role of specialized metabolites in modulating plant-microbiome interactions often remains elusive. In this study, we report that maize (Zea mays) diterpenoid metabolites with known antifungal bioactivities also influence rhizosphere bacterial communities. Metabolite profiling showed that dolabralexins, antibiotic diterpenoids that are highly abundant in roots of some maize varieties, can be exuded from the roots. Comparative 16S rRNA gene sequencing determined the bacterial community composition of the maize mutant Zman2 (anther ear 2), which is deficient in dolabralexins and closely related bioactive kauralexin diterpenoids. The Zman2 rhizosphere microbiome differed significantly from the wild-type sibling with the most significant changes observed for Alphaproteobacteria of the order Sphingomonadales. Metabolomics analyses support that these differences are attributed to the diterpenoid deficiency of the Zman2 mutant, rather than other large-scale metabolome alterations. Together, these findings support physiological functions of maize diterpenoids beyond known chemical defenses, including the assembly of the rhizosphere microbiome.
microbiome. Recently, benzoxazinoids were shown to influence the maize response to stress via altered rhizosphere microbial communities mediated by 6-methoxy-benzoxazolin-2-one (MBOA) 21 . Additional benzoxazinoid mutant studies showed changes to fungal and bacterial communities as a result of benzoxazinoid deficiency 22 . Furthermore, benzoxazinoid-deficient mutants were shown to feature substantially altered root metabolomes, suggesting alterations in the microbiome may not be solely attributed to a lack of specific benzoxazinoids, but rather global changes in root metabolites in mutant plants 23 . Across different studies, there further is a large degree of variation in the observed differences in alpha and beta diversity in benzoxazinoid mutants, largely dependent on developmental stage, environmental conditions, and mutational and genetic background [21][22][23] .
The diverse group of terpenoid metabolites has also been shown to be critical in mediating above- and belowground interactions between plants and other organisms, including microbes 24 . Kauralexins and dolabralexins are two major diterpenoid groups in maize that have demonstrated or predicted roles in biotic and abiotic stress responses [16][17][18]25 . Kauralexins show stress-elicited accumulation in several tissues, including stems and scutellum, and mediate quantitative defenses against fungal pathogens such as species of Fusarium, Aspergillus and Cochliobolus, as well as insect pests including the European corn borer (Ostrinia nubilalis) 17,[26][27][28] . The more recently discovered group of dolabralexins shows pathogen-inducible accumulation predominantly in roots and, like kauralexins, has strong growth-inhibitory activity against Fusarium pathogens 18 . In addition to their defensive potential against biotic stressors, both kauralexin and dolabralexin production was shown to increase in response to abiotic stress, such as drought or oxidative stress 16,18 . Kauralexins and dolabralexins derive from a common precursor, ent-copalyl pyrophosphate (ent-CPP), which is also shared with the gibberellin (GA) biosynthetic pathway critical for plant growth (Fig. 1) 17,29 . Two catalytically redundant diterpene synthase (diTPS) enzymes, ANTHER EAR 1 (ZmAn1) and ANTHER EAR 2 (ZmAn2), control ent-CPP formation in maize 29,30 . Genetic studies revealed that ZmAN1 is critical for GA biosynthesis, whereas ZmAN2 feeds ent-CPP into kauralexin and dolabralexin biosynthesis, thus enabling a pathway partition separating precursor flux toward primary and secondary (i.e. specialized) diterpenoid pathways [29][30][31] (Fig. 1). This pathway separation is supported by the phenotype of the Zman2 mutant, which features a loss of function in the Zman2 gene through a stable Ds insertion from the Activator (Ac) and Dissociation (Ds) system 16,29 . Zman2 has kauralexin and dolabralexin deficiency but normal GA levels 16 . Zman2 has a normal growth phenotype, but is more susceptible to biotic and abiotic stress than its wild type (WT) sibling, which is consistent with the protective bioactivity of kauralexins and dolabralexins 16 and suggests a possible role of these metabolites in broader plant-microbe interactions, including root microbiota. To test this hypothesis, we combined microbial 16S rRNA sequencing and metabolite profiling of the diterpenoid-deficient Zman2 mutant compared to its WT sibling to investigate the role of diterpenoids in shaping the root microbial communities.
Results
Dolabralexins are secreted from maize roots. To examine a possible role of maize diterpenoids in plant-microbiome interactions, we utilized the Zman2 mutant genotype in comparison to its isogenic WT sibling 16,29 . Previous studies showed that 30-day-old WT maize plants significantly accumulate dolabralexins, and to a lesser extent kauralexins, in roots 16 , whereas the Zman2 mutant genotype is almost completely devoid of these diterpenoids [16][17][18] . Despite the deficiency of both kauralexins and dolabralexins in Zman2 root tissue, mutant plants did not show an apparent phenotype under well-watered conditions (Fig. 2A), consistent with prior reports describing largely unaltered root and shoot weight, developmental features, and GA and zealexin levels in the Zman2 mutant 16 . Owing to their diterpenoid deficiency, Zman2 plants served as a control for analyzing root metabolite exudation and for the subsequent analyses of the microbiome and metabolome.
To determine a possible ability of maize diterpenoids to affect the rhizosphere microbiome, we first tested if diterpenoids can be exuded from maize roots. For this purpose, 38-day-old Zman2 and WT maize plants were grown on soil, then gently cleaned and suspended for 48 h in nutrient water. After removing the plants, metabolites were extracted from the nutrient water using an equal volume of ethyl acetate and analyzed by LC-MS/MS against authentic metabolite standards. As a positive control, the benzoxazinoid 1,3-benzoxazol-2-one (BOA), known to be secreted from maize roots, was measured and used as a standard to detect BOA and other benzoxazinoid metabolites with similar mass spectra. Benzoxazinoids were found to be present in both Zman2 and WT exudates.
Maize root microbial communities are distinct by compartment. The Zman2 mutant genotype and its corresponding WT sibling serve as a tool to investigate the effect of diterpenoids, or the lack thereof, on the maize root microbial community 16,18 . Based on the growth conditions of previous research on the mutant genotype, showing an enrichment of dolabralexins and to a lesser extent kauralexins, we used one-month-old Zman2 and isogenic WT sibling plants to comparatively examine the impact of diterpenoid deficiency on the maize root microbiome. Representative plant images are shown in Fig. 2A.
Microbiomes of the rhizosphere (1-2 mm of soil outside the root) and endosphere (inside the root), the latter representing root samples after removal of rhizosphere and rhizoplane microbes through washing and sonication of the roots, were analyzed. Bulk soil without plants was used as a control to examine background soil microbial communities. The 16S rRNA gene (V4 region) was sequenced using Illumina MiSeq and sequences were clustered into operational taxonomic units (OTUs) using the QIIME pipeline and the Greengenes database 32. After filtering to remove mitochondrial and chloroplast OTUs, 4,259 distinct OTUs remained. Following standard protocols for analyzing data with a binomial distribution, such as the MiSeq data generated in this study, OTU counts were normalized by relative abundance, in which the counts were taken as a percentage of the total number of OTU counts for a sample [33][34][35]. This method was used rather than rarefaction methods, which risk discarding low-abundance OTUs 36. The DESeq2 package in R, which is designed for data with such a distribution, was used to determine which OTUs were enriched or depleted in the WT and Zman2 samples 37. Consistent with previous research in maize and other plant species 7,38, the microbial communities of the two plant compartments and the bulk soil were all statistically distinct. The alpha diversity, as measured by the Shannon index, showed the greatest diversity of microbes in bulk soil, with reduced diversity in the rhizosphere and further reduction in the endosphere (Fig. 3, statistics reported in Supplemental Table 2). Sequencing depth was used as a covariate so that observed differences could be attributed to the experimental variables rather than to variation in sequencing depth. A permutational multivariate analysis of variance (PERMANOVA) was used to measure the diversity between samples (beta diversity) and showed that, when accounting for all factors, compartment accounts for 31% of the variation between samples (p < 0.001) (Supplemental Table 1). This was confirmed by a principal coordinate analysis (PCoA), in which compartment was the greatest source of variation (Fig. 4A).
[Fig. 2 caption: * significant difference between two sample types, p ≤ 0.05; two-tailed t-tests for benzoxazinoids, one-tailed t-tests for dolabralexins (predicted to be enriched in WT and deficient in Zman2); error bars represent standard error; n = 5 (WT, Zman2); Control represents one extraction of nutrient water and demonstrates LC-MS/MS background.]
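The alpha- and beta-diversity workflow described here can be sketched in R with the vegan package; the OTU matrix and metadata below are hypothetical placeholders, and adonis2 is used in place of the older adonis interface, so this is an illustrative approximation of the analysis rather than the study's exact script.

```r
## Sketch of the diversity analyses (hypothetical OTU table): Shannon alpha diversity,
## PERMANOVA on Bray-Curtis distances with depth as a covariate, and a PCoA ordination.
library(vegan)

set.seed(1)
otu  <- matrix(rpois(20 * 50, lambda = 5), nrow = 20)   # 20 samples x 50 OTUs (hypothetical)
meta <- data.frame(compartment = rep(c("rhizosphere", "endosphere"), each = 10),
                   genotype    = rep(c("WT", "Zman2"), times = 10),
                   depth       = rowSums(otu))

rel <- otu / rowSums(otu)                     # relative-abundance normalization per sample

shannon <- diversity(rel, index = "shannon")  # alpha diversity per sample

# Beta diversity: PERMANOVA with sequencing depth, compartment, and genotype as terms
adonis2(rel ~ depth + compartment + genotype, data = meta, method = "bray")

# Unconstrained PCoA-style ordination on Bray-Curtis distances
pcoa <- cmdscale(vegdist(rel, method = "bray"), k = 2, eig = TRUE)
head(pcoa$points)                             # sample coordinates on the first two axes
```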
A total of 547 OTUs, in 19 phyla (out of 34 total phyla represented by all data points), were enriched in the rhizosphere as compared to the endosphere, whereas 63 OTUs in 8 phyla were enriched in the endosphere as compared to the rhizosphere, as determined using the DESeq2 package in R 37 . Among the 10 most abundant phyla plotted for each sample type, some phyla were found to be enriched in both compartments, whereas the rhizosphere was predominantly enriched for OTUs in the phyla Actinobacteria, Acidobacteria, and Alphaproteobacteria (Fig. 5).
The endosphere did not demonstrate significant differences in beta diversity or alpha diversity attributed to genotype, as determined by PERMANOVA and PCoA or Shannon index, respectively (Fig. 4B). No individual OTUs were significantly enriched or depleted in regards to genotype in the endosphere as analyzed by generalized linear models using the DESeq2 package in R 37 . Because there was no apparent difference, all further analysis focuses on the rhizosphere compartment only, and the bulk soil and endosphere samples were omitted from further analyses.
The Zman2 mutant features a distinct microbial community composition. Next, the impact of genotype on the rhizosphere microbiome composition was assessed. Significant differences in the microbiome composition were observed between WT and Zman2 plants, with genotype accounting for 10.7% of the variation in the rhizosphere (p < 0.05) (Supplemental Table 1). Zman2 plants harbor a more diverse microbiome as determined by a greater alpha diversity compared to the WT sibling in the rhizosphere (Fig. 3, Supplemental Table 2). Six OTUs were more abundant in WT plants, whereas none were enriched in Zman2 samples in the rhizosphere (Supplemental Fig. 1; OTU abundances by sample type plotted in Supplemental Fig. 2). Of the six OTUs, all were assigned to Alphaproteobacteria belonging to the order Sphingomonadales, three of which were assigned to the genus Sphingobium, whereas the remaining OTUs were unclassified at the genus level.
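A schematic version of this DESeq2 genotype comparison for the rhizosphere is shown below in R; the count matrix and design are simulated placeholders, so the call illustrates the analysis pattern rather than reproducing the study's exact pipeline.

```r
## Sketch of the per-OTU differential abundance test by genotype (hypothetical counts).
library(DESeq2)

set.seed(1)
counts <- matrix(as.integer(rnbinom(100 * 12, mu = 20, size = 2)), nrow = 100,
                 dimnames = list(paste0("OTU", 1:100), paste0("sample", 1:12)))
coldata <- data.frame(genotype = factor(rep(c("WT", "Zman2"), each = 6)))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ genotype)
dds <- DESeq(dds)                                        # negative-binomial GLM per OTU
res <- results(dds, contrast = c("genotype", "WT", "Zman2"))
subset(as.data.frame(res), !is.na(padj) & padj < 0.05)   # OTUs enriched or depleted by genotype
```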
Wild type and mutant plants have largely indistinguishable metabolomes.
To verify that differences in microbiome composition can be attributed to a deficiency in diterpenoids in the roots of Zman2 plants, metabolite profiling using both targeted and untargeted LC-MS/MS analysis was performed on the same root samples used for microbial composition analysis. Targeted metabolite analysis of the major dolabralexin metabolite, trihydroxydolabrene (THD), confirmed via a standard that THD was nearly absent in the Zman2 mutant while present in WT (Fig. 6). Epoxydolabranol was not found in either mutant or WT plants (Supplemental Fig. 3), while epoxydolabrene was found to be lowly abundant in both mutant and WT plants (Fig. 6), presumably because of their conversion to THD. Mirror plots demonstrate these identifications, as well as their absence in the mutant plants (Supplemental Fig. 3). This observation is consistent with previous research showing low levels of dolabralexin and kauralexin metabolites in Zman2, predictably due to ent-CPP derived from ZmAn1 activity 16. As a control, benzoxazinoid abundance was calculated using BOA as a standard, and benzoxazinoids were found to be present in both Zman2 and WT roots without significant differences in abundance. Using BOA for LC-MS/MS generates multiple peaks with similar mass spectra due to various benzoxazinoid compounds, and their total area was analyzed here (Supplemental Fig. 3).
Parallel untargeted metabolomics analysis also did not indicate significant variance in the global metabolite profiles of the WT and Zman2 plants, as determined by principal component analyses (Fig. 7A,B). Furthermore, PERMANOVA analysis based on all features (dominant mass ions and corresponding specific retention times) demonstrated that genotype did not significantly impact the metabolome in either positive or negative ionization mode (Supplemental Table 3). Although the PERMANOVA and PCoA demonstrated no significant difference overall between genotypes, a generalized linear model was used to identify individual metabolites that may be significantly enriched or depleted. Performing analyses in both positive and negative ionization modes, a total of 102 and 46 metabolites were enriched in WT as compared to 79 and 38 enriched in Zman2, respectively by ionization mode (Fig. 7C,D, Supplemental Table 4). Annotation of the remaining metabolites with distinct abundance in WT and Zman2 roots by comparison to mass spectral databases identified significant matches for 16 compounds (Supplemental Table 4). Consistent with the targeted metabolite profiling, THD was identified among the 10 most abundant compounds in the 102 enriched metabolites in WT roots (ID positive-2360). The remaining metabolites could not be annotated with high confidence, but probably represent so far uncharacterized metabolites (Supplemental Table 4). The metabolite group containing THD (ID positive-2360), represented as connected nodes in the metabolic network generated in Cytoscape, is connected to two other features that were not significantly enriched or depleted and remained unannotated, but may represent dolabralexin-type molecules given the similarity of their mass spectra. An additional compound enriched in WT was annotated as mesterolone, a triterpenoid, yet features mass spectra highly similar to that of epoxydolabrene (Supplemental Table 4). The remaining metabolites were at least one order of magnitude less abundant as compared to THD in WT plants (Supplemental Fig. 4, Supplemental Table 4). Although the linear models show some metabolites enriched or depleted in WT versus Zman2 plants (Fig. 7C,D), the global metabolome is not significantly altered, and THD represents one of the most significantly different metabolites in its abundance.
[Fig. 6 caption: * significant difference between two samples, p ≤ 0.1; two-tailed t-tests for benzoxazinoids, one-tailed t-tests for the remaining metabolites (predicted to be enriched in WT and deficient in Zman2); n = 4 (Zman2) or n = 5 (WT); error bars represent standard error.]
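The targeted comparisons summarized in the figure captions (two-tailed tests for benzoxazinoids, one-tailed tests for metabolites expected to be enriched in WT) can be expressed in R as below; the abundance vectors are hypothetical placeholders used only to show the direction of the one-sided alternative.

```r
## Hypothetical peak areas (arbitrary units) for one metabolite in WT and Zman2 roots
wt    <- c(5.1, 4.8, 6.2, 5.5, 5.9)   # n = 5 (WT)
zman2 <- c(0.4, 0.2, 0.6, 0.3)        # n = 4 (Zman2)

# One-tailed test: metabolite predicted to be enriched in WT and deficient in Zman2
t.test(wt, zman2, alternative = "greater")

# Two-tailed test, as used for benzoxazinoids (no directional prediction)
t.test(wt, zman2, alternative = "two.sided")
```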
Discussion
The dynamic interrelations between plants and their species-specific root microbiota directly influence plant health and stress tolerance 4 . Despite the importance of these mutualistic relationships, the complex chemical mechanisms coordinating inter-organismal interactions remain largely elusive. In particular, limited knowledge exists on how specific metabolites, blends thereof, and the corresponding pathways impact plant-microbe interactions and microbial community assembly. For example, recent maize studies illustrated that mutant genotypes deficient in benzoxazinoid metabolites (specifically MBOA) showed an altered stress response mediated by the influence of MBOA on the below-ground microbial community, thus underscoring the importance of these metabolite-guided plant-microbe interactions on plant health [21][22][23] . The microbiome and metabolome analyses performed in this study support the hypothesis that specific groups of bioactive diterpenoids in maize, namely dolabralexins and/or kauralexins, contribute to the assembly of the rhizosphere microbiome.
Although the underlying secretion mechanisms require further study, the presence of dolabralexins in maize root exudates supports a role of these compounds in below-ground plant-microbe interactions (Fig. 2). Microbiome analysis of the root microbial communities showed no significant influence of genotype on endosphere communities using distance-based methods (Fig. 4B). It appears plausible that diterpenoids do not impact endophytic microbes due to the spatial separation of endophytic microbes that predominantly colonize the apoplast 39,40 , whereas functionalized diterpenoids accumulate intracellularly as demonstrated in several plant species, and are exuded into the rhizosphere as shown here [41][42][43] . Our results, showing distinct rhizosphere microbial communities of Zman2 and its WT sibling with greater root microbiome alpha diversity in Zman2 (Fig. 3), provide evidence supporting a role of diterpenoids in microbiome assembly by reducing the community diversity. This difference in diversity supports the hypothesis that dolabralexins directly inhibit the growth and/or propagation of specific rhizosphere bacteria. This is further supported by the distinct beta diversity between Zman2 and WT (Fig. 4C), with genotype accounting for 10.7% of the variation between samples. Notably, the significant differences between the two genotypes were defined by only a few OTUs, most of which were assigned to the order Sphingomonadales (Supplemental Figs. 1 and 2). Sphingomonads have been reported to degrade phenolic compounds and utilize them as carbon sources 44 , and were among the OTUs displaying the greatest heritable variation (H 2 ) across maize lines of the NAM (Nested Association Mapping) diversity panel 14 . Considering the variation of dolabralexins across selected maize inbred lines 18 , it can be speculated that not only phenolics, but also maize-specific diterpenoids mediate the interaction with species of Sphingomonadales. The overall increased microbiome diversity in the Zman2 mutant differs from previous research supporting that a greater microbial diversity promotes crop resistance to soil pathogens (Fig. 3) 45,46 , given that previous work has shown Zman2 to be more susceptible to fungal pathogens 29 . Considering these findings in association with the demonstrated anti-microbial activity of both kauralexins and dolabralexins 17,18 , the relationship between the disease-preventative properties of dolabralexins and bacterial diversity remains more complex, and the influence of dolabralexins may not be directly on the bacterial communities, but possibly indirect via fungal communities and other microbe-microbe interactions, in addition to or instead of direct impacts on growth of rhizosphere bacteria.
Recent research provided insight into the effect of benzoxazinoids on the microbiome and their importance to plant health. Contrasting with the significant role of diterpenoids in determining alpha diversity shown in this study (Fig. 3), benzoxazinoids did not change alpha diversity, but impacted bacterial beta diversity and specific phyla and OTUs to varying degrees depending on the experimental conditions and genotypes used [21][22][23]. Thus, it appears plausible that the impact of plant age, soil type, environmental stimuli, genetic background, and/or sampling on metabolite-microbiome interactions contributes to these contrasting results. The levels of variation in beta diversity reported in those studies are comparable to the 10.7% of beta diversity explained by diterpenoids in this study. Moreover, the difference in the impact of benzoxazinoids and diterpenoids on the rhizosphere alpha diversity and community composition points toward distinct functionalities of different metabolite classes in maize-microbiome interactions. Recent analysis of two rice mutants, Oscps2 and Oscps4, deficient in rice-specific diterpenoids showed rhizosphere microbiomes indistinguishable from their WT siblings, suggesting that maize diterpenoids are distinct in their ability to modify the rhizosphere microbiome 47.
The lack of changes in the global metabolome of WT and Zman2 roots is consistent with the WT-like phenotype of Zman2 mutant plants under healthy conditions (Fig. 7) 16, and supports a major role of dolabralexin and/or kauralexin bioactivity, rather than other large metabolic perturbations caused by the ZmAN2 loss of function, in the observed microbiome alterations. While the overall metabolomes were not significantly different by PERMANOVA and PCoA, the generalized linear models identified a number of unidentified metabolites with distinct abundances in WT and Zman2 plants. There were select metabolites, not yet identified as dolabralexins or kauralexins, that were altered in their abundance between the mutant and WT. These could be not-yet-identified dolabralexin or kauralexin products, breakdown products of these metabolites, or other fluctuations. Interestingly, several of these significantly enriched or depleted compounds featured mass fragmentation patterns suggesting that they represent yet unknown dolabralexin- and kauralexin-type compounds with possible functions in the plant's interaction with the rhizosphere microbiome. Notably, recent research investigating the root metabolome and microbiome in maize mutants deficient in the biosynthesis of selected benzoxazinoid compounds found a different scenario, where mutant plants displayed significant metabolic changes across many pathways in PCoA and other statistical methods 23. These results underscore the importance of metabolite analysis for understanding the broader metabolic implications of pathway mutations, especially within complex, branching metabolite networks, so as not to attribute microbiome changes to a single absent metabolite when they may instead be due to larger changes throughout the metabolic network in the mutant plants.
Using a defined pathway mutant, the present study supports a role of species-specific bioactive maize diterpenoids in shaping the rhizosphere microbiome diversity and composition. These findings expand our insight into diterpenoid functions in Poaceous crops beyond well-established anti-microbial and anti-feedant bioactivities. Such deeper knowledge of the mechanisms underlying natural plant-microbe interactions will be critical for ultimately enabling broader agricultural applications.
After 38 days, plants were removed from pots and the roots were gently washed with deionized water so as not to cause tissue damage. The nutrient solution pH was 3.36, determined using an Ohaus Starter2000 pH meter. The plants were then placed in 2.8 L Erlenmeyer flasks with the nutrient water previously described and suspended with tape such that only the roots were in the nutrient water. Flasks were wrapped in aluminum foil to prevent light stress to the roots and placed in a growth chamber (conditions as detailed below). After 48 h, plants were removed, and the nutrient water was filtered through a metal strainer to remove any possible tissue debris. Metabolites were extracted from the exudate water by adding 700 mL of ethyl acetate to 700 mL exudate water and leaving at 4 °C for 24 h. The organic solvent layer was then separated and concentrated using a rotary evaporator for metabolite analysis. Nutrient water containing no plants was used as a control.
Plant growth conditions. Soil was collected at a UC field site that was fallow and had the stover previously turned under after the summer harvest. The soil at this field site is a silty clay loam, as determined using the standard "texture by feel" method for soil classification 48 . The soil was then mixed in the sterile bags, using the sterile shovel to stir and mix until the soil appeared homogenous. The soil was then distributed to 2.37 L pots that were sterilized using a 3% bleach wash. The soil was not sieved, and any large rocks or soil chunks were removed by hand during pot filling. Plants were grown in pots in a growth chamber in order to control for all other environmental conditions. The growth chamber was set to a 16/8 h day/night cycle, with a 26/22 °C day/night temperature cycle. Seeds of Zman2 and WT plants were sterilized in 3% (v/v) bleach for one hour, then washed five times with deionized water, and planted approximately 3 cm deep in the pots with maize field soil. Pots were distributed in the growth chamber in a block design to mitigate location effects. Zman2 and WT plant microbiomes and corresponding metabolomes were measured for six biological replicates each, using bulk soil (no plants) as a control. Pots were watered every other day with 175 mL of nutrient water (see contents in root secretion assay methods), and tissue was harvested on the 45th day as described below.
Microbiome sample collection. Sample collection and processing was adapted from Edwards et al. 7 . In brief, plants were carefully removed from the soil and gently shaken until ~ 2 mm of soil adhering to the root remained. The roots were then transferred to a 50 mL falcon tube containing sterile phosphate-buffered saline (PBS, 137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 1.8 mM KH 2 PO 4 ) and placed on ice. For analysis of the rhizosphere microbiome, these roots were shaken in the PBS using sterile forceps to remove the soil from the root surface, and the soil samples were stored at 4 °C until further processing the next day. Gentle shaking with the forceps and careful observation ensured that no roots were broken and included in the rhizosphere sample. For analysis of the endosphere microbiome, the above root samples were placed into fresh PBS buffer in a new 50 mL falcon tube and sonicated three times for 10 s each, followed by placing the roots in fresh PBS buffer again to remove any rhizoplane microbes. Using these roots, ~ 4 cm sections of the primary root (beginning 2 cm below the root-shoot junction) were cut, placed in a new tube, frozen in liquid N 2 , and stored at − 80 °C until further processing. Bulk soil samples from soil 2 cm below the surface were collected using a sterile scoop and stored in PBS buffer at 4 °C until sample processing the next day.
DNA extraction. All DNA was extracted using the MoBio PowerSoil DNA extraction kit and eluted in 50 µL of DEPC-treated water. The rhizosphere samples were concentrated by pipetting 1 mL of the rhizosphere soil in PBS into a 2 mL tube and centrifuging for 30 s at 10,000×g. The supernatant was discarded and the soil pellet was used for DNA extraction. The endosphere samples were homogenized and ground in liquid N 2 for DNA extraction with the MoBio PowerSoil DNA kit.
16S rRNA gene amplification, quantitation, and sequencing. The V4 region (positions 515 to 806) of the 16S rRNA gene was amplified according to Edwards et al. 49 . In brief, PCR was performed using Qiagen HotStart HiFidelity polymerase with the following components in each reaction mix: 6.25 µL water, 2.5 µL buffer, 1.25 µL of 10 µM forward primer, 1.25 µL of 10 µM reverse primer, 0.25 µL HotStart polymerase, and 1 µL of DNA. Specific primer pairs, containing unique 12 bp barcode adaptors on each end of the forward and reverse primers, were used for each reaction. Samples without DNA were used as negative controls. A touchdown PCR program was used with the following parameters: 95 °C for 5 min; 7 cycles of 95 °C for 45 s, 65 °C for 1 min decreasing at 2 °C per cycle, and 72 °C for 90 s; 30 cycles of 95 °C for 45 s, 50 °C for 30 s, 72 °C for 90 s; a final extension of 72 °C for 5 min; and samples were held at 4 °C. Only samples producing single amplicon bands, as verified by agarose gel electrophoresis, were considered for further analysis; all samples met this criterion and were maintained for downstream analysis.
Amplicons were purified to remove primers using AmPure XP beads (Beckman Coulter). Here, beads were added to each PCR reaction, incubated at room temperature for 5 min, and placed on a magnet for 2 min to separate the beads. After removal of the supernatant, the beads were washed twice with 70% ethanol. The ethanol was then allowed to evaporate and the beads were resuspended in 50 µL water, mixed well, and placed again on a magnet to remove the supernatant containing the desired PCR products. DNA concentrations were measured using a Qubit, and samples were pooled at equimolar concentrations. The pooled samples were cleaned as described above, separated by agarose gel electrophoresis, and the 400 bp amplicons were extracted using a NucleoSpin Gel and PCR Clean-up kit (Macherey-Nagel). Libraries were made and sequencing was performed at the UC Davis Genome Center using 250 × 250 paired-end, dual-index Illumina MiSeq sequencing.
Sequence analysis. Sequences were analyzed as previously described by Edwards et al. 7 . In brief, sequences were demultiplexed based on individual barcodes using a custom R script, and assembled into single sequences using Pandaseq. Sequences were then clustered into OTUs with the NINJA-OPS pipeline using 97% pairwise sequence identity referenced against the Greengenes 16S rRNA sequence database (version 13_8) 32 .
In total, 1,802,959 high-quality sequences were obtained with a median read count of 31,204 per sample and a range of 1699-87,795 (all data are available in the Sequence Read Archive repository, BioProject ID PRJNA600272 [https://www.ncbi.nlm.nih.gov/sra/PRJNA600272]). Using the QIIME pipeline, reads were clustered based on 97% sequence identity into operational taxonomic units (OTUs) and were annotated using QIIME and the Greengenes database 32 , resulting in 7181 microbial OTUs. Chloroplast and mitochondrial OTUs represented 65 OTUs and were removed, along with low-abundance OTUs (less than 5% of the total sample), leaving 4258 total OTUs. OTU counts were then normalized by relative abundance, which was used rather than rarefaction methods so as not to discard low-abundance OTUs 36 .
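A compact R sketch of this filtering and normalization step is given below; the OTU table and taxonomy labels are hypothetical, and the low-abundance cut-off shown is only one possible reading of the threshold described above, not the study's exact rule.

```r
## Sketch of OTU filtering and relative-abundance normalization (hypothetical inputs).
set.seed(1)
otu <- matrix(rpois(8 * 200, lambda = 3), nrow = 200,
              dimnames = list(paste0("OTU", 1:200), paste0("sample", 1:8)))  # OTUs x samples
taxonomy <- sample(c("Bacteria", "Chloroplast", "Mitochondria"), 200,
                   replace = TRUE, prob = c(0.9, 0.05, 0.05))                # hypothetical labels

# 1. Remove chloroplast- and mitochondria-derived OTUs
otu <- otu[!taxonomy %in% c("Chloroplast", "Mitochondria"), ]

# 2. Normalize each sample to relative abundance (percentage of total counts per sample)
rel <- sweep(otu, 2, colSums(otu), "/") * 100

# 3. Example low-abundance filter (one possible interpretation of the cut-off):
#    keep OTUs exceeding a minimal relative abundance in at least one sample
keep <- apply(rel, 1, max) > 0.01
rel  <- rel[keep, ]
dim(rel)   # remaining OTUs x samples
```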
All statistical analyses of the OTU table generated by QIIME 32 were performed using custom R scripts (version 3.6.1) 50 . Alpha diversity was measured using the "Shannon" method in the R package vegan 51 . Principal coordinate analyses (PCoA) were conducted using unconstrained ordination and Bray-Curtis distances in the R package vegan 51 . PERMANOVA (permutational multivariate analysis of variance) was performed using the adonis function of the R package vegan to measure beta diversity 51 . The DESeq2 package 52 was used to identify OTUs and phyla whose abundance was differentially affected by our experimental variables. Phyla counts were derived by aggregating raw counts for OTUs at the phylum level within each sample. After analysis by DESeq2, the results were compiled and tidied using the biobroom package 53 . Plots were visualized using ggplot2 in the tidyverse package 54 . All scripts generated in this study have been deposited to GitHub (https://github.com/kmurphy61/maizemicrobiome).
Metabolite extraction. The remaining roots (~ 1 g fresh weight) used for endosphere microbiome analysis (see microbiome sample collection) were homogenized and ground in liquid nitrogen. Because of limited tissue availability, the number of plant samples was reduced for metabolite extraction as compared to microbiome DNA extraction (n = 4 for Zman2, n = 5 for WT). Samples were then placed in a 2 mL glass vial and metabolites were extracted by incubation in 2 mL methanol overnight at 4 °C with gentle rocking. Samples were centrifuged for 10 min at 4000 × g and the methanol phase was transferred to a new vial using a glass pipette, air-dried, and resuspended in 100 µL methanol.
Metabolite analysis. For metabolite analysis by liquid chromatography tandem mass spectrometry (LC-MS/MS), samples were spiked with a 4 µM internal standard mixture of deuterium-labeled lipids (Cat# 110899, 857463P, 861809O, 110922, 110922, 110921, 110918, 110579, 110544, Avanti Polar Lipids, Inc) and 1 µg/mL ABMBA (2-amino-3-bromo-5-methylbenzoic acid, Sigma). UHPLC reverse-phase chromatography was performed using an Agilent 1290 LC coupled with a QExactive Orbitrap MS (QE = 139) (Thermo Scientific, San Jose, CA). Chromatography was performed using a C18 column (Agilent ZORBAX Eclipse Plus C18, Rapid Resolution HD, 2.1 × 50 mm, 1.8 µm) at a flow rate of 0.4 mL/min, and the injection volume was varied from 0.9 to 3.5 µL to normalize against sample dry weight. Samples were run on the C18 column at 60 °C, equilibrated with 100% buffer A (100% LC-MS water with 0.1% formic acid) for 1 min, followed by a linear gradient of buffer A down to 0% with buffer B (100% acetonitrile with 0.1% formic acid) over 7 min, followed by isocratic elution in 100% buffer B for 1.5 min. Full MS spectra were collected ranging from m/z 80-2,000 at 60,000 to 70,000 resolution in both positive and negative mode, with MS/MS fragmentation data acquisition using an average of stepped 10-20-40 and 20-50-60 eV collision energies at 17,500 resolution. For targeted analysis, product identification by comparison to standards was performed where authentic standards were available.
For untargeted analysis, exact mass and retention time coupled with MS/MS fragmentation spectra were used to identify compounds. Features (high-intensity signals narrowly contained at a given retention time and m/z) were detected using the MZmine software v 2.24 (http://dx.doi.org/10.1093/bioinformatics/btk039). Data were filtered to remove MS/MS fragment ions within ± 17 Da of the precursor m/z, and subsequently filtered to remove all but the top six ions in each ± 50 Da window throughout the spectrum. Precursor and fragment ion tolerance was 0.05 Da. Features that showed a significantly different abundance (peak height) were identified using generalized linear models, calculated using custom R scripts and the lm() function 50 , with statistical analysis results in Supplemental Table 3 and significantly enriched or depleted features listed in Supplemental Table 4. Generalized linear models are linear regression models used to determine whether a particular feature differs significantly in abundance between two genotypes. All features were annotated using Global Natural Products Social Molecular Networking (GNPS) [55][56][57][58][59] . In short, a Feature-Based Molecular Networking workflow was used to assign features to a molecular network with a cosine score above 0.7 and more than six matched peaks. The maximum size of a molecular family was 100, and low-scoring edges were removed to meet this threshold. The spectra were then searched against GNPS spectral libraries and annotated with the top hit, if there was one. Complete annotations, features present, and Cytoscape visualization networks are available online (https://gnps.ucsd.edu/ProteoSAFe/status.jsp?task=a748e519752249e2a912dd3d466db98d). All scripts are available on GitHub (https://github.com/kmurphy61/maizemicrobiome.git). Lists of significantly different features enriched or depleted in each sample type are available in Supplemental Table 4.
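A minimal illustration of this per-feature testing with lm() is sketched below; the feature intensity matrix and genotype labels are simulated placeholders, and the p-value adjustment at the end is included as one reasonable addition rather than a documented part of the original scripts.

```r
## Per-feature linear models (hypothetical data): test each metabolite feature for a
## genotype effect on abundance (peak height), then adjust p-values across features.
set.seed(1)
n_feat  <- 200
samples <- data.frame(genotype = factor(rep(c("WT", "Zman2"), each = 5)))
peaks   <- matrix(rlnorm(n_feat * 10, meanlog = 5), nrow = n_feat,
                  dimnames = list(paste0("feature", seq_len(n_feat)), NULL))

pvals <- apply(peaks, 1, function(y) {
  fit <- lm(y ~ genotype, data = samples)       # linear model: abundance ~ genotype
  summary(fit)$coefficients["genotypeZman2", "Pr(>|t|)"]
})

results <- data.frame(feature = rownames(peaks),
                      p       = pvals,
                      padj    = p.adjust(pvals, method = "BH"))  # optional FDR adjustment
head(results[order(results$p), ])                # most differential features first
```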
Data availability
The datasets generated and/or analyzed during this study are available in the Sequence Read Archive repository, BioProject ID PRJNA600272 [https://www.ncbi.nlm.nih.gov/sra/PRJNA600272]. Complete metabolite annotations and Cytoscape visualization networks are available at https://gnps.ucsd.edu/ProteoSAFe/status.jsp?task=a748e519752249e2a912dd3d466db98d. The code used to analyze these datasets is available in the GitHub repository, https://github.com/kmurphy61/maizemicrobiome.git.
"Environmental Science",
"Biology"
] |
Cardiac TdP risk stratification modelling of anti-infective compounds including chloroquine and hydroxychloroquine
Hydroxychloroquine (HCQ), the hydroxyl derivative of chloroquine (CQ), is widely used in the treatment of rheumatological conditions (systemic lupus erythematosus, rheumatoid arthritis) and is being studied for the treatment and prevention of COVID-19. Here, we investigate through mathematical modelling the safety profile of HCQ, CQ and other QT-prolonging anti-infective agents to determine their risk categories for Torsade de Pointes (TdP) arrhythmia. We performed safety modelling with uncertainty quantification using a risk classifier based on the qNet torsade metric score, a measure of the net charge carried by major currents during the action potential under inhibition of multiple ion channels by a compound. Modelling results for HCQ at a maximum free therapeutic plasma concentration (free Cmax) of approximately 1.2 µM (malaria dosing) indicated it is most likely to be in the high-intermediate-risk category for TdP, whereas CQ at a free Cmax of approximately 0.7 µM was predicted to most likely lie in the intermediate-risk category. Combining HCQ with the antibacterial moxifloxacin or the anti-malarial halofantrine (HAL) increased the degree of human ventricular action potential duration prolongation at some or all concentrations investigated, and was predicted to increase risk compared to HCQ alone. The combination of HCQ/HAL was predicted to be the riskiest for the free Cmax values investigated, whereas azithromycin administered individually was predicted to pose the lowest risk. Our simulation approach highlights that the torsadogenic potentials of HCQ, CQ and other QT-prolonging anti-infectives used in COVID-19 prevention and treatment increase with concentration and in combination with other QT-prolonging drugs.
Introduction
The cinchona alkaloid quinine (QUIN), along with synthetically produced chloroquine (CQ), are quinoline compounds which have been used in the treatment of malaria for decades. The more soluble hydroxy derivative of CQ is hydroxychloroquine (HCQ). CQ and HCQ are diprotic bases which accumulate in acid vesicles including lysosomes over time. Many of their multiple biological activities including their antiviral action are associated with increased vesicle pH [1]. Both drugs have been shown to block various cardiac ion channels [2].
HCQ was initially developed as an anti-malarial drug, sold as the sulfate salt under the trade name Plaquenil [3]. HCQ is used for the treatment of a wide variety of conditions with the majority of use being for systemic lupus erythematosus (SLE) and rheumatoid arthritis (RA). For these indications, it is often prescribed for use over months to years, and has had a good safety record including in pregnancy [1].
Early clinical studies reported few toxic side effects across HCQ anti-malarial treatment regimens [4]. Work in the late 1950s [5] explored the possibility of using 4-aminoquinolines in the treatment of cardiac arrhythmias, although the specific action of reducing heart rate was not examined. Sumpter et al. [6] showed evidence for risk of cardiomyopathy during long-term exposure to high doses of HCQ for the treatment of patients with SLE and RA. The present treatment regime for SLE (generally 200-400 mg d −1 ) [7] includes doses lower than the one originally used to treat arthritis or malaria [8]. Irreversible retinal toxicity is rare at current recommended doses [9,10]. Reported side effects have been seen at dosages greater than 1000 mg d −1 [11][12][13]. The risk of retinopathy is increased with large cumulative doses of HCQ (greater than 1000 g) [14]. The arrhythmogenic cardiotoxicity of quinoline and structurally related anti-malarial drugs is well documented [15]; in particular, effects on hypotension and electrocardiographic QT interval prolongation have been reported. Capel et al. [16] in 2015 showed that HCQ also inhibits the pacemaking current I f and offers the potential of being used as a bradycardic agent. They also noted additional effects on the L-type calcium channels and delayed rectifier potassium channels in isolated guinea pig sinoatrial node cells, indicating multi-ion channel block in cardiac cells. With the increased re-purposing of CQ and HCQ [14], including for the treatment (SOLIDARITY trial, ISRCTN83971151 and UK RECOVERY trial, ISRCTN50189673) and prevention of COVID-19, there is a need to critically assess the cardiovascular safety profiles of these anti-malarials.
Since the advent of mathematical modelling of cardiac cell activity in the 1960s [17], major new insights have emerged in the field, along with approaches to calibrating such models from experimental data [18], and recognition of the need to quantify uncertainty in predictions [19]. Mathematical modelling has since been shown to be useful in elucidating the requirements for reliable risk assessment predictions, such as the need to account for the actions of compounds on multiple ion channels [20], and has helped to guide experimental design considerations for ion channel screening experiments [21,22].
Prolongation of the QT interval on the surface electrocardiogram (ECG) is a surrogate measurement of prolonged ventricular action potential duration (APD). Dispersion of repolarization (DR) is a result of heterogeneous lengthening of APD throughout the ventricular myocardium, often across the ventricular walls. This DR, together with the tendency of prolonged APD to be associated with early afterdepolarizations (EADs), provides the substrate for the polymorphic ventricular tachyarrhythmia (VT) associated with long QT syndrome (LQTS) [23], namely Torsade de Pointes (TdP) VT [24]. The vast majority of acquired LQTS cases (which are more prevalent than congenital LQTS cases) are the result of electrolyte abnormalities [25] or adverse drug effects [26], the latter particularly due to interaction with the human Ether-à-go-go-Related Gene (hERG), which encodes the pore-forming subunits (Kv11.1) of the channel carrying the rapidly activating delayed rectifier current, I Kr .
Mirams et al. [20] showed that a simulated evaluation of multi-channel effects at the whole-cell level could be used to improve early prediction of TdP risk. The comprehensive in vitro proarrhythmia assay (CiPA) initiative was later established as a novel cardiac safety screening paradigm that takes into account multichannel drug effects, intended to replace the former regulatory strategy which relied on hERG block and QT prolongation, sensitive predictors of TdP which lack specificity [28].
In patients with compromised organ function, such as in COVID-19, understanding drug safety and drug interactions is critical. In this study, we use a computational approach, based on a classifier developed by the CiPA initiative [29], to predict the clinical torsadogenic risk categories associated with CQ, HCQ and other commonly used anti-infective agents known to prolong the QT interval. We also use an independent block model to assess the safety profile of combination therapies and show that torsadogenic potentials of HCQ, CQ and other QT-prolonging anti-infectives used in COVID-19 prevention and treatment increase with concentration and in combination with other QT-prolonging drugs. Although the interest in CQ and HCQ for COVID-19 prophylaxis/treatment has waned, it is hoped that our approach may be considered for the screening of potential future therapies/combination therapies.
Methods
In 2015, we showed that HCQ inhibits I f , L-type calcium channels, and slow and rapid delayed rectifier potassium channels in isolated guinea pig sinoatrial node cells [16]. Five minutes of exposure to 3 µM HCQ conveyed a statistically significant reduction in I CaL (12 ± 4% reduction in max. conductance, n = 6) and I Kr (35 ± 4% reduction across step lengths rendering maximal current activation, n = 5) at p < 0.05, analysed using repeated-measures ANOVA. We used these data to approximate IC 50 s for the actions of HCQ on I Kr , I CaL and I Ks by fitting the parameters of a Hill curve (assuming a Hill coefficient of 1 [20]). HCQ blocked channels in the following order of potency: I Kr > I Ks > I CaL . IC 50 values for other anti-infective compounds of interest including azithromycin (AZ), chloroquine (CQ), halofantrine (HAL), lopinavir/ritonavir (LOP/RIT), moxifloxacin (MOX) and QUIN simulated in this study were taken from the literature, and are shown in electronic supplementary material, table S1.
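As a rough illustration of how such an IC 50 can be approximated from fractional-block measurements with a fixed Hill coefficient of 1, the short sketch below fits a Hill curve to made-up data points (the concentrations and block fractions are illustrative, not the measurements above).

```python
# Sketch: estimate an IC50 by fitting block(c) = c / (c + IC50), i.e. a Hill
# curve with Hill coefficient 1. The data points below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def hill_block(conc, ic50):
    return conc / (conc + ic50)

conc = np.array([1.0, 3.0, 10.0, 30.0])        # drug concentrations (uM), made up
block = np.array([0.12, 0.35, 0.60, 0.82])     # fractional current reduction, made up

(ic50_hat,), _ = curve_fit(hill_block, conc, block, p0=[5.0])
print(f"estimated IC50 = {ic50_hat:.2f} uM")
```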
Based on the availability of ion channel block data for each compound, changes in up to seven ion currents, namely I Kr , I Ks , I K1 , I CaL , I Na , I NaL , I to , were inputted into the CiPA version of the O'Hara-Rudy (ORd) human ventricle mathematical action potential model [29]. For example, based on the values presented in electronic supplementary material, table S1, AZ was assumed to block I Kr , I NaL , I Ks and I to with IC 50 s of 70.8, 189.1, 470.0 and 88.8 µM, respectively. Drug block was modelled using conductance block, where a proportion b i of channels of type i are blocked and the maximal density of the current is then scaled by (1 − b i ) [20]. For simulation of combinations of drugs, the Bliss model of independent block was assumed [30], in which the total proportion of channels blocked arising from the combination of the block by drugs 1 and 2 was given by b 1+2 = b 1 + b 2 − b 1 b 2 . That is, block occurs when one compound, the other, or both are bound to an individual channel, and any of these scenarios leads to complete block of an individual channel.
Combinations of drugs were applied at plasma concentrations based on the ratio of free C max for drugs from clinical studies referenced in electronic supplementary material, table S1 (where a range is shown, the highest available C max was used). A torsade metric score was calculated for each compound, computed as the average qNet at 1-4× C max [28]. Briefly, qNet is a measure of the net charge crossing the membrane during a simulated action potential repolarization, calculated as the integral or area under the curve of a net current, I Net , defined as I Net = I CaL + I NaL + I Kr + I Ks + I K1 + I to (2.2).
A previous study found that estimates of pIC 50 from repeated ion channel screens followed a logistic form, i.e. pIC 50 ∼ logistic(µ, σ) [21], where µ is the mean and σ is a spread parameter. Obtaining a value for σ in each of the affected ion channels allows uncertainty in IC 50 estimates to be propagated through to APD and qNet predictions. We used previously reported values of σ [21] for I Kr , I Ks , I Na , I CaL and I to , or a conservative value of 0.15 for ion channels for which this information was not available (I NaL and I K1 ). A pIC 50 estimate for each channel of interest was subsequently sampled from the corresponding logistic distribution, a process which was repeated 1000 times for each compound, resulting in 1000 distinct concentration-APD and concentration-qNet response curves. Ignoring the upper and lower 5% of outputs allowed us to obtain an estimate of the 90% credible interval for simulated response curves. All AP simulations and qNet calculations were performed using ApPredict, a bolt-on extension to Chaste (which also has a web-portal front end [31]). All simulation data and codes required to run the simulations are freely available in the Github repository: https://github.com/CardiacModelling/riskstratification-anti-malarials.
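The following sketch (our own Python code, not the ApPredict/Chaste implementation used in the study) illustrates two of the ingredients just described: conductance scaling with Bliss-independent combination of two blockers, and sampling pIC 50 values from a logistic distribution so that screening uncertainty can be propagated to downstream predictions.

```python
# Illustrative only; parameter values are assumptions, not the study's inputs.
import numpy as np

def fraction_blocked(conc, ic50):
    return conc / (conc + ic50)                      # Hill coefficient of 1

def bliss_block(b1, b2):
    return b1 + b2 - b1 * b2                         # blocked by drug 1, drug 2, or both

def scaled_conductance(g_max, blocks):
    b_total = 0.0
    for b in blocks:
        b_total = bliss_block(b_total, b)
    return g_max * (1.0 - b_total)                   # conductance block model

rng = np.random.default_rng(1)
mu, sigma = 5.5, 0.15                                # pIC50 mean and spread (assumed)
pic50 = rng.logistic(mu, sigma, size=1000)           # 1000 sampled pIC50 values
ic50_uM = 10.0 ** (6.0 - pic50)                      # convert pIC50 (molar) to IC50 in uM

# Each sampled IC50 would drive one action-potential simulation; the 5th and 95th
# percentiles of the resulting qNet values approximate a 90% credible interval.
blocks_at_cmax = fraction_blocked(1.2, ic50_uM)      # block at an assumed free Cmax of 1.2 uM
print(np.percentile(blocks_at_cmax, [5, 95]))
```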
Results
In this study, we used increases in the cellular APD as a surrogate for QT interval prolongation [20]. The effects of each of the drugs on the APD were tested at log-spaced concentrations ranging from 0.001 to 100 µM and plotted in terms of the free C max , in order to measure the % change in APD 90 with concentration. The APD 90 as a function of concentration is shown for AZ, CQ, HAL, LOP/RIT, MOX and QUIN in figure 1a, with APs at concentrations equivalent to 1× and 4× free C max highlighted. The compounds investigated generally had a free C max which is much less than the hERG IC 50 and so produced only a small degree of APD prolongation at 1× free C max , including lopinavir/ritonavir. HAL on the other hand produced substantial APD prolongation at 1× free C max due to comparable values of hERG IC 50 and free C max . QUIN produced smaller but still substantial APD prolongation at 1-4× free C max . At a concentration 4× free C max , LOP/RIT and CQ also produced fairly substantial APD prolongations, whereas this remained comparatively small for AZ and MOX. Figure 1b-d shows APD-concentration curves for HCQ and various drugs administered alone, and in combination with HCQ. While the antibiotics AZ and MOX both had very minor APD prolonging effects when administered alone, they were both predicted to increase the overall degree of APD prolongation slightly when given with HCQ compared to HCQ alone. Both HAL and HCQ had a considerable effect when administered individually, so produced a substantial APD prolongation when combined (especially at 4× free C max ).
Figure 2 shows the concentration-qNet curves for individual compounds and drug combinations, as well as associated torsade metric scores. Based on APD prolongation at a concentration of 1-4× free C max in figure 1, one would expect lopinavir/ritonavir to be slightly riskier than CQ. The qNet for CQ at this concentration, however, was lower (riskier) than for lopinavir/ritonavir (figure 2a), highlighting the different information provided by qNet; namely, that the balance of currents leading to APD prolongation is altered in a riskier way for CQ.
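Because APD 90 is the quantity tracked throughout these results, a minimal sketch of how it can be computed from a simulated membrane-voltage trace is given below (our own code; the study itself uses ApPredict).

```python
# APD90: time from the action-potential upstroke until the voltage has
# repolarised to 90% of the AP amplitude.
import numpy as np

def apd90(t, v):
    """t: time (ms); v: membrane voltage (mV) for a single paced action potential."""
    v_rest, v_peak = v[0], v.max()
    v90 = v_peak - 0.9 * (v_peak - v_rest)       # 90% repolarisation level
    i_up = int(np.argmax(np.diff(v)))            # steepest upstroke ~ activation time
    i_peak = int(np.argmax(v))
    after_peak = np.where(v[i_peak:] <= v90)[0]  # first sample back below the 90% level
    if after_peak.size == 0:
        return float("nan")                      # did not repolarise within the trace
    return t[i_peak + after_peak[0]] - t[i_up]
```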
AZ was predicted to have the safest (highest) median qNet, yet the uncertainties around lopinavir/ritonavir and QUIN were greater, so these could turn out to be the safer compound/combination. Lopinavir and ritonavir have almost identical hERG IC 50 s, which also coincide somewhat with the I CaL and I NaL IC 50 s for ritonavir, all of which would be expected to influence the degree of APD prolongation in the model. Therefore, as the concentration approached the hERG (and other) IC 50 values, there were several inputs close to the region of maximum uncertainty in the dose-response curve, generating large uncertainty in the output.
Combinations of drugs with HCQ decreased qNet in the order HAL > MOX > AZ (figure 2b). The torsade metric scores shown in figure 2c reveal that HCQ/HAL was predicted to be the riskiest combination of drugs, whereas HAL was predicted to be a highly risky individual drug. At the other end of the scale, AZ was the only compound placed in the low-risk category. However, combining AZ with HCQ resulted in a most likely high-risk outcome. CQ was placed in the intermediate-high-risk category, compared to mostly high risk for HCQ. As a free C max value for HCQ that accounted for plasma binding was not available for SLE and/or COVID-19 doses, we used a value associated with a malaria dose [32].
Discussion
In this study, we used a previously developed classifier [29] to predict the clinical torsadogenic risk category associated with commonly used anti-malarials and anti-infective agents which prolong the QT interval, as well as an independent block model to assess the safety profile of combination therapies that block multiple ion channels. These simulations were based on the earliest pre-clinical data available (measurement of IC 50 s, typically from high-throughput assays), and integrate multi-channel effects to better predict risk classifications of drugs. Drugs such as CQ and HCQ have garnered a lot of attention over many years as they appear to work via multiple mechanisms and hence have shown promise in several disease conditions. Most recently, significant interest in the compounds was shown regarding the treatment and prevention of COVID-19. This interest has waned after cessation of the HCQ arm of the RECOVERY trial [33] and lack of impact, combined with the potential for adverse events [34,35], although support for the use of HCQ, AZ and their combined use continues to appear in recent publications in both the peer-reviewed [36,37] and popular literature [38] and discussions continue as to the merits of adjuvant therapy with zinc supplementation [39,40]. Drug safety requires the ability to predict and assess risk with an acceptable level of certainty including on major target organs [41]. We have taken advantage of recent developments in computational modelling [21,29] and knowledge derived from single-cell electrophysiological measurements of different quinolines (including CQ, HCQ, quinidine, QUIN and HAL) on up to seven major ion currents including I Kr and I CaL to stratify the risk of some of these compounds both individually and in combination with other compounds. Furthermore, as such models require increased scrutiny for use in safety-critical applications [19], we have performed uncertainty quantification. Our modelling showed that combining HCQ with AZ prolonged APD to a greater extent than the use of either individually (albeit this was notable only at 4× free C max ), and placed this combination in the high-risk category associated with measurable/unacceptable incidence of TdP. Furthermore, combining HCQ with MOX or HAL was also predicted to increase TdP risk.
Data in animal models have previously shown HCQ to have multi-channel actions including on calcium and potassium currents [16]. It is known that concurrent block of calcium currents with I Kr inhibition can protect against TdP and our modelling data suggest this is indeed the case for many compounds, including HCQ at low concentrations. Modelling results for a free C max of approximately 1.2 µM indicated that the balance of potassium and calcium current inhibitions observed would not be expected to lengthen the human ventricular action potential significantly at a dose of 1× free C max (figure 1), but did so at 4× free C max , which resulted in an overall high-intermediate-risk category for HCQ. However, we also showed that accounting for variability in our experimental data led to a range of possibilities. With the exception of HAL and HCQ/HAL, all compounds and combinations investigated produced credible intervals for the torsade metric score that spanned at least two risk categories. Gathering more high-quality experimental data regarding the multi-channel inhibition effects of HCQ and other anti-malarials would allow us to predict with more confidence the most plausible APD prolongation range and risk category. We could also have accounted for different Hill coefficients, although it has been noted previously that the associated level of variability in their measurement is so high that it is unclear whether including the Hill coefficient from ion channel screening adds useful information [21]. It should be noted, in addition, that the IC 50 used as a model input has limitations as a measure of drug block, in that its value may depend on the electrophysiology protocol used [42,43], and, relatedly, it is unable to account for dynamic, state-dependent effects of a compound. Recent studies have integrated dynamic effects into computational models through the use of drug-binding kinetic schemes with rates inferred from specialized electrophysiology protocols [29] and atomistic scale measurements [44].
Our modelling predicted that CQ at a free C max of approximately 0.7 µM was generally safer than HCQ at a free C max of approximately 1.2 µM (used in the treatment of malaria at 400 mg [32]; figure 2). However, it should be noted that the free C max for HCQ is approximately 2 times greater than that for CQ; if the same free C max is used for both compounds then the risk scores are comparable (electronic supplementary material, figure S1). High doses of CQ are thus still expected to pose a high risk. Furthermore, the free C max we used for HCQ was for a malaria dose, which is a higher dose than for SLE. We expect that HCQ at the lower doses used for SLE thus remains generally safe (towards the intermediate-risk category), whereas higher doses used for COVID-19 will place HCQ unequivocally in the high-risk category (electronic supplementary material, figure S1). This is in keeping with clinical experience where CQ and HCQ are known to cause sudden death in overdose but have an otherwise good safety record in the treatment of malaria and SLE, respectively [15].
The risk categories presented in this study are based on a combination of adverse events, case reports and clinical judgement by an expert panel, more information about which can be found on the CiPA website [45]. The categories should not be interpreted as the risk of developing TdP when taking a particular compound (which would suggest that someone taking HCQ is at high risk of developing TdP), but rather the likelihood with which TdP that arises in a patient (which may remain extremely rare) can be attributed to a particular compound. To put some of the risk categories in context, they are compared to quinidine (which has known TdP risk) in electronic supplementary material, figure S2. The torsade metric score (correctly) identified quinidine as high risk, suggesting that it is highly probable that TdP can be attributed to quinidine in cases where it arises in patients (far more so than for other compounds tested), which is consistent with clinical experience and the known safety profile of quinidine.
A comparison of the results in this study with risk scores/categories from previous classifiers and databases (where available) is shown in electronic supplementary material, table S2. While it is hard to compare these scores directly, it can be seen that there is reasonable agreement across the different classification systems regarding which drugs are risky, such as quinidine and HAL, and which drugs pose less of a TdP risk, such as QUIN. This suggests that the qNet metric may capture, to a reasonable degree, relative differences between compounds in the cellular-level mechanisms that determine TdP. Nonetheless, some discrepancies are apparent. One reason for this is that our model predictions are highly sensitive to the free C max input. This is not ideal as effective free therapeutic plasma concentrations are not easily obtained from the literature. A final crucial point regarding the classifier is that risk category prediction does not necessarily rely on accurate prediction of APD prolongation [20] and indeed it has been tested for predictions without considering APD [28]; as such, here we use the same classifier to assess cardiac risk while making no claims about the accuracy of the degree of APD prolongation in the model. Further, while we have searched the literature for ion channel blocking effects of the listed compounds, we acknowledge that many compounds have active metabolites and that any potential impact of these upon APD has not been modelled in this analysis.
Patient risk factors, some of which may be more common in COVID-19 (e.g. electrolyte imbalances, renal failure, drug interactions), can also predispose to cardiotoxicity. COVID-19 also appears to affect the heart (e.g. myocarditis), which may additionally increase cardiotoxicity risk [46]. COVID-19 is also associated with acute kidney injury and electrolyte abnormalities [47]. The safety of drug-drug interactions (including combinations of anti-malarials) is an important consideration when these are explored as therapeutics in comorbid patients [48].
Conclusion
The safety profiles of both CQ and HCQ are well established. Since the SARS-CoV-2 outbreak, the ability of CQ/HCQ to inhibit certain coronaviruses has been explored. Although interest in HCQ use for acutely unwell COVID-19 patients has waned, there is a continued interest in potential use at symptom onset or as a prophylactic, with combination therapies with zinc and/or AZ under continued discussion in peer-reviewed and popular literature. In this study, we demonstrate an in silico safety assessment with uncertainty quantification based on the CiPA qNet torsade metric score. At a free C max of approximately 1.2 µM as seen in malaria treatment, HCQ would most likely be placed in the high-intermediate-risk category for TdP arrhythmia, whereas CQ was predicted to most likely lie in the intermediate-risk category at a free C max of approximately 0.7 µM. Combining HCQ with the antibacterial MOX or the anti-malarial HAL was predicted to increase risk compared to administration of HCQ alone, increasing the degree of human ventricular APD prolongation at some or all concentrations investigated. Further clinical work will be required in order to assess the cardiac effects of HCQ at different doses as used in specific disease populations.
"Medicine",
"Mathematics"
] |
Defined Limited Fractional Channel Scheme for Call Admission Control by Two-Dimensional Markov Process based Statistical Modeling
The increasing demand for advanced services in wireless networks raises the problem of quality of service (QoS) provisioning with proper resource management. In this research work, such a provisioning technique for wireless networks is realized through Call Admission Control (CAC). A new CAC approach named the Defined Limited Fractional Channel (DLFC) scheme is proposed in this work for wireless networks in order to provide proper priority between new calls and handover calls. The DLFC scheme is basically a new style of handover priority scheme. Handover priority is provided in two stages in this scheme, which helps the network to utilize more resources: the first priority stage is a fractional priority and the second stage is an integral priority. Fractional priority is provided by the uniform fractional acceptance factor, which accepts new calls with a predefined acceptance ratio throughout the fractional priority stage. Integral priority is given to the handover calls by reserving some channels only for handover calls. The two significant QoS parameters, the new call blocking probability and the handover call dropping probability of a single-service wireless network, have been analyzed under the DLFC scheme. In addition, the results of the proposed scheme have been compared with the conventional new thinning scheme and the cut-off priority scheme, and we found that our proposed scheme outperforms the conventional schemes. In this work, it is shown that the DLFC scheme proves itself to be an optimal call admission control technique that is concerned not only with the QoS but also with proper channel utilization, with respect to the conventional thinning and fractional channel schemes. The handover call rate estimation and its impact on QoS provisioning are discussed in detail to attain the optimum QoS in the proposed handover priority scheme. We hope this proposed DLFC scheme will contribute to the design of high-performance CAC in wireless cellular networks.
Keywords: Quality of services (QoS), Thinning schemes, Acceptance factor, Defined limited fractional channel
Introduction
When the quality of an ongoing call in the current cell becomes unacceptable, the handover procedures are usually initiated [2]. Therefore, there are two categories of calls that can commence in a cell: one is a new call and the other is a handover call, which comes from a neighboring cell. Recently, a significant trend in designing wireless cellular networks has been to reduce the cell size while the mobility of the users increases. This design approach results in frequent call handovers in wireless communication systems [3]. The probability that the network blocks a new call request due to the lack of resources is called the new call blocking probability (NCBP). On the other hand, the probability that an accepted ongoing call is terminated due to the lack of resources is termed the handover call dropping probability (HCDP). NCBP and HCDP are two important quality of service (QoS) parameters in single-service cellular networks. According to the survey in [4], the HCDP of a cellular network must be less than 2%, which is why an intelligent network should be designed to give priority to handover calls. A call admission control (CAC) is such an intelligent call management scheme, which aims to maintain the QoS delivered to the different calls of the network at the target level by limiting the number of ongoing calls in the system [5], [6].
A number of CAC schemes have been proposed considering different aspects. Among them, several CAC schemes that provide priority to handover calls have been suggested in [1], [7]-[20]. Most of these propositions [1], [7][8][9][10][11][12][13] considered identical channel holding times for both new and handover calls, which leads to a one-dimensional Markov queue process. On the other hand, the research works proposed in [14][15][16][17][18][19][20] claimed that this one-dimensional queue method is not accurate, and therefore proposed different channel holding time approaches that are more suitable for evaluating the QoS of a CAC scheme. To provide priority, it is necessary to reserve a few channels devoted to a special type of call, such as handover calls. Since the bandwidth of the cellular network is limited, proper utilization of the channels (or bandwidth) becomes challenging due to this channel reservation. On the other hand, the non-priority scheme offers maximum utilization of the radio resources, but it is completely unable to guarantee the required level of QoS. Therefore, there is always a trade-off between QoS and channel reservation.
Based on several original CAC schemes, some researchers have suggested QoS optimization methods through several approaches, such as thinning scheme I [13], thinning scheme II and the new call bounding (NCB) scheme [21], the new thinning scheme [21], [22], the cut-off priority scheme [20], [23], etc. The NCB and thinning II schemes restrict the acceptance of new calls. Thinning scheme I is designed based on a defined threshold on the number of occupied channels, whereas thinning scheme II works based on the probability of new call acceptance for different numbers of new calls existing in the cell of the network. The new thinning scheme is another CAC policy that fractionizes the acceptance of new calls on only one channel; it is basically designed following the idea of the limited fractional channel (LFC) scheme [13] in a two-dimensional Markov environment. The authors of the LFC scheme claimed that this CAC scheme is optimum with respect to thinning scheme I. By contrast, the new thinning scheme is optimum with respect to the NCB and thinning II schemes. The research works proposed in [13], [21][22] did not explicate the effects of fractionizing more than one channel. Although this effect was interpreted for the first time in the method named the uniform fractional band (UFB) scheme [24], the performance measurement there was performed by considering a one-dimensional Markov process. As already clarified, among the aforementioned CAC schemes, the LFC and new thinning schemes are the optimum CAC schemes in the one-dimensional and two-dimensional Markov processes, respectively. Nevertheless, neither research work stated the effect on the QoS parameters of fractionizing more than one channel. Therefore, there is a clear need to determine the effects of fractionizing more than one channel under a two-dimensional Markov process. Moreover, the mathematical model of fractionizing more than one channel under a two-dimensional Markov process is quite complex because of the curse of dimensionality [25].
Considering these gaps, this research work proposes a new CAC policy based on a two-dimensional Markov process statistical model, entitled the defined limited fractional channel (DLFC) scheme. It is also worth mentioning that the DLFC scheme was first proposed in our conference paper [26], but the detailed performance analysis and mathematical details are presented in this work. This paper contributes on the following specific points: (i) in the UFB scheme the NCBP was reduced while the HCDP remained constant, whereas in the DLFC scheme the HCDP is reduced while the NCBP is almost constant; in this sense, the QoS of the DLFC scheme is analyzed to be better than that of the UFB scheme. (ii) Both the HCDP and the NCBP have been analyzed for different numbers of fractional channels, with graphical and tabular presentations, in the DLFC scheme, which was not analyzed in the UFB scheme. (iii) The HCDP and NCBP have been examined, with graphical and tabular presentations, in the DLFC scheme for different values of the acceptance factor. (iv) The HCDP and NCBP have also been studied for the LFC and new thinning schemes and compared with the DLFC scheme. From the analysis, it is shown that the QoS of the proposed DLFC scheme is better than that of the conventional schemes.
Call Admission Control
Call admission control or CAC is basically an algorithm that regulates the traffic volume in cellular networks. CAC can also be used to maintain QoS by providing priority to a specific class of traffic. Generally, there are two broad kinds of CAC schemes: static CAC and dynamic CAC [7].
▪ Handover Probability: The call handover probability, P h , is the probability of a call being handed over from one cell to another. Generally, in the case of a handover call, the call holding time T n is greater than the call dwell time T h . Both T n (with mean 1/μ) and T h (with mean 1/η) are considered to be exponentially distributed according to the call arrival characteristics, and the handover probability P h of a call can then be calculated as given in equation (2).
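A minimal derivation under the stated exponential assumptions (our own sketch; the paper's equation (2) may use a different but equivalent form):

```latex
% Handover probability, assuming T_n ~ Exp(\mu) (call holding time) and
% T_h ~ Exp(\eta) (cell dwell time), independent of each other.
P_h \;=\; \Pr(T_h < T_n)
    \;=\; \int_0^{\infty} \eta e^{-\eta t}\, e^{-\mu t}\, \mathrm{d}t
    \;=\; \frac{\eta}{\mu + \eta}
```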
The significance of CAC and Resource Reservation
CAC and resource reservation (RR) for mobile communication are among the most important issues that guarantee system efficiency and the QoS required for different services over a very scarce resource such as the radio spectrum. As forced call termination due to handover call dropping is generally less desirable than blocking a new call, handover calls should have a higher priority than new calls [26].
Mathematical Modeling of CAC schemes
A mathematical model of a CAC scheme can help to indicate the performance of the network. Such modeling is based on probability theory owing to the random nature of call traffic. Therefore, a basic discussion of CAC scheme modeling, together with the relevant terminology, is given step by step below.
▪ Queuing Theory: Queuing theory is the mathematical study of waiting lines or queues. In queuing theory, a model is constructed so that queue lengths and waiting times can be predicted. Networks of queues are systems in which a number of queues are connected by customer routing; when a customer is serviced at one node, it can join another node and queue for service, or leave the network. For a network of m nodes, the state of the system can be described by an m-dimensional vector (x 1 , x 2 , ..., x m ), where x i represents the number of customers at node i. Queuing theory is built on the birth and death process.
▪ Markov Process: A Markov process is a statistical method used to predict the future behavior of a variable or system in which, given the present state, the future does not depend on the behavior at any earlier time. In other words, this procedure works with random variables. Basically, a Markov process works with a sequence of random variables X 1 , X 2 , X 3 , ... satisfying the Markov property, i.e., conditional on the current state, the future and past states are independent. Formally, P(X n+1 = x | X 1 = x 1 , X 2 = x 2 , ..., X n = x n ) = P(X n+1 = x | X n = x n ), whenever these conditional probabilities are properly defined. The possible values of X i form a countable set S, which is known as the state space of the chain. This Markov chain may be either single dimensional or multidimensional.
▪ Multidimensional Markov Model: Suppose that we have s categorical sequences and each sequence has m possible states in M. In addition, let x n (j) be the state probability distribution vector of the j th sequence at time n. Therefore, if the j th sequence is in state j with probability one at time n, x n (j) is the corresponding unit vector, and the following relation can be considered.
Furthermore, it can also be assumed that the following relationship exists among the sequences.
This mathematical relationship states that the state probability distribution of the j th chain at time (n+1) depends on the weighted average of P(jj)x n (j) and the state probability distributions of the other chains at time n. Here P(jj) denotes the one-step transition probability matrix of the j th sequence. The system can be written in matrix form, and for the resulting relation (7) the following proposition can be considered as a generalized version of the Perron-Frobenius theorem [27].
From the above theoretical view of the multidimensional Markov chain model, we obtain the concept of using a two-dimensional Markov chain model for call admission control in a wireless network. It provides the desired QoS for handover calls and guarantees that the QoS of new calls still meets the requirements; when congestion occurs, we may lose both of these objectives. The average service rates of new calls and handover calls, μ n and μ h , are not the same. Different CAC schemes have been designed under the 2D Markov process; among them, the cut-off priority and limited fractional channel (LFC) CAC schemes are the most common, and they are discussed here. These methods have also been widely examined for comparison with our proposed scheme.
▪ Cut off Priority CAC Scheme: The transition rate diagram of the two-dimensional Markov chain model for the cut-off priority CAC scheme [20], [23] is given in Figure 1. In this figure, the states are labelled as pairs (0, 0), (0, 1), (0, 2), (0, 3), ..., (0, C) [26][27][28]. The LFC scheme was then discussed under a two-dimensional Markov process in [22], where it was named the new thinning scheme. In that paper, one channel was fractionally used for accepting new calls with an acceptance factor α, but the authors did not discuss the situation where more than one fractional channel is considered. The transition diagram of the LFC scheme under the two-dimensional Markov process is given in Figure 2.
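To make the two-dimensional Markov treatment concrete, the sketch below (our own Python code with assumed parameter names, not the authors' MATLAB implementation) builds the generator matrix of a cut-off-priority-style scheme over states (n 1 , n 2 ) and solves for the stationary distribution to obtain the NCBP and HCDP.

```python
# States (n1, n2) = (new calls, handover calls) with n1 + n2 <= C.
# New calls admitted while n1 + n2 < M; handover calls while n1 + n2 < C.
import numpy as np

def cutoff_priority_probs(lam_n, lam_h, mu_n, mu_h, C, M):
    states = [(n1, n2) for n1 in range(C + 1) for n2 in range(C + 1 - n1)]
    index = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (n1, n2), k in index.items():
        if n1 + n2 < M:                        # new-call arrivals below the threshold M
            Q[k, index[(n1 + 1, n2)]] += lam_n
        if n1 + n2 < C:                        # handover arrivals up to capacity C
            Q[k, index[(n1, n2 + 1)]] += lam_h
        if n1 > 0:                             # new-call departures
            Q[k, index[(n1 - 1, n2)]] += n1 * mu_n
        if n2 > 0:                             # handover-call departures
            Q[k, index[(n1, n2 - 1)]] += n2 * mu_h
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Solve pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    ncbp = sum(p for (n1, n2), p in zip(states, pi) if n1 + n2 >= M)  # new call blocked
    hcdp = sum(p for (n1, n2), p in zip(states, pi) if n1 + n2 >= C)  # handover dropped
    return ncbp, hcdp

print(cutoff_priority_probs(lam_n=20, lam_h=8, mu_n=1.0, mu_h=1.2, C=20, M=16))
```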
The NCBP and HCDP of the new thinning scheme are estimated by the mathematical relations given in equations (14) and (15), respectively.
Mathematical Modeling of the Proposed DLFC Method
This CAC scheme is proposed under a two-dimensional Markov process in which the channel holding times of new calls and handover calls differ. To increase channel utilization, a fractional channel scheme is used and the number of fractional channels is considered to be more than one. That is why this scheme is named the Defined Limited Fractional Channel (DLFC) scheme. After the first M channels are occupied, additional new calls will be admitted with a certain probability "α"; therefore, new calls will be rejected with probability 1−α in those states. Finally, only handover calls will be acceptable from M+N to C. If all channels are busy, handover calls will be dropped. The proposed scheme with its transition properties is illustrated in Figure 3. In this scheme, we have assumed a different channel holding time for each type of call; the different channel holding times (μ h ≠ μ n ) make the state transition rate diagram two dimensional. In Figure 3, n 1 and n 2 denote the numbers of new calls and handover calls, respectively, and the quantities subscripted n and h indicate the new call traffic load and handover call traffic load, respectively. For call admission control in the DLFC scheme, a flowchart is given in Figure 4. From this figure, it can be explained that, at first, the system analyzes the call type, either handover or new. If the call is a handover call, the system checks whether the number of occupied channels is less than C; if so, the call is accepted, otherwise it is blocked. If the call is a new call, the system checks whether the number of occupied channels is less than M; if so, the call is accepted. If not, the system checks whether the number of occupied channels lies between M and M+N−1; if so, the call is accepted with the predefined acceptance factor α, otherwise it is blocked.
In Figures 3 and 4, M and N denote the threshold number of channels and the number of fractional (priority) channels, respectively.
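A compact sketch of the DLFC admission decision just described (our own code following the flowchart; M, N, C and α carry the meanings defined above):

```python
# Returns True if the arriving call is admitted under the DLFC policy.
import random

def dlfc_admit(call_type, occupied, M, N, C, alpha):
    if call_type == "handover":
        return occupied < C                  # handover calls accepted up to capacity C
    if occupied < M:
        return True                          # new calls always accepted below threshold M
    if occupied < M + N:
        return random.random() < alpha       # fractional acceptance in the N fractional channels
    return False                             # only handover calls allowed from M+N up to C

# Example: with M=80, N=5, C=100 and alpha=0.5, a new call arriving when
# 82 channels are busy is accepted with probability 0.5.
```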
The NCBP and HCDP of the DLFC scheme can be found as (18) and (19), respectively.
Numerical Evidence of Optimality for DLFC scheme
For designing a CAC scheme, it is important to ensure QoS. In [13] it is stated that the cut-off priority scheme is the optimal CAC scheme. However, DLFC improves on the cut-off priority scheme because its HCDP is lower while its NCBP is almost constant. As the new call and handover call rates are not linearly related [1], it is too complicated to prove the optimality of DLFC analytically. Here, we have therefore numerically demonstrated the optimality of the NCBP and HCDP estimates of the proposed DLFC scheme individually.
Numerical Evidence of Optimality for NCBP (PB):
For the blocking probabilities, comparing equations (10) and (20), it can be shown that the NCBP is almost the same in the FGB and DLFC schemes. A truth table for this evidence is given in Table 1, where M = 80, C = 100, i = 1 and α = 0.5.
Numerical Evidence of Optimality for DLFC by HCDP (PD):
Similarly, using the HCDP of the FGB scheme from equation (11), it is also proved that the HCDP of the DLFC scheme is less than that of the FGB scheme. A truth table for this evidence is given in Table 2, where M = 80, C = 100, i = 1 and α = 0.5. From the table, it is found that the previously claimed condition has been successfully achieved: the value of X corresponding to every value of the traffic load (in Erlangs*) is greater than 1, which is the condition hypothesized by the relation given in (23).
[*An Erlang is a unit used to measure the traffic load, i.e., the total traffic volume offered in one hour, in telecommunication systems. Therefore, traffic in Erlangs = (number of calls per hour) × (average call duration in hours).]
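A quick worked example of this unit (numbers are our own):

```latex
% 120 calls offered in one hour, each lasting 3 minutes on average:
A \;=\; 120\ \tfrac{\text{calls}}{\text{hour}} \times \tfrac{3}{60}\ \text{hour}
  \;=\; 6\ \text{Erlang}
```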
Results and Discussions
In this section, the simulation results are presented to assess the proposed DLFC scheme against the other conventional schemes under two-dimensional Markov process based statistical modeling. These results show how much deviation may take place when applying the proposed scheme with respect to the traditional CAC schemes. The various features of the proposed scheme are also described step by step. First of all, investigations of all the CAC schemes were carried out under some basic assumptions. For simulating the proposed CAC scheme design methodology in the 2D Markov process, we assumed the parameter values listed in Table 3. These parameters were kept the same for all the performance analyses of the conventional schemes as well as the proposed DLFC scheme. In the numerical results, the NCBPs and HCDPs of the DLFC, cut-off priority, and LFC schemes were examined in the two-dimensional Markov process. The performances of all the schemes were analyzed under different conditions, such as different numbers of fractional channels and different values of the acceptance factor. Based on the assumed system parameters, all the mathematical calculations were performed in MATLAB 2012b. The two-dimensional Markov model was prepared in MATLAB code, and based on this model the traffic load was analyzed to estimate the performance of the different schemes.
The HCDP of the wireless network was analyzed for different traffic loads under the parameter values given in Table 3, and the performances were compared. Figure 5 presents such a comparison of the HCDP performance of the various popular CAC schemes as well as the proposed scheme. This figure shows that the proposed scheme achieves the lowest HCDP, while the highest HCDP occurs for the cut-off priority scheme; the DLFC scheme has a lower HCDP than both the LFC and cut-off priority schemes. It is also observed that increasing the number of fractional channels reduces the HCDP, but there is a limit to this: after increasing the number of fractional channels beyond the limit, the HCDP remains almost the same. In Figure 5, the values of HCDP for 5% and 6% fractional channels were found to be almost the same at high traffic load.
The NCBPs of the cut-off priority, LFC, and proposed DLFC schemes have been calculated and compared, as presented in Figure 6.
Here it is observed that the NCBPs are almost the same for the cut-off priority, LFC, and proposed DLFC schemes at higher traffic load, which is our real concern. The DLFC scheme has a marginally higher NCBP than the LFC and cut-off priority schemes at lower new call arrival rates, but the same performance at higher new call arrival rates. In this analysis, the basic assumptions were the same as in Figure 5.
We then assumed that all parameters remained the same except the acceptance factor, and the HCDP was calculated for different acceptance factors. The results of this analysis are presented in Figure 7. The value of α must lie between 0 and 1; in this analysis the acceptance factor is assumed to be α = 0.2, 0.3, 0.5, 0.75 and 0.9. It is observed that the HCDP increases as the acceptance factor increases: the smaller the acceptance factor, the lower the HCDP. Figure 7 presents the HCDPs of the DLFC scheme for the different values of α. One observation is that if the acceptance factor is chosen to be 0.5, the dropping probability remains at a satisfactory level, whereas for acceptance factors of 0.75 and 0.9 the HCDP increases drastically, which is a threat to the QoS. In addition, under the same conditions, the NCBP performance of the proposed DLFC scheme was also assessed; the results are presented in Figure 8. From this figure, we can perceive that the acceptance factor has some effect at lower traffic load, whereas at higher traffic load, which is the operating region of real concern for the proposed scheme, this effect is not significant.
This scheme thus provides a system in which the HCDP is decreased without affecting the NCBP, as shown by the results in Figure 8. Therefore, the proposed scheme improves the QoS of the wireless network where the HCDP at very high traffic load is of concern. The results clarify that the DLFC scheme reduces the HCDP without hampering the system's NCBP performance. An additional benefit of the proposed scheme is the regulating property of the acceptance factor: to achieve a certain level of QoS, the acceptance factor can be selected to maintain the HCDP and NCBP at a satisfactory level with respect to the load, and it can also be chosen according to the traffic load arriving in the cellular network. Another important QoS issue associated with a CAC policy is channel utilization. In Figure 9, the channel utilization performance of the proposed DLFC scheme is compared with that of the conventional cut-off priority scheme and the LFC scheme. The channel utilization of all the schemes discussed here remains nearly the same in almost all cases. Although the cut-off priority scheme shows the highest channel utilization, the proposed DLFC scheme achieves almost similar channel utilization performance by tuning the acceptance factor.
In Figure 10, the channel utilization of the DLFC scheme is shown for various values of the acceptance factor α. The acceptance factor has a useful tuning effect on the channel utilization performance; therefore, the required channel utilization can be attained by the proper selection of the call acceptance factor. It should be mentioned that channel utilization can be increased by reducing the NCBP, but in that case it becomes difficult to maintain the QoS. To preserve the importance of handover calls, we should take into account not only the blocking probability but also the channel utilization. To optimize the system performance, the acceptance factor can be regulated through the DLFC scheme. The DLFC scheme has another useful feature, which is its hybrid approach to the call acceptance rate. If we consider 15% and 20% guard channels in the cut-off priority scheme, we find a major difference in the handover call dropping probability. This problem can be addressed with the proposed DLFC scheme: the handover call dropping probability can be reduced to a satisfactory level by regulating the acceptance factor. Such a result is given in Figure 11. Here, we can observe that if we divide the 20% guard channel as 15% + 5% and choose different acceptance factors for the two divided channel groups, the HCDP reduces significantly. Here, an acceptance factor of 0.75 has been chosen for the first 15% guard channels and, for the remaining 5%, an acceptance factor of 0.25. For both (15% and 5%) guard channel groups, an acceptance factor of 0.5 has also been used in Figure 11 to show the reference performance of the proposed DLFC scheme.
Conclusions
The radio resources of a system are limited. For this reason, providing priority to one class in call admission increases the call blocking probabilities of the other classes. Since handover call dropping is in practice much more annoying than new call blocking, in this research paper a new call admission control scheme has been proposed, termed the defined limited fractional channel or DLFC scheme. The NCBP and HCDP of the proposed scheme have been estimated from the model implemented in MATLAB 2012b and compared with the existing methods under two-dimensional Markov process based statistical modeling. It has been observed that the HCDP is decreased while the NCBP remains almost the same with respect to the cut-off priority and LFC schemes. It has also been observed from the simulation results that the performance of the DLFC scheme depends upon the number of fractional channels and the value of the acceptance factor, so choosing the number of fractional channels and the value of the acceptance factor becomes a major concern.
"Computer Science"
] |
Portable Optical Coherence Elastography System With Flexible and Phase Stable Common Path Optical Fiber Probe
Biomechanical properties drive the functioning of cells and tissue. Measurement of such properties in the clinic is quite challenging, however. Optical coherence elastography is an emerging technique in this field that can measure the biomechanical properties of the tissue. Unfortunately, such systems have been limited to benchtop configuration with limited clinical applications. A truly portable system with a flexible probe that could probe different sample sites with ease is still missing. In this work, we report a portable optical coherence elastography system based on a flexible common path optical fiber probe. The common path approach allows us to reduce the undesired phase noise in the system by an order of magnitude compared with standard non-common-path systems. The flexible catheter makes it possible to probe different parts of the body with ease. Being portable, our system can be easily transported to and from the clinic. We tested the efficacy of the system by measuring the mechanical properties of the agar-based tissue phantoms. We also measured the mechanical properties (Young's Modulus) of the human skin at different sites. The measured values for the agar phantom and the skin were found to be comparable with the previously reported studies. Ultra-high phase stability and flexibility of the probe along with the portability of the whole system makes an ideal combination for the faster clinical adoption of the optical coherence elastography technique.
I. INTRODUCTION
Biomechanical properties play an important role in the regulation of cellular functions within biological tissue. For instance, some cancer cell types are less stiff than normal cells [1] and this information can be used as a diagnostic marker to differentiate cancerous cells from normal ones. Tissue elasticity is routinely used to determine breast cancer tissue margins during intraoperative breast cancer surgery [2], [3]. Similarly, in dermatology, skin elasticity has been used to diagnose systemic sclerosis [4]-[6]. Therefore, the measurement of the mechanical properties of the cells and the tissues has great clinical significance. The mechanical properties of tissues can be quantified by measuring parameters such as Young's modulus, bulk modulus, and shear modulus. However, in vivo measurement of the biomechanical properties of the tissues within clinical settings has been challenging because of the unavailability of compact portable measurement devices.
In current clinical settings, the modified Rodnan skin score (mRSS), based on skin palpation, is still the most widely used method to determine the mechanical properties of skin [7]. However, the quantification and reliability of the mRSS score are highly dependent on the clinician's experience [8]. To address this issue in clinical practice, several other methods such as Atomic-Force Microscopy (AFM) [9]-[11], Magnetic Resonance Elastography (MRE) [12], Ultrasound Elastography (USE) [13], Brillouin Scattering Microscopy (BSM) [14], and Optical Coherence Elastography (OCE) [15], [16] have been developed. These available techniques provide images, known as elastograms, by mapping the elastic properties of the tissue.
AFM provides stiffness maps at a subcellular level, however, it suffers from the drawback of slow throughput which restricts it from quickly analyzing the tissue mechanical properties over a large surface area. MRE is a non-invasive imaging technique that is based on the principle of the Magnetic Resonance Imaging technique. This technique measures the stiffness of the tissue by calculating the displacement in the tissue caused by the shear waves. USE detects tissue displacement by applying stress on the tissue using ultrasound waves. BSM is a non-destructive and label-free imaging technique to measure the elastic properties of the tissue in 3D. However, because of the long acquisition time of several minutes for 3D measurements, BSM has limited applications in the clinic. Recently, the use of stimulated Brillouin scattering has been proposed to reduce the acquisition time to ∼1 min for 2D measurements [17], but the requirement of two counter-propagating light beams inside the sample makes it unsuitable for the in-vivo measurements.
Optical coherence tomography (OCT) is a micron and submicron level imaging technique that is based on optical interferometry. This technique has better axial and lateral resolution with higher sensitivity than MRI and ultrasound imaging. OCT has been applied in a wide range of applications such as; ophthalmology [18], dermatology [19], oncology [20], cardiology [21], and cell mechanics [22]. OCE is an extension of OCT which is capable of measuring the mechanical properties of the biological samples. Initially, OCE was based on time-domain OCT. Low signal-to-noise ratio (SNR) and unstable phase measurements because of the long acquisition time for the time-domain-based OCE systems hindered their adoption in the clinic. The OCE was revived after the introduction of Fourier-domain OCT which offered better SNR and reduced acquisition time compared to the time-domain OCT systems [23].
Most of the Fourier domain OCE systems are still Michelson-interferometry based. Such systems measure the displacements in the tissue caused by some external stimulus against a fixed reference surface. In Michelson-interferometry-based systems, the measured displacement has additional noise because of the mechanical vibrations present between the reference arm and the sample arm. Such mechanical vibrations change the optical path difference between the two arms of the interferometer and appear as noise. The minimum measurable displacement in the tissue in such systems is thus limited by the undesirable optical path difference (OPD) change. The undesirable OPD change in the two arms of the interferometer can be minimized using a common path approach where the reference and the sample signal travel approximately the same optical path within the system. Fiber-based common path probes have been reported previously [24]-[30] and used for the imaging of tissue such as coronary artery [28], eye [31], [32], esophagus [29], etc. Owing to their superior phase measurement abilities, common path probes have also been used for angiographic applications [33], [34].
The potential of the common path approach has also been realized for the optical coherence elastography applications and benchtop [35]- [37] as well as fiber-based [38]- [41] common-path OCE systems have been reported. In clinical settings, fiber-based common path probes for elastography are preferred over their free space counterparts because of their lightweight and ease of handling.
OCE systems can be further classified into two categories; static/quasi-static and dynamic. Static/quasi-static based OCE systems provide better spatial resolution compared to dynamic OCE methods but these systems need careful calibration in order to quantify the results. Dynamic OCE systems using surface acoustic waves (SAW) are much simpler to operate as these systems require only a way to excite the SAW into the sample and a way to measure the speed of the SAW into the sample. The SAW can be excited using simple mechanisms such as piezo transducer [42], air puff [43], acoustic radiation force [44], etc., and the speed of the SAW can be measured using an OCT system.
The previous SAW-based OCE systems have focused on the scanning of the measurement beam over the sample in order to generate the tissue elastograms which leads to increased system complexity. However, for applications such as systemic sclerosis [4], [5], clinicians only need a reliable and repeatable number to quantify the skin elasticity; something similar to skin palpation which is still the standard diagnostic method. For such applications, there is no need to scan the measurement beam over the sample and only one measurement point would suffice. This however would allow the development of much simpler fiber probes and compact OCE systems which are more relevant to the clinical settings.
In this paper, we present a fiber-based common-path flexible OCE probe approach. We demonstrate a portable and handheld common path probe OCE system that can be easily transported to and from the clinic. Our flexible probe can be placed directly at the measurement site for the measurement of the tissue elastic properties. Unlike previously reported common path probes-based OCE systems, which have relied on the compression method [38]- [41], we report a common path probe that is compatible with the simpler SAW method of OCE.
II. MATERIALS AND METHODS
The schematic diagram of our system is shown in Figure 1. The system is based on a commercially available swept-source OCT engine (Axsun Technologies, USA). The OCT system has a central wavelength of 1310 nm, a bandwidth of 130 nm, and a repetition rate of 12.5 kHz. The output power of the laser source is 24 mW. In addition to the illumination unit, the system comes with components for the acquisition and processing of the data. Two photodiodes (D1, D2) capable of balanced detection are connected to a data acquisition card (DAQ), and a field-programmable gate array (FPGA). OCT data is processed on the integrated FPGA module, which includes applying a Hanning window to the acquired spectrum for side lobe suppression, wavelength-to-k-space linearization using the Mach-Zehnder interferometer clock signal, and performing an FFT on the windowed data. The FFT data is transferred to the host computer over the Ethernet cable. The host computer is composed of a processing unit (Intel NUC Kit NUC8i7HNK Intel Core i7) and a touch screen for display.
The OCE probe is composed of two individual components; a single-mode common path optical fiber assembly for the OCT signal and a piezo transducer for exciting the SAW in the tissue. The OCT part of the OCE probe is based on a common path approach and the schematic of the tip of the fiber is shown in the zoom section of Figure 1 along with the picture of the fiber tip. For the OCT part, the light from the laser source is coupled to the input port of a single-mode fiber (SMF28) based optical circulator (C, CIRC-3-1310, AFW Technologies, Australia). A multimode fiber (FG105LVA, Thorlabs, USA) of length 200 µm and 105 µm core diameter is spliced to the tip of the fiber at the exit port of the circulator. The light exiting the core of the single-mode fiber expands in the multimode fiber section. A 285 µm long GRIN fiber (F-MPD, Newport Corporation, USA) with the core diameter of 100 µm is spliced at the tip of the multimode fiber which focuses the light at a distance of 150 µm approximately from the tip. A part of the illumination is reflected at the tip of the fiber assembly which is used as the reference (Ref.) signal (23 µW) for the OCT and the rest of the light (14 mW) is focused on the tissue. The reflected light from the tissue is collected back by the fiber assembly and is guided towards the detection photodiode D2 via the circulator. The signal from the photodiode is sent to the integrated data processing unit of the OCT engine.
The piezo unit of the OCE probe consists of a stacked piezo transducer (PK2FMP2, Thorlabs Inc., USA) capable of producing 11.2 µm displacement at a voltage of 75 V. A small metallic piece of 1 × 5 × 2 mm is glued to the tip of the piezo such that the 1 × 5 mm face points towards the tissue. The distance between the piezo tip and the fiber tip is measured to be 4.2 mm. The piezo unit is driven by a 0-10 V pulsed voltage signal with 0.1 % duty cycle from a signal generator (National Instruments, USA) at 40 Hz; when it comes in contact with the tissue, it excites a SAW in the tissue, which is detected by the OCT part of the system. A TTL signal from the signal generator at 40 Hz, phase-shifted by 180 degrees with respect to the piezo driving signal, is used as the trigger for the OCT frame acquisition. In this way, the piezo transducer and the OCT acquisition are synchronized with each other, and the SAW appears in the middle of the scan. The whole system was assembled in a portable case (Peli Case 1500) and weighed approximately 8 kilograms.
In Figure 2, we show a non-common-path, Michelson-interferometry-based benchtop system. This system is used to compare the performance of the common-path probe-based system with a non-common-path system. In this design, the input port of a circulator (C) is connected to the light source. The light exiting the output port of the circulator is collimated using a collimator (CM). The collimated light is passed through a non-polarizing beam splitter (BS) to form the reference arm and the sample arm. The light in the reference arm is directed towards a reference mirror (M1), while the light in the sample arm is directed towards a sample mirror (M2). The reflected light from both arms is coupled back to the circulator through the collimator and detected using one of the photodiodes (D2).
For both the common-path and the non-common-path approach, the data processed by the data processing unit of the OCT engine, in the form of the A-scan's FFT, is accessed by the host computer via custom-designed LabVIEW software (National Instruments, USA). From the FFT data, phase and intensity values are calculated for a full frame acquired in M-scan mode, which is composed of 312 A-scans of the sample at the same location. The number of A-scans was determined by the repetition rate (12.5 kHz) of the swept-source laser. The measured phase values are converted to the displacement of the tissue using the relation z = λφ/(4π), where z is the change in optical path difference, λ is the central wavelength of the source, and φ is the phase change between adjacent A-scans.
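As a minimal sketch of the phase-to-displacement step (assuming the complex A-scans of one M-scan are stacked depth-by-time and using the relation z = λφ/4π quoted above; the array layout and names are illustrative):

```python
import numpy as np

WAVELENGTH = 1310e-9  # central wavelength of the swept source, in metres

def displacement_trace(mscan, depth_index):
    """Cumulative displacement vs. time at one depth pixel of an M-scan.

    mscan       : complex array of shape (n_depth, n_ascans), e.g. (N, 312)
    depth_index : depth pixel at which the sample surface appears
    """
    pixel = mscan[depth_index, :]
    # Phase change between adjacent A-scans at the chosen depth
    dphi = np.angle(pixel[1:] * np.conj(pixel[:-1]))
    # Convert each phase change to an optical-path-difference change
    dz = WAVELENGTH * dphi / (4.0 * np.pi)
    # Accumulate to obtain the displacement as a function of time
    return np.cumsum(dz)
```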
When the measured displacement of the tissue is plotted against time, it appears as a pulse whose peak is shifted in time with respect to the piezo excitation pulse. We measured the displacement of the piezo tip directly using a separate common-path patch cable. This provides the actual shape and timing of the SAW excitation pulse. The delay between the SAW excitation pulse and the measured tissue displacement pulse is used to calculate the SAW velocity: the distance between the piezo tip and the fiber tip, measured to be 4.2 mm, is divided by this delay.
The resolution with which the velocity of the SAW can be measured depends on the temporal resolution of the system. Our system operates at 12.5 kHz, hence the temporal resolution of the system is 80 µs. This limits the minimum measurable change in the SAW velocity. We improved the effective temporal resolution by first upsampling the data of the SAW excitation pulse and the tissue displacement pulse by a factor of 10 and then locating the peak of the cross-correlation of the two data sets. Using this technique, we can precisely measure the position of the SAW peak in the tissue with respect to the SAW excitation pulse at the piezo. With this method, the precision with which the delay between the two pulses can be measured is limited only by the noise present in the system. Finally, the measured SAW velocities are converted to Young's modulus values using the simplified relation [16] E = 3ρc_S², where the tissue is assumed to be incompressible, ρ is the material density, and c_S is the SAW velocity in the sample. For soft biological samples such as skin, a material density of ρ = 1060 kg/m³ can be used [45].
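A minimal sketch of the delay estimation and the conversion to Young's modulus follows. The 10x upsampling factor, the 80 µs A-scan spacing, the 4.2 mm separation, and the E = 3ρc_S² relation are taken from the text; the function names and the use of scipy.signal for resampling and correlation are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate, resample

DT = 80e-6          # A-scan period at a 12.5 kHz sweep rate, seconds
UPSAMPLE = 10       # upsampling factor used to refine the delay estimate
DISTANCE = 4.2e-3   # piezo tip to fiber tip separation, metres

def saw_delay(excitation, displacement):
    """Delay (s) between the excitation pulse and the tissue response."""
    n = len(excitation) * UPSAMPLE
    exc = resample(excitation, n)
    dis = resample(displacement, n)
    # Lag of the cross-correlation peak, in upsampled samples
    lag = np.argmax(correlate(dis, exc, mode="full")) - (n - 1)
    return lag * DT / UPSAMPLE

def youngs_modulus_from_saw(velocity, density=1060.0):
    """Simplified incompressible-medium relation E = 3 * rho * c_S**2."""
    return 3.0 * density * velocity ** 2

# Example: velocity = DISTANCE / saw_delay(excitation, displacement)
#          E = youngs_modulus_from_saw(velocity)
```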
The developed system was tested on agar phantoms and the hand skin of a healthy volunteer. All the methods carried out in this work are in accordance with the relevant guidelines and regulations of the local institutional review board (Ethikkommission der Friedrich-Alexander-Universität, Erlangen, Germany). Testing on the hand was performed as a self-test by the authors of the manuscript, who signed informed consent to participate. No other human experiments were performed in this work.
A. DISPLACEMENT STABILITY
To check the phase stability and the minimum displacement measurable by our swept-source common-path OCE (SS-OCEcp) system, we glued a mirror to the probe holder and measured the displacement of the mirror with respect to the tip of the fiber probe, which acts as the reference surface. Since the mirror was fixed to the probe holder, we expect a minimal change in the OPD between the reference signal and the mirror signal. In this case, the measured displacement represents the noise in the system. For comparison, we also measured the phase stability and minimum measurable displacement of a Michelson-interferometer-based benchtop non-common-path (SS-OCEncp) system under the same standard laboratory conditions as the SS-OCEcp measurements. For the SS-OCEncp system, we measured the displacement between two fixed mirrors which were used as the reference and sample surfaces in the two arms of the interferometer. Again, since the mirrors were fixed, the measured displacement represents the phase stability of the system, which is limited by the mechanical vibrations present within the system. In Figure 3 we show the phase stability of the SS-OCEcp and the SS-OCEncp. In Figures 3(a) and 3(b), we have plotted a part of the interference spectrum of 312 consecutive A-scans for the SS-OCEncp and the SS-OCEcp, respectively. The phase stability of the system is assessed by measuring the change in the phase of the fringes between consecutive A-scans. Ideally, if all the consecutive fringes overlap with each other, the system is considered phase stable. From Figure 3(c), we can see that the phase stability of the SS-OCEcp system is much better than that of the SS-OCEncp system. To quantify the measured values, we calculated the standard deviation of the measured phase values for both systems. The standard deviation of the measured phase change between the reference surface and sample surface for the SS-OCEncp and SS-OCEcp was found to be 952 mrad (99.2 nm displacement) and 98 mrad (10.2 nm displacement), respectively.
B. MEASUREMENT OF MECHANICAL PROPERTIES OF TISSUE PHANTOM
To test the feasibility of the system for measuring the mechanical properties of tissue, we measured the velocity of the SAW generated in agar phantoms using our SS-OCEcp system. The agar phantoms were fabricated by mixing agar powder at concentrations of 1%, 2%, and 3% in water, with a few drops of milk added to increase light scattering for the SS-OCEcp system. We placed the probe on the phantoms and generated the SAW in the sample using the piezo transducer. The excitation voltage signal of the piezo transducer was synchronized with the frame trigger of the OCT system. In Figure 4, we show the measured displacement of the phantom surface for the different agar compositions along with a typical M-scan for the agar phantoms. By monitoring the delay between the excitation pulse and the SAW peak at the measurement point, we calculated the velocity of the SAW in the sample. For the 1%, 2%, and 3% agar phantoms, the SAW velocity was found to be 5.25 ± 0.03 m/s, 8.75 ± 0.04 m/s, and 10.5 ± 0.06 m/s, respectively, which corresponds to Young's moduli of 87.64 ± 1.01 kPa, 243.46 ± 2.23 kPa, and 350.59 ± 4.02 kPa, respectively. The results obtained from the agar phantoms are comparable with previous works [45].
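As a rough consistency check (reusing the illustrative youngs_modulus_from_saw helper sketched in Section II), the quoted velocities indeed reproduce the quoted moduli:

```python
for c in (5.25, 8.75, 10.5):  # measured SAW velocities, m/s
    print(f"{c} m/s -> {youngs_modulus_from_saw(c) / 1e3:.1f} kPa")
# 5.25 m/s -> 87.6 kPa, 8.75 m/s -> 243.5 kPa, 10.5 m/s -> 350.6 kPa
```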
C. MEASUREMENTS OF THE ELASTIC PROPERTIES OF HUMAN SKIN IN-VIVO
To test the performance of the proposed SS-OCEcp system on biological samples, we measured the elastic properties of human skin in four healthy individuals. In Figure 5, we show the normalized amplitude of the displacement of the skin tissue, caused by the SAW, at different locations on the hand for one exemplary healthy individual. The SAW velocity for the palm (8.52 ± 0.08 m/s) was found to be higher than the SAW velocity for the forearm (5.09 ± 0.06 m/s), which is expected since the palm skin is much stiffer than the forearm skin.
The obtained results for the SAW velocity in human skin are comparable with previous works [42]. In Figure 6, we show the calculated Young's modulus for all four volunteers, measured at the palm and forearm skin, as a box chart. It can be seen from Figure 6 that the measured values of Young's modulus for the palm are much higher than those for the forearm skin, demonstrating that our system can differentiate between skin types with different elastic properties.
D. REPEATABILITY MEASUREMENTS FOR THE MECHANICAL PROPERTIES
An important requirement for clinical systems is the ability to obtain reliable and repeatable results. For OCE systems, this is primarily limited by the accuracy with which the delay between the SAW at the excitation point and the SAW at the measurement point can be determined. Parameters such as the system's electronic jitter, the bandwidth of the mechanical wave, and the system stability limit the delay measurement accuracy. Although the temporal resolution of our system is 80 µs, we can estimate the peak of the SAW with an accuracy of 2.04 µs using the upsampled cross-correlation method. This sets the lower limit with which we can accurately measure the time it takes for the SAW to travel from the excitation point to the measurement point. We also performed repeated measurements on an agar phantom and on human skin with our common-path probe to characterize the inter-measurement variability. To quantify the variations between measurements, we recorded several measurements of the SAW time delay at the same location on the sample and then evaluated the spread in the measured values. The standard deviation of the measured SAW time delay for the 3% agar phantom and for palm skin was found to be 2.42 µs and 2.68 µs, respectively.
IV. DISCUSSION
We have developed a system with an SS-OCEcp approach to characterize the mechanical properties of biological samples. We used a piezo transducer and a single-mode optical-fiber-based common-path probe to generate and detect surface acoustic waves, respectively. By measuring the phase change between adjacent A-scans, we calculated the displacement of the tissue caused by the SAW. Due to the common-path approach, the undesired OPD change between the reference surface and the sample surface was minimized. In comparison to a non-common-path approach, the common-path approach achieved an order of magnitude better sensitivity in terms of phase or displacement measurement. The developed system allowed us to measure the extremely small movements (a few nanometers) inside the tissue caused by the SAW. Measuring the delay between the SAW at the excitation point and at the measurement point allowed us to calculate the velocity of the SAW in the sample, which reflects the mechanical properties of the sample. We tested the developed system using tissue phantoms made of agar, which is known to possess mechanical properties similar to soft biological tissue. We also measured the mechanical properties of skin at different locations on the hand, and the measured values were found to be similar to previously reported values.
One of the limitations of our current system is that we have performed measurements at a single tissue location. Previously, several studies have measured the mechanical properties of the tissue at multiple points along the SAW propagation path. Multi-point measurement allows the dispersion of the SAW to be measured as the waves propagate within and on the surface of the sample. The dispersion of the SAW can be used to calculate the depth-dependent mechanical properties of the sample, as low frequencies probe the properties of the deep tissue while higher frequencies probe mostly the surface properties [46]. This, however, should not be seen as a limitation of the system, as the single-point design greatly simplifies the probe, which is highly desirable in clinical settings. Moreover, non-scanning (single-point measurement) systems have been used extensively in previous studies to determine the mechanical properties of tissue, and our common-path approach, together with the portable system, will add immensely to such studies.
In our system, we have used a swept-source laser, which usually suffers from poor phase stability [47]. The phase stability of a swept-source system can be improved using a fiber Bragg grating [48]. The displacement measurement sensitivity of the common-path approach can be further improved using light sources such as superluminescent diodes. However, it should be noted that such sources can only be used in spectrometer-based OCT systems, which suffer from a degraded signal-to-noise ratio as the sample signal moves away from the zero optical path difference with respect to the reference surface.
From a clinical point of view, it might be more desirable to use a non-contact approach to excite the SAW in the sample, whereas in our current system we have used a piezo actuator for this purpose. Our common-path OCE probe can be combined with air-puff or nanosecond-laser excitation schemes to develop a non-contact OCE system. Nevertheless, our current approach demonstrates the superior performance of common-path OCE probes compared to non-common-path systems. The portable OCE system and the handheld probe with improved phase measurement sensitivity make an ideal combination for a clinical system that can be used to measure the mechanical properties of tissue in vivo. | 5,784.8 | 2021-01-01T00:00:00.000 | [
"Physics"
] |
Hyperkahler Sigma Model and Field Theory on Gibbons-Hawking Spaces
We describe a novel deformation of the 3-dimensional sigma model with hyperk\"ahler target, which arises naturally from the compactification of a 4-dimensional $\mathcal{N}=2$ theory on a hyperk\"ahler circle bundle (Gibbons-Hawking space). We derive the condition for which the deformed sigma model preserves 4 out of the 8 supercharges. We also study the contribution from a NUT center to the sigma model path integral, and find that supersymmetry implies it is a holomorphic section of a certain holomorphic line bundle over the hyperk\"ahler target. We study explicitly the case where the original 4-dimensional theory is pure $U(1)$ super Yang-Mills, and show that the contribution from a NUT center in this case is simply the Jacobi theta function.
Introduction and main results
It is a well-known principle that some aspects of quantum field theories become easier to understand when the theories are compactified to lower dimensions. This principle was exploited in particular in [5], where the wall-crossing phenomenon in N = 2 supersymmetric theories in four dimensions was studied by formulating the theories on S 1 × R 3 , with S 1 of fixed radius R. A crucial input to that analysis was a good understanding of the constraints imposed by supersymmetry [8,11]: they say that the IR Lagrangian of the compactified theory is (around a generic point of its moduli space) a sigma model into a hyperkähler manifold M[R]. The metric of M[R] typically depends in a highly nontrivial way on the parameter R, reflecting the fact that quantum corrections due to BPS particles of mass M scale as e −MR .
In this paper we consider a different but related problem: we begin again with an N = 2 theory in four dimensions, but rather than studying it on S¹ × R³, we take our spacetime to be a circle fibration over R³, with isolated degenerate fibers. Generically such a compactification would not preserve any supersymmetry, at least without some modification of the theory; however, we consider the special case where the spacetime X is actually hyperkähler (a Gibbons-Hawking space). Thus X has a metric locally of the form (1.1), where x is a coordinate on R³, V(x) is a function on R³ (with singularities), and B is a 1-form on R³. (More globally, dχ − B is a connection form on the circle bundle whose fiber coordinate is χ ∈ [0, 4π].) We take V → 1 as x → ∞, so R gives the asymptotic radius of compactification. The hyperkähler condition (1.2) relates B and V. Such an X has holonomy SU(2) rather than the generic (SU(2) × SU(2))/Z₂, and this reduced holonomy admits 4 covariantly constant spinors. Thus the resulting theory should have 4 supercharges.
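For orientation, a standard local form of a Gibbons-Hawking metric, written to be compatible with the conventions of §4 below (the normalization of the fiber coordinate and the sign of B are convention-dependent and may differ from (1.1)), is the following sketch:

```latex
ds^2 \;=\; V(\vec x)\, d\vec x\cdot d\vec x \;+\; R^2\, V(\vec x)^{-1}\,\Theta^2,
\qquad \Theta = d\chi + B,
\qquad \star_{(3)}\, dB \;=\; \tfrac{1}{R}\, dV .
```

The last (abelian monopole) equation is the hyperkähler condition referred to as (1.2).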
A deformed hyperkähler sigma model
The first main question we address in this paper is: what could the resulting theory look like from the three-dimensional point of view, after reducing on the circle fiber? Evidently it should be a deformation of the standard hyperkähler sigma model, which depends on the data of V and B, which reduces to the original model when V is constant and B = 0, and which preserves 4 supercharges when (1.2) is satisfied. In §2 below we present a candidate form for such a deformation: for the Lagrangian see (2.22). Our deformation involves some interesting geometry, which we now briefly describe.
• First, the Lagrangian involves a one-parameter family of hyperkähler spaces. More precisely, letting M denote the total space of this family, the Lagrangian involves a bilinear form g on the tangent space TM, which restricts on each fiber to a hyperkähler metric. The appearance of M might have been expected given the four-dimensional origin of the model: the spacetime metric (1.1) says that at different points of R³ we should see different effective radii.
Being hyperkähler, the fibers of M carry a CP 1 worth of complex structures. In the usual hyperkähler sigma model, all these would be on the same footing, but in the deformed model one of them is preferred.
• Second, the Lagrangian involves one extra coupling, of the schematic form (1.3), where A represents a U(1) connection in a line bundle L over the family M, and ϕ*A is its pullback to R³ via the sigma model field ϕ. (We ignore for a moment the global topological issues involved in writing down (1.3).)
• We study the conditions under which the deformed theory preserves 4 supercharges, and find the following interesting consequence. The family of manifolds M carries a preferred torsion-free connection ∇ in the tangent bundle, which preserves the preferred complex structures on the fibers M[ϕ₀] and agrees with the Levi-Civita connection fiberwise. Moreover, the ∇-covariant derivative of the bilinear form g is constrained in terms of F, as expressed in (2.16)-(2.17) below.
Contributions from NUT centers
The second main question we address is what happens around the points where the function V in (1.1) becomes singular, behaving as V ∼ R/r near each singularity (1.4). At these points (sometimes called "NUT centers") the circle fiber shrinks to zero size, and the dimensional reduction procedure needs to be modified. We deal with this by cutting out a small neighborhood of each NUT center; thus the physics very near the NUT center is "integrated out" and replaced by some effective interaction for the fields on the boundary S³. After compactification to three dimensions the boundary is an S².
At the lowest order in the derivative expansion, the boundary interaction is roughly a function Q(ϕ) of the value of ϕ along this S², i.e. a function Q : M → R (1.5). In §2.6 we work out the constraints imposed by supersymmetry on this kind of boundary interaction. The answer depends on a topological invariant of the situation, namely the degree k of the circle bundle over the boundary S², defined in (1.6). We find (with respect to the preserved complex structure on the fiber M[ϕ₀]) that ∂̄Q + kA^(0,1) = 0 (1.7). In particular, for a boundary component around a NUT center the circle bundle is the Hopf fibration S³ → S², which has degree k = −1, so (1.7) becomes ∂̄Q = A^(0,1) (1.8). Geometrically, in order for (1.7) to make global sense, Q should not be quite a function; rather, e^Q should be a section of L^k, where L is the line bundle introduced above (on which A is a connection). (1.7) then says that e^Q is actually a holomorphic section of L, with respect to a holomorphic structure on L determined by A^(0,1). This is a very strong constraint on Q, since any two such sections differ by a global holomorphic function, in the preserved complex structure, and M has rather few global holomorphic functions.
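A minimal way to see the holomorphic-section statement, under the conventions assumed here (e^Q a section of L^k, with kA the induced connection): since

```latex
\bar\partial\!\left(e^{Q}\right) + k\,A^{(0,1)}\, e^{Q}
\;=\; e^{Q}\left(\bar\partial Q + k\,A^{(0,1)}\right),
```

equation (1.7) is precisely the condition that e^Q be annihilated by the (0,1) part of the covariant derivative, i.e. that it be holomorphic with respect to the holomorphic structure defined by A^(0,1).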
Topological issues
Now let us return to some topological issues we ignored above. The term (1.3) is a bit subtle, since as written it only makes sense when B is a connection in a trivial bundle. By integration by parts we could try to move the problem over to A, but in the examples which occur in nature, both A and B are actually connections in topologically nontrivial bundles. The problem is similar to the one encountered in defining the U(1) Chern-Simons interaction ∫ dA ∧ A (1.9), except that in (1.3) two line bundles with connection are involved rather than one, and one of the bundles arises by pullback from the space M.
As with the usual Chern-Simons story, even though the action does not make global sense, the exponentiated action may still be well defined, provided that the coefficient of the problematic term is properly quantized. That is the case here, so (1.3) is not a problem, at least on a compact three-manifold.
However, as we have discussed above, we will want to consider the effective threedimensional theory on manifolds with boundary (obtained by cutting out the NUT centers.) In this case we meet a further subtlety in defining (1.3), again well known from Chern-Simons theory: even the exponential of (1.3) is generally not well defined as a complex number. Rather it must be interpreted as an element of a certain complex "Chern-Simons line" depending on the boundary value of ϕ. How are we to square this with the expectation from the original four-dimensional theory, where the exponentiated action seems to be a number in the usual sense? The resolution is that there is also a contribution from the boundary term Q(ϕ), and we recall that e Q is valued in the line L. Thus, everything will be consistent if the Chern-Simons line where the exponential of (1.3) lives is precisely the dual line L * . This is indeed the case.
The simplest example
In this paper we study in some detail one concrete example of this general story, the simplest possible one: pure N = 2 theory with gauge group U(1). By direct computation we find that the compactified theory is indeed described by a hyperkähler sigma model deformed in accordance with our general recipe. The space M in this case is a 5-manifold, fibered over the real line parameterized by ϕ₀ = V/R. Writing the fibers in terms of their preserved complex structure, we have M[ϕ₀] ≃ C × T² (1.10), where the T² factor has complex modulus τ given by the complexified gauge coupling. (This space is the most trivial example of a Seiberg-Witten integrable system.) The line bundle L has nontrivial topology over each T² fiber; more precisely, the possible topologies are classified by an integer (the degree), and L has degree 1. Holomorphically it is the famous "theta line bundle," of which the theta function is a holomorphic section. Not surprisingly, then, the boundary term e^Q turns out to be a theta function (1.11). As we have explained, this form is essentially dictated by supersymmetry, but we can also understand the appearance of this theta function directly: it arises from a sum over smooth U(1) instantons supported near the NUT center. We work this out in §5.
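For reference, one standard convention for the Jacobi theta function (the precise argument and prefactor appearing in (1.11) depend on the normalizations fixed in §5) is

```latex
\Theta(\tau, z) \;=\; \sum_{n\in\mathbb Z} e^{\,i\pi \tau n^{2} + 2\pi i n z},
\qquad
\Theta(\tau, z+1) = \Theta(\tau, z),
\qquad
\Theta(\tau, z+\tau) = e^{-i\pi\tau - 2\pi i z}\,\Theta(\tau, z),
```

and the quasi-periodicity in the second argument is what identifies it as a holomorphic section of a degree-1 line bundle over T².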
Discussion and connections
• Our results in this paper fit in well with the observation in [6] that in compactifications of the (2, 0) SCFT from 6 to 5 dimensions on a circle bundle one gets a 5-dimensional supersymmetric Yang-Mills theory, coupled to a 2-dimensional WZW model at each codimension-3 locus where the circle fiber degenerates. Indeed, upon further compactification on a Riemann surface C, this suggests that in the class S theory S[g, C], the contributions from NUT centers should be something like the partition function of the WZW model with group G. The particular case which we consider here is essentially the case G = U(1) and C = T², for which the WZW partition function is an ordinary theta function, indeed matching what we find for e^Q. It would be very interesting to understand how to recover the "nonabelian theta functions" by analogous computations in interacting four-dimensional field theories.
• The problem we consider bears some similarity to one described in [1], where the authors consider the dimensional reduction of the (2, 0) superconformal field theory from six to five dimensions on a circle bundle, and obtain a deformed version of five-dimensional super Yang-Mills. It would be interesting to know whether the two constructions fit inside a common framework.
• The original motivation for this work was the results of [4,3], where it was found that the moduli space M[R] which appears in compactification of an N = 2 theory on S 1 carries a natural line bundle V which admits a hyperholomorphic structure (see also [7,2] for mathematical accounts of the same bundle). In particular, it was conjectured in [4] that the contribution to the 3d effective theory from a NUT center would be a holomorphic section of V.
In this paper our formalism is slightly different from that envisaged in [4]. Our deformed sigma model involves a family of moduli spaces M[ϕ₀] which a priori have nothing to do with the spaces M[R], since the theory on a general Gibbons-Hawking space a priori has nothing to do with the theory on R³ × S¹. Still, the two models can be related to one another, at least when the original four-dimensional theory is conformally invariant. Indeed, by a local conformal transformation (rescaling the metric by 1/V) we can bring the Gibbons-Hawking metric (1.1) to a form which, when V is slowly varying, looks locally like a compactification on R³ × S¹ with S¹ of radius R/V. Using this relation we can try to compare our results with the expectations from [4].
In the example considered in §4 and §5, we indeed find that e Q is a holomorphic section of a holomorphic bundle L which is holomorphically equivalent to V.
2 The deformed hyperkähler sigma model
Fields of the undeformed model
The standard hyperkähler sigma model [9] in three dimensions involves a single hyperkähler target space M. Let the dimension of M be 4r. Recall that the complexified tangent bundle of M admits a decomposition where S is the (complex, two-dimensional) spinor representation of Spin(3) ≃ SU (2).
In this note, we use unprimed uppercase Latin letters for Sp(1) indices and primed uppercase Latin letters for Sp(r) indices. Spinor indices will be denoted by lowercase Greek letters while lowercase Latin letters are used to label the local coordinates on the hyperkähler space. Thus in components the fields would be written ϕ^i (i = 1, . . . , 4r) and ψ^{E′}_α (see §2.5).
Data for the deformed model
Our deformed model involves not a single hyperkähler space but a family of them, parameterized by a new scalar which we will call ϕ 0 . Let M denote the full family, which is (4r + 1)-dimensional, with local coordinates ϕ i (i = 0, . . . , 4r).
We emphasize that ϕ₀ is not a field in the deformed sigma model: rather it will be a fixed background function on R³. We require that ϕ₀ is harmonic on R³ (perhaps with singularities), and moreover that there is a line bundle over R³ (away from the singularities of ϕ₀) with connection B and curvature G such that G = ⋆₃ dϕ₀ (equivalently ½ε_{μνρ}G^{μν} = ∂_ρϕ₀, as used in §2.5 below). M carries a bilinear form g, which restricts to the hyperkähler metric on each fiber M[ϕ₀], but which need not be nondegenerate on the whole family.
M also carries a line bundle L with connection. This is one of the key new ingredients in our deformed model, with no direct analogue in the ordinary hyperkähler sigma model. Locally we may trivialize this line bundle and thus represent the connection by a 1-form A on M, whose curvature is F. We work in conventions where A and F are purely imaginary.
Being hyperkähler, each fiber M[ϕ 0 ] carries a family of complex structures parameterized by lines in H, i.e. points of the projective space P(H). In the deformed sigma model one point c ∈ P(H) will be distinguished, corresponding to a preferred complex structure on each fiber M[ϕ 0 ]. By a rotation of the complex structures on M we can always choose c A = (0, 1). In what follows we will always make this choice.
Thus we have two 2r-dimensional distributions T^{1,0}, T^{0,1} on M, consisting respectively of (1, 0) and (0, 1) vectors tangent to the fibers. They induce the structure of a (Levi-flat) CR manifold on M.
Hyperkähler identities and their extensions
The supersymmetry of the hyperkähler sigma model depends on certain identities which are part of the standard story of hyperkähler geometry. In our deformed hyperkähler sigma model we will need a slightly different geometric structure, which involves some extensions of these identities. Here we briefly review the relevant identities and state the requisite extensions.
One of the fundamental objects which enters the hyperkähler sigma model is the isomorphism e : T_C M → H ⊗ E, represented in local coordinates as e_i^{EE′}. This isomorphism takes the Levi-Civita connection in T_C M to an Sp(r) connection in E, which we write in local coordinates as q^{A′}_{jB′}. This statement is expressed by the identity (2.5). In the standard hyperkähler sigma model, (2.5), combined with the standard formula for Γ^k_{ji} in terms of g, ensures that the 1-fermion terms in the SUSY variation of the sigma model action vanish.
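As a hedged sketch of the shape of (2.5) and its extension (2.7), assuming the index conventions above (signs and exact index placements may differ from the authors'): covariant constancy of the intertwiner e reads

```latex
\partial_j\, e_i^{\;EE'} \;-\; \Gamma^{k}_{\;ji}\, e_k^{\;EE'} \;+\; q^{\;E'}_{j\;B'}\, e_i^{\;EB'} \;=\; 0 ,
```

and in the deformed model the same equation is imposed with q replaced by the shifted connection q̂ of (2.6) and with all of i, j, k running from 0 to 4r.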
In our deformed sigma model, the bundles E and H will be extended over the full M, as will the Sp(r) connection q in E; moreover the isomorphism e will be extended to a surjection e : T_C M → H ⊗ E. We will also extend the Levi-Civita connection to a connection ∇ in the full T_C M, given in coordinates by symbols Γ^k_{ji}, where now i, j, k run from 0 to 4r. Finally, we will define a shifted version of q, of the form (2.6), for some function f. The key identity (2.5) will then be extended to (2.7), where now all indices run from 0 to 4r. For i, j ≠ 0, we would like (2.7) to reduce to (2.5); thus we will require Γ^0_{ji} = 0 for i, j ≠ 0. In addition, we choose Γ^0_{0i} = Γ^0_{i0} = 0 (2.8). More invariantly, this says that the extended connection ∇ preserves the distribution of vertical tangent vectors on M and that ∇(∂₀) is also vertical. The remaining components Γ^k_{0i} and Γ^k_{i0} of ∇ are determined by requiring (2.7). Thus ∇ is completely determined once the extended e and q and the function f are given.
As we will show in §2.5 below, vanishing of the 1-fermion terms in the deformed sigma model action leads to a condition on ∇: for E = 2, we will need the two equations (2.9) and (2.10). The equations (2.9)-(2.10) constitute one of the main results of this paper.
To assess the geometric content of (2.9)-(2.10) it is convenient to look not at ∇ but rather at a closely related connection ∇̃. ∇̃ is characterized by the requirements that it is a real connection and that, for any vector fields Y and Z, (2.11) holds. In local coordinates this says that the coefficients Γ̃^k_{ji} agree with Γ^k_{ji} whenever k is a holomorphic direction or the ϕ₀ direction, while the Γ̃^k_{ji} for k an antiholomorphic direction are determined by the requirement that ∇̃ is real.
Using (2.11) one checks that ∇̃ is torsion-free: if X and Y are real then T_∇̃(X, Y) is also real, but the only way that something real can lie in T^{0,1} is if it actually vanishes, i.e. T_∇̃(X, Y) = 0.

Next, we consider (2.9). As E′ varies, with E = 2, the vector field e^l_{EE′} ∂_l runs over a basis for T^{0,1}; thus the quantity in parentheses in (2.9) vanishes whenever l is an antiholomorphic direction, for any i, j. Moreover, using the fact that Γ^0_{ij} = 0 and that g is Hermitian on each fiber of M, the only terms which contribute in (2.9) when l is antiholomorphic are those with k holomorphic; for these, moreover, we have Γ̃^k_{ij} = Γ^k_{ij}, so finally, for i, j ≠ 0 and l antiholomorphic, we obtain (2.14). Now since both g and Γ̃ are real, we may simply take the complex conjugate of (2.14) to get the same equation with l holomorphic. Thus (2.14) holds for all i, j, l ≠ 0. This is just the standard formula for the Levi-Civita connection. Thus (2.9) requires that ∇ agree with the Levi-Civita connection on each fiber of M.

Now consider what (2.9) says if we take i = 0, j ≠ 0, and l antiholomorphic: this gives (2.16). If i ≠ 0, j = 0, and l is antiholomorphic, we similarly obtain (2.17). The equations (2.16)-(2.17) express the constraint imposed by supersymmetry on the ϕ₀-dependence of the bilinear form g. It would be interesting to understand better their intrinsic geometric meaning. Note that if g is fiberwise covariantly constant then (2.16) reduces to the pleasant form ∇₀ g_{jl} + F_{jl} = 0; this is indeed the case in the simple example we consider in §4, but we do not know whether it holds generally.

Another identity, coming from the special form of the curvature of a hyperkähler manifold, is (2.18). The 3-fermion terms in the SUSY variation of the standard hyperkähler sigma model action vanish provided this identity is satisfied. Vanishing of the 3-fermion terms in the deformed model needs a simple extension of (2.18): we simply require that the same equation hold even for i = 0; this is (2.19).
Finally, with the definition (2.20), for i ≠ 0 we have the Bianchi identity (2.21). The identity (2.21) ensures that the 5-fermion terms in the SUSY variation of the standard hyperkähler sigma model action vanish. The same identity suffices for the deformed model as well (said otherwise, the extension of (2.21) to include i = 0 is automatically satisfied, since e^0_{AA′} = 0).
Action of the deformed model
The action for our deformed sigma model is given in (2.22). In the special case where G = 0 and ϕ₀ is constant, this action reduces to the undeformed hyperkähler sigma model as written in [9]. The SUSY transformations are generated by fermionic parameters ζ^α_E and ζ̄^E_α. In the undeformed sigma model there is an analogous set of conditions; these equations reduce the supersymmetries from 8 to 4. If we choose c_A = (0, 1) as mentioned above, the supersymmetries are generated by ζ^α_1 and ζ̄^2_α. The parameter ζ^α_E is not a constant spinor: it may depend on position through ϕ₀. We will require ∂_μ ζ^α_E + f(ϕ₀) ∂_μϕ₀ ζ^α_E = 0, where f(ϕ₀) is a function of the background scalar ϕ₀ only. Finally, since ϕ₀ is a background field we should have δ_ζ ϕ₀ = 0: thus we require e^0_{EE′} = 0.
For our purposes it will not be necessary to write the explicit reality condition on the spinors or the SUSY parameters. We will simply treat the barred and the unbarred spinors as independent 2-component complex spinors. We use the conventions (2.28) for contracting spinors and gamma matrices in three dimensions. (Note that the supersymmetries preserved are not those which correspond to an ordinary Kähler sigma model [10] into M with its preferred complex structure, although that model also has 4 supercharges. With our preserved supersymmetries we always have ζ^α_E ζ̄^E_β = 0, reflecting the fact that all translations are broken, as they must be, since the background field ϕ₀ generally has no translation symmetry.) The Sp(r) indices are raised and lowered by antisymmetric matrices ǫ_{A′B′} and ǫ^{A′B′} which are covariantly constant. The Sp(1) indices are similarly raised and lowered by constant antisymmetric matrices ǫ_{AB} and ǫ^{AB}.
SUSY verification: 1-fermion terms
The SUSY variation of the deformed sigma model action can be decomposed into terms involving 1, 3, and 5 fermions. The 3 and 5-fermion terms have exactly the same structure as in the undeformed hyperkähler sigma model, and the vanishing of these terms works out in the standard fashion; thus we relegate their discussion to Appendix B. The new contributions come exclusively from the 1-fermion terms; vanishing of these terms leads to an interesting new equation, as we shall now describe.
The terms with a single fermion in the variation of the Lagrangian are displayed below; note that we have only retained terms linear in ζ̄^E_α in the variation. Terms linear in ζ^α_E can be formally obtained by complex conjugation. We integrate by parts to put all derivatives on ϕ, and then rewrite the variation as δL = ζ̄^E_α δL^α_E with δL^α_E = δL^α_{βEE′} ψ^{E′β}. Using the relations ½{γ^ν, γ^μ} = g^{μν} and e^i_{EE′} g_{ij} = e_{jEE′}, as well as ∂_μ ζ̄^α_E + f(ϕ₀) ∂_μϕ₀ ζ̄^α_E = 0, the above expression may be reduced further. Using this we can eliminate the q_j in favor of the relevant components Γ^k_{ij}, and we split the result into pieces δL_{EE′} and δL_{ρEE′}. Recall that Γ^k_{ij} is not necessarily symmetric in its lower indices (the connection ∇ may have torsion); δL_{EE′} involves only the symmetric part of Γ^k_{ij}, while the antisymmetric part is contained in δL_{ρEE′}. Rearranging the terms in δL_{EE′}, and using dG = 0 and ½ε_{μνρ}G^{μν} = ∂_ρϕ₀ to consolidate the last two terms into ∂^ρϕ₀ ∂_ρϕ^i F_{ik} e^k_{EE′}, the condition for supersymmetry of the deformed hyperkähler sigma model can be summarized as the vanishing of these two pieces. These are precisely the equations which we wrote above in (2.9) and (2.10). We have now shown that they arise naturally by demanding that the 1-fermion terms in the SUSY variation of the deformed sigma model vanish.
Supersymmetry for boundaries
Now suppose we consider the deformed hyperkähler sigma model on a space with boundary, such as we will encounter upon integrating out a NUT center as described in the introduction. The deformed action including the boundary terms should still have the full supersymmetry which we have before integrating out. In this section we work out the condition this imposes on the boundary terms.
Let us consider what the boundary term in the action should look like if we restrict to constant fields (or, equivalently, consider just the lowest term in the derivative expansion). Then the boundary action is determined by some function H on M. The supersymmetry variation of this term has to be added to the extra boundary terms coming from integration by parts in the bulk variation. These arise only in the 1-fermion terms; here and below, N denotes a unit normal to the boundary. Collecting all terms involving ζ̄^E, and using the condition that the fields are constant over the boundary, the variation reduces to a single condition. After integrating over the boundary, this condition involves the degree k introduced in (1.6) and Q = H vol(∂). Finally, let W = exp(Q); W is the contribution to the path-integral integrand coming from this boundary. Then W obeys the boundary supersymmetry equation (2.47). Recalling that we require this only for E = 2 (but arbitrary E′), we obtain exactly (1.7), which appeared in the introduction. In §5.3 we will check this equation directly, in the particular example of the deformed 3D sigma model obtained by compactifying U(1) SYM on Taub-NUT space.
U(1) theories compactified on S 1
In this section we consider U(1) super Yang-Mills compactified on R 3 × S 1 and study the corresponding hyperkähler sigma model as a warm up example before taking on the case of U(1) super Yang-Mills compactified on Gibbons-Hawking space in §4. We first discuss the dimensional reduction of a 4-dimensional bosonic U(1) gauge theory and dualization of the resulting 3D action. We then present the analogous computation for U(1) super Yang-Mills and derive the related hyperkähler sigma model.
Bosonic U(1) gauge theory
The metric on R³ × S¹ is the flat product metric, with the S¹ of radius R.
Action and dimensionally reduced fields
The action of a pure U(1) gauge theory on R³ × S¹ is given by (3.1). Note that, in addition to the canonical kinetic term for the gauge field and the usual θ term, we have included an additional boundary term associated with the monopole charge of the gauge field, with coupling θ_m. For any field configuration invariant under translations along S¹, the boundary term reduces to a term proportional to θ_m l, where l is the monopole number; θ_m can therefore be interpreted as a potential conjugate to the monopole number. In particular, the exponentiated action is invariant under the shift θ_m → θ_m + 2π. To dimensionally reduce the theory on S¹, we impose the condition that the Lie derivative of the fields along the Killing vector field ∂/∂χ vanishes.
This implies that the curvature 2-form decomposes into a 3-dimensional piece F⁽³⁾ and a piece proportional to dσ, where σ is the scalar arising from the holonomy of the gauge field around S¹. Invariance under large gauge transformations requires that the scalar σ be periodic.
Star operators
For any 1-form α⁽⁴⁾ = α⁽³⁾ + α′ dy on the manifold R³ × S¹, the Hodge dual ⋆⁽⁴⁾α⁽⁴⁾ decomposes in terms of ⋆⁽³⁾, where ⋆⁽⁴⁾ and ⋆⁽³⁾ are the Hodge star operators on R⁴ and R³ respectively. Similarly, for any 2-form β⁽⁴⁾ on R³ × S¹ which admits a decomposition β⁽⁴⁾ = β⁽³⁾ + β′ ∧ dy (which is the case for the curvature 2-form under the condition of dimensional reduction), ⋆⁽⁴⁾β⁽⁴⁾ decomposes accordingly.
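As a hedged sketch (overall signs depend on orientation conventions not spelled out here), the decompositions used in this subsection take the form

```latex
\star_{(4)}\!\left(\alpha^{(3)} + \alpha'\, dy\right)
 \;=\; \left(\star_{(3)}\alpha^{(3)}\right)\wedge dy \;\pm\; \star_{(3)}\alpha' ,
\qquad
\star_{(4)}\!\left(\beta^{(3)} + \beta'\wedge dy\right)
 \;=\; \left(\star_{(3)}\beta^{(3)}\right)\wedge dy \;\pm\; \star_{(3)}\beta' ,
```

for a 1-form α⁽⁴⁾ = α⁽³⁾ + α′ dy and a 2-form β⁽⁴⁾ = β⁽³⁾ + β′ ∧ dy on R³ × S¹, where α⁽³⁾, β⁽³⁾, β′ have legs only along R³ and α′ is a function.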
Dimensionally reduced action and dualization
Using the above decompositions of 2-forms and star operators, we obtain (3.10), and hence the 3D action. To introduce a scalar dual to the 3D gauge field, one adds to the Lagrangian a Lagrange multiplier term (3.12) involving a new scalar γ. Note that the equation of motion for γ imposes the Bianchi identity dF⁽³⁾ = 0. One needs to make sure that the periods of F⁽³⁾ and dγ over the respective p-cycles are adjusted such that the Lagrange multiplier term in (3.12) does not change the path integral. If we require that γ is periodic, then the periods of F⁽³⁾ over 2-cycles must be integer multiples of 2π, as desired.
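Schematically (normalizations and factors of 2π suppressed, so this is only a sketch of the standard 3D abelian duality rather than the authors' precise conventions), the dualization step is

```latex
S \;\supset\; \int_{\mathbb R^3} \frac{1}{2e^2}\, F^{(3)}\wedge \star_{(3)} F^{(3)}
   \;+\; \frac{i}{2\pi}\,\gamma\, dF^{(3)}
\;\;\longrightarrow\;\;
\int_{\mathbb R^3} \frac{e^2}{8\pi^2}\, d\gamma\wedge \star_{(3)} d\gamma ,
```

where the arrow denotes integrating out F⁽³⁾ (treated as an unconstrained 2-form) via its equation of motion.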
One can then integrate out F⁽³⁾ (considered as an arbitrary 2-form) using its equation of motion, to obtain the dualized action. The boundary term S⁽³ᴰ⁾_bdry can be made to vanish by appropriately choosing the boundary configuration of the scalar field γ. If θ_e denotes the holonomy of the gauge field along the boundary S¹, the boundary conditions on the scalars γ and σ may be summarized in terms of θ_e, θ_m, and an integer n.
N = 2 U(1) gauge theory
In this section, we consider N = 2 U(1) SYM on R³ × S¹, which we dimensionally reduce along the circle direction to obtain a three-dimensional theory. We then dualize the gauge field in exactly the same way as shown above, to obtain the corresponding hyperkähler sigma model in 3D.
Dimensional reduction from 6D N = 1 super Yang-Mills
The 6D action is written with the metric η_MN = diag(−1, 1, 1, 1, 1, 1). The fermionic field ψ^a is a symplectic Majorana-Weyl spinor which transforms as a doublet of the SU(2) R-symmetry. The action in (3.17) is invariant under SUSY transformation rules generated by a parameter ζ^a, a Grassmann-odd symplectic Majorana-Weyl spinor obeying the Killing spinor equation on R³ × S¹ × T^{1,1}. Further details of our conventions for 6D spinors and the SUSY transformations of 6D N = 1 SYM can be found in Appendix C.
4D action and SUSY transformation
Next we dimensionally reduce the action, demanding that the Lie derivatives (which in components will just be represented as ordinary derivatives) of the gauge and fermionic fields along the torus directions vanish. The rules/notation for dimensional reduction from 6D to 4D spinors are given in Appendix D.
Dimensional reduction gives the action for N = 2 SYM in 4D, to which we add the standard theta term as well as a boundary term with coefficient θ m , just as we did in the case of the purely bosonic theory.
The SUSY transformation rules take the standard 4D N = 2 form. Note that both the theta term and our extra boundary term are separately invariant under SUSY.
3D action and dualization
Dimensional reduction along the circle direction is straightforward for the bosonic action. The reduction of the fermionic action and the SUSY transformations from 4D to 3D is described in Appendix D. The resulting 3D action, leaving out the auxiliary fields, is written in terms of the flat indices i, j = 1, 2, 3.
The corresponding SUSY transformation is The bosonic action (along with the θ-term) may be dualized as shown in Section 3.1. The final form of the 3D action is The SUSY transformation rules of the dualized 3D action are δγ = (τζ aλ a −τζ a λ a ), δσ = (ζ aλ a −ζ a λ a ), Backgrounds preserving the SUSY, i.e. obeying δ(fermions) = 0, are
U(1) SYM on R 3 × S 1 : hyperkähler sigma model picture
The dualized 3D action obtained above is an elementary example of a hyperkähler sigma model in 3D. In the context of the deformed hyperkähler sigma model proposed in this paper, this is an example of the "undeformed" case.
To recast the above 3d action into the standard form of a hyperkähler sigma model, we organize the scalar fields (ϕ i with i = 1, 2, 3, 4) as The SUSY transformations then reduce to From the dualized action (3.24) and the redefinition (3.27), the bosonic part of the action can be written as We now show how the fermions and the SUSY parameters in the UV Lagrangian of a U(1) SYM are related to the fermions and the SUSY parameters respectively in the corresponding HK sigma model. In particular, the UV Lagrangian has a manifest SU(2) R-symmetry which is not easily visible in the sigma model description. Relating the fermions on the two sides, among other things, clarifies the action of R-symmetry on the sigma model fermions.
The Sp(r) indices (primed indices) are raised and lowered by the following 2-form For the Sp(1) indices (unprimed indices), the corresponding 2-forms are The intertwiner can be explicitly written as Now writing the SUSY variations of the fermionic fields in the HK sigma model, we have (3.33) The fermions and SUSY parameters can now be easily related by comparing equation 3.28 and 3.33.
One can readily check that SUSY variations of the bosons match precisely with the above identification.
Therefore one can derive that which shows that q A ′ B ′ i = 0, ∀i.
U(1) theories on Gibbons-Hawking spaces
In this section, we present a nontrivial example of the deformed hyperkähler sigma model introduced in §2. We start with U(1) SYM on a general Gibbons-Hawking space and dimensionally reduce along the circle fiber to obtain an explicit form for the deformed hyperkähler sigma model in 3D. This allows one to compute the connection Γ on the family of hyperkähler manifolds M and directly check that the condition of supersymmetry derived for a generic sigma model in this class in §2 holds in this particular case.
Bosonic U(1) gauge theory on Gibbons-Hawking space
The action of a bosonic U(1) gauge theory on a Gibbons-Hawking space X is The metric on the 4-manifold X can be written in the form where Θ = dχ + B, with B ∈ Ω 1 (R 3 ) and ⋆ (3) dB = 1 R dV. Since our task is to reduce the action to flat 3D, we express all p-forms in terms of the orthogonal basis of dx 1 , dx 2 , dx 3 and Θ and rewrite four-dimensional star operators in terms of the three-dimensional (flat) ones, as we did before.
For a 1-form α ∈ Ω 1 (X), Similarly, for a 2-form β ∈ Ω 2 (X), one can show that Thus the bosonic action (4.1), dimensionally reduced to three dimensions, reads To dualize, we add the term (4.10) The equation of motion for F (3) modulo the boundary term is Integrating out F (3) using the above equation of motion, we arrive at the dualized 3D action: where S boundary = −2iR ∞ γF (3) . (4.13)
Adding the θ m term
Consider adding to the 4D action the boundary term (4.14), generalizing (3.2) above, where X∞ denotes the S¹ bundle over the S² at the boundary of the Taub-NUT space, as r → ∞. The terms ∆S_E and S_boundary cancel each other provided that, as r → ∞, γ and σ obey suitable boundary conditions; with this choice, we obtain the final form of the dualized 3D action. Note that the periodicity of θ_m is a bit subtler than it was in the case of R³ × S¹. In that case we had simply S_E(θ_m + 2π) = S_E(θ_m). In the present case the shift of S_E under θ_m → θ_m + 2π instead involves the degree k defined in (1.6), as expressed in (4.18). To see this, choose a section s of the S¹ bundle over the complement of one point in S²; then s is a 2-manifold sitting inside the boundary X∞. The boundary ∂s winds k times around one fiber of the circle bundle. On the other hand, since s has the topology of R², we can choose a global potential A⁽⁴⁾ along s, and thus we obtain the relation which gives (4.18).
N = 2, U(1) gauge theory on Gibbons-Hawking space
4.2.1 Dimensional reduction from 6D
N = 2 SYM on X can be obtained from an N = 1 theory on X × T^{1,1} using dimensional reduction. Let (x⁰, x⁵) denote coordinates along T^{1,1} while (x¹, x², x³, x⁴) are coordinates along X. The action of the 6D theory, written in terms of vierbeins, is invariant under the SUSY transformation generated by ζ^a, a Grassmann-odd symplectic Majorana-Weyl spinor which solves the Killing spinor equation on X × T^{1,1}, namely D_M ζ^a = 0. For the special case of a single-centered Taub-NUT space, we work out the solution of the Killing spinor equation in Appendix E; see (4.22). This solution shows that exactly half of the original SUSY on flat space is preserved. From a 4D standpoint, the fermionic parameters generating the preserved SUSY on NUT space are constant chiral spinors. This is indeed the preserved supersymmetry for any Gibbons-Hawking space X; the dualized 3D theory on R³, which we shall discuss momentarily, is the best place to demonstrate this.
4D action, SUSY and localization equations
The standard 4D SYM action on X can be obtained by dimensional reduction of the 6D action discussed above. As in the case for R 3 × S 1 , we add the standard topological term and a boundary term to the bosonic action.
The SUSY transformation is generated by a chiral half of the supersymmetry parameters on R³ × S¹. To ensure the convergence of the 4D path integral, one needs to consider the theory reduced from a Euclidean version of the 6D theory; this can be achieved by setting A₀ = iA^E₀, with A^E₀ real. With this modification, the localization equations determine the solutions for the bosonic fields given in (4.25).
3D action: 4D instanton and Bogomolny equations
The 3D action may be obtained by dimensional reduction of the N = 1 theory on X × T^{1,1}, as shown in the previous section. Note that F⁽³⁾ is not the curvature of a gauge field in three dimensions, since dF⁽³⁾ does not vanish identically. The SUSY transformations can be summarized as before; the condition δλ_{α,a} = 0 gives a modified version of the Bogomolny equation. Therefore, the localization equations lead to the solution (4.30) for the bosonic fields, after making the substitution A₀ → iA^E₀ to ensure the convergence of the path integral, as mentioned earlier.
Note that equations (4.25) and (4.30) are consistent. Recalling the decomposition of the four-dimensional star operator in terms of the three-dimensional one, one finds that the modified Bogomolny equation obtained in 3D is equivalent to the 4D instanton equation (the anti-self-duality equation) on X.
Dualized 3D action and SUSY
The bosonic part of the action may be dualized as before. (4.32) where i, j = 1, 2, 3 and the rules for SUSY are as follows: Note that the 3D dualized action in (4.32) follows from dimensional reduction of U(1) SYM on a generic Gibbons-Hawking space parametrized by the scalar function V and the 1-form B (and not just NUT space). One can then directly check that this action is invariant under SUSY rules summarized in (4.33) for a constant ζ a . Therefore a general Gibbons-Hawking space preserves exactly the same supersymmetry as a NUT space (Appendix E). The localization equations for the dualized 3D action can be read off from (4.33).
U(1) SYM on Gibbons-Hawking Space: hyperkähler sigma model picture
The dualized 3D action obtained above is an elementary example of the deformed hyperkähler sigma model introduced in §2.4. To recast the above 3D action into the standard form of a hyperkähler sigma model action, we organize the scalar fields (ϕ i with i = 1, 2, 3, 4) in the following manner: (4.36) SUSY transformations then reduce to the following form Defining ϕ 0 = V R , the bosonic part of the action can therefore be written as Unlike the case of U(1) SYM on R 3 × S 1 , the connection A on the line bundle L is nontrivial in this case. The nonzero components of the connection and the curvature are As in the case of U(1) SYM on R 3 × S 1 , Sp(r) indices (primed indices) are raised and lowered by the antisymmetric pairing The intertwiner e can be explicitly written as The fermions and the SUSY parameters in the UV Lagrangian of U(1) SYM may be related to the fermions and the SUSY parameters respectively in the corresponding hyperkähler sigma model. From the discussion in §3.3, we find that half of the SUSY parameters have to be set to zero, namely The fermions and SUSY parameters can now be easily related: (4.43) With the above identification, one can readily check that the SUSY transformation of the scalars and fermions in the sigma model matches (4.37). Since ζ a α is a constant spinor, the above identification immediately implies (4.44) One can also read off q A ′ B ′ i directly by comparing the fermionic actions in the two descriptions: Therefore, the effectiveq A ′ 0B ′ that appears in the extended hyperkähler identity (2.7) will be given asq Given the explicit forms ofq A ′ B ′ 0 and f (ϕ 0 ), one obtains the following nontrivial components of the connection in the extended hyperkähler identity (2.7): Now we can check whether this connection obeys the constraints arising from the vanishing of 1-fermion terms in the SUSY variation of the action, (2.9)-(2.10), which we derived for a general deformed hyperkähler sigma model which preserves some supersymmetry. Note that the 3-fermion constraint (2.19) is satisfied trivially in this case. For U(1) SYM on X, the non-trivial part of the first constraint, namely for j = 0 and arbitrary i, assumes the particular form From the structure of the intertwiners specified in (4.41) and the nonzero components of Γ k ij in (4.47), it is clear that there are only two nontrivial components one needs to check, namely for l =ȳ, i = y, and l =φ, i = φ.
In the first case, we have In the second case , the constraint is satisfied trivially since each of the terms in the equation is individually zero.
The connection derived in (4.47) for the hyperkähler sigma model which arises from the circle compactification of U(1) SYM on X space therefore obeys the first SUSY constraint (2.9). Finally, the second SUSY constraint (2.10) can be written as which is trivially satisfied in this case, since all the relevant components of the connection vanish.
In appendix A, we consider the sigma model again after rescaling the adjoint scalar so that the metric looks closer to the one obtained via compactification on R 3 × S 1 , with an effective radius R e f f = R/V.
Setup
So far we have described the local physics of the 3-dimensional sigma model which one obtains by starting with the pure U(1) N = 2 gauge theory in four dimensions and dimensionally reducing on a Gibbons-Hawking space GH. Now suppose we consider the actual compactified theory as opposed to the naive dimensional reduction. On general grounds we would expect that the local physics of this theory at energy scales E ≪ V/R and E ≪ dV can be described by the same fields which appear in the dimensionally reduced theory. In fact, here we can say more: since the four-dimensional theory is free (even on the Gibbons-Hawking space) the IR physics of the true compactified theory is governed by the same Lagrangian we obtain by dimensional reduction -there are no quantum corrections.
More precisely, what we have described so far is the physics on the locus where V is finite, and hence the fiber of GH is a circle of finite size. In any complete example where V → 1 at infinity, V must have singularities: otherwise it would be a bounded harmonic function on all of R³ and hence constant. We assume GH is smooth; then at these singularities we must have precisely V ∼ R/r (recall that R is the asymptotic radius of the circle of GH, and r is the distance from the singularity). At these points our dimensional reduction procedure breaks down.
How should these singularities be incorporated in the reduced theory? We adopt a brutal approach: cut out a neighborhood of each singularity in GH, of radius L ≫ R, and then study the compactified theory at energies E ≪ 1/L ≪ 1/R. In four-dimensional terms, the resulting spacetime has a boundary with one S 3 component for each singularity we cut out; in the compactified theory the corresponding boundary components have the topology of S 2 . The physics of the compactified theory is described by the same local Lagrangian as before, plus some new, unknown boundary interaction at the new S 2 . At energy E ≪ 1/L this interaction will be well approximated by the leading term in the derivative expansion, namely the 0-derivative term, which we may write as Q(ϕ) for some function Q on M.
To determine Q(ϕ) explicitly we will compute the partition function Ψ of the U(1) gauge theory on a particular Gibbons-Hawking space, namely Taub-NUT space, characterized by the harmonic function V = 1 + R/r.
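For orientation, the Gibbons-Hawking ansatz underlying this discussion can be written out explicitly. The normalization of the circle coordinate below is a choice made here only for illustration and should be matched against the paper's own conventions; the only input taken from the text is V = 1 + R/r for Taub-NUT.

\[
ds^2_{\mathrm{GH}} \;=\; V(\vec x)\, d\vec x\cdot d\vec x \;+\; V(\vec x)^{-1}\big(d\psi + \vec A\cdot d\vec x\big)^2,
\qquad \vec\nabla\times\vec A \;=\; \vec\nabla V ,
\]

with V a harmonic function on R^3. Taub-NUT space corresponds to V = 1 + R/r, for which the fiber circle has an asymptotic size set by R and shrinks smoothly at the single NUT center r = 0.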
One way of doing this computation is to work directly with the UV description of the theory. We obtain an answer which in principle can depend on various choices involving the boundary at spatial infinity: we have a complex parameter a which gives the asymptotic value of the complex scalar of the theory, an angle θ e which gives the asymptotic value of the holonomy of the U(1) gauge field around the circle fiber, and a parameter θ m which is inserted explicitly into the boundary term (4.14) in the action.
On the other hand we can also work with the IR description just discussed. In this version of the story, the parameters a, θ e , θ m enter on a more equal footing: they determine a point of the target M of the sigma model, which gives a Dirichlet boundary condition for the sigma model fields at infinity. Since we are in the limit E ≪ 1/L and the sigma model is IR free, the partition function up to overall constant will be simply the contribution from constant fields; and since the bulk action vanishes on constant fields, the answer will come just from the boundary term on the S 2 we have cut out around the NUT center. Thus we get Ψ = e Q . (5.1) Comparing this with the UV computation thus determines Q.
UV computation
The bosonic part of the dualized 3D action for a Gibbons-Hawking space X is expressed in terms of the scalar fields φ = A_5 − i A^E_0 and φ̄ = A_5 + i A^E_0.
Instanton configurations
As explained in the previous section, the path integral of U(1) super Yang-Mills on X is completely localized on a set of instantonic configurations, which can be written in terms of the 3D fields. Noting that F^(3) = −iV (Im τ)^{-1} ⋆_3 (dγ − (Re τ) dσ) and demanding that γ → θ_m/4πR asymptotically (so that the boundary terms vanish as explained earlier), one obtains the corresponding solution for γ and σ. Note that this is a particular case of (4.35) where F = 0. Since d(γ − τσ) = 0 for this configuration, the only contribution to the action comes from the topological term. Now, let us evaluate the action in the special case where X is NUT space, where in the final step we define y = (θ_m − τθ_e)/4π and ȳ = (θ_m − τ̄θ_e)/4π. Note that the action has the expected periodicity properties.
Holomorphy of the boundary terms
The above formula for the partition function can now be used to explicitly check the equation for boundary supersymmetry (2.47). Writing Ψ(θ_e, θ_m, τ) in terms of the coordinates φ, φ̄, y, ȳ on M, we get

Ψ(y, ȳ, τ) = e^{8iπ X(y,ȳ,τ)} Θ(τ, 2y).

Recall the formula for the connection A_i derived in §4.3. Given the half supersymmetry which is preserved, (2.47) reduces to two component equations. The φ̄ component of the equation is trivially satisfied, since A_φ̄ = 0 and Ψ is independent of φ̄. For the ȳ component, the equation can be verified directly, where for the final equality we have used the formula (5.10) for A.
A U(1) SYM on NUT space as hyperkähler sigma model: rescaled version
In this section, we again consider the hyperkähler sigma model obtained from U(1) SYM on NUT Space via circle compactification, but after rescaling the adjoint scalar so that the metric looks closer to the one obtained via compactification on R 3 × S 1 .
After this rescaling, the bosonic part of the action can be written in the standard form, and the intertwiners can again be written explicitly. To express the fermionic action and the SUSY transformation in terms of the "effective" radius 1/ϕ_0, one needs to rescale the fermionic fields and the Killing spinor appropriately. The fermionic action and the rules of SUSY variation in terms of the rescaled fields then follow. Comparing the above SUSY transformation with the standard form of the SUSY transformation for the deformed hyperkähler sigma model allows one to relate the fermions in the two descriptions as before. Since ζ′^a_α = ζ^a_α V^{1/4} with ζ^a_α being a constant spinor, the above identification immediately implies the analogue of (4.44). One can also read off q_0 directly from the fermionic action. Therefore, the effective q̄_{A′0B′} that appears in the extended hyperkähler identity follows, and given the explicit forms of q̄_{A′B′0} and f(ϕ_0), one obtains the nontrivial components of the connection from the extended hyperkähler identity (2.7).
Now, we can readily check whether this connection obeys the SUSY constraints (2.9)–(2.10), which we derived for a general deformed hyperkähler sigma model. For U(1) SYM on NUT space, the non-trivial part of the first SUSY constraint (2.9) takes a specific form. From the structure of the intertwiners specified in (A.2) and the nonzero components of Γ^k_{ij} in (A.9), it is clear that there are only three nontrivial components that one needs to check, namely l = ȳ, i = y; l = φ̄, i = φ; and i = 0, l = φ̄.
In the first case, we have while the second case leads to For the third case, we get The connection derived in (A.9) therefore obeys the first SUSY constraint (2.9). Finally, the second SUSY constraint (2.10) can be written as which is evidently satisfied in this case.
B SUSY variation of the hyperkähler sigma model: 3-fermion and 5-fermion terms
The 1-fermion terms in the SUSY variation of the deformed hyperkähler sigma model were analyzed in §2.5. In this appendix, we show that the 3-fermion terms and the 5-fermion terms also vanish, such that the sigma model action is indeed invariant under the SUSY transformation (2.23)–(2.25). We show that the vanishing of the 3-fermion terms requires that the generalization of the hyperkähler identity associated with the special form of the curvature on a hyperkähler manifold, given by (2.19), is satisfied. Similarly, the vanishing of the 5-fermion terms requires that the Bianchi identity, given by (2.20), is satisfied.
3-fermion terms
Let us consider the 3-fermion terms in the SUSY variation first.
Relabeling indices, this becomes
which expands out into several terms. Now we may divide this into the terms involving derivatives of fermions and those involving derivatives of bosons. First, the terms with derivatives of fermions add up to zero. Next, the terms with derivatives of bosons may be relabeled. Finally, using the gamma matrix identity

γ^α{}_{νβ} ((γ^ν γ^µ)^σ{}_τ + g^{µν} δ^σ_τ) − (β ↔ τ) = 0,

one can modify the variation above into a form which indeed vanishes according to the extended hyperkähler identity (2.19).
5-fermion terms
Now, consider the 5-fermion terms in the SUSY variation.
which we may reorder/relabel, or equivalently rewrite using the symmetry under the exchange αβ ↔ δω, where in the last step we have used the definition of B_{iA′B′C′D′} in (2.20). Note that B_{iA′B′C′D′} is completely symmetric in the Sp(r) indices; this follows from the symmetry property of Ω_{A′B′C′D′} and the above definition. Now, using the identity

γ^{µα}{}_β γ^δ{}_{µω} = 2δ^α_ω δ^δ_β − δ^α_β δ^δ_ω,

one can simplify the expression, and the 5-fermion term reduces to a simpler form. Now recall the Bianchi identity from (2.21). Using the Bianchi identity, the 5-fermion term in the SUSY variation evidently vanishes.
C 6D spinors and N = 1 SYM in 6D
In this section we explain our conventions regarding 6D spinors and provide a few more details about the SUSY transformation of fields in 6D N = 1 SYM.
The Lagrangian of the 6D theory is written with the following metric on flat space: η_MN = diag(−1, 1, 1, 1, 1, 1). The fermionic field ψ^a is a symplectic Majorana-Weyl spinor which transforms as a doublet of the SU(2) R-symmetry. An SO(1, 5) spinor can obey the Weyl condition, and its conjugate has the same chirality, but it does not obey the standard "Majorana" condition. However, when combined with the SU(2) R-symmetry, one can impose a modified reality condition on these spinors: the "symplectic Majorana" condition.
where Γ 7 is the chirality matrix in 6D defined as
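For concreteness, one common convention for the objects just introduced is sketched below; the chirality sign and the choice of charge-conjugation matrix C vary between references, so this is illustrative rather than the paper's own definition:

\[
\Gamma_7 \;=\; \Gamma^0\Gamma^1\Gamma^2\Gamma^3\Gamma^4\Gamma^5,\qquad
\Gamma_7\,\psi^a=\psi^a,\qquad
\psi^a \;=\; \epsilon^{ab}\,C\,\bar\psi_b^{\,T},\qquad
\bar\psi_a\equiv(\psi^a)^\dagger\Gamma^0 ,
\]

i.e. a Weyl condition supplemented by a reality condition that ties the two SU(2) components to each other rather than each spinor to itself.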
SUSY transformation
The action in equation (C.1) is invariant under the SUSY transformation rules given below. Note that the SUSY parameter ζ^a is a Grassmann-odd symplectic Majorana-Weyl spinor and a solution of the Killing spinor equation on R^3 × S^1 × T^{1,1}. In equation (C.3), we used that ψ̄_a Γ^M ζ^a = −ζ̄_a Γ^M ψ^a, which follows from the general relation involving 6D spinors. The SUSY variation for ψ̄_a can be obtained as follows:
SUSY invariance of the action
The variation of the bosonic part of the Lagrangian is The variation of the fermionic terms in the Lagrangian is To obtain the final equation one needs to use the Bianchi identity for the gauge field i.e. dF = 0, in addition to the following identities involving gamma matrices: Therefore, from (C.6) and (C.7), we have
Closure of the SUSY algebra
Since we took the SUSY parameter ζ a to be Grassmann-odd, the operator δ SUSY acts on the fields as a bosonic operator. Therefore, one needs to compute the action of the commutator of two such operators on the fields to check the closure of the SUSY algebra.
The action of two successive SUSY transformations produces a translation with the parameter v^N = 2ζ̄′_a Γ^N ζ^a.
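Schematically (a standard statement for free SYM, written here only to illustrate what closure up to a translation means; the precise normalization depends on the conventions of appendix C), the commutator acts on the gauge field as

\[
[\delta_1,\delta_2]\,A_M \;=\; v^N F_{NM} \;=\; v^N\partial_N A_M \;-\; \partial_M\!\big(v^N A_N\big),
\qquad v^N = 2\,\bar\zeta'_a\Gamma^N\zeta^a ,
\]

i.e. a translation along v^N accompanied by a field-dependent gauge transformation, while on the fermion the algebra closes in the same way once the equation of motion is used.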
D 4D and 3D spinors
In this section, we spell out the connection between spinors in 6D Minkowski space and spinors in 4D and 3D Euclidean space. To go back and forth between the 6D and the 4D description, we choose a representation of the 6D gamma matrices in which γ^m are the 4D gamma matrices and γ^5 is the 4D chirality operator. In this representation, we may write any 6D Dirac spinor ψ^a as the doublet ψ^a = (ψ^a_1, ψ^a_2)^T, where each of the entries is a four-component complex spinor.
E Killing spinor on TN_4 × T^{1,1}

The metric on the space TN_4 × T^{1,1} is written in local coordinates with θ ∈ [0, π], φ ∈ [0, 2π], χ ∈ [0, 4π] and V(r) = 1 + R/r. The isometry group of TN_4 (a hyperkähler manifold of quaternionic dimension 1) is U(1) × SU(2), and the corresponding Killing vectors can be written explicitly. The X_i satisfy the su(2) Lie algebra, while X_0 is the Killing vector which generates the U(1) isometry. We will solve the Killing spinor equation on NUT space in a gauge (i.e. for a certain choice of vierbeins) where the invariance of the Killing spinor under the U(1) isometry becomes manifest. Therefore, we choose vierbeins such that e^4 is proportional to (dχ − cos θ dφ); the independent, nonzero spin connections then follow from this choice (E.14).
Solution for the Killing Spinor
The Killing spinor equation on the manifold TN_4 × T^{1,1} is D_M ζ^a = 0. In terms of the local coordinates of (E.1), the components of the Killing spinor equation are as follows:

∂_r ζ^a = 0,
∂_θ ζ^a − (1/2) Γ_{12} ζ^a + (R/4rV) (Γ_{12} + Γ_{34}) ζ^a = 0,
∂_φ ζ^a + (1/2) sin θ Γ_{31} ζ^a − (1/2) cos θ Γ_{23} ζ^a − (R sin θ/4rV) (Γ_{31} + Γ_{24}) ζ^a − (R² cos θ/4r²V²) (Γ_{41} − Γ_{23}) ζ^a = 0.

The terms dependent on the radial coordinate drop out if we choose the spinor such that (1 − Γ_1 Γ_2 Γ_3 Γ_4) ζ^a = 0; therefore, for any given representation of the gamma matrices Γ_{ab}, the solution of this equation can be written down explicitly. With this projection condition imposed, the Killing spinor equation is clearly the same as that for R^3 × S^1 × T^{1,1} written in spherical polar coordinates on R^3, for which we know the Killing spinor to be simply a constant spinor. Therefore, for a particular choice of vierbeins (compatible with Cartesian coordinates on R^3), the Killing spinor on TN_4 × T^{1,1} is simply a constant spinor obeying the above projection condition. The above computation also shows that constancy and covariant-constancy are equivalent for any spinor on the manifold TN_4 × T^{1,1} if the spinor obeys the projection condition.
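As a short consistency check of the statement above (not part of the original text; it assumes Euclidean gamma matrices with Γ_a Γ_b + Γ_b Γ_a = 2δ_{ab}, so that (Γ_{ab})² = −1 for a ≠ b), one can verify that the projection removes every R-dependent term:

\[
\Gamma_{1234}\,\zeta^a=\zeta^a,\qquad
\Gamma_{1234}=\Gamma_{12}\Gamma_{34}=\Gamma_{31}\Gamma_{24}=-\Gamma_{41}\Gamma_{23}
\;\Longrightarrow\;
\Gamma_{34}\zeta^a=-\Gamma_{12}\zeta^a,\quad
\Gamma_{24}\zeta^a=-\Gamma_{31}\zeta^a,\quad
\Gamma_{23}\zeta^a=\Gamma_{41}\zeta^a,
\]

so that (Γ_{12}+Γ_{34})ζ^a = (Γ_{31}+Γ_{24})ζ^a = (Γ_{41}−Γ_{23})ζ^a = 0, and only the flat-space pieces of the equations above survive.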
Dimensional reduction of the Killing spinor equation
On dimensionally reducing the theory along the S^1 fiber, we demand that ∂_χ ζ = 0, which is automatically true for the chiral spinor that solves the Killing spinor equation D_χ ζ = 0. The spinors generating the supersymmetry transformations in the dimensionally reduced theory are therefore expected to be given by the remaining components of the Killing spinor equation, viz. D_m ζ = 0, with m = r, θ, φ. We will now express this equation in terms of the three-dimensional covariant derivative.
We will be interested in reducing the theory to flat 3D space, with the metric ds² = dr² + r²(dθ² + sin²θ dφ²) (E.21). Choosing e^1 = dr, e^2 = r dθ, e^3 = r sin θ dφ, the spin connections for the above metric follow.
"Physics"
] |
Investigating the reinforcing mechanism and optimized dosage of pristine graphene for enhancing mechanical strengths of cementitious composites
The proposed reinforcing mechanism and optimized dosage of pristine graphene (PRG) for enhancing mechanical, physicochemical and microstructural properties of cementitious mortar composites are presented. Five concentrations of PRG and two particle sizes are explored in this study. The results confirmed that the strength of the mortars depends on the dosage of PRG. The PRG sizes have a significant influence on the enhancement rate of mechanical strengths of the mortars, whereas they do not have a significant influence on the optimized PRG dosage for mechanical strengths. The PRG dosage of 0.07% is identified as the optimized content of PRG for enhancing mechanical strengths. The reinforcing mechanism of PRG in cement-based composites is mostly attributed to adhesion friction forces between PRG sheets and cementitious gels, which depend strongly on the surface area of the PRG sheets. A larger PRG sheet provides a larger friction area with the cementitious gels, which is suggested to be one of the favourable parameters for enhancing mechanical strengths with graphene additives.
Introduction
Cementitious composites are the most common construction materials because of their low cost, availability, and high strength in compression. Nevertheless, cementitious composites are weak in tensile strength and poor at resisting crack propagation and corrosive environments, e.g. sulphate ions and chloride ions. 1,2 To address these drawbacks, studies have shown the benefits of using reinforcement such as steel, carbon or plastic fibres 3,4 to impede the propagation of microcracks, or additives with nanomaterials such as SiO2 and TiO2, 5,6 carbon nanofibers and carbon nanotubes [7][8][9] to accelerate the cement hydration process and create materials with denser microstructures. [10][11][12][13] However, these supplementary materials are zero- or one-dimensional materials with limited performance in bonding and arresting cracks at the nanoscale, and are unable to efficiently enhance the reinforcement. [7][8][9]14 Recent studies have shown that two-dimensional materials such as graphene derivatives have good potential for enhancing the performance of cement composites because of their excellent properties and high aspect ratios. 15,16 The applications of different graphene derivatives with different properties (e.g. graphene oxide (GO), reduced graphene oxide (rGO), and pristine graphene (PRG)) in cementitious composites have been explored in the literature. [17][18][19] GO has been the most attractive graphene material due to its favourable functional groups on the surface (e.g. carboxyl and hydroxyl), which provide higher reactivity with cement and high dispersion in water. Many studies reported that the addition of GO into cement composites could significantly improve their mechanical properties. [17][18][19] Kang et al. 20 reported that incorporating GO into cement-based mortars improved the 28 day compressive and flexural strength by approximately 32% (at 0.05% GO) and 20% (at 0.1% GO). Zhao et al. 21 showed that incorporating 0.022% GO into cement mortars produced a 34.1% and 30.4% enhancement in 28 day compressive and flexural strength, respectively. The influence of the dosage and size of GO on the microstructure of cementitious mortar composites has also been reported in the literature. 22,23 Lv et al. 23 showed that as the size of GO decreased from 430 nm to 72 nm, the enhancement rates of the 28 day compressive and flexural strengths could be increased from 29.5% and 30.7% to 38.2% and 51.9%, respectively.
The mechanism of the enhancement of GO-cement based composites is attributed to the considerable effect of the oxygen-functional groups of graphene oxide on the cement matrix. Smaller GO sheets have more oxygen-functional groups than larger ones, which leads to stronger adhesion forces between the functional groups and the cementitious gels. Based on GO research, several studies reported that the reinforcing mechanism for the mechanical strengths of GO-cement based composites is mainly governed by chemical reactions between the hydroxyl and carboxyl groups of GO and the mediating Ca 2+ ions from the calcium silicate hydrate of the cementitious gels. This results in a space network structure in the cement matrix that supports the load transfer efficiency in cementitious composites. 14,24 Compared with GO, PRG is a remarkably different graphene material with a very limited level of oxygen groups, higher crystallinity, fewer defects, and significantly stronger mechanical properties. 25,26 Therefore, there has been significant research interest in using PRG in cementitious composites. 17,18,[27][28][29][30] The limited water dispersion of PRG sheets (PRGs) has been addressed in recent studies by using superplasticizer and ultrasonication methods. 27,[30][31][32] It has been shown that a small amount of PRG has great potential to enhance the strength of PRG-cement composites. [27][28][29][30]32,33 Wang et al. 30 reported that 0.05% of PRG could enhance the 7 & 28 day flexural and compressive strengths of cement-based mortars by 23.5% & 16.8% and 7.5% & 1.3%, respectively. Besides, the influence of the dosage of pristine graphene on the strength of cementitious mortar composites has been reported in recent studies. 27,28,33 Baomin and Shuang 33 investigated the use of four different dosages of PRG in cement paste and reported that the optimal PRG dosage of 0.06% could increase the 28 day compressive and flexural strength of the cement paste by approximately 11% and 27.8%, respectively. Tao et al. 28 studied a combination of cement-based mortars and five dosages of PRG. The results showed that the mortar with 0.05% PRG could enhance the 28 day compressive strength by about 8.3% and the 28 day flexural strength by about 15.6%. However, when the dosage of PRG exceeded 0.05%, the strengths started to reduce because of the agglomeration of PRGs. In our previous work, we performed a comprehensive investigation of the dosage dependence of PRG-cement based mortars using seven dosages of pristine graphene. 27 The results revealed that the compressive and tensile strengths at 7 days & 28 days of the mortar containing 0.07% pristine graphene were significantly improved, by approximately 36%. These studies, however, only showed the dependence of the strength of cementitious composites on the pristine graphene dosage. The reinforcing mechanism of pristine graphene on the strength of cementitious composites has not been well understood. In addition to the dosage of PRG, there are several parameters of PRGs, such as particle size, level of defects and number of layers, that can affect the performance of PRG-cement based composites. As reported in ref. [34][35][36], these parameters have a considerable effect on the mechanical properties of polymer composites. However, there have been very limited studies on the effects of these parameters on the mechanical strengths of PRG-cementitious composites. To date, only a few studies have applied molecular dynamics simulation methods to investigate the interaction between PRGs and cementitious gels at the atomic level. 37,38 The outcomes of these studies showed that the pull-out behaviour of PRG in cementitious gels is governed by the interfacial interaction and crack-surface adhesion forces of the PRG-cementitious gels. Although these studies provided initial knowledge of the incorporation of pristine graphene into cementitious gels, there is still a lack of experimental confirmation of the reinforcing mechanism of PRGs in cementitious composites.
On the other hand, studies on GO-cement showed a wide range of optimum dosages of GO (i.e. from 0.01% to 1%) for improving the strengths of the composites, [17][18][19] which is one of the bottlenecks in practical applications. The dosage dependence of mechanical properties of cementitious composites prepared with pristine graphene on the dosages of PRG showed a much better convergence in the optimal PRG dosage range (i.e. from 0.05% to 0.07%) even though they differed from mix designs and PRG materials used. 27,28,33 These studies showed great potential for applying a small amount of PRG additives in construction materials to improve their mechanical strengths and other properties. However, very limited studies have been done to explore the consistency of these optimum PRG dosages used in cement composites which can support future studies on designing tests for investigating other properties of cementbased materials by graphene additives. Moreover, pristine graphene materials have better crystalline structures and properties than graphene oxide and they are now produced with high quality and low costs. As a result, it is expected that industrially produced PRG materials will be more acceptable additives for improving the properties of construction materials.
This study aims to provide an in-depth investigation of the aforementioned issues, with a focus on revealing the reinforcing mechanism and optimized dosage of industrially manufactured PRG materials for enhancing the strengths of cement-based mortars. The impact of the dosage of pristine graphene with two different particle sizes on the mechanical and microstructural properties of cementitious mortar composites is explored, compared, and presented in this research. From the findings and comprehensive analyses of this study, we provide new inputs toward a better understanding of the proposed reinforcing mechanism and optimized dosage of pristine graphene for the strength of cementitious mortars. This paper not only provides a better understanding of incorporating PRG into cementitious composites, but also shows the great potential of low-cost, industrially produced PRG materials for addressing current drawbacks of cementitious materials.
Materials
PRG materials manufactured by First Graphene Ltd in Perth, Australia, were used in this study (Table 1). These PRG materials were produced by an electrochemical process, a unique manufacturing route that uses electrochemistry to exfoliate PRGs with few layers, large particle sizes and low defects from graphite, which is not achievable by other methods (e.g. thermal or chemical methods from rGO). The general schematic mechanism of PRG materials produced by an electrochemical exfoliation process is illustrated in Fig. S1 (ESI †). The binder was general-purpose ordinary Portland cement (OPC), whose chemical composition (Table 2) complied with the Australian Standard AS 3972-2010. 39 Natural sand with a maximum particle size of 2.36 mm was used as the fine aggregate for all mortar mixes. Table 3 presents the properties of the superplasticizer used in this study, MasterGlenium SKY 8100, a polycarboxylic ether polymer added to improve the dispersion of PRG in aqueous solution, which complied with the Australian Standard AS 1478.1-2000. 40
Preparation of the mortar composites
The design mixes are given in Table 4. As shown in the table, a total of 9 unique mortar mixes were prepared, including five different concentrations and two different sizes of PRG, i.e. 0%, 0.05%, 0.07%, 0.1% and 0.3% mixes and average PRG diameters of 56 µm (S1) and 23 µm (S2). Table 4 shows the labels used for the mixes. S1 and S2 refer to PRG with an average size of 56 µm and 23 µm, respectively. The number after that indicates the PRG dosage in each mix, which is calculated by the weight of the binder. For example, S1-0.05 indicates the PRG-cement based mortar prepared with a PRG size of 56 µm and a PRG content of 0.05%.
The procedures described below were applied to prepare the PRG-cement based mortars. The aqueous solution, consisting of water, superplasticizer and pristine graphene, was first prepared. Sonication was then carried out using an UIP1000hdT ultrasonicator for 30 minutes. After that, the sonicated aqueous solution was gradually added over five minutes into the dry mix of OPC and natural sand, which had been mixed for four minutes. A vibration table was used to vibrate the specimens for one minute to remove entrapped air; they were then covered with wet fabrics to limit moisture loss and demolded after 24 hours of curing at room temperature. After that, they were cured in a fog room at a temperature of 23 ± 2 °C until the testing ages.
Test methods
Mechanical strengths (compression and tension) of the cementitious composites were determined at 7 and 28 days according to ASTM standards C109/C109M-07 (ref. 41) and C307-03, 42 respectively. These tests were performed to investigate the influence of the dosage of pristine graphene with two different particle sizes on the mechanical and microstructural properties of cementitious mortar composites. The mechanical strengths of each design mix were determined by averaging the values of three samples. An analysis of variance was performed to assess the statistical significance of the difference in mechanical strengths of the mortars containing PRG in the optimum dosage range. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) were carried out using an FEI Quanta 450 to analyze the surface morphologies and elemental compositions of the materials. The particle size distribution of the PRGs used in this study was measured using a Malvern Mastersizer 2000. A Rigaku MiniFlex 600 X-ray diffractometer was used for X-ray diffraction (XRD) analyses to determine the mineralogical characteristics of the hydration products of the cementitious composites and the interlayer distances in the pristine graphene sheets. The XRD was carried out at 40 kV and 15 mA, 2θ = 5–80° at a 0.02° step size. Thermogravimetric analysis (TGA) of the PRG samples was conducted with a Mettler Toledo TGA/DSC 2 (heating rate of 10 °C min⁻¹ under an air atmosphere with a flow rate of 60 ml min⁻¹). Raman spectroscopy (LabRAM HR Evolution, Horiba Jobin Yvon Technology, Japan) with a 532 nm laser (mpc3000) as the excitation source in the range of 500-3500 cm⁻¹ was utilized to study the vibrational characteristics of the carbon materials. All spectra were collected at an integration time of 10 s for 3 accumulations using a 100× objective lens with a spot size of 100 µm. Raman mapping was performed over a 20 µm × 20 µm area with 2 µm steps, and Raman spectra at 121 points in total were collected for each sample. A Nicolet 6700 was used for Fourier transform infrared spectroscopy (FTIR) analyses to identify the functional groups of the materials.
Characteristics of pristine graphene sheets
The physical properties and morphology of the industrially manufactured pristine graphene with two different particle sizes used in the study are characterized and summarized in Table 1 and Fig. 1, respectively. Irregular shapes of the PRGs were observed in the SEM images, as shown in Fig. 1. These SEM images and the particle size distributions shown in Fig. 1(a) and (b) present a considerable difference between the particle sizes of 56 µm (S1) and 23 µm (S2), respectively. Table 1 also shows that the physical properties of these two PRGs are similar and differ only in particle size. Fig. 2(a)-(c) presents the Raman spectra and Raman I_D/I_G maps of both PRG samples. The Raman peak at the 2D band can be used to indicate the number of layers in the graphene samples based on the frequency shift and the shape of the 2D peak. It can be seen in Fig. 2(a) that a narrow and symmetric 2D band at 2709 cm⁻¹ is observed in these PRG samples, which is different from graphite materials that have a broad and asymmetric 2D peak located at 2719 cm⁻¹. Besides, their relative intensity ratios I_D/I_D′ and I_2D/I_G shown in Fig. 2(a) are respectively 1.58 & 1.59 and 0.39 & 0.32, which are below 3.5 and 1. Moreover, the distribution histograms of the relative intensity ratios I_D/I_G of both PRG sizes obtained from the mapping study are mostly below 0.4-0.6 (Fig. 2(b) and (c)). These combined results confirm that both PRG samples are high-quality products with low defects, and that they mostly consist of few-layer sheets of graphene, [43][44][45][46] which are critical for their optimized performance in cement-based composites. 47,48 This demonstrates the high crystalline structure and quality of the pristine graphene materials used in this investigation. FTIR spectra in Fig. 2(e) show the major characteristic bands for both PRG sizes: 1000 cm⁻¹ to 1240 cm⁻¹ is attributed to C-O groups; 1700 cm⁻¹ and 2500 cm⁻¹ to 3300 cm⁻¹ are attributed to C=O stretching and O-H stretching. These functional groups indicate the existence of carboxylic acids (i.e. COOH) in both PRGs, which are likely present in limited numbers at the edges of the PRG structure. The stretching vibration from 1300 cm⁻¹ to 1600 cm⁻¹ corresponds to C=C groups. These are consistent with FTIR results on pristine graphene materials in the literature, which show minor oxygen groups at the edges of their structures. 49,50 Fig. 2(f) shows a typical TGA-DTG graph of the PRG samples. The figure shows that the maximum thermal decomposition peak of both PRG samples is at about 700 °C, which is different from GO and rGO, 49,51,52 demonstrating the high quality of the PRG materials used in this study. The typical decomposition patterns of TGA-DTG curves of GO and rGO are also provided in Fig. S2 (ESI †) for comparison purposes.
It is important to note that, in this study, two industrially produced PRG samples from the same manufacturing process were used; both are high-quality products with similar physicochemical properties, and their main difference is particle size. Therefore, the influence of other parameters of the PRG materials (e.g. level of defects and number of layers) on the strength of the cementitious mortar composites is negligible, and the main PRG parameter responsible for the different mechanical results of the mortars containing the two PRG samples in this study is the difference in PRG size. Fig. 3 shows the compressive strengths and strength enhancements of the cement-based mortars with different PRG dosages and sizes at 7 and 28 days. As shown in Fig. 3(a)-(c), the addition of PRG to the cementitious composites increases the 7 day and 28 day compressive strengths of the mortars. The mixes containing the larger PRG size S1 and the smaller PRG size S2 have their optimal PRG dosage at 0.07% and 0.1%, respectively. When PRG is used beyond the optimal dosage, the compressive strengths of the mortars for both PRG sizes start decreasing. As shown in the figure, at the optimal dosage of 0.07% PRG, the 7 day and 28 day compressive strengths of the mix containing the larger PRG size S1 (i.e. S1-0.07) are approximately 50 MPa and 56.3 MPa, respectively. These are 36.8% and 34.3% higher than the corresponding strengths of the plain mortar (36.5 MPa and 42 MPa, respectively). For the mortar containing the smaller PRG size S2 at its optimal dosage of 0.1%, the 7 day and 28 day compressive strengths are 40.4 MPa and 48.6 MPa, respectively, which represent only a 10.6% and 15.7% increase compared to the plain mortar.
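To make the reported enhancement rates easy to reproduce, the short script below recomputes them from the mean strengths quoted above (plain mortar: 36.5/42 MPa; S1-0.07: 50/56.3 MPa; S2-0.1: 40.4/48.6 MPa at 7/28 days). This is an illustrative sketch, not part of the original study; it simply applies enhancement = (strength − plain)/plain × 100, and the mix labels are reused only for bookkeeping.

```python
# Hedged sketch: recompute the strength-enhancement percentages quoted in the text.
# Values (MPa) are the reported means; small differences from the quoted percentages
# are expected because the means themselves are rounded in the text.
strengths = {
    "plain":   {"7d": 36.5, "28d": 42.0},
    "S1-0.07": {"7d": 50.0, "28d": 56.3},
    "S2-0.1":  {"7d": 40.4, "28d": 48.6},
}

def enhancement(mix, age, data=strengths):
    """Percentage increase of a mix over the plain mortar at a given age."""
    plain = data["plain"][age]
    return 100.0 * (data[mix][age] - plain) / plain

for mix in ("S1-0.07", "S2-0.1"):
    for age in ("7d", "28d"):
        print(f"{mix} at {age}: {enhancement(mix, age):.1f}% over plain")
# Output is close to the 36.8%/34.3% (S1-0.07) and 10.6%/15.7% (S2-0.1) quoted above.
```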
Mechanical properties of mortar mixes
The tensile strengths and strength enhancements of the cementitious mortar composites with the different dosages and sizes of pristine graphene at 7 and 28 days are shown in Fig. 4. From Fig. 4(a)-(c), it can be seen that incorporating PRG into the cementitious composites increases the 7 day and 28 day tensile strengths of the mortars. The trend in tensile strengths is similar to that in compressive strengths for both PRG samples at both ages. As shown in the figure, at the optimal dosage of 0.07% PRG, the 7 day and 28 day tensile strengths of the mix containing the larger PRG size S1 are 3.89 MPa and 4.62 MPa, respectively, which are approximately 25.3% and 26.9% higher than those of the plain mortar at these testing days. The reduction in the enhancement rates of the mechanical strengths of the mortars when PRG is used beyond the optimal dosage can stem from the poor dispersion of the PRG suspension due to the van der Waals forces between PRGs. This leads to the agglomeration of PRGs and the formation of multi-layer PRGs, which hinders the enhancement of the hydration process by PRGs as well as their interaction with the cementitious gels. 27,49,53 Figs. 3(c) and 4(c) show the optimal PRG range for both compression and tension at the 7 day and 28 day ages of the PRG-cement based mortars. For both pristine graphene samples the optimum lies in a relatively low dosage range of 0.07% to 0.1%, and the strength improvement rates are between 10.4% and 36.8%. A variance test was performed to assess whether the difference in the strength of the mortars containing PRG in the optimal range (i.e. 0.07% and 0.1%) is statistically significant. To do this, an analysis of variance based on the theory of the null hypothesis with a significance level of 0.05 was carried out (the details of this method can be seen in ref. 54 and 55). The results of the analysis of variance test are shown in Table 5. It is evident from the table that the difference between S1-0.07 and S1-0.1 in 28 day compressive strength is statistically significant (i.e. P-value = 0.018 < 0.05). However, there are no statistically significant differences in the tensile and compressive strengths at curing ages of 7 and 28 days of the other mixes between 0.07% and 0.1% (i.e. P-values > 0.05). Moreover, for the PRG dosage of 0.07%, the enhancement rates of the 7 day & 28 day compressive strengths and tensile strengths of the mix containing the larger PRG size S1 are approximately 3.5 & 2.4 times and 1.2 & 2.1 times higher than those of the mix containing the smaller PRG size S2, respectively. From these analytical results, it can be concluded that the PRG size has a significant effect on the enhancement rates of the mechanical strengths of the mortars, whereas it does not have a significant influence on the optimized PRG dosage for the mechanical strengths of the mortars. Therefore, the pristine graphene dosage of 0.07% is identified as the optimized content of PRG for enhancing the strength of cementitious mortar composites for both sizes. The reinforcing mechanism of pristine graphene on the strengths of the mortars will be discussed in detail in Sections 3.3 and 3.4.
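The variance test described above can be reproduced with a standard one-way ANOVA. The sketch below uses scipy.stats.f_oneway on hypothetical triplicate strength values (the individual sample values are not reported in the text, so the numbers here are placeholders, not the study's data); only the comparison logic and the 0.05 significance level follow the description above.

```python
# Hedged sketch of the analysis-of-variance check described above.
# The triplicate strengths are made-up placeholders; the paper reports only means.
from scipy.stats import f_oneway

s1_007_28d_compression = [57.1, 55.8, 56.0]  # hypothetical 3 samples for S1-0.07 (MPa)
s1_010_28d_compression = [52.9, 52.4, 52.8]  # hypothetical 3 samples for S1-0.1 (MPa)

f_stat, p_value = f_oneway(s1_007_28d_compression, s1_010_28d_compression)

alpha = 0.05  # significance level used in the study
print(f"F = {f_stat:.2f}, p = {p_value:.3f}, significant at 5%: {p_value < alpha}")
```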
Physicochemical and microstructural characterizations of mortar mixes
Complementary XRD, FTIR, and SEM-EDX characterizations were performed to examine the influence of the different dosages and sizes of pristine graphene on the physicochemical and microstructural characteristics of the composites. Three different PRG concentrations were selected for analysis: 0%, 0.07% and 0.3%, which represent the plain mix, the mix with the optimized dosage, and the mix with the highest considered PRG dosage, respectively. However, the XRD and FTIR analyses of the smaller PRG size S2 are only presented at 0.07% PRG content for comparison purposes.
3.3.1. XRD and FTIR characterizations. There are four main components of the OPC binder, i.e. tricalcium silicate or alite (C3S), dicalcium silicate (C2S), tricalcium aluminate (C3A), tetracalcium ferroaluminate (C4AF), and a small amount of gypsum. The hydration products of the cement matrix resulting from chemical reactions between these components and water 56,57 can be described by the following equation:

(C3S, C2S, C3A, C4AF) + gypsum + H2O → calcium silicate hydrate (CSH) + portlandite (CH) + sulphoaluminates (mostly ettringite (AFt)) + part of monosulphoaluminate (1)

Table 5 Analysis of variance tests for evaluating the difference in 7 day and 28 day mechanical strengths of the mortars containing PRG in the optimal dosage range (0.07% PRG and 0.1% PRG)
Difference of levels | Difference of means | T-Value | Adjusted P-value | Evaluate significant differences
S1-0.07 – S1-0.1 (7 day compression) | 1.110 | 0.57 | 0.596 | No
S1-0.07 – S1-0.1 (7 day tension) | 0.043 | 0.39 | 0.716 | No
S1-0.07 – S1-0.1 (28 day compression) | 3.615 | 3.85 | 0.018 | Yes

Fig. 5 presents the XRD patterns of the different mortar mixes (i.e. the plain, S1-0.07, S2-0.07, S1-0.3) at 28 days of curing age. As shown in eqn (1), the products of the Portland cement hydration process consist of CSH gels, CH and AFt. Among them, CSH gels are the main part contributing to the mechanical strengths of cementitious composites. Therefore, samples with a larger amount of CSH gels can have better strength properties. In XRD analysis, although there are some difficulties in identifying CSH phases, which are often amorphous, 22,58,59 the content of CSH gels and the hydration degree of the binder can be estimated from the content of portlandite and un-hydrated cement particles (e.g. C3S, C2S). 58,59 The XRD spectra of all samples were standardized at the peak at 26.7° to ensure the amount of natural sand in the specimens is equal. 22,59,60 It can be seen in Fig. 5(a) that although the samples containing pristine graphene show spectra similar to the plain mortar, they have different intensities, which might cause differences in their mechanical properties. From Fig. 5(a) and (b), the peaks of the portlandite phases can be identified at 18.2°, 34.2° and 47.1°; 59,61 the intensities of these peaks differ between mixes. The highest value is observed for S1-0.07, followed by S1-0.3, S2-0.07, and the plain. This reveals that the hydration degree of the cement paste of the mixes containing PRG is higher than that of the plain mix, which is consistent with previous research on PRG-cement composites. 30,59 In addition, the figures also show that the peaks at scattering angles of 29.5° and 32.3° from un-hydrated alite 58,61 have different intensities for these mixes, the highest being in the plain mix followed by the PRG-cement samples. This can be attributed to the beneficial effect of PRG on the cement hydration process, which might lead to the creation of more CSH gels in the cement matrix. 22,58,59 This is in agreement with the observed trends of the mechanical results of the mixes analyzed in Section 3.2.
The FTIR spectra of the different mixes (i.e. the plain, S1-0.07, S2-0.07, S1-0.3) at the 28 day testing age are shown in Fig. 6. The figure shows that the spectra of these samples are similar, meaning that no new specific groups are observed when PRG is added. The bands in the ranges 400-550 cm⁻¹ and 800-1200 cm⁻¹ are attributed to Si-O bonds in the CSH gels. 62,63 The band from 2800 to 3600 cm⁻¹ represents O-H groups in H2O belonging to the CSH gels. [62][63][64] The narrow band in the range of about 3600-3650 cm⁻¹ is attributed to portlandite, i.e. O-H bonds. 62,65 The band at 1350-1550 cm⁻¹ is attributed to C-O bonds in calcium carbonate. 63,64 Fig. 6 also shows that although these mixes have similar spectra, the spectral intensities representing CSH gels and portlandite differ between the mixes. The mortars with pristine graphene materials show stronger intensities than the plain mix, and the strongest intensity is observed in the S1-0.07 mix. This may be due to the higher degree of cement hydration in the mixes containing pristine graphene, leading to the enhancement in mechanical strengths of those mixes compared with the plain mix. This is in agreement with the mechanical strength and XRD results shown above.
3.3.2. SEM characterizations. Fig. 7(a)-(j) shows a series of SEM images of the observed microcrack patterns and crystals in the different PRG-cement based mortars at the 28 day testing age, i.e. the plain mortar, S1-0.07, S2-0.07, S1-0.3, and S2-0.3. As shown in the figure, although these mixes have similar components in their structures (e.g. CSH, CH, AFt and pores), the distribution and compaction of these components at the microscale are different. The plain mix (Fig. 7(a) and (b)) exhibits a higher content of pores and a higher density of microcracks in its microstructure compared to the other mixes. This explains the lower strengths of the plain mix compared with the mixes prepared with pristine graphene. It can be seen in Fig. 7(c)-(j) that, for a given PRG dosage, the mixes prepared with the larger PRG size (S1) exhibit better microstructure patterns than those with the smaller PRG size (S2). The crystal content and compactness of the PRG-cement samples are also altered by the different PRG dosages for both PRG sizes, with the densest microstructure at 0.07% PRG content, followed by 0.3%. As shown in Fig. 7(c)-(f) (i.e. at the optimized dosage of 0.07% PRG), the SEM images of the mix containing the larger PRG size (Fig. 7(c) and (d)) show not only a more compact microstructure but also denser interfacial transition zones (ITZ) between the cementitious gels and fine aggregates than those of the mix containing the smaller PRG size (Fig. 7(e) and (f)). This can contribute to more efficient stress distribution and better inhibition of crack propagation in the structure of the S1 series, resulting in improvements in their mechanical properties, 14,22,30 and this will be discussed further in Section 3.4.
The reinforcing mechanism of PRG for enhancing the strength of cementitious composites
The strengths of traditional cementitious composites depend on the strengths of the Portland cementitious gels, which are formed by the chemical reaction between cement powder and water. The most important product of the cement hydration process is the CSH gel, which contributes most of the strength of Portland cementitious gels. 2,66 Similar to traditional cement mortar, the strengths of PRG-cementitious mortar composites are governed by the PRG-cementitious gels, which are created by the interaction between the PRG structure and the Portland cementitious gels (CSH gels). Fig. 8 outlines a general illustration of the proposed mechanism, showing the interaction of the PRG and CSH structures as a key parameter for the enhancement of PRG-cementitious gels in PRG-cement based mortars.

Fig. 8 The outline of the proposed mechanism for the formation and enhancement of cementitious gels by PRGs.

As mentioned in the Introduction section, for GO-cement based composites the reinforcing mechanism of the mechanical properties was proposed to result from chemical reactions between the oxygen-functional groups (i.e. hydroxyl and carboxyl groups) of GO and the mediating Ca 2+ ions from the cementitious gels, resulting in the formation of strong interference bonds between GO and cementitious gels. This improves the space network structure in the cement matrix that supports the load transfer efficiency in structures, resulting in the improvement of the mechanical properties of cement composites. 14,24 However, the level of these oxygen-functional groups at the edge of the PRG structure is very limited, and hence their contribution to the strength enhancement of PRG-cementitious gels is less significant. Moreover, as discussed in Section 3.1, both PRG samples come from the same manufacturing process, are high-quality products with similar physicochemical properties, and differ only in particle size. Consequently, the main factor reinforcing the strength of PRG-cementitious gels must be related to the interaction between the basal planes of PRGs and the CSH gels, which depends on the surface areas of the PRGs. This means that PRGs with larger particle sizes have larger basal plane areas to interact with the surrounding CSH gels, which leads to a stronger connection between them in the cement matrix. This finding is strongly supported by the considerable difference in the mechanical results of the PRG-cement based mortars between the larger PRG size 56 µm (S1) and the smaller PRG size 23 µm (S2), as discussed in Section 3.2.
The increase in mechanical strengths of PRG-cement mortars can be explained as follows (which is also supported by the findings presented in the next paragraphs): (1) partly by the improvement of the cementitious gels due to the closer distance between the particles of the cement binder caused by van der Waals forces between PRGs; 17 (2) most importantly, by the adhesion friction forces between the surface areas of PRGs and CSH gels. These adhesion friction forces are a combination of crack-surface adhesion forces (created by atoms near crack surfaces during the pull-out process 38) and friction forces between the surface areas of PRGs and CSH gels, which depend on the particle sizes of PRGs and increase with increasing graphene size. This was also demonstrated in the study conducted by Chen et al. 38 using Molecular Dynamics (MD) simulations to investigate the interaction mechanism of PRGs (with low surface roughness) and CSH gels. The benefit of PRG size in enhancing PRG-cementitious gels is clearly supported by the experimental results of this study.

Fig. 9 (a-f) Energy-dispersive X-ray results confirm the combination of PRGs and cement gels in the cement matrix; (g) the detailed outline of how PRGs enhance the properties of cementitious gels when sustaining external loads; (h) the outline of crack paths of PRG-cement based composites under external loads.

At the optimized dosage of 0.07%, the 28 day compressive and tensile strength enhancements of the larger PRG size S1 mix are 2.4 and 2.1 times those of the smaller PRG size S2 mix, respectively. This is because the larger PRG size has a larger contact surface area, which contributes to a stronger adhesion friction force compared with the smaller PRG size. It is also noted that both PRG samples have similar thicknesses and densities (Table 1), so there is no significant difference in their specific surface areas (in m² g⁻¹) at the same dosage. However, the contact surface area of each individual PRG sheet with the CSH gels differs significantly (the area of the larger PRG size, 56 µm (S1), is approximately 6 times that of the smaller PRG size, 23 µm (S2), as shown in Fig. 8).
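The factor of roughly six quoted above follows from a one-line estimate. Treating each sheet as a plate whose basal-plane area scales with the square of its mean lateral size (an idealization used only for illustration; the real sheets are irregular), the reported mean diameters give

\[
\frac{A_{S1}}{A_{S2}} \;\approx\; \left(\frac{d_{S1}}{d_{S2}}\right)^{2} \;=\; \left(\frac{56\ \mu\mathrm{m}}{23\ \mu\mathrm{m}}\right)^{2} \;\approx\; 5.9 \;\approx\; 6 .
\]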
The interface between PRGs and the cementitious gels and the propagation of microcracks in the cement matrix were investigated, and the results are presented in Fig. 9 and 10. As can be seen in Fig. 9(a)-(f), the EDX results indicate that the carbon contents of spectra 1, 2, and 3 are dominant and much higher than those of the nearby spectra (i.e. spectra 4 and 5). This confirms that the cement matrix contains a combination of PRGs and cementitious gels. Fig. 9(g) depicts the detailed outline of the reinforcing mechanism and crack propagation in the cement matrix due to the PRG additives. The figure also shows that the combination of PRGs and CSH gels can enhance the cementitious gels around the PRGs and create interlocked PRG-cementitious gels in a space network structure, resulting in effective stress distribution. Figs. 9(g) and 10 also show that PRGs can lengthen crack paths through crack bridging, crack branching, and crack deflection, which has the benefit of reducing crack widths in structures. As a result, it can be said that PRGs of the larger size create larger interaction areas with the CSH gels, and hence larger strengthened areas, which is more beneficial for their interlocking in the cement matrix compared to the smaller size. This finding is consistent with the important role of PRG size in the adhesion friction forces of PRG-cementitious gels, as discussed in the previous paragraph.
Moreover, as discussed in Section 3.2, the strength enhancement rates of the mortars start decreasing when PRG is used beyond the optimum dosage due to the agglomeration of PRGs in the cement matrix, which can also be explained from the SEM images in Fig. 9(g). From the figure, it can be seen that when agglomeration of PRGs occurs, many layers of PRGs stack together and form multi-layer PRGs. As a result, the adhesion friction forces between the pristine graphene sheets and the cementitious gels are diminished due to the weak van der Waals bonds among the multi-layer PRGs, which cause debonding and displacement between those PRGs when sustaining external loads, resulting in a decrease in strength. Based on the SEM images and the above analyses, the crack paths of PRG-cement based composites under external loads are outlined in Fig. 10. These findings provide a better knowledge of the reinforcing mechanism of PRGs on the strengths of cementitious composites prepared with the PRG additive, through the enhancement of the PRG-cementitious gels, the load-transfer mechanism, and the crack paths of the composites.
Conclusions
This study has presented the proposed reinforcing mechanism and optimized dosage of pristine graphene additives for improving the mechanical strengths of cementitious mortar composites. The main findings of this study are drawn below: The strengths of the mortars depend on the PRG dosage and size. The PRG size (as changed from 23 µm (S2) to 56 µm (S1)) has a significant effect on the enhancement rates of the mechanical strengths of the mortars, whereas it does not have a significant influence on the optimized PRG dosage for the mechanical strengths of the mortars. The PRG dosage of 0.07% is identified as the optimized concentration of PRG for enhancing the strength of cementitious mortar composites.
At the optimized dosage of 0.07%, the enhancement rates of the 7 day & 28 day compressive strengths and tensile strengths of the mix containing the larger PRG size S1 are approximately 3.5 and 2.4 times & 1.2 and 2.1 times higher than those of the mix with the smaller PRG size S2, respectively. The mortars show less improvement in strength when PRG is used beyond the optimal dosages. This is due to the van der Waals forces between PRGs, which cause the agglomeration of PRGs and the formation of multi-layer PRGs, hindering the enhancement of the hydration process by the PRGs.
The reinforcing mechanism of PRG on the strengths of cementitious composites is mostly attributed to the adhesion friction forces between pristine graphene sheets and cementitious gels. These can enhance the cementitious gels around the PRGs and create interlocked PRG-cementitious gels in a space network structure, resulting in effective stress distribution. As a result, the mixes containing the larger PRG size (S1) show higher strength improvements than those containing the smaller PRG size (S2). This is because the larger PRG size has larger interaction areas with the CSH gels, leading to larger strengthened areas that are more beneficial for their interlocking in the cement matrix compared to the smaller size.
The results from the microstructural analyses have indicated that there is a correlation between the strength of cementitious composites and their microstructures: the mixes with higher strengths often have better microstructure patterns.
The results of this study have not only provided a better understanding of incorporating PRG into cementitious composites, but have also shown the great potential of low-cost, industrially produced PRG materials for improving the performance of cement-based construction materials. The study also provides a valuable orientation for studying PRG-cement based composites, so that further investigations of other properties of pristine graphene and cementitious materials can be performed with less time, effort, and cost.
Conflicts of interest
There are no conflicts to declare.
"Materials Science"
] |
Crossover from photon to exciton-polariton lasing
We report on a real-time observation of the crossover between photon and exciton-polariton lasing in a semiconductor microcavity. Both lasing phases are observed at different times after a high-power excitation pulse. Energy-, time- and angle-resolved measurements allow for the transient characterization of the carrier distribution and effective temperature. We find signatures of Bose–Einstein condensation, namely macroscopic occupation of the ground state and narrowing of the linewidth in both lasing regimes. The Bernard–Douraffourgh condition for inversion was tested and the polariton laser as well as the photon laser under continuous wave excitation were found to operate at estimated densities below the theoretically predicted inversion threshold.
thermal equilibrium of photons [9]. Essentially the experimental observations of polariton BEC in the strong-coupling regime [1], photon BEC in the weak-coupling regime [4] and weak-coupling lasing or VCSEL operation [8,10] have very similar signatures. Carriers are distributed according to the Bose-Einstein distribution, the emission narrows in energy and the first-order spatial coherence builds up. Recently we showed that spontaneous symmetry breaking, which is the Landau criterion for the phase transition, can also be observed in polariton and photon lasers [11].
However, the physical processes by which condensation and conventional lasing occur are fundamentally different. Condensation is a purely thermodynamic phase transition during which the total free energy of the system is minimized, whereas conventional lasing is a balance between the gain from inversion and the loss in the system. In a conventional semiconductor laser, lasing occurs by the stimulated emission of the cavity photons from the electron-hole plasma. Above a threshold density, the stimulated emission becomes faster than the thermalization rate. As a result, a dip is formed in the carrier distribution, which is called kinetic hole burning [12]. This is because the thermalization process can no longer supply the lost carriers at sufficient speed. In condensation, however, the system remains thermalized while lasing. The question of whether the term BEC or lasing is appropriate for degenerate condensates of exciton polaritons and photons is still a subject of debate in the scientific community [13][14][15].
The crossover from strong to weak coupling according to the coupled oscillator model takes place when the exciton-photon coupling strength (g 0 ) equals half of the difference between the decay rates of cavity photons (γ cav ) and excitons (γ exc ) [16]. This may be achieved by changing the optical pumping strength. The exciton linewidth increases and the oscillator strength decreases with the increase of pumping intensity, which brings the system from strong to weak coupling. This transition is not to be confused with the Mott transition from an exciton gas to an electron-hole plasma [17][18][19][20]. Whilst the distinction between strong and weak coupling in a microcavity is straightforward, as the dispersion relations exhibit specific differences, it is very hard to identify the exact point of the Mott transition by photoluminescence spectroscopy. A transition to the weak-coupling regime with increasing pumping strength in steady state has been observed by several groups and the carrier densities at the onset of photon lasing compare well with the Mott density [21][22][23][24][25]. This paper adds to this body of work as it investigates the dynamical transition from the weak to the strong exciton-light coupling regime in a planar semiconductor microcavity excited by a short high-power excitation pulse. We particularly investigate the distributions of carriers during this crossover and discuss the possibility of a BEC of photons. We observe clear features of polariton and photon lasing and find quasithermal distributions of quasiparticles in the weak-and in the strong-coupling regime, which could imply BEC of photons and polaritons. A closer look at the temporal dynamics and the change of the effective temperatures during the transition provide insight into the nature of the observed lasing modes and the thermodynamic state of the system. We further investigate the build-up of photon lasing under continuous wave (CW) excitation and investigate the question of whether the system is inverted by means of the Bernard-Douraffourgh condition for lasing.
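The strong-to-weak-coupling criterion quoted above can be made explicit with the coupled-oscillator (two-mode) model it refers to. The form below is a standard textbook expression rather than this paper's own equation, written in the convention where the decay rates enter the non-Hermitian Hamiltonian as −iγ (the convention implied by the "half of the difference between the decay rates" statement):

\[
H=\begin{pmatrix} E_{\mathrm{exc}}-i\gamma_{\mathrm{exc}} & g_0\\ g_0 & E_{\mathrm{cav}}-i\gamma_{\mathrm{cav}}\end{pmatrix},
\qquad
\Omega\big|_{E_{\mathrm{cav}}=E_{\mathrm{exc}}} \;=\; 2\sqrt{g_0^{2}-\tfrac14\big(\gamma_{\mathrm{cav}}-\gamma_{\mathrm{exc}}\big)^{2}} ,
\]

so the vacuum Rabi splitting Ω becomes imaginary, i.e. the two polariton branches collapse onto the bare exciton and cavity modes, once g_0 ≤ |γ_cav − γ_exc|/2, in agreement with the condition stated above.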
The system under study is a GaAs microcavity grown by molecular beam epitaxy. Previous works have shown that lasing in the weak coupling can be observed in this sample under CW excitation [10], whilst nonlinearities in the strong-coupling regime are accessible under pulsed excitation, as sample overheating is strongly reduced [26]. At higher excitation power the emission switches to the weak-coupling regime, similar to the observations in [25]. We show that the photon and polariton lasing occur at different times after the excitation pulse and that at high powers photon lasing is followed by polariton lasing. Experiments were carried out using a liquid helium cooled wide-field view cold finger cryostat. The exciton-cavity mode detuning was set to −0.5 meV. Transform limited pulses from a femtosecond Ti:Sapphire oscillator tuned to a reflection minimum of the Bragg mirror were focused to a 30 µm spot through an objective with a high numerical aperture (NA = 0.7). The dispersion relation was imaged through the same objective onto the slit of a monochromator equipped with a water cooled charge coupled device (CCD) and a streak camera with ps resolution. For temporal resolution the momentum space was scanned across the crossed slits of the streak camera and the monochromator. Figure 1(a) shows a snapshot of a bi-linearly interpolated image of the microcavity dispersion 3 ps after optical excitation at the excitation density P = 4P th (where P th = 7 mW is the power threshold for lasing). The photoluminescence intensity is displayed in false-color logarithmic scale. The inset of figure 1(a) shows the same microcavity dispersion in falsecolor linear scale. Solid white lines indicate the exciton-polariton branches in the linear regime and the dashed lines show the bare cavity and exciton modes. Observation of the bare cavity photon dispersion confirms that the excitation pulse brings the microcavity to the weak-coupling regime. Figure 1(b) shows the same as figure 1(a) but at 55 ps after optical excitation. White circles indicate the intensity maxima of the recorded spectrum at each detection angle, following 4 the cavity (a) and the polariton mode (b). The lower exciton-polariton dispersion is uniformly blue-shifted due to the repulsive interaction with the exciton reservoir. It is instructive to compare these results with the lower excitation power sufficient to excite a polariton condensate. Figure 1(c) shows a snapshot of the microcavity dispersion 32 ps after optical excitation at the excitation density P = 1.1P th . Similarly to figure 1(b), a near uniformly blue-shifted lower exciton-polariton dispersion is observed. Figure 1(d) shows a snapshot of the microcavity dispersion at 192 ps under the same optical excitation, when the exciton-polariton dispersion in the linear regime is fully recovered as a result of the depletion of the exciton reservoir. Therefore, using time-resolved dispersion imaging we observe the dynamics of the transition of the microcavity eigenstates through three distinctively different regimes: from the weak-coupling regime where we observe a bare cavity mode, to the nonlinear strong-coupling regime featuring a blue-shifted lower exciton-polariton branch, through to the linear strong-coupling regime where the exciton-polariton dispersion is not altered spectrally (see supplementary movies, available at stacks.iop.org/NJP/14/105003/mmedia). The temporal evolution of the ground state energy is depicted in figure 1(e), showing the redshift of the emission with time. 
This reflects the transition from the weak- to the strong-coupling regime and can be understood as an effect of the depletion of the carrier reservoir. The emission linewidth reflects the coherence properties in the three regimes. Starting from a linewidth of ∼1 meV in the photon lasing regime, the linewidth initially increases when the system enters the transitory regime and then narrows down to the linewidth of the polariton laser [27] (figure 1(f)). The time-resolved spectra and linewidth evolutions in the linear and nonlinear strong-coupling regime are given in the supplemental material 1 (available at stacks.iop.org/NJP/14/105003/mmedia). The time axis was rescaled to account for the temporal distortion caused by the use of the grating (for further information see the supplemental material 2). Figure 2(a) shows the occupancy as a function of energy at different times for P = 4P th . At early times (3 ps), whilst still in the weak-coupling regime, we observe a massively occupied cavity mode ground state on top of a thermalized tail of excited states. In the transitory regime (10-54 ps) the dispersion cannot be mapped because the linewidth at higher angles is strongly broadened and therefore a distribution of population is unattainable. At later times, 55 and 76 ps after optical excitation, whilst in the strong-coupling regime, we observe a largely occupied exciton-polariton ground state coexisting with a thermalized exciton-polariton gas. After ∼116 ps the ground state is no longer degenerate and the particle distribution is close to a Boltzmann distribution. At even later times, the occupation of the ground state cannot be resolved as it is four orders of magnitude lower than at the peak emission intensity. Figure 2(b) shows successive snapshots of the emission intensity of exciton polaritons as a function of energy for low excitation powers (P = 1.1P th ). We observe a largely occupied ground exciton-polariton state on top of a thermalized exciton-polariton gas in the low-excitation nonlinear regime. The depletion of the ground state and the loss of thermalization occur around the same time (∼86 ps) as a bottleneck builds up (140 and 192 ps). Figure 2(c) shows the temporal evolution of temperature in the transition from photon to exciton-polariton condensate (green circles), and from polariton condensate to a thermalized exciton-polariton gas (red triangles). Effective temperatures were extracted by fitting a Bose-Einstein distribution to the measured angular distribution of the emission intensity [28] (dashed gray lines in figures 2(a) and (b)). This analysis provides insight into the thermodynamics of the system and how far away from thermal equilibrium the quasiparticles are. The effective temperature in the weak-coupling regime (∼32 K) is higher than in the strong-coupling regime (∼16 K), while in both cases the quasiparticles remain warmer than the lattice temperature (∼6 K). The lower effective temperature of the polariton gas reflects the longer timescale on which they thermalize with respect to photons. At the formation stage of the photon laser, the effective photon temperature is higher (∼32 K) than that of the subsequent exciton-polariton gas (∼16 K). The photon gas thermalizes via absorption and re-emission processes in the intracavity quantum wells, similar to the mechanism of photon thermalization in a dye-filled microcavity [9].
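As a hedged illustration of how such effective temperatures can be extracted, the following sketch fits a Bose-Einstein distribution to a synthetic occupancy-versus-energy curve. The energies, occupancies and starting values are made up for the example; they are not the measured data of figures 2(a) and (b).

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-2  # Boltzmann constant in meV/K

def bose_einstein(E, mu, T):
    """Bose-Einstein occupancy for state energy E (meV) above the ground state."""
    return 1.0 / (np.exp((E - mu) / (kB * T)) - 1.0)

# Hypothetical occupancy-vs-energy data (illustrative only).
E = np.linspace(0.1, 3.0, 25)                                     # meV above the ground state
occ = bose_einstein(E, mu=-0.05, T=16.0) * np.random.normal(1.0, 0.05, E.size)

popt, _ = curve_fit(bose_einstein, E, occ, p0=(-0.1, 20.0),
                    bounds=([-1.0, 1.0], [0.0, 300.0]))            # keep mu < 0, T physical
mu_fit, T_fit = popt
print(f"chemical potential = {mu_fit:.3f} meV, effective temperature = {T_fit:.1f} K")
```

The fitted chemical potential approaching zero from below signals the onset of a macroscopic ground-state occupation, while T gives the effective temperature quoted in the text.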
The photon thermalization mechanism described above is analogous to the exciton-polariton thermalization in the strong-coupling regime if the system is below the Mott transition and excitons are still present in the weak-coupling regime. In this case each photon state has a finite exciton fraction even in the weak-coupling regime, which allows for efficient interaction with phonons and other dressed photons. On the other hand, the observed photon lasing mode occurs on much shorter timescales than the usual exciton formation times of tens of picoseconds [29] in GaAs. In this case the thermalized distribution originates from the ultrafast self-thermalization of an electron-hole plasma (tens of femtoseconds) [30]. Thermalization and BEC in an ionized plasma are in principle possible [31], through Compton scattering.
Upon the formation of excitons the effective temperature approaches the lattice temperature through carrier-phonon scattering on a picosecond timescale. Exciton polaritons have a larger exciton fraction than the photons, which is why they interact more strongly with acoustic phonons and with each other. The cooling of exciton polaritons occurs on a longer timescale, providing a different temperature for the exciton-polariton gas with respect to that of the exciton reservoir and the host lattice. In the intermediate regime of the temporal transition from photon to exciton-polariton laser, the distinction between the weak- and strong-coupling regime in momentum space vanishes and the emission appears more and more red shifted, indicating broadband emission similar to the kind observed in [32] or the coexistence of polariton and photon lasing [33]. The buildup of the photon laser is below the temporal resolution of our detection apparatus. Therefore we show time-integrated photoluminescence spectra under CW excitation at different temperatures and excitation densities. Figure 3(a) shows pump power dependent data at 60 K. The distribution thermalizes at a temperature close to the lattice temperature (solid line) and upon saturation the ground state becomes macroscopically occupied, as observed in [4]. Next we induce the transition from a thermalized distribution to a photon laser by lowering the temperature, to demonstrate further similarities to atom [5,6] and polariton condensates [2] (figure 3(b)). We use CW excitation at a constant excitation power and study the emission pattern of the cavity mode as a function of temperature. At a critical temperature of about 90 K the thermal distribution reaches the degeneracy threshold and photons start condensing into the cavity ground state. Although such behavior is characteristic of a thermodynamic phase transition, it is more likely due to the change of the cavity mode energy with respect to the electron-hole transition energy, in a similar fashion to VCSELs [34]. The temperature of the photon gas follows the lattice temperature between 60 and 120 K, but does not go below 60 K, which is slightly above the exciton binding energy in bulk GaAs [20]. This might be an indication that the electron-hole pairs are unbound in this case. In the time-resolved experiments we detect a lower temperature for the photon lasing (figure 2(c)), due to reduced heating of the sample under pulsed excitation.
Next we test the Bernard-Duraffourg condition for lasing [35] by comparing the emission energy in the weak-coupling regime (E weak ) with the difference between the Fermi energies (ΔE F ) of electrons in the conduction band and holes in the valence band in the weak-coupling regime. Conventional lasing occurs when ΔE F > E weak . Calculations of the Fermi energies and carrier densities are provided in the supplemental material 3 (stacks.iop.org/NJP/14/105003/mmedia). Figure 3(c) shows ΔE F as a function of the density of electron-hole pairs. Arrows indicate the experimental conditions. The system is close to the theoretically estimated inversion threshold at the onset of photon lasing under pulsed excitation. However, the density in the CW case remains an order of magnitude below the inversion density predicted for 60 K, which is the lowest photon temperature measured under CW excitation. Further investigations are needed to obtain direct spectroscopic evidence of the carrier density under CW excitation.
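The Bernard-Duraffourg check can be illustrated with a small quantum-well estimate of the quasi-Fermi levels. The sketch below is a hedged illustration only: the effective masses, the effective transition energy, the emission energy and the sheet densities are assumed representative values rather than parameters extracted from this sample, and the valence band is treated as a single heavy-hole subband.

```python
import numpy as np

hbar = 1.0546e-34   # J*s
kB   = 1.3807e-23   # J/K
m0   = 9.109e-31    # kg
e    = 1.602e-19    # J per eV

def quasi_fermi_2d(n_sheet, m_eff, T):
    """Quasi-Fermi level (J, measured from the subband edge) of a 2D carrier gas.

    Uses the step-like 2D density of states: n = (m kT / pi hbar^2) ln(1 + exp(EF/kT)),
    inverted analytically.
    """
    dos = m_eff * kB * T / (np.pi * hbar ** 2)
    return kB * T * np.log(np.expm1(n_sheet / dos))

# Illustrative GaAs quantum-well parameters (assumptions, not sample-specific values).
T = 60.0                         # K, lowest photon temperature measured under CW excitation
me, mh = 0.067 * m0, 0.45 * m0   # electron and heavy-hole effective masses (assumed)
E_gap_eff = 1.545                # eV, effective transition energy (assumed)
E_weak = 1.552                   # eV, emission energy in the weak-coupling regime (assumed)

for n in (1e15, 3e15, 1e16):     # electron-hole sheet densities in m^-2
    dEF = (quasi_fermi_2d(n, me, T) + quasi_fermi_2d(n, mh, T)) / e  # eV above the gap
    inverted = E_gap_eff + dEF > E_weak
    print(f"n = {n:.0e} m^-2: EFc + EFv = {dEF*1e3:6.1f} meV -> "
          f"{'inverted' if inverted else 'below inversion'}")
```

With quasi-Fermi levels measured from the subband edges, the condition ΔE F > E weak is equivalent to E_gap_eff + EFc + EFv > E weak, which is what the loop tests for each assumed density.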
In conclusion, we have studied the dynamic transition from a photon to a polariton laser following a high-power excitation pulse. Dispersion images clearly show the transition from the weak to the strong coupling. Both regimes exhibit the same signatures of BEC: a macroscopic occupation of the ground state on top of a thermalized tail, narrowing of the linewidth and narrowing of the distribution in momentum space. We have also shown that the transition to the photon laser can be induced by decreasing the temperature. The estimated carrier densities remain below the inversion threshold in the CW excitation regime as well as in the strong-coupling regime under pulsed excitation. The effective temperatures show how far from thermal equilibrium the system is at different times after the excitation pulse, and the evolution of the linewidth maps the transition between two coherent states by a passage through an incoherent state. The results presented here as well as the recently reported observation of spontaneous symmetry breaking and long-range order [11] call for further studies of condensation in the weak-coupling regime. Direct spectroscopic evidence of free carrier and exciton densities is needed to unambiguously distinguish between photon lasing and BEC in the weak-coupling regime. | 3,614.8 | 2012-10-02T00:00:00.000 | [
"Physics"
] |
Experimental Study on Hybrid Effect Evaluation of Fiber Reinforced Concrete Subjected to Drop Weight Impacts
In this paper, the impact energy potential of hybrid fiber reinforced concrete (HFRC) was explored using different fiber mixes manufactured for comparative analyses of hybridization. Uniaxial compression and 3-point bending tests were conducted to determine the compressive and flexural strength. The experimental results imply that steel fiber outperforms polypropylene fiber and polyvinyl alcohol fiber in improving compressive and flexural strength. Subsequent repeated drop weight impact tests were performed on specimens of each mixture to study the effect of hybrid fiber reinforcement on the impact energy. It is suggested that steel fiber incorporation moderately outperforms polypropylene or polyvinyl alcohol fiber reinforcement in terms of impact energy improvement. Moreover, the impact toughness of steel-polypropylene as well as steel-polyvinyl alcohol hybrid fiber reinforced concrete was studied by relating the failure and first crack strengths through best fitting. The impact toughness is significantly improved due to the positive hybrid effect of steel fiber and polymer fiber incorporated in concrete. Finally, a hybrid effect index is introduced to quantitatively evaluate the hybrid fiber reinforcement effect on the impact energy improvement. When the steel fiber content exceeds the polyvinyl alcohol fiber content, the corresponding impact energy is found to be simply the sum of those of steel fiber reinforced concrete and polyvinyl alcohol fiber reinforced concrete.
Introduction
Concrete is one of the most widely used construction materials, but its tensile strength is relatively low in contrast with its compressive strength. This defect limits the further application of concrete in construction and building [1,2]. As an emerging material with improved mechanical, dynamic and durability properties, fiber reinforced concrete (FRC) is ideal for disaster prevention and for mitigation in explosion/impact applications where high impact resistance and energy absorption capacities are required [3]. Fibers also help arrest micro-cracks before the peak load and, once cracking initiates, enhance the post-cracking behaviour through the improved stress transfer provided by the fibers bridging the cracked sections [4][5][6]. The fibers used are mainly steel fibers, carbon fibers, polymer fibers and natural fibers (e.g., flax fibers) [7][8][9].
Among the polymer fibers, polypropylene (PP) and polyvinyl alcohol (PVA) fibers have attracted most attention due to the outstanding toughness of concrete reinforced with them [10][11][12][13]. Concrete is a complex material with multiple phases, including a large amount of micron-scale C-S-H gel, millimeter-scale sands, and even centimeter-scale coarse aggregates. Therefore, the properties of FRC will be improved at a certain level, but not at all levels, if it is reinforced by only one type of fiber [14]. For instance, steel fibers are supposed to strengthen concrete at the scale of the coarse aggregate, PP or PVA fibers are suitable for crack prevention at the fine aggregate scale, and carbon nanotubes are proven to improve the strength at the scale of cement grains [15].
Hybrid reinforcement addresses this by combining fibers of different sizes: one type of fiber is smaller, so that it bridges microcracks whose growth can thereby be controlled, which leads to a higher tensile strength of the composite. The second type of fiber is larger, so that it can arrest the propagating macrocracks and substantially improve the toughness of the composite [16]. In practice, using hybridization with two different fibers incorporated in a common cement matrix, hybrid fiber reinforced concrete (HFRC) can offer more attractive engineering properties because the presence of one fiber enables more efficient use of the potential properties of the other [7,[17][18][19]. In addition, HFRC shows improved structural behaviour when compared with conventional concrete, with qualities such as less spalling and scabbing under impact loadings [20][21][22][23].
There are several test methods that evaluate the impact strength of FRC, of which the simplest is the drop-weight test proposed by the ACI (American Concrete Institute) Committee 544 [24]. Nia et al. [25] investigated the increase of the first crack initiation and final fracture impact strength of FRC with respect to plain concrete. Sivakumar and Santhanam [17] compared FRC with metallic and non-metallic fibers and found that steel fiber generally plays an important role in the energy absorbing mechanism (bridging action), whereas non-metallic fiber can delay the formation of micro-cracks. The statistical analysis of the impact strength of steel-polypropylene hybrid fiber reinforced concrete (Steel-PP HFRC) carried out by Song et al. [26] with drop-weight tests demonstrated that HFRC provides a higher improvement in the reliabilities of the first-crack strength and failure strength than steel fiber reinforced cementitious composites. Conducting repeated drop-weight tests, Yildirim et al. [27] concluded that steel fiber reinforcement as well as steel-polypropylene hybrid reinforcement can significantly improve the impact performance of concrete. It was found by Wille et al. [28] that there is no significant difference among various types of fibers, such as smooth, hooked or twisted, in contrast to the fiber volume fraction, which brings a significant difference in terms of the tensile strength and energy absorption capacity of the resulting material. Providing a reasonable trade-off between workability and mechanical properties of the mixture, straight steel fiber was chosen in this study to improve the ductility of the composite.
The objective of this work is to investigate the hybrid effect of different fibers on the impact toughness under drop weight tests. As common and popular types of fiber, steel, polypropylene and polyvinyl alcohol fibers were chosen to produce the HFRC. The uniaxial compression and 3-point bending tests (3PBT) were performed to determine the compressive and flexural strength. Drop weight tests were further conducted to compare the fiber reinforcement effect whereby the impact energy was evaluated by first crack and failure strength. A comprehensive analysis was carried out to evaluate the hybrid effect of steel fiber reinforced with PP or PVA fiber on the repeated drop weight impact responses. The experimental results may provide an effective way to improve the impact toughness of fiber reinforced concrete material and structures.
Experimental Programme
The effect of fiber content on the mechanical properties and mix design of FRC was experimentally studied by Yoo et al. [29], indicating that 2 vol.% fiber provides the best performance in fiber pullout behavior, including average/equivalent bond strength and pullout energy. Therefore, this work investigates HFRC with a total fiber content of about 2% by volume.
Material Composition
The details of the concrete mixture proportions in this study are normalized and listed in Table 1. Portland cement (P.I 42.5) was used as the cementitious material and fly ash was added as a mineral active fine admixture. Ground fine quartz sand served as the fine aggregate and its gradation curve is plotted in Figure 1. The water-binder ratio and sand-binder ratio were 0.25 and 0.45, respectively. To improve fluidity, a high-performance water-reducing agent, a polycarboxylate superplasticizer (DC-WR2), was also added, which may contribute to the self-compacting property. The polypropylene, polyvinyl alcohol and steel fibers used in this experimental study for ultra-high-performance hybrid fiber reinforced concrete (UHP-HFRC) reinforcement are comparatively depicted in Figure 2. The geometric information and mechanical properties of these three fibers are listed in Table 2. The steel fiber is stronger and stiffer, while the PP fiber is finer, more flexible and more ductile. To investigate the effect of hybridizing PP, PVA and steel fiber (SF) reinforcement on HFRC impact energy, 16 mixtures with a single type of fiber or hybrid fibers at a total content of 1.5-2.5% by volume of the concrete were produced for further study.
Mix Proportioning and Concrete Production
The mixing procedure of FRC needs to be rigorously controlled to ensure that the resulting matrix has good workability, particle distribution and compaction. Note that small dry particles tend to agglomerate into chunks that must be broken up, so it is suggested to blend all fine dry particles before adding water and superplasticizer. In a climatic chamber with 90% humidity, the FRC samples were prepared with the following mixing procedure. Firstly, the dry cementitious materials (cement, fly ash) and quartz sand were put together and mixed for 1 min at low speed to achieve the binder-sand mixture. Afterwards, the water and superplasticizer were mixed and gradually poured into the mixture to improve its flowability. Finally, the fibers were slowly added and mixed for another 5 to 8 min to ensure that all the fibers were evenly distributed in the mortar. 24 h later, the specimens were removed from the moulds and cured for another 6 and 27 days at room temperature with humidity >95%.
The self-compactability of the fresh mixtures was qualitatively evaluated, since the FRC mixtures exhibit excellent deformability and proper stability to flow under their own weight. Furthermore, it was observed that mixtures with higher PVA or PP content show poorer flowability, because a more porous microstructure might be generated due to the relatively poor consolidation condition compared to the steel fiber case.
Test Method
With the foregoing concrete sample preparation procedure, quasistatic tests, including uniaxial compression (UC) and 3-point bending, were performed to investigate the effect of fiber reinforcement on the compressive and flexural strength. It is worth noting that, since only a fine gradation of quartz sand was used as aggregate, we prepared the UC and 3PBT specimens with sizes similar to those adopted in [30][31][32]. Moreover, the hybrid effect of steel fiber and polymer (PP and PVA) fiber on the impact performance of the HFRC was investigated via the drop weight test. In this section, the experimental programme is explained in detail. The experimental results are then reported and discussed based on the average values of tests on 3 specimens.
Specimens of 40 mm × 40 mm × 40 mm were cast for quasi-static compressive strength testing. Three samples of each mix were tested to determine the uniaxial compressive strength. Abrasive paper was used to smooth the surface of the specimens. The non-casting surfaces of the cube specimen were used as the bottom and top surfaces in the compression test to ensure complete contact with the platen of the universal testing machine in Figure 3a. A loading rate of 2.4 kN/s was adopted for the uniaxial compression test. In order to analyze the fiber effect on the flexural strength of FRC, 3-point bending tests were conducted with specimens of different fiber mixes. The dimensions of the tested beams are 40 mm (width b) × 40 mm (depth d) in cross-section and 160 mm in total length, with the span l fixed at 120 mm. To ensure quasi-static conditions, the 3PBT was conducted at a rate of 0.5 mm/min of the MTS machine load cell. In Figure 3b, the beam was placed on rolling supports, which can be deemed a fixed vertical constraint. During the bending test, the displacement and the corresponding load value were recorded. In terms of the peak load F P , span l, depth d and width b, the nominal flexural strength f f is expressed as f f = 3F P l/(2bd 2 ) [33].
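A quick numerical illustration of this expression with the beam geometry given above (the peak load used here is a hypothetical value, not a measured result):

```python
def flexural_strength(F_peak_N, span_mm, width_mm, depth_mm):
    """Nominal flexural strength f_f = 3*F_P*l / (2*b*d^2), returned in MPa (N/mm^2)."""
    return 3.0 * F_peak_N * span_mm / (2.0 * width_mm * depth_mm ** 2)

# Beam geometry from the text: span 120 mm, cross-section 40 mm x 40 mm.
print(flexural_strength(F_peak_N=4500.0, span_mm=120.0, width_mm=40.0, depth_mm=40.0))
# -> 12.66 MPa for the assumed 4.5 kN peak load
```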
The impact test was carried out in accordance with the ACI Committee 544 drop weight impact test [24]. The test procedure is as follows: a repeatedly dropping hammer with a mass of 10.26 kg was released from a height of 457 mm. The hammer hit a 63.5 mm diameter hardened steel ball fixed at the center of the top surface of the concrete specimen, which transferred the impact pulse through the contact. The test apparatus, test set-up and dimensions are depicted in Figure 4. In the drop weight impact test, the number of blows to cause the first visible crack was recorded as the first crack strength (N 1 ), while the failure strength (N 2 ) was defined as the number of blows to spread the cracks sufficiently (complete fracture), i.e., until the concrete specimen touched three of the steel lugs [34]. With reference to [19,35], the impact toughness is defined as the impact energy absorbed by the concrete, transformed from the drop hammer potential energy during the drop weight impact tests. Thus, the impact toughness, namely the impact energy, is calculated as E = N m g h, where E, N, m, g and h denote the impact energy (impact toughness), the number of repeated impacts at which the first visible crack or the final failure occurs, the mass of the drop hammer, the gravitational acceleration and the drop height, respectively.
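A short numerical illustration of this relation, using the hammer mass and drop height quoted above (the blow counts in the example are hypothetical):

```python
G = 9.81  # gravitational acceleration, m/s^2

def impact_energy(n_blows, hammer_mass_kg=10.26, drop_height_m=0.457):
    """Impact toughness E = N*m*g*h in joules, for the ACI 544 drop-weight setup above."""
    return n_blows * hammer_mass_kg * G * drop_height_m

# Each blow delivers about 46 J; e.g. hypothetical counts N1 = 35, N2 = 60:
print(impact_energy(35), impact_energy(60))   # ~1610 J (first crack) and ~2760 J (failure)
```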
Test Results and Discussion
After performing the uniaxial compression, 3-point bending and drop weight impact tests, PC and FRC with different types of fibers are comparatively studied herein.
UC and 3PBT Results
The strength results of the different mixes are listed in Table 3. The flexural responses from the 3PBT are presented in Figure 5. The addition of mono PP and steel fiber considerably improves the post-crack behaviour (Figure 5a).
Drop Weight Impact Test Results
After the drop weight impact tests, the failure patterns of the PC and HFRC discs (rear surface) are shown in Figure 6. As expected, brittle failure occurs in the plain specimen, which breaks into halves. On the other hand, the HFRC specimens failed mostly into three pieces, but these are still connected by bridging fibers crossing the cracks. The hybrid fiber incorporation in concrete may lead to numerous narrow cracks and a pulverized matrix, whereas the PC counterpart breaks into separate pieces. This phenomenon may be caused by the stress redistribution in the matrix achieved with the hybrid reinforcement of steel fiber and polymer fiber [18]. Table 4 lists all the drop weight impact test results for concrete with different fiber mixes, where the increase in the number of post-first-crack blows (INPB) is introduced with reference to Rahmani et al. [34]. S N 1 is the standard deviation of the first crack blows N 1 , while S N 2 denotes the standard deviation of N 2 , which helps to quantify the amount of variation or dispersion of the test data. For PC, it is interesting to find that the first crack strength (N 1 ) and failure strength (N 2 ) have the same value of 3 blows, which coincides with the experimental results observed by Nia et al. [25]. The reason lies in the fact that the PC specimens fail suddenly through the aggregates in a brittle manner. Meanwhile, the FRC specimens tend to have a much greater failure strength than first crack strength, and both N 1 and N 2 increase to some extent. Therefore, it can be concluded that the fiber reinforcement contributes to the impact toughness, i.e., to both the first crack strength and the failure strength. The toughening mechanism of fibers on the concrete impact resistance mainly stems from the considerable energy absorption during de-bonding, stretching and pullout of fibers due to the emergence and propagation of cracks in the concrete. Once a crack occurs, the evenly distributed fibers are activated to arrest the cracking and limit further crack propagation. Consequently, both the strength and the ductility of the concrete are improved. The SF, PP and PVA addition effects on impact toughness are evaluated in terms of impact blow counts and failure patterns. As indicated by Table 4, the SFRC has the largest failure strength, and PP fiber is more effective than PVA fiber, which is consistent with the flexural strength results. Figure 7 gives the bar graphs of FRC impact resistance against the PC counterpart. The impact resistance of SFRC is superior to that of the PC, whereby the first crack strength is about 12 times that of PC and the INPB is improved by 28 blows. Compared to PP FRC and PVA FRC, the failure strength of the SFRC discs shows significant increases of 54% and 163%, respectively. Therefore, the steel fiber incorporation can better postpone the formation of the first crack and inhibit the crack propagation. Apart from the impact strength analysis, the effectiveness of the fiber reinforcement can be appreciated from the way the FRC discs failed. The failure pattern in Figure 8a shows that multiple cracking occurs and some cementitious matrix pieces are separated from the specimen, which nevertheless remains integral. However, both Figure 8b,c reveal that the concrete discs are broken into two pieces accompanied by narrow cracks, small bits of debris and dust without crushing, indicating relatively brittle behaviour. Thus, the SFRC failure pattern shows more obviously ductile failure properties under drop impacts.
After the mechanical tests, the microstructures of the transition zone between fibers and cementitious matrix are usually observed via scanning electron microscopy (SEM) [10,12]. In Figure 9a, the SF had a smooth surface that was detached from its surrounding matrix. The few hydration products on the SF surface imply a relatively weak interfacial bond between SF and matrix. The high fiber strength and weaker bond strength make the SF more susceptible to pullout than to rupture. For PP FRC, Figure 9b indicates that the PP fibers are encompassed by C-S-H gel and are ruptured due to their low tensile strength. An obvious difference in the appearance of the fiber surface can be noticed in Figure 9c, where a considerable quantity of hydration products is attached to the PVA fibers, indicating that the PVA-matrix bond is stronger than the matrix material itself. The high tensile strength combined with the strong bond strength contributes to the Steel-PVA HFRC strength. The fiber surface and matrix of Steel-PVA HFRC after impact were studied in [12].
Hybrid Effect Evaluation on Impact Energy Absorption
The hybrid reinforcement of steel fiber and polymer fiber may contribute to a better impact energy absorption property, since the steel fiber can dissipate more impact energy and the polymer fiber delays crack extension. In this section, the hybrid effects of Steel-PP and Steel-PVA are discussed with respect to the impact failure energy. Figure 10a shows the impact toughness of Steel-PP HFRC with 2% fiber content. As the PP content increases, the first crack strength remains almost constant when the PP content is less than the SF content and is then followed by a decreasing trend. Meanwhile, the failure strength has its maximum value for the hybrid mixture with 0.5% PP and 1.5% SF. Also, an obvious decrease of N 2 is observed when the PP content increases to ≥1%. It was pointed out by Yap et al. [36] that a PP content of PP-Steel HFRC beyond 0.1% is not recommended and that only a low quantity (≤0.1%) of flexible PP fibers enhances the crack bridging effect. Similarly, we also find that increasing the PP content (above 0.5%) always negatively affects the impact strength. However, the PVA content effect on the impact toughness is different, as shown in Figure 10b, whereby the best hybrid mixture, corresponding to the largest first crack and failure strength, occurs around 1% SF + 1% PVA. This phenomenon is very similar to the energy absorption capacity study results of Zhang and Cao [38], whereby the best hybrid is 1.75% SF + 0.25% PVA. Figure 11a compares the fiber content effect on the impact toughness with a constant 1% SF or PP content. It is interesting to find that, to achieve better impact toughness, the polymer content should be between 1% and 1.5% when the SF content is 1%. In Figure 11b, both the first crack and failure strength increase with the SF content increasing from 0.5% to 1.5%. Since the SF has the most effective bridging effect, the SF content may play a more important role in the improvement of impact toughness. Based on regression analysis of the impact resistance results, a linear relationship between the first crack and failure strength of Steel-PP HFRC and Steel-PVA HFRC was established. After best fitting, the linear equations describing the first crack and failure strength are developed as N 2 = 2.78N 1 + 16.3 for Steel-PP HFRC and N 2 = N 1 + 29.6 for Steel-PVA HFRC, as shown in Figure 12. The coefficient of determination (R 2 ) is obtained as 0.731 and 0.932 for the SP-FRC and SA-FRC fitting equations, respectively. According to Ostle [39], a coefficient R 2 of 0.7 or higher is considered a reasonable model. Therefore, the derived equations may successfully be applied to predict the relationship between the first crack and failure strengths for the FRC specimens studied herein. The hybrid effect on the impact toughness in the drop weight impact test was evaluated by introducing the hybrid effect index α [8], where β i = V i /V represents the volume fraction of one kind of fiber in the whole fiber volume V, V i is the volume of SF, PP or PVA, E i is the impact toughness of concrete incorporating a single kind of fiber, E 0 is the impact toughness of plain concrete without fiber, and E H denotes the impact toughness of the HFRC. To exclude the effect of fiber content, the E H calculation should correspond to E i with the same fiber volume reinforcement. In this study, we concentrate on the 2% mixes for further hybrid effect evaluation. If α > 1, the hybrid effect is positive for impact toughness improvement, while α < 1 means the hybrid effect is negative.
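The exact definition of α (Equation (2) of the original, following [8]) is not reproduced in this extracted text. The sketch below therefore assumes a common form in which the toughness gain of the hybrid over plain concrete is compared with the volume-fraction-weighted sum of the single-fiber gains; the impact energies used in the example are hypothetical.

```python
def hybrid_effect_index(E_hybrid, E_plain, singles):
    """Hybrid effect index alpha (assumed form: hybrid gain over plain concrete,
    divided by the volume-fraction-weighted sum of single-fiber gains).

    singles: list of (beta_i, E_i) pairs, with beta_i = V_i / V (fraction of the
    total fiber volume) and E_i the impact toughness of the single-fiber mix.
    alpha > 1 indicates a positive hybrid effect, alpha < 1 a negative one.
    """
    weighted_gain = sum(beta * (E_i - E_plain) for beta, E_i in singles)
    return (E_hybrid - E_plain) / weighted_gain

# Hypothetical impact energies (J) for a 2% total fiber content mix:
E_plain, E_sf, E_pp = 140.0, 2800.0, 1800.0
alpha = hybrid_effect_index(E_hybrid=3100.0, E_plain=E_plain,
                            singles=[(0.5, E_sf), (0.5, E_pp)])
print(f"alpha = {alpha:.2f}")   # > 1 -> positive hybrid effect for this assumed case
```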
With Equation (2), the hybrid effect index of Steel-PP HFRC and Steel-PVA HFRC is calculated with respect to the impact energy of the drop weight test, as shown in Figure 13. The Steel-PP HFRC mixes feature a hybrid effect index α greater than 1, and 0.67% SF + 1.33% PP has the largest hybrid effect index value. This reveals that the hybridization of SF and PP always has a positive effect on the impact energy. On the other hand, the hybrid effect index is around 1 for both hybrids with 1.0% SF + 1.0% PVA and 1.5% SF + 0.5% PVA, indicating that the impact energy is almost simply the sum of those of SFRC and PVA FRC. It may be concluded that PVA and SF hybridization does not result in a positive effect on the impact energy unless the PVA fiber volume fraction exceeds the steel fiber volume fraction.
Conclusions
Hybrid fiber reinforced concrete with PP, PVA and steel fibers was prepared and subjected to uniaxial compression, 3-point bending and drop weight tests. The comparative analyses give the following conclusions: (1) The improvement of the impact energy property of concrete discs can be achieved by the incorporation of polymer (PP/PVA) fiber or steel fiber. The fiber reinforcement changes the impact failure pattern from brittle to ductile. (2) Steel fiber addition improves the compressive strength, flexural strength and impact strength better than its PP or PVA fiber counterpart. Damage modes suggest that the steel fiber tends to be pulled out from the matrix, while rupture usually occurs in the polymer fiber. | 5,212.2 | 2018-12-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Autonomous Exploration and Mapping with RFS Occupancy-Grid SLAM
This short note addresses the problem of autonomous on-line path-planning for exploration and occupancy-grid mapping using a mobile robot. The underlying algorithm for simultaneous localisation and mapping (SLAM) is based on random-finite set (RFS) modelling of ranging sensor measurements, implemented as a Rao-Blackwellised particle filter. Path-planning in general must trade off between exploration (which reduces the uncertainty in the map) and exploitation (which reduces the uncertainty in the robot pose). In this note we propose a reward function based on the Rényi divergence between the prior and the posterior densities, with RFS modelling of sensor measurements. This approach results in a joint map-pose uncertainty measure without a need to scale and tune their weights.
Introduction
The task of autonomous exploration and mapping of unknown structured environments combines the solutions to three fundamental problems in mobile robotics: (self)localisation, mapping and motion control. Localisation is the problem of determining the position of the robot within its estimated or a given map. Mapping refers to the problem of integrating robot's ranging and/or bearing sensor measurements into a coherent representation of the environment. Motion control deals with autonomous decision making (e.g., where to move, where to "look") for the purpose of accomplishing the mission in the most effective manner (in terms of duration and accuracy).
Algorithms for simultaneous localisation and mapping (SLAM) have been studied extensively in the last decades. While the problem is still an active research area, see [1,2], several state-of-the-art algorithms are already publicly available in the Robot Operating System (ROS) [3,4]. Integrating decision control with SLAM, thereby making the robot autonomous in modelling structured environments, proves to be a much harder problem. Heuristic approaches are reviewed and compared in [5]. More advanced approaches are based on the information gain [6][7][8][9][10].
Autonomous SLAM is a difficult problem because any SLAM algorithm can be seen as an inherently unstable process: unless the robot returns occasionally to already visited and mapped places (this act is referred to as a loop-closure), the pose (position and heading) estimate drifts away from the correct value, resulting in an inaccurate map. Decision making in autonomous SLAM therefore must balance two contradicting objectives, namely exploration and exploitation. Exploration constantly drives the robot towards unexplored areas in order to complete its task as quickly as possible. By exploitation we mean loop-closures: the robot occasionally must return to the already mapped area for the sake of correcting its error in the pose estimate.
Early approaches to robot exploration ignored the uncertainty in the pose estimate and focused on constantly driving the robot towards the nearest frontier, which is the group of cells of the occupancy-grid map on the boundary between the known and unexplored areas [11]. The current state of the art in autonomous SLAM is cast in the context of partially observed Markov decision processes. The reward function is typically the expected information (defined in the information-theoretic framework) resulting from taking a particular action. Entropy reduction in joint estimation of the map and the pose was proposed in a seminal paper [6]. Assuming the uncertainties in the pose and the map are independent, the joint entropy can be computed as a sum of two entropies: the entropy of the robot pose and the entropy of the map. Several authors subsequently pointed out the drawback of this approach [9,10]: the scale of the numerical values of the two uncertainties is not comparable (the entropy of the map is much higher than the entropy of the pose). A weighted combination of the two entropies was subsequently proposed, with various approaches to balancing them.
Most of the autonomous SLAM algorithms have been developed in the context of a Rao-Blackwellised particle filter (RBPF)-based SLAM. We will remain in this framework and develop an autonomous SLAM for the recently proposed RFS based occupancy-grid SLAM (RFS-OG-SLAM) [2], which is also implemented as a RBPF. In this short note we propose a reward function based on the Rényi divergence between the prior and the posterior joint map-pose densities, with the RFS modelling of sensor measurements. This approach results in a joint map-pose uncertainty measure without a need to scale and tune their respective weights.
The RFS Occupancy-Grid Based SLAM
The main feature of the RFS-OG-SLAM is that sensor measurements are modelled as a RFS. This model provides a rigorous theoretical framework for imperfect detection (occasionally resulting in false and missed detections) of a ranging sensor. The previous approaches to imperfect detection were based on ad-hoc scan-matching or a design of a likelihood function as a mixture of Gaussian, truncated exponential, and a uniform distribution [12] (Section 6.3).
Let the robot pose be a vector θ = [x y φ] which consists of its position (x, y) in a planar coordinate system and its heading angle φ. Robot motion is modelled by a Markov process specified by a (known) transitional density, denoted by π(θ k |θ k−1 , u k ). Here the subscript k ∈ N refers to time instant t k and u k is the robot-control input applied during the time interval τ k = t k − t k−1 > 0.
The occupancy-grid map is represented by a vector m = [m 1 m 2 · · · m N ] , where the binary variable m n ∈ {0, 1} denotes the occupancy of the nth grid-cell, n = 1, . . . , N, with N ≫ 1 being the number of cells in the grid.
The ranging sensor on the moving robot provides the range (and azimuth) measurements of reflections from the objects within the sensor field of view. Let the measurements provided by the sensor at time t k be represented by a set Z k = {z k,1 , · · · , z k,|Z k | }, where z ∈ Z k is a range-azimuth measurement vector. Both the cardinality of the set Z k and the spatial distribution of its elements are random. For a measurement z ∈ Z k , which is a true return from an occupied grid cell n (i.e., with m n = 1), we assume the likelihood function g n (z|θ k ) is known. The probability of detecting an object occupying cell n, n ∈ {1, . . . , N} of the map, is state dependent (the probability of detection is typically less than one and may depend on the range to the obstacle, but also other factors, such as the surface characteristics of the object, turbidity of air, a temporary occlusion, etc.) and is denoted as d n (m n , θ k ). Finally, Z k may include false detections modelled as follows: their spatial distribution over the measurement space is denoted by c(z) and their count in each scan is Poisson distributed, with a mean value λ.
The solution is formulated using the Rao-Blackwell dimension reduction [13] technique. Application of the chain rule decomposes the joint posterior PDF as: p(θ 1:k , m|Z 1:k , u 1:k ) = p(m|Z 1:k , θ 1:k ) p(θ 1:k |Z 1:k , u 1:k ). (1) Assuming that the occupancy of one cell in the grid-map is independent of the occupancy of other cells (the standard assumption for occupancy-grid SLAM), one can approximate p(m|Z 1:k , θ 1:k ) in (1) as a product of p(m n |Z 1:k , θ 1:k ) for n = 1, . . . , N. Furthermore, the update equation for the probability of occupancy of the n-th cell, i.e., p(m n = 1|Z 1:k , θ 1:k ) = r k,n , can be expressed analytically [2]. The posterior PDF of the pose, p(θ 1:k |Z 1:k , u 1:k ) in (1), is propagated using the standard prediction and update equations of the particle filter [13]. However, the likelihood function used in the update of particle weights takes the form f (Z k |Z 1:k−1 , θ 1:k ), which is approximated by f (Z k |r k−1 , θ 1:k ). Assuming the individual beams of the sensor are independent, we can further approximate f (Z k |r k−1 , θ 1:k ) by a product of likelihoods p(z|r k−1 , θ 1:k ) for all z ∈ Z k . Finally, each single-beam likelihood p(z|r k−1 , θ 1:k ) is evaluated at n z , the grid cell nearest to the point at which, for a given pose θ k , the range-azimuth measurement z maps onto the (x, y) plane. The RBPF propagates the posterior p(θ k , m|Z 1:k , u 1:k ), approximated by a weighted set of S particles, each consisting of a weight, a pose-trajectory hypothesis and the associated occupancy probabilities.
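For orientation, the overall particle-filter cycle can be sketched as follows. This is a generic Rao-Blackwellised structure with placeholder callbacks; it does not reproduce the specific RFS scan likelihood or the analytic cell update of [2].

```python
import numpy as np

def rbpf_step(particles, weights, occupancy, u_k, Z_k,
              motion_model, scan_likelihood, cell_update):
    """One prediction/update cycle of a Rao-Blackwellised particle filter for
    occupancy-grid SLAM (structural sketch only).

    particles : (S, 3) array of pose hypotheses [x, y, phi]
    weights   : (S,) normalised particle weights
    occupancy : (S, N) per-particle occupancy probabilities r_{k,n}
    The callbacks stand in for pi(theta_k | theta_{k-1}, u_k), the scan
    likelihood f(Z_k | r_{k-1}, theta_{1:k}) and the per-cell occupancy update.
    """
    S = particles.shape[0]
    for i in range(S):
        particles[i] = motion_model(particles[i], u_k)                  # predict pose
        weights[i] *= scan_likelihood(Z_k, occupancy[i], particles[i])  # reweight
        occupancy[i] = cell_update(occupancy[i], Z_k, particles[i])     # map update
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < S / 2:
        idx = np.random.choice(S, size=S, p=weights)
        particles, occupancy = particles[idx], occupancy[idx].copy()
        weights = np.full(S, 1.0 / S)
    return particles, weights, occupancy
```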
Path Planning
Decision making in an autonomous SLAM algorithm includes three steps: the computation of a set of actions; the computation of the reward assigned to each action and selection of the action with the highest reward.
Computing the Set of Actions
An action is a segment of the total path that the robot should follow for the sake of exploring and mapping the environment. The set of actions at a decision time t k (options for robot motion) should typically include both short and long displacements and all possible trajectories to the end points. Due to limited computational resources, in practice only a few actions are typically proposed. They fall into one of the following categories: exploration actions, place re-visiting (loop-closure) actions, or a combination of the two. The exploration actions are designed to acquire information about unknown areas in order to reduce the uncertainty in the map. Exploration actions are generated by first finding the frontier cells in the map [11,14]. Figure 1 shows a partially built map produced by a moving robot running the RFS-OG-SLAM algorithm and the discovered frontier cells, found using an image processing algorithm based on the Canny edge detector [15]. An exploration action represents a trajectory along the shortest path from the robot's current pose to one of the frontiers. Because the number of frontier cells can be large, they are clustered by neighbourhood. The clusters that are too small are removed, while the cells in the centre of the remaining clusters compose a set of exploratory destinations. Subsequently the A* algorithm [16] is applied to find the shortest path from the robot's current position to each of the exploratory destinations. Because the A* algorithm assumes that the moving robot size equals the grid-cell size, its resulting path can be very close to the walls or the corners. The physical size of the robot is a priori known and therefore the empty space in the map which the robot can traverse is thinned accordingly (using morphological image processing operations [15]). The place re-visiting actions guide the robot back to the already visited and explored areas.
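A compact sketch of the frontier-extraction step is given below. It uses a simple neighbourhood test and connected-component labelling in place of the Canny-based detection used here; the free/unknown thresholds and the minimum cluster size are assumptions for the example.

```python
import numpy as np
from scipy import ndimage

def exploration_destinations(occ_prob, free_thr=0.3, unknown=(0.45, 0.55), min_cluster=14):
    """Find frontier cells (free cells adjacent to unknown cells), cluster them,
    and return the centre cell of every sufficiently large cluster.

    occ_prob: (H, W) array of occupancy probabilities; cells near 0.5 are unexplored.
    """
    free = occ_prob < free_thr
    unknown_mask = (occ_prob > unknown[0]) & (occ_prob < unknown[1])
    # A free cell is a frontier if at least one 8-neighbour is unknown.
    has_unknown_nb = ndimage.binary_dilation(unknown_mask, structure=np.ones((3, 3), bool))
    frontier = free & has_unknown_nb
    labels, n = ndimage.label(frontier)
    destinations = []
    for lbl in range(1, n + 1):
        cells = np.argwhere(labels == lbl)
        if len(cells) >= min_cluster:
            destinations.append(tuple(cells.mean(axis=0).astype(int)))
    return destinations  # goals to feed to A* on the safety-inflated free-space map
```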
Reward Function
Reward functions are typically based on a reduction of uncertainty and measured by comparing two different information states. In the Bayesian context, the two information states are the predicted density at time k and the updated density at time k (after processing the new hypothetical measurements, resulting from the action). However, no measurements are collected before the decision has been made and therefore an expectation operator must be applied with respect to the new measurements resulting from the action.
The reward function can be formulated as the gain defined by an information-theoretic measure, such as the Fisher information, the entropy, the Kullback-Leibler (KL) divergence, etc. [17]. We adopt the reward function based on the Rényi divergence between the current and the future information state. The Rényi divergence between two densities, p 0 (x) and p 1 (x), is defined as [17]: D α (p 1 p 0 ) = 1/(α − 1) log ∫ p 1 (x) α p 0 (x) 1−α dx, where α ≥ 0 is a parameter which determines how much we emphasize the tails of the two distributions in the metric. In the special cases of α → 1 and α = 0.5, the Rényi divergence becomes the Kullback-Leibler divergence and the Hellinger affinity, respectively [17]. Using the particle filter approximation of the joint posterior at time k − 1, the expected reward function for an action u k can be expressed as in [18] (Equation 18), where the likelihood f (Z k |Z 1:k−1 , θ 1:k ) appearing in it has been explained at the end of Section 2. Drawing from f (Z k |Z 1:k−1 , u k ) can be done by ray-casting, assuming action u k resulted in pose θ (i) k and using the current estimate of the map. In doing so, the probability of a cast ray hitting an object at an occupancy grid cell is made proportional to its probability of occupancy.
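For discrete distributions (for instance, normalised particle weights), the divergence can be evaluated directly, as in the sketch below. The prior and posterior weights used in the example are made up, and the sketch is not the full expected-reward computation of (6) and (7).

```python
import numpy as np

def renyi_divergence(p1, p0, alpha=0.5):
    """Renyi divergence D_alpha(p1 || p0) between two discrete distributions:
    D_alpha = 1/(alpha - 1) * log( sum_i p1_i^alpha * p0_i^(1 - alpha) ).

    alpha -> 1 recovers the Kullback-Leibler divergence; alpha = 0.5 relates to
    the Hellinger affinity (the value used in the experiments reported below).
    """
    p1, p0 = np.asarray(p1, float), np.asarray(p0, float)
    return np.log(np.sum(p1 ** alpha * p0 ** (1.0 - alpha))) / (alpha - 1.0)

# Toy illustration: information gain of a hypothetical measurement set,
# measured between the particle weights before and after the update.
w_prior = np.full(5, 0.2)
w_posterior = np.array([0.05, 0.05, 0.1, 0.3, 0.5])
print(renyi_divergence(w_posterior, w_prior, alpha=0.5))   # ~0.21 (larger = more informative)
```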
An action in the context of active SLAM is a path which can be translated into a sequence of control vectors u k , u k+1 , · · · , u k+L . Computation of the reward for this case is in principle still given by (6) and (7), with the hypothetical measurement sets and likelihoods referring to the whole multi-step action. This is computationally very demanding and so we approximate that reward as a sum of single-step rewards. The last step of active SLAM is to select the action with the highest reward and subsequently execute it. Autonomous SLAM also needs to decide when to terminate its mission. The termination criterion we use is based on the number of frontier cells. Figure 2 shows the map of the area used in simulations for testing and evaluation of the proposed path planning algorithm. The true initial robot pose (distance is measured in arbitrary units, a.u.) is [2.2, 0.65, 80 • ]. The SLAM algorithm is initialised with particles of equal weights, zero pose vectors and the probability of occupancy for all cells set to 0.5. The simulated ranging sensor is characterised by a coverage of 360 • , an angular resolution of 0.8 • , and d n (1, θ k ) = 0.8 if the distance between the robot and the n-th cell is less than R max = 2.5 a.u. False detections are modelled with λ = 8 and c(z) as a uniform distribution over the field of view with radius R max . The standard deviation is set to 0.01 a.u. for range measurements and 0.3 • for angular measurements. The occupancy-grid cell size is adopted as 0.02 a.u. The number of particles is S = 40. The α parameter of the Rényi divergence is set to 0.5. The mission is terminated when none of the clusters of frontier cells has more than 14 members.
Initially, at time k = 0, map entropy is H 0 = 1. The entropy of the maps in Figure 3a,b are 0.6124 and 0.6074, respectively. An avi movie of a single run of the algorithm can be found in Supplementary Material. Next we show the results obtained from 30 Monte Carlo runs of the autonomous RFS-OG-SLAM, using the simulation setup described above. Figure 4a shows the error of the robot estimated position (in a.u.) and the error of the robot estimated heading (in degrees), averaged over time and over all 30 trajectories. Figure 4b displays the final map entropy versus the duration of the exploration and mapping mission. The average estimated map over 30 Monte Carlo runs is shown in Figure 5a, while the variance is displayed in Figure 5b. Variability in the performance of the autonomous RFS-OG-SLAM is due to many causes, such as the inherent randomness of particle filtering, clustering and reward computation. Overall, however, the proposed autonomous RFS-OG-SLAM performs robustly and produces quality maps without a need for human intervention.
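The map entropy values quoted above are consistent with the average per-cell binary entropy, normalised so that a fully unknown map (all cells at probability 0.5) has H = 1; a small sketch under that assumption:

```python
import numpy as np

def map_entropy(r, eps=1e-12):
    """Average per-cell binary entropy of an occupancy grid, in bits per cell,
    so that a fully unknown map (all r = 0.5) has entropy 1."""
    r = np.clip(np.asarray(r, float), eps, 1.0 - eps)
    h = -(r * np.log2(r) + (1.0 - r) * np.log2(1.0 - r))
    return float(h.mean())

print(map_entropy(np.full(1000, 0.5)))                          # 1.0, the initial H0
print(map_entropy(np.random.choice([0.05, 0.95], size=1000)))   # ~0.29 for a well-explored map
```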
Conclusions
The note presented a path planning method for autonomous exploration and mapping by a recently proposed RFS occupancy-grid SLAM. The reward function is defined as the Rényi divergence between the prior and the posterior densities, with RFS modelling of sensor measurements. This approach resulted in a joint map-pose uncertainty measure without a need to tune the weighting of the map versus the pose uncertainty. Numerical results indicate reliable performance, combining short exploration with a good quality of estimated maps. | 3,689 | 2018-06-01T00:00:00.000 | [
"Computer Science"
] |
Inactivation of mediator complex protein 22 in podocytes results in intracellular vacuole formation, podocyte loss and premature death
Podocytes are critical for the maintenance of the kidney ultrafiltration barrier and play a key role in the progression of glomerular diseases. Although mediator complex proteins have been shown to be important for many physiological and pathological processes, their role in kidney tissue has not been studied. In this study, we identified mediator complex protein 22 (Med22) as a renal podocyte-enriched molecule. A podocyte-specific Med22 knockout mouse showed that Med22 is not needed for normal podocyte maturation. However, it is critical for the maintenance of podocyte health, as the mice developed progressive glomerular disease and died due to renal failure. Detailed morphological analyses showed that Med22 deficiency in podocytes resulted in intracellular vacuole formation followed by podocyte loss. Moreover, Med22 deficiency in younger mice promoted the progression of glomerular disease, suggesting that Med22-mediated processes may have a role in the development of glomerulopathies. This study shows for the first time that the mediator complex has a critical role in kidney physiology.
Transcription factors bind to different mediator subunits and many transcription factors can simultaneously bind to MED 8 . An important feature of MED is that its subunit composition can vary 8 . Consequently, the lack of individual MED subunits is associated with silencing of specific transcription pathways. Studies in knockout animals have unravelled roles of MED subunits in cellular differentiation, and studies in human genetics have linked them to a number of human diseases. However, to date, no studies on MED in kidney tissue have been performed.
In this study, we explored MED subunits in renal podocyte cells. We show that MED subunit 22 (Med22) is highly enriched in podocytes and although it is not needed for the normal development of podocytes, it is essential for the maintenance of glomerular homeostasis as mice lacking Med22 in podocytes develop progressive renal disease and die prematurely.
Results
Human Protein Atlas suggests Med22 as a podocyte-associated MED subunit. As the role of MED in kidney tissue is unknown, we analysed the Human Protein Atlas (www.proteinatlas.org) to identify podocyte-associated MED subunits. We found immunohistochemical data in kidney tissue for 34 MED subunits (Suppl. Figure 1). Med21 and Med22 showed strong glomerular staining with only a low signal in tubuli. As Med22 mRNA seemed to give more glomerulus-specific staining (Suppl. Figure 1), we focused on Med22 in further studies.
Med22 is enriched in podocytes and localizes to major processes. In the kidney, the expression of Med22 was enriched in the glomerulus in comparison to the kidney fraction devoid of glomeruli, as shown by qPCR of human kidney tissue (Fig. 1A). The expression of the podocyte-specific gene podocin 9 was analysed to control the purity of the glomerulus fractions (Fig. 1A). In immunofluorescence of human kidney tissue, strong reactivity for Med22 was detected in glomeruli with a clearly weaker signal in the rest of the kidney tissue (Fig. 1B). In double stainings, Med22 co-localized partially with the podocyte major process marker vimentin (Fig. 1C). No significant overlap was detected with the podocyte foot process marker nephrin, the mesangial cell-expressed protein alpha-smooth muscle actin or the endothelial marker CD31 (Fig. 1C). Moreover, Med22 did not co-localize significantly with the podocyte nucleus marker Wt1 (Suppl. Figure 2A). Taken together, within the glomerulus Med22 staining was detected in podocytes, where it localized to major processes. The specificity of the anti-Med22 antibody was validated by immunostaining and Western blotting of cultured podocytes transfected with full-length human Med22 cDNA (Suppl. Figure 2B,C).
Generation of constitutive and conditional Med22 knockout mouse lines.
To analyse the role of Med22 in the glomerulus, we first generated a conventional knockout (KO) mouse line. A transgenic mouse line with a cassette containing lacZ and neomycin-resistance genes, along with FRT and LoxP sites targeting the exon 3 of the Med22 gene, was obtained from European Mouse Mutant Cell Repository. This line was crossed with ThIRES-cre mouse line that is active in oocytes 10 to generate a germ line deletion and thus a constitutive Med22 allele lacking exon 3 ( Fig. 2A). Breeding of animals heterozygous for constitutive allele did not generate any homozygous mice (166 pups analysed), indicating embryonic lethality. To see whether haploinsufficiency of Med22 resulted in glomerular diseases, we followed heterozygous mice for up to 15 months. No abnormalities were detected in histological or blood/urine analysis (Suppl. Figure 3A,B). Therefore, we decided to generate a conditional Med22 allele.
To inactivate Med22 specifically in podocytes, we crossed the original transgenic line with a FLP-deleter line followed by crossing with a Nphs2-cre line ( Fig. 2A) 11 . Genotyping for this allele amplified 580 bp product, whereas the wild type product was 479 bp (Fig. 2B). To validate the successful deletion of exon 3 we analyzed Med22 mRNA in isolated glomeruli. In controls, we amplified a single cDNA fragment spanning from exon 2 to exon 4 (Fig. 2C). In mice with the conditional allele, we detected a shorter variant corresponding in size to skipping of exon 3 in isolated glomeruli, whereas this variant was not amplified from rest of kidney tissue (Fig. 2C). We confirmed the loss of exon 3 in glomerular tissue by Sanger sequencing (data not shown). As part of the Med22 mRNA seemed intact, we termed this mouse line as a podocyte-specific Med22 mutant line (pod-Med22) that can represent either a null or a hypomorphic allele.
Podocyte-specific defect in Med22 results in vacuole formation, podocyte loss and progressive renal disease. Pod-Med22 mice were born in a normal Mendelian ratio and developed normally (data not shown). No albuminuria was detected by 8 weeks of age (Fig. 2D). However, the mice exhibited massive albuminuria by 12 weeks of age that further increased by 16 weeks of age. In line with this, animals showed normal blood urea nitrogen levels at 8 weeks of age but the levels were significantly increased at 16 weeks of age (Fig. 2D). All pod-Med22 animals (a total of 14) died by 20 weeks of age due to renal failure, whereas littermate control animals (a total of 64), including both wild type and heterozygote mice, showed 100% survival (Fig. 2D).
In light microscopy, 4-and 8-week old pod-Med22 kidneys showed no abnormalities (Fig. 3A, data not shown). At 12 weeks of age, pod-Med22 mice exhibited large "empty" vacuole-like structures in podocytes that were often difficult to discern from capillary lumens in light microscopy (Fig. 3B). These structures were negative for lipids as detected by oil red staining (data not shown). Focal segmental glomerulosclerosis, which develops secondary to podocyte loss, was also observed (Fig. 3B). In addition, dilated tubuli with hyaline casts, a secondary sign of albuminuria, were detected ( Fig. 3B). At 16 weeks of age, more advanced changes were detected with almost global glomerulosclerosis and tubular atrophy (Fig. 3C). Semi-quantification of key histological features validated the abundant histological changes of pod-Med22 mice (Fig. 3D).
In electron microscopy, glomeruli appeared normal in 4-week-old mice, but at 8 weeks of age pod-Med22 mice showed significant foot process effacement (Fig. 4A). Rarely, small vacuoles in podocytes were observed in 8-week-old pod-Med22 mice (data not shown). At 12 weeks of age, round vacuoles were regularly observed inside podocytes (Fig. 4B). These vacuoles showed a single simple membrane, which lacked the electron-dense glycocalyx that was detectable on the podocyte plasma membrane (Fig. 4B). Vacuoles occasionally showed heterogeneous electron density, in contrast to the urinary space, which demonstrated a mostly clear homogeneous signal. Occasionally, podocytes showed nuclear changes suggesting an apoptotic process (Fig. 4B). At 16 weeks of age, sclerotic regions in glomeruli and, occasionally, areas of the glomerular basement membrane (GBM) that were not covered by podocytes were detected (Fig. 4B), suggesting the loss of podocytes. As the vacuole-like structures were reminiscent of a sub-podocyte space 3,12 , we performed high-resolution microscopy and detailed electron microscopic analysis to see whether these structures had connections to Bowman's space. In pod-Med22 podocytes carrying a tomato-reporter, we detected spherical structures that were surrounded by cytoplasmic reporter signal (Suppl. Media 1 and 2). A detailed visual evaluation of these structures showed that only 12 out of 77 vacuoles (16%) showed a suspected connection to Bowman's space (Suppl. Media 1 and 2). In electron microscopy, the analysis of vacuole-like structures in 40 pod-Med22 glomeruli (aged 12-16 weeks) showed no connections to Bowman's space (data not shown). Taken together, morphological analysis showed that pod-Med22 mice develop intracellular vacuoles in podocytes followed by loss of podocytes.
Immunofluorescence analysis of pod-Med22 podocytes. To analyse podocytes in more detail, we stained for the podocyte markers nephrin, synaptopodin and WT1. All markers showed a progressive loss of signal starting at 8 weeks, and by 16 weeks only a few glomeruli showed positivity (Fig. 5A). The quantification of WT1-positive nuclei showed a clear reduction at both 12 and 16 weeks. These results are in line with our morphological data and suggest dedifferentiation followed by loss of podocytes.
To investigate the molecular nature of the podocyte vacuoles, we performed immunofluorescence staining for vesicular markers. The lysosomal marker LAMP2 was upregulated in 12-week-old glomeruli and localized mainly to podocytes, as shown by double-labelling with WT1 and CD31 (Fig. 5B). Of note, LAMP2 was not upregulated in 8-week-old pod-Med22 glomeruli (data not shown). The endocytosis marker caveolin was also upregulated in podocytes at 12 weeks, where it co-localized with LAMP2 (Fig. 5B). Neither the endosomal marker clathrin nor the early endocytosis marker Rab5 was changed in comparison to controls (Fig. 5B). Moreover, no difference in the autophagy markers beclin, Atg16L, Atg5 and LC3A/B was detected by immunofluorescence or Western blot (data not shown).
RNAseq of KO glomeruli reveals downregulation of podocyte genes. To gain molecular insight into the process in pod-Med22 glomeruli, we performed RNA sequencing (RNAseq) on glomeruli isolated from 8-week-old mice. The sequencing revealed 126 significantly differentially expressed genes, of which 78 were up- and 48 downregulated (Suppl. Table 1). There was a downregulation of podocyte genes, which was in line with our immunofluorescence data. Apart from the downregulation of Rab3b, no vesicle transport genes were differentially expressed.
Pod-Med22 mice are susceptible to kidney damage. As pod-Med22 animals did not show any renal phenotype during the first 8 weeks of life, we wanted to analyse whether a Med22 defect in podocytes modulated disease progression during this period. We induced glomerulonephritis at 4 weeks of age using a nephrotoxic serum that binds to the GBM. Both pod-Med22 mice and littermate controls developed similar albuminuria within 48 h after the induction (Fig. 6A). No significant difference in BUN levels was detected (Fig. 6B). However, on histological examination we detected a significant difference, as pod-Med22 mice developed more glomerular damage as indicated by the scoring of sclerotic changes (Fig. 6C,D). Additionally, more tubular casts were detected in pod-Med22 kidneys (Fig. 6C,D).
Discussion
We identified the mediator complex protein Med22 as a podocyte-enriched molecule and show that, although Med22 is not needed for normal podocyte development, it plays a critical role in the maintenance of the glomerular filtration barrier, as mice with a mutant Med22 in podocytes develop progressive renal disease. Our study shows for the first time that the mediator complex has a critical role in kidney biology.
So far, only a few studies have focused on Med22. It was originally termed Surf5, owing to its location in the Surfeit locus of the mouse genome 13 , and was later identified as a component of the mediator complex that possibly interacts directly with Med30 14 . Interestingly, studies in cell culture suggested that the Med22 protein is localized in the cytoplasm 15 . Our immunofluorescence studies support this notion, as we detected the Med22 protein mainly in the cytoplasm of podocytes. This was somewhat surprising, taking into account the role of MED in transcription. It may be that Med22 shuttles between the cytoplasm and the nucleus, similar to MED subunit 28, which has been suggested to have dual functions in these compartments 16,17 . Due to the lack of Med22 expression in podocyte cell culture models, we did not investigate this further.
Med22 deficiency in mouse podocytes resulted in renal failure and premature death by 20 weeks of age. Starting around 8 weeks of age, mice developed intracellular vacuoles, which was followed by podocyte death. The exact mechanism by which these vesicles lead to podocyte death is unclear, but our temporal morphological analysis showed that the vesicles grew dramatically in size and finally occupied almost the whole cytoplasm, displacing nuclei and other intracellular structures. This can result in general cellular dysfunction and cell death.
Mediator is a multi-protein complex that is essential for gene transcription via RNA polymerase II 8,18 . So far, all KO mouse models generated for mediator subunits have been embryonic lethal, suggesting that mediator action is critical for embryonic development 8,18 . Our study supports this notion, as embryos homozygous for the constitutive Med22 allele die in utero. Embryonic lethality has hampered in vivo studies on the mediator complex. To overcome this, we inactivated Med22 specifically in podocytes and saw that, although Med22 was highly expressed by these cells, it was not essential for the development or initial maintenance of podocyte structure and function. There are several potential explanations for this. It is possible that Med22-dependent biological processes become active later in life. In fact, although the glomerulus seems to be structurally mature 2 weeks after birth in the mouse, we have observed significant expressional differences in isolated glomeruli at later stages of life (unpublished data). On the other hand, it is possible that the Med22 allele generated is not a null but a hypomorphic allele and that the residual function could be sufficient for normal podocyte development. However, we speculate that the "late" phenotype may be a result of the accumulation of damage. Minor defects in the transcriptional machinery may exist already at birth but fall under our detection threshold. This idea is supported by our data in 4-week-old pod-Med22 mice, which were prone to develop podocyte damage and developed vacuole-like structures under pathological stimuli. We made an effort through RNAseq and immunostaining to pinpoint the molecular mechanisms that drive the disease development in pod-Med22 mice. However, we failed to identify a clear pathway that would be responsible for the phenotype. It may be that Med22 mediates transcription of multiple pathways that are difficult to define through molecular profiling. Thus, it is difficult to say whether Med22-mediated pathways contribute to the pathogenesis of human glomerular diseases. Vacuole-like structures in podocytes, reminiscent of the phenotype in our pod-Med22 mice, have been reported in common human glomerulopathies 19,20 , and therefore it is possible that Med22 has a role even in human glomerulopathies. Obviously, more studies are needed to dissect the role of Med22 in podocytes, as well as to understand how the mediator complex contributes to kidney biology.
Immunofluorescence. Normal human adult kidney samples were collected from kidneys nephrectomised due to renal cancer (Karolinska University Hospital, Stockholm, Sweden). Nephrin and podocin antibodies have been described previously 9,23 . Other antibodies were: Med22 (Atlas Antibodies and Sigma); Vimentin, alpha-SMA and WT-1 (Sigma, human); CD31 (Abcam); Clathrin, Caveolin, LC3II, Atg16L, Atg5, Beclin, Rab5, Rab7 and Rab11 (Cell Signalling); Synaptopodin, Rab3b and LAMP2 (Santa Cruz); mouse nephrin (Acris); and mouse WT1 (Millipore).
RT
The samples were snap-frozen, and the cryosections (10 μm) were post-fixed with cold acetone (− 20 °C) followed by blocking in 5% normal goat serum. The primary antibodies were incubated overnight at 4 °C, followed by a 1-h incubation with the secondary antibody. For double-labelling experiments, the incubations were performed sequentially.
Generation of Med22 knockout mouse lines. We used a line generated by the European Mouse Mutant Cell Repository, in which a cassette containing lacZ, a neomycin-resistance gene, and FRT and LoxP sites was targeted to exon 3 of the Med22 gene. The mice were on a mixed C57BL/6 x 129Sv background. We crossed these mice with the Th-IRES deleter 10 , as well as with a FLP-deleter line (B6.129S4-Gt(ROSA)26Sortm1(FLP1)Dym/RainJ), to generate a floxed mouse line for the Med22 gene (Med22-fl). Med22-fl was crossed with Tg-Nphs2-cre 11 to inactivate Med22 specifically in podocytes (pod-Med22).
Histological analyses. Histological analyses were done using PAS (Periodic Acid-Schiff) staining. For 16-week-old mice, 8 controls and 8 knockout animals were analysed. For 12-week-old mice, 17 controls and 7 knockout animals were analysed. For 8-week-old mice, 7 controls and 6 knockout animals were analysed. For the Med22 global heterozygous line, 4 control and 7 heterozygous mice were analysed at 15 months of age. Glomerular damage was evaluated by semi-quantitative scoring of histological changes. Glomeruli were scored as having normal or abnormal histology (the latter usually associated with the presence of sclerosis, mesangial matrix expansion or crescents). The presence of detectable vacuole-like structures in each glomerulus was evaluated and scored separately. We evaluated 30 random glomeruli in each mouse. For the analysis of tubular changes, we chose 10 random high-power field (x40) images from the renal cortex and recorded the presence of tubular casts.
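As an illustration of the per-mouse summary implied by this scoring scheme, a minimal Python sketch follows; the data structure and example values are illustrative assumptions, not the authors' data or software.

# Minimal sketch of summarizing the semi-quantitative glomerular scoring described above.
def summarize_glomeruli(scores):
    """scores: list of dicts, one per scored glomerulus (30 per mouse),
    each with boolean keys 'abnormal' and 'vacuoles'."""
    n = len(scores)
    pct_abnormal = 100.0 * sum(s["abnormal"] for s in scores) / n
    pct_vacuolated = 100.0 * sum(s["vacuoles"] for s in scores) / n
    return {"glomeruli_scored": n,
            "pct_abnormal": pct_abnormal,
            "pct_vacuolated": pct_vacuolated}

example_mouse = [{"abnormal": i % 3 == 0, "vacuoles": i % 5 == 0} for i in range(30)]
print(summarize_glomeruli(example_mouse))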
Electron microscopic analyses were performed by standard transmission electron microscopy using samples fixed with 2.5% glutaraldehyde. In particular, we focused on podocytes and vesicular structures in podocytes. A total of 40 vesicular structures in 12- and 16-week-old pod-Med22 mice were analysed and categorized as either having a connection to Bowman's space or being isolated vesicles without a detectable connection to the extracellular space.
Tissue expansion and analysis of expanded kidney samples. Kidney samples were expanded as described previously 24 . Samples were then stained for tdTomato using a goat anti-tdTomato antibody (Sigma AB8181-200) and a donkey anti-goat Alexa-488 secondary antibody. Samples were then imaged on a Zeiss LSM780 inverted confocal microscope using a 40X NA1.3 water immersion objective.
Every large vesicle in podocytes (exceeding 1 µm in depth) was categorized into three groups: open, closed or non-conclusive. Only vesicles that were completely surrounded by tdTomato signal were marked as closed. Vesicles for which an opening to the extracellular space could not be completely excluded were marked as non-conclusive. Vesicles with an apparent opening to the extracellular space were marked as open.
Bulk RNA sequencing. RNA sequencing was performed on glomeruli isolated from 3 pod-Med22 and 3 littermate control mice at 8 weeks of age. Total RNA was isolated using a combination of Trizol/chloroform extraction and Qiagen RNeasy clean-up columns. Total RNA quality and concentration were measured with an Agilent BioAnalyzer 2100. All samples passed the quality criteria RIN > 8.0 and 28S/18S > 1.0. RNA samples were then prepared and sequenced by a commercial service (BGI Tech). We received 100 bp paired-end reads (pure reads without adapters) from BGI Tech. More than 22 million reads passed quality control (quality score > 28) for each sample. Qualified reads were mapped to the mouse reference genome Ensembl GRCm38 (gene build version 89) with the alignment tool STAR 2.5.2b 25 . Uniquely mapped reads were then quantified as counts with featureCounts v1.5.1. Differentially expressed genes were tested with the R package DESeq2 26 . Only genes with FDR (false discovery rate) < 0.05 are reported.
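To illustrate the final reporting step (filtering results at FDR < 0.05 and splitting them into up- and downregulated genes), a minimal Python sketch is given below. The file name and the column names "padj" and "log2FoldChange" are assumptions based on DESeq2's standard output; the model fitting itself was done in R with DESeq2 as stated above.

import pandas as pd

# Assumed export of the DESeq2 results table (gene IDs in the first column).
res = pd.read_csv("deseq2_results.csv", index_col=0)

# Keep genes passing the FDR threshold used in the study.
sig = res[res["padj"] < 0.05]
up = sig[sig["log2FoldChange"] > 0]
down = sig[sig["log2FoldChange"] < 0]

print(f"significant: {len(sig)} (up: {len(up)}, down: {len(down)})")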
Anti-GBM glomerulonephritis model. The induction of glomerulonephritis using anti-glomerular serum (Probetex, cat: PTX-001S) was performed as recently described 27 . A total of ten 4-week-old animals (5 pod-Med22 and 5 controls) were used. Glomerular damage was evaluated by semi-quantitative scoring of histological changes. Glomeruli were scored as having normal or abnormal histology (the latter usually associated with the presence of sclerosis, mesangial matrix expansion or crescents). We evaluated 30 random glomeruli in each mouse (5 control and 5 pod-Med22 mice). For the analysis of tubular changes, we chose 10 random high-power field (x40) images from the renal cortex and recorded the presence of tubular casts.
Statistical methods. We used the Prism statistics program, applying a parametric Student's t test for groups of equal size and a non-parametric test for groups of unequal size. The threshold for significance was p < 0.05; in the figures, p values are indicated as follows: *p < 0.05, **p < 0.01, ***p < 0.001.
Ethical considerations. The use of human material for the studies was approved by the local Ethics Review Authority ("Etikprövningsmyndighet", www.epn.se) in Stockholm, Sweden, archive number 2017-58-31/4. Informed consent was obtained from all subjects. All methods used on human material were carried out in accordance with the relevant guidelines and regulations defined in the ethical permit (see above).
For mouse work, all experimental protocols were approved by The Linköping Ethical Committee for Research Animals ("Linköpings djurförsöksetiska nämnd"), Linköping, Sweden (archive number DNR 41-15). All methods used in mouse experiments were carried out in accordance with relevant guidelines and regulations defined in the ethical permit (see above).
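As a minimal illustration of the group comparison described under "Statistical methods" above, the sketch below selects a parametric or non-parametric two-sample test depending on whether the groups are of equal size; the numerical values are placeholders, not study data.

from scipy import stats

def compare_groups(a, b, alpha=0.05):
    # Parametric Student's t test for groups of equal size,
    # non-parametric Mann-Whitney U test otherwise (our reading of the Methods).
    if len(a) == len(b):
        _, p = stats.ttest_ind(a, b)
    else:
        _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return p, p < alpha

control = [28.6, 31.2, 27.9, 30.4]   # placeholder measurements
knockout = [3.1, 1.4, 4.8]           # placeholder measurements
p_value, significant = compare_groups(control, knockout)
print(p_value, significant)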
"Biology",
"Medicine"
] |
Emergence of infectious malignant thrombocytopenia in Japanese macaques (Macaca fuscata) by SRV-4 after transmission to a novel host
We discovered a lethal hemorrhagic syndrome arising from severe thrombocytopenia in Japanese macaques kept at the Primate Research Institute, Kyoto University. Extensive investigation identified simian retrovirus type 4 (SRV-4) as the causative agent of the disease. SRV-4 had previously been isolated only from cynomolgus macaques, in which it is usually asymptomatic. We consider that SRV-4 crossed the so-called species barrier between cynomolgus and Japanese macaques, leading to extremely severe acute symptoms in the latter. Infectious agents that cross the species barrier occasionally show amplified virulence that is not observed in the original hosts. In such cases, the new hosts are usually distantly related to the original hosts. However, Japanese macaques are closely related to cynomolgus macaques, and the two species can even hybridize when given the opportunity. This lethal outbreak of a novel pathogen in Japanese macaques highlights the need to modify our expectations about virulence with regard to crossing species barriers.
that contracted the disease exhibited anorexia, lethargy, pallor, nasal hemorrhage, gingival hemorrhage, petechiae, ecchymoses, and melena. After the onset of symptoms, the fatality rate was extremely high and only one macaque survived in each outbreak. Severely decreased blood platelet (PLT) counts and lowered white blood cell (WBC) and red blood cell (RBC) counts were found in all macaques that developed the disease, with PLT counts close to zero at the time of death in nearly all cases. All macaques were active as usual even on the day prior to the onset, with almost all eating normally and showing no early signs of disease. Even veteran caretakers were unable to foresee the onset based on the general conditions of the macaques. The onset was very sudden and death occurred within a very short period of time; thus, little could be done to save or treat the affected macaques. Such peracute thrombocytopenia has not occurred in any primates other than Japanese macaques and there have been no reports worldwide of this disease.
Since the first epidemic, researchers at KUPRI have attempted to determine its cause. The situation was briefly introduced in a report written in Japanese (Kyoto University Primate Research Institute Disease Control Committee 2010); however, the etiology was still unclear when the report was submitted. As reported in a Nature news article in 2010, we considered that the illness was probably not due to any known agents, including those, such as Ebola, that induce hemorrhagic fevers 2 .
During phase 2, we organized a collaborative team with other institutions to investigate the disease. The causative agent of this unique disease in Japanese macaques was investigated by five research institutions (KUPRI; Institute for Virus Research, Kyoto University; National Institute of Infectious Diseases, Japan; the Corporation for Production and Research of Laboratory Primates; Research Institute for Microbial Diseases, Osaka University) using different research techniques and complementary approaches. This multilateral study allowed us to conclude that the thrombocytopenia in Japanese macaques was caused by infection with simian retrovirus type 4 (SRV-4), which we suspect originated via cross-infection from cynomolgus macaques.
Results
Thrombocytopenia in Japanese macaques. Table 1 shows the individual numbers of animals that developed the disease in the first and second outbreaks, along with details regarding sex, age, date of onset, date of death, and blood data. Japanese macaques at KUPRI were kept separately according to where they originated and the letters (e.g., TH, AR, and HG) before each individual number represent the birthplace of the animal or its ancestors. Onset was determined on the basis of decreased PLT counts or the pathological findings at necropsy. The disease progressed extremely rapidly after the onset (death within zero to a few days), and in some macaques, the disease was detected only after death; thus, blood data were not available for these individuals. Severely decreased PLT counts and lowered WBC and RBC counts were found in all macaques that developed the disease, with PLT counts close to zero at the time of death in nearly all cases. Blood data from healthy Japanese macaques kept at KUPRI showed that the normal levels were PLT counts of 29.0 ± 7.8 × 10^4/μl, WBC counts of 12.2 ± 3.8 × 10^2/μl, and RBC counts of 505 ± 37 × 10^4/μl. These data indicated that the macaques that contracted the disease had extreme anemia. The Japanese macaques that contracted the disease did not share any attributes such as sex, birthplace, age, or genealogy. Life prolongation was possible to some extent with blood transfusion, but treatments with any drugs or antibiotics had no effect at all. As a result, many diseased macaques were euthanized from November 2009 onward, and samples were collected from them. After the onset occurred, the fatality rate was extremely high and only one macaque survived each outbreak. PLT counts recovered in these two macaques.
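To make the reference values concrete, here is a minimal Python sketch encoding the normal ranges quoted above together with a simple PLT-based flag; the flagging rule is our illustrative assumption, not the authors' formal onset criterion.

# Reference values for healthy Japanese macaques at KUPRI, as quoted above
# (mean, SD); units: x10^4/ul for PLT and RBC, x10^2/ul for WBC.
NORMAL = {"PLT": (29.0, 7.8), "WBC": (12.2, 3.8), "RBC": (505, 37)}

def suspected_onset(plt_count, threshold=10.0):
    # Counts were observed to stay at or above 10 x 10^4/ul until two weeks
    # to ten days before onset, then drop rapidly (see the Blood examination section).
    return plt_count < threshold

print(suspected_onset(0.8))   # True: consistent with onset
print(suspected_onset(27.0))  # False: within the normal range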
Clinical investigation and necropsy. The main clinical findings in the macaques that contracted the disease were reduced appetite, recumbency, facial pallor, hemorrhaging of the nasal mucosa and gums, subcutaneous bleeding, and brown-colored mucous and bloody stool (external appearance; Fig. 1A-E). Animals that exhibited these symptoms died within one to three days. Necropsy of the dead macaques revealed hemorrhaging of all tissues, with particularly marked petechial hemorrhaging of the serosa and/or the mucous surface of the digestive tract, and pulmonary hemorrhaging (necropsy findings; Fig. 1F, G). Splenomegaly was observed in some macaques, but other macroscopic lesions could not be identified.
Blood examination. Figure 2 shows the changes in PLT, WBC, and RBC counts of the affected macaques. Because the onset was sudden, as stated above, little information from prior to the onset was available. However, blood data were monitored in some macaques (ID: HGN155, HGN156, HGN161 and WK1649) that were kept in the same breeding rooms as the macaques that contracted the disease. These data showed that PLT counts were maintained at 10 × 10^4/μl or above from two weeks to ten days before the onset, after which they dropped rapidly. Furthermore, PLT and WBC counts dropped before RBC counts decreased. However, no remarkable changes were noted in any other serum biochemical values, and the levels of C-reactive protein (CRP), a marker of inflammation, did not increase in any macaque that contracted the disease (data not shown).
Antibody tests against viral pathogens. Antibodies against Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses, which induce hemorrhagic syndromes in humans, were all negative. Neutralizing antibodies against canine distemper virus (CDV) were also negative. The results of the antibody tests against eight simian pathogenic viruses are shown in Table 2. Antibody prevalence levels against simian Epstein-Barr virus (SEBV), cytomegalovirus (CMV), and simian foamy virus (SFV) were quite high in the affected macaques; however, the levels in the healthy controls were also high. Thus, there appeared to be no correlations between the disease and these viruses. Anti-SRV antibodies were negative in all affected macaques tested.
PCR and RT-PCR analyses of viral pathogens. None of the DNA viruses listed in Table 3 was detected by specific PCR in the plasma of the 3/3 Japanese macaques (ID: HGN174, HGN181 and IZ1470) that contracted severe thrombocytopenia. Among the RNA viruses, SRV and SRV-4 (Table 3, items 12 and 13) were positive by RT-PCR, whereas none of the other RNA viruses was detected (data not shown).
Rapid determination of viral sequences (RDV) method. RDV was performed to examine the affected macaques (ID: HGN174, HGN181 and IZ1470), and we determined the nucleotide sequences of 136, 118, and 50 amplicons from RNA viruses, double-stranded (ds) DNA viruses, and single-strand (ss) DNA viruses, respectively. However, we did not detect any known ds DNA or ss DNA viral sequences, whereas we detected partial sequences that were virtually identical to SRV-4 in 4/136 amplicons from RNA viruses.
Metagenomic analysis. To further explore the existence of pathogens, we independently performed metagenomic analysis of the plasma of an affected macaque (ID: TBN201).
Virus isolation. Virus isolation tests using Vero E6, human SLAM-Vero, canine SLAM-Vero cells, and embryonated egg cultures were performed using plasma samples, which detected no cytopathic effect (CPE) in the cell cultures after five serial passages, and no abnormalities were observed in egg cultures after five days of incubation. The inoculated Vero cells and human and canine SLAM-expressing Vero cells were examined for the presence of viral antigens using the plasma of affected macaques. However, no viral antigens were detected in the cells by indirect immunofluorescence tests (data not shown). A hemagglutination test using the allantoic fluid of the inoculated eggs and RBCs of turkeys was also negative (data not shown).
When peripheral blood mononuclear cells (PBMCs) from the SRV-4 provirus-positive Japanese macaques (ID: TH1626 and IZ1470) were cocultivated with Raji cells, typical SRV CPE, including syncytia, were observed after three days of cocultivation. DNA from the Raji cells was examined by nested PCR using SRV-4-specific primers, and PCR products of the expected size were observed following agarose gel electrophoresis (data not shown).
Detection of the SRV-4 viral genome by RT-PCR or PCR. RT-PCR using each of two primer sets on viral RNA from affected macaques amplified single products of the expected size. Thirty affected macaques for which sera or plasma had been stored at KUPRI, including the 14 macaques examined by antibody tests (Table 2), were all positive for SRV-4 in both RT-PCR tests. In contrast, healthy controls at the second campus of KUPRI were all negative for SRV. Partial RT-PCR results are shown in Fig. 4.
Nested PCR for proviral DNA revealed that the 14 affected macaques described above were all positive for SRV-4. On the other hand, healthy controls at the second campus were all negative (data not shown).
Phylogenetic analyses. After RT-PCR and sequencing of SRV genomic RNA from one affected macaque (ID: OK2015), a partial sequence of the gag gene (514 bp) was determined. The nucleotide sequence data obtained in this study are available in the DDBJ/EMBL/GenBank databases under the accession number AB933257.
The same sequence was also detected in viral RNA extracted from affected macaques (ID: IZ1470, TH1626, TBN201, TH2158 and AR1649) (data not shown). Figure 5 shows the phylogenetic relationships among SRVs, which were inferred from the partial gag sequences. The phylogenetic tree indicates that the SRV detected here from Japanese macaques clustered with the SRV-4 strains isolated from cynomolgus macaques in Texas and California, USA 4 , and Tsukuba, Japan 5 .
Discussion
The disease described here was first detected in 2001 in two female Japanese macaques (one in July and one in August) housed in the same cage complex (but in different individual cages). Both macaques died two days after the first examination by veterinarians. The progress of the disease was peracute in all macaques that subsequently contracted it. With a few exceptions, all macaques exhibited systemic hemorrhagic symptoms such as those shown in Fig. 1, and they died within a few days of onset (Table 1). Only two macaques survived the two outbreaks. At the time of the first examination of the affected macaques by veterinarians, i.e., when their responsible caretakers first noticed clinical symptoms such as subcutaneous hemorrhaging or mucous and bloody stool, their PLT, WBC, and RBC counts had already dropped markedly and they had severe anemia (Table 1). Since PLT counts had dropped to below 1 × 10^4/μl, the clinical findings of subcutaneous hemorrhage, mucous and bloody stool, and gingival hemorrhage were considered to be consequences of the decreased PLT counts. On the other hand, five macaques (ID: AR2182, TH1439, HGN160, HGN267 and HGN165) that were kept in the same room as the affected macaques were euthanized when their PLT counts dropped below 10 × 10^4/μl, because we predicted that they had a poor prognosis. However, four of these macaques maintained their PLT counts to some degree at the time of euthanasia, and necropsy showed that the hemorrhage lesions were mild. It is possible that these macaques may not have progressed to onset, i.e., their PLT counts may not have approached zero. Therefore, it is doubtful whether these macaques should be included in the same category as the severely affected macaques. Because this is the first report of infectious thrombocytopenia in Japanese macaques, all data from the macaques that possibly developed the disease are shown in Table 1, but further investigations are needed to determine specific criteria for onset.
As shown in Fig. 2, PLT counts were maintained at over 10 × 10^4/μl from two weeks to ten days before the onset but dropped markedly thereafter. We also found that PLT and WBC counts decreased before RBC counts decreased. Considering the life span of each type of blood cell, it is possible that acute bone marrow dysfunction occurred within one month of the onset. Detailed pathological analysis is currently ongoing, although marked decreases in bone marrow cells have been confirmed by bone marrow smear examination (data not shown).
Despite not knowing the causative agent at the time, we suspected a viral infection during each outbreak because C-reactive protein (CRP), which typically increases when inflammation occurs, did not increase in any of the affected macaques, WBC counts decreased, and treatments with antibiotics were completely ineffective. The clinical symptoms reminded us of hemorrhagic fever caused by viruses such as Ebola 6 ; thus, we first investigated antibodies against several hemorrhagic fever viruses, which can affect other animals including humans. However, all macaques were negative for antibodies against those pathogens. The antibody tests for eight macaque pathogens indicated no clear correlations between the onset and control groups, and no cause-and-effect relationships could be demonstrated for any of the pathogens. All affected macaques were negative for the anti-SRV antibody. The same examinations were also performed on the seven macaques affected in the first outbreak, all of which tested negative for the anti-SRV antibody. Although it has been reported that anti-SRV antibodies may not be produced in a small subset of cynomolgus macaques infected with SRV 7 , we could not assume that none of the affected Japanese macaques would have produced such antibodies, and thus, we initially excluded SRV as a causative virus. Therefore, we considered that it was highly likely that this disease was caused by an unknown pathogen 2 .
We tried to preserve samples from affected monkeys as much as possible, but most of the samples were fixed with formalin, and raw or frozen samples were quite limited. Therefore, we distributed the samples that had been stored in relatively large amounts to several organizations, and different examinations were carried out at each organization. Although no single examination was conclusive on its own, together they provided evidence of a close association between SRV-4 and thrombocytopenia in the Japanese macaques. Our investigations demonstrated the following. Large numbers of viruses with retrovirus-like morphology were present in the plasma of the Japanese macaques that developed thrombocytopenia. In addition, SRV-4 genomes were confirmed in the plasma of all affected macaques, and SRV-4 was isolated from PBMCs at the Corporation for Production and Research of Laboratory Primates. Later, SRV-4 was also isolated from the plasma, feces, and bone marrow cells of a Japanese macaque that exhibited severe thrombocytopenia at the Institute for Virus Research, Kyoto University (unpublished data). In contrast, SRV-4 genomes were not detected in macaques that were raised at the second campus and had no contact at all with the affected macaques. Furthermore, no candidate pathogens other than SRV-4 could be detected in the affected macaques despite the use of the RDV method, electron microscopy and virus isolation. Since the RDV method is used to comprehensively detect viral genomes, it is highly significant that RDV could not detect any other candidates. Metagenomic analysis of RNA also detected only SRV-4 in an affected macaque (ID: TBN201) that was not used for the above-mentioned examinations. These findings strongly support SRV-4 as the causative agent of thrombocytopenia in Japanese macaques. Additional research is required to determine whether this disease is caused by SRV-4 alone or whether some co-factors are associated with the thrombocytopenia; however, it is clear that SRV-4 was the principal agent responsible for this disease. SRV is classified in the genus Betaretrovirus, and both endogenous and exogenous variants have been identified in NHPs 4 . Seven serotypes of exogenous SRV have been identified, and six of these are known to infect at least eight species of macaques 8,9 . However, no SRV serotypes have been reported in wild Japanese macaques. SRV-4 has been found only in cynomolgus macaques kept in experimental animal facilities 4,10,11 , e.g. at the Tsukuba Primate Research Center in Japan 5,7,12,13 . Various species of macaques, including cynomolgus macaques, have been kept at KUPRI after introduction from other facilities; thus, we assume that these translocated macaques brought SRV-4 to KUPRI. More than one species of macaque has sometimes been kept in the same room depending on the research purpose, and injured or diseased macaques may have been hospitalized in the same room for medical treatment, although they were kept in different cages. It is likely that SRV-4 was transmitted from a carrier cynomolgus macaque to a Japanese macaque in the past, and that only Japanese macaques developed severe thrombocytopenia as a consequence.
SRV has been identified as the etiological agent of an infectious immunosuppressive syndrome, so-called simian AIDS, in several species of macaques at primate research centers in the USA, such as the Washington National Primate Research Center and the California National Primate Research Center 8 . It was also reported that SRV-induced immunosuppression brought on neoplastic diseases in infected macaques. For instance, SRV-2 is specifically associated with a proliferative condition called retroperitoneal fibromatosis (RF) in infected macaques, including pigtailed macaques, crab-eating macaques, rhesus macaques, and Celebes macaques. Among these, pigtailed macaques are the most susceptible and their clinical symptoms are severe. It is suspected that RF is due to a gammaherpesvirus acting as a cofactor secondary to SRV infection and immune suppression. Recently, pathogenicities of SRV-4 have been reported 4,5,13 . SRV-4 is usually asymptomatic when it infects cynomolgus macaques, but it may cause immunosuppression marked by chronic diarrhea, and some macaques may exhibit mild thrombocytopenia. However, thrombocytopenia in cynomolgus macaques is not inevitable and the disease can also progress chronically. Thus, no cases of the peracute thrombocytopenia seen in the Japanese macaques have been reported. We consider that the virus in the cynomolgus macaque crossed the so-called species barrier and infected and spread to Japanese macaques, causing severe thrombocytopenia. It is well known that crossing the species barrier can increase pathogen virulence 14 . Viruses such as the avian influenza virus 15 and the severe acute respiratory syndrome (SARS) virus 16 exhibited strong pathogenicity after crossing the species barrier from their natural hosts to humans. In NHPs, herpesvirus saimiri can spread from naturally infected squirrel monkeys to other New World monkeys, causing serious lymphoma 17 . After crossing the species barrier into new hosts, the virus exhibits unexpectedly severe symptoms that are not observed in the original hosts. In most cases, the new hosts are distantly related to the original host 15,16 . In the case of SRV-4, however, although the virus spread from cynomolgus macaques to Japanese macaques, which are closely related and both belong to the fascicularis group of the genus Macaca (hybridization may also occur between the two species), extremely severe acute symptoms developed. This disease merits attention because it has changed our concept of the relationship between species barriers, genetic distance and virulence.
Despite the causative pathogen being SRV, a known virus, its identification took over ten years from the first outbreak. Nearly 30 years have passed since the discovery of the human immunodeficiency virus (HIV), which causes acquired immunodeficiency syndrome (AIDS), and the development of an animal model was urgently required at the time of its discovery 18,19 . As mentioned above, SRV causes immunosuppression in infected macaques; thus, it was used as a model of AIDS 20,21 . Indeed, a large amount of research was conducted using SRV from the mid-1980s to the 1990s, and many reports have been published 8,22 . Serotypes 1 to 5 were also discovered at an early stage 23 . However, after it was discovered that HIV was the result of a cross-species transmission of simian immunodeficiency virus (SIV) 24,25 , the interest of researchers immediately shifted from SRV to SIV, and few reports on SRV have been published recently. SRV is a relatively small RNA virus with a full length of just over 8,000 base pairs, but the full sequence of the SRV-4 genome was only reported in 2010 4 . Thus, the identification of the cause of the disease took a long time not only because the affected macaques did not produce antibodies but also because an effective molecular diagnostic method had not been developed.
Furthermore, the severe symptoms in the Japanese macaques were very different from those known to be caused by SRV in other hosts. Thus, we had held the biased view that "SRV could not cause such a severe disease." Many cynomolgus macaques are imported into Japan every year for experimental purposes; however, SRV is not included among the items in the import quarantine inspection, and few facilities can screen for it independently. In animal facilities in Japan, several species of macaques are often kept in the same room because of lack of space and other reasons. Moreover, Japanese macaques and cynomolgus macaques are also often kept in adjacent cages in zoos in Japan. Nevertheless, similar disease outbreaks have not been reported at other facilities. These facts, as well as the severe symptoms, led us to believe that an unknown pathogen was involved. The article that introduced this disease in Nature mentioned that, "Because Japanese laboratories tend to have excellent diagnostic capabilities, the illness is probably not due to any of the known agents" 2 . We still have reservations about why this thrombocytopenia occurred only at KUPRI and whether SRV-4 alone can actually cause this disease. In light of this, the metagenomic analysis and the RDV method, which detected the viral genome in an unbiased manner, proved to be powerful diagnostic techniques.
According to Koch's postulates, the final identification of a pathogen requires experimental infection of the hosts. However, experimental infection would be dangerous in this case because the symptoms of this disease are extremely severe and no treatment has been established. More than eight hundred Japanese macaques are reared at KUPRI, but KUPRI does not have a facility suitable for experimental infection with viral pathogens. Therefore, our colleagues at the Institute for Virus Research, Kyoto University, performed experimental infections at the P3A-level animal facility of their institute. SRV-4 from one of the affected Japanese macaques, as well as an infectious molecular clone derived from isolated SRV-4, was inoculated into Japanese macaques. As a result, the isolate induced severe thrombocytopenia in all four macaques within 37 days. The virus derived from the infectious molecular clone also induced the same symptoms (unpublished data).
This disease is a great threat to the Japanese macaque, a primate indigenous to Japan, and its spread to the natural population must be prevented. The pathology, mechanism of onset, and natural host of this disease must be clarified. In addition, diagnostic methods should be established and a quarantine inspection system constructed urgently. ELISA is usually a simple, easy and extremely effective diagnostic method for infectious diseases. However, we found that no specific antibody against SRV-4 was detectable in the affected macaques. Therefore, molecular diagnosis of SRV, such as PCR for proviral DNA or RT-PCR for the viral genome, is effective. The pathogenicity of SRV-4 variants or other serotypes for Japanese macaques is still unclear, but molecular diagnostic methods that can detect them will be necessary. As described above, Japan imports more than 5000 cynomolgus macaques every year from Southeast Asia, where the prevalence of SRV may be quite high. However, SRV is not included among the items in the import quarantine inspection. Unrecognized co-infection with SRV in a biomedical model may severely compromise the integrity of toxicology studies. SRV infection is a threat not only to the Japanese macaque but also to biomedical research. Importers and researchers using cynomolgus macaques should always test for SRV. In the future, some form of legal regulation may be necessary.
We had not implemented any special rearing management before we suspected that the cause of the illness was an infectious disease. Even so, the humans who handled the macaques, including researchers, veterinarians and caretakers, showed no symptoms of thrombocytopenia. There is no report of SRV-associated thrombocytopenia in humans, even from Southeast Asia, where wild macaques are naturally infected with SRV and humans may come into close contact with those macaques regularly. Therefore, SRV-4 is unlikely to cause thrombocytopenia in humans. In this study, however, we isolated SRV-4 using the Raji cell line, a continuous human cell line of hematopoietic origin. It therefore seems premature to conclude that humans are not susceptible to SRV-4, as cautioned by Cyranoski 2 .
Japanese macaques. We examined affected Japanese macaques that died or were euthanized. As non-infected controls, we also examined Japanese macaques raised at the second campus of KUPRI, among which there were no affected macaques. April 28, 2006). This study was carried out in accordance with the approved guidelines.
No specific animal research protocol was drafted for this study, as only clinical samples were analyzed for diagnostic purposes. The protocol to collect samples from control macaques was reviewed and approved by the Monkey Committee at KUPRI, and then authorized by the Kyoto University Animal Experimentation Committee (2011-115). Extensive veterinary care was provided to all affected macaques to minimize pain and distress. Macaques with extreme thrombocytopenia were humanely euthanized by veterinarians using an overdose of pentobarbital.
Clinical investigation and necropsy. Clinical investigations were performed on the affected macaques and on some macaques kept in the same room as the affected animals. Symptomatic treatments were initially provided to the affected macaques; however, these treatments were not effective. After November 2009, the affected macaques were euthanized due to poor prognosis and the risk of spreading infection. Necropsies were performed on all dead and euthanized macaques, and samples were taken for further investigation.
Antibody tests against viral pathogens. To determine the virus that was responsible for the disease, antibodies against Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses, all of which induce hemorrhagic syndromes in humans, were examined at the National Institute of Infectious Diseases, Japan. Neutralizing antibodies against CDV were also tested, because CDV is known to cause a lethal disease in macaques 26 .
Antibody tests against eight primate viral agents [SEBV, CMV, simian varicella virus (SVV), B virus (BV), SIV, simian T-lymphotropic virus (STLV), SFV, and SRV] were conducted at the Corporation for Production and Research of Laboratory Primates. The corporation is an inspection agency that has established its own testing procedures for monkey viruses. Detailed procedures for the antibody tests used at the corporation have not been published. However, the corporation is the only inspection agency in Japan that can perform these antibody tests; most primate research institutes in Japan use this agency, and it is widely trusted.
Electron microscopy (EM). Viral particles in the plasma of three Japanese macaques that developed thrombocytopenia were examined by transmission EM. The plasma was diluted three times with phosphate-buffered saline (PBS) and centrifuged at 3,000 rpm for 20 min. The supernatants were clarified further by centrifugation at 8,000 rpm for 30 min. The virus particles, if any, were pelleted using 20% (w/v) sucrose in PBS at 31,000 rpm for 2 h, resuspended in PBS, and subjected to EM analysis. The samples were fixed with 4% glutaraldehyde, negatively stained with 2% phosphotungstic acid, and observed under a JEM-1400 transmission electron microscope (JEOL Ltd., Tokyo, Japan).
Isolation of viral RNA and proviral DNA. Viral RNA was isolated from the plasma or serum of macaques at onset using a QIAamp Viral RNA Mini Kit (Qiagen, Tokyo, Japan) or a High Pure Viral RNA Kit (Roche Diagnostics, Tokyo, Japan). Proviral DNA was also isolated from blood using a DNeasy Blood & Tissue Kit (Qiagen).
RT-PCR and PCR analyses of viral pathogens. Viral pathogens in the plasma of three Japanese macaques that contracted severe thrombocytopenia (ID: HGN174, HGN181 and IZ1470) were examined. Twenty DNA viruses and 13 RNA viruses listed in Table 3 were tested by RT-PCR or PCR based on the previous studies 3,12, or using the in-house testing procedures at National Institute of Infectious Diseases, Japan.
Rapid determination of viral sequences (RDV) method. RDV was performed as described previously 54,55 . In this study, RNA and DNA were extracted independently from partially purified viral fractions, which were prepared as described in the ''Electron microscopy'' section. Hae III and Alu I restriction enzymes were used to synthesize the second cDNA library.
Metagenomic analysis. Metagenomic analysis was performed using a high-throughput sequencer. Total RNA was extracted from specimens with TRI-LS (Sigma-Aldrich, Tokyo, Japan) and reverse-transcribed with a TransPlex whole transcriptome amplification (WTA1) kit (Sigma-Aldrich) 56 using a quasi-random primer according to the manufacturer's protocol with modifications (i.e., 70 cycles of PCR). PCR amplification to prepare the template DNA for pyrosequencing was performed using AmpliTaq Gold DNA Polymerase LD (Applied Biosystems, Tokyo, Japan) 56 . Because almost all of the amplified cDNAs were within the 200-1,000-bp range, the PCR products were used directly as templates for emulsion PCR in GS Junior pyrosequencing (454 Life Sciences). The obtained data were then subjected to a data analysis pipeline for BLAST searching. Data analysis was performed with computational tools using each read sequence, as described previously 56 .
Detection of the SRV-4 viral genome by RT-PCR or PCR. Viral RNAs from Japanese macaques were examined by RT-PCR using oligonucleotide primers specifically designed for SRV-4 4 . Partial regions of the gag gene were amplified using the following two primer sets: aSRV-F1167 and aSRV-R1710, and/or aSRV-F429 and aSRV-R855 4 . RT-PCR was performed with the OneStep RT-PCR Kit (Qiagen) as reported previously, with some modifications 4 . RT-PCR cycling was performed using an iCycler (Bio-Rad, Tokyo, Japan) as follows: 50 °C for 30 min; 95 °C for 15 min; 40 cycles of 95 °C for 30 s, 45 °C for 32 s, and 72 °C for 75 s; and 1 cycle at 72 °C for 2 min 4 .
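For clarity, the cycling program quoted above can also be written out as structured data; the Python snippet below is only a restatement of the published parameters (a hypothetical representation, not vendor software).

# RT-PCR program for the iCycler, as described above.
RT_PCR_PROGRAM = [
    {"step": "reverse transcription", "temp_C": 50, "duration": "30 min"},
    {"step": "initial denaturation",  "temp_C": 95, "duration": "15 min"},
    {"step": "denaturation",          "temp_C": 95, "duration": "30 s", "cycles": 40},
    {"step": "annealing",             "temp_C": 45, "duration": "32 s", "cycles": 40},
    {"step": "extension",             "temp_C": 72, "duration": "75 s", "cycles": 40},
    {"step": "final extension",       "temp_C": 72, "duration": "2 min"},
]
for step in RT_PCR_PROGRAM:
    print(step)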
In addition, the gag region of proviral DNA was amplified by nested PCR using primers (Tga1, Tga2, Tga3, and Tga4) that specifically detected the gag region of the SRV/D-T (SRV-4) 12 .
Phylogenetic analysis. Partial regions of the gag gene were amplified by RT-PCR using the primer set aSRV-F1167 and aSRV-R1710, as described above. The RT-PCR products were purified using MinElute PCR Purification Kits (Qiagen) or cleaned enzymatically with Calf Intestinal Alkaline Phosphatase (TOYOBO, Osaka, Japan) and Exonuclease I (TaKaRa, Otsu, Japan). Direct sequencing was performed with a Dye Terminator Cycle Sequencing Kit and an ABI 3130xl Genetic Analyzer (Applied Biosystems). DNA sequences of the samples in this study were combined with previous data (database accession numbers: SIVRV1CG for SRV-1; M16605, AF126468, and AF126467 for SRV-2; M12349 and AF033815 for SRV-3; FJ971077, FJ979638, FJ979639, GQ454446, and AB181392 for SRV-4; and AB611707 for SRV-5) and aligned using the CLUSTALW computer program 59 . Phylogenetic trees were constructed using the neighbor-joining (NJ) method with MEGA 6.0 60 . Evolutionary distances were computed using the maximum composite likelihood method. The phylogenetic tree was evaluated using a bootstrap test based on 1,000 resamplings. The sequence of simian endogenous retrovirus (SERV; STU85505) was used as an outgroup to indicate the location of the root of the ingroup.
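The tree construction itself was done with CLUSTALW and MEGA 6.0 as stated above; purely as an illustration of the neighbor-joining step, a minimal Python/Biopython sketch is given below. The alignment file name and sequence label are hypothetical, and the bootstrap evaluation performed in MEGA is not reproduced here.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# A pre-computed multiple alignment of partial gag sequences (hypothetical file).
alignment = AlignIO.read("srv_gag_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")      # simple p-distance model
dm = calculator.get_distance(alignment)          # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)          # neighbor-joining tree

# Root on the outgroup (label assumed to match the SERV sequence in the alignment).
tree.root_with_outgroup({"name": "SERV_STU85505"})
Phylo.draw_ascii(tree)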
"Biology"
] |
Incipient ferroelectricity of water molecules confined to nano-channels of beryl
Water is characterized by large molecular electric dipole moments and strong interactions between molecules; however, hydrogen bonds screen the dipole–dipole coupling and suppress the ferroelectric order. The situation changes drastically when water is confined: in this case, ordering of the molecular dipoles has been predicted but never unambiguously detected experimentally. In the present study, we place separate H2O molecules in the structural channels of a beryl single crystal so that they are far enough apart to prevent hydrogen bonding, but close enough to retain the dipole–dipole interaction, resulting in incipient ferroelectricity in the water molecular subsystem. We observe a ferroelectric soft mode that causes Curie–Weiss behaviour of the static permittivity, which saturates below 10 K due to quantum fluctuations. The ferroelectricity of water molecules may play a key role in the functioning of biological systems and find applications in fuel and memory cells, light emitters and other nanoscale electronic devices.
The orientation profile of water molecules confined to nano-channels should be provided.
Reviewer #3 (Remarks to the Author): The authors in this work report incipient ferroelectricity of water molecules confined to nano-channels of beryl crystals, in which the intermolecular hydrogen coupling is strongly weakened or completely absent. Based on their theoretical and experimental investigation of the properties of the water, the authors reveal that it is the intermolecular electric dipole–dipole interactions, rather than the hydrogen bonds as in bulk water or solid ice, that make the key contribution to the incipient ferroelectricity of the confined water. Hence, the work provides new insight into the ferroelectricity of water. Since the ferroelectricity of confined water plays a significant role in various phenomena and areas of the natural sciences, I'd like to recommend publication of the work in Nature Communications after major revision.
1) The authors claim in the work that "there is no firm experimental evidence so far for ordering within the subsystem of water molecules". In fact, there are some examples that experimentally demonstrate the existence of ferroelectric and antiferroelectric water in confined systems. Because both are related to the ordering of water molecules, the claim should be revised and the related references should be cited.
2) How about the crystal size? I'd like to suggest the authors provide a picture of the crystal.
3) From a statistical point of view, there is no doubt that each cage in the compound contains 0.3 water molecules. The actual situation may be that no water exists in part of the cages, since the water molecules can form doublets, triplets, etc., disposed in adjacent cages separated by the bottlenecks inside the channels, as mentioned by the authors. Considering that the diameter of the cage is about 5.1 Å, is it possible to encapsulate one water molecule in each cage? If possible, I'd like to suggest the authors prepare such a compound and investigate the properties of the confined water.
I would like to thank the authors for their response to my comments, and I accept some of them. Nevertheless, I still have a question regarding the observed soft modes at about 20 and 40 cm-1 (2.5-5 meV) at 5 K, which increase to ~60 cm-1 (7.5 meV) at 300 K. In addition, at low temperatures the authors observed sharp peaks in the range 1.2-1.5 THz (4.8-6 meV), which were explained as twist-like modes of the H2O molecules. Recently, a paper on an inelastic neutron scattering (INS) study of water in beryl has been published (Kolesnikov et al., Phys. Rev. Lett. 116 (2016) 167802), which presented the observation of multiple tunneling peaks in the INS spectra in the energy range 2-118 cm-1 (0.27-14.7 meV) at low temperatures; these peaks strongly decrease in intensity as the temperature increases, but without a noticeable change in their positions. Unfortunately, the peaks observed in that paper are not visible in the manuscript under review, and the peaks in the manuscript at 20-40 cm-1 are completely missing from the INS spectra. Another related comment concerns the estimated depth (A=1.41 meV) of the rotational potential well for the water molecule. In the above-mentioned INS study of water in beryl, the observed maximum splitting of the ground state is 14.7 meV, which means that the depth of the potential well should be larger than that value, so the value A=1.41 meV in the manuscript is at least one order of magnitude smaller than that obtained from the INS study. It is well known that INS spectra are very sensitive to the vibrational modes of hydrogen, due to the anomalously large neutron scattering cross-section of hydrogen, and that INS spectra are directly related to the density of vibrational states. Based on this, it should be clarified or discussed in the manuscript why the observed soft modes (as well as the twist-like modes) are completely missing from the INS spectra of water in beryl, and why there is such a large discrepancy in the value of the depth of the potential well. Could it be that the peaks observed in the manuscript are not related to the vibrational modes of water in beryl? Maybe water influences the beryl cage vibrations that were observed in the manuscript? Therefore, I need answers to these questions before I can make a recommendation for publication.
Reviewer #2 (Remarks to the Author): The revisions are satisfactory. I'd like to recommend publication in Nature Commun.
Reviewer #3 (Remarks to the Author): I am satisfied with the revised manuscript and recommend publication of the work. I thank the authors for the answers and the manuscript modification, but I still disagree with some of their statements.
Regarding the observed resonances at 1.2-1.5 THz, which "are connected with the twist-like modes of the H2O molecules that involve librations of the oxygen ions [32]": the twist-like mode of a water molecule is a libration of the molecule around its dipole moment (see e.g. https://en.wikipedia.org/wiki/Molecular_vibration ); therefore, this mode should be inactive in optical spectroscopy, since it does not affect the dipole moment of the molecule. Also, the twist mode does not involve vibration of the oxygen at all. These resonances could hypothetically be due to a wagging mode (librations of the water-I molecule around the axis of the channels / c-axis of beryl). In this case, the 1.5 THz (6 meV) excitation would require the potential depth (A) to be larger than this value, which disagrees with the estimate made in the manuscript, A=1.41 meV. In addition, the eigenvectors of water librational modes are much larger for hydrogen than for oxygen; therefore, the INS spectra of water librational modes originate mostly from neutron scattering on hydrogen, and these modes are the strongest in the water INS spectra.
The above remarks should be responded before I can recommend the manuscript for the publication in Nature Communications.
Our reply. It is very important to distinguish between ferroelectricity and incipient ferroelectricity. Of course, the beryl crystal is centrosymmetric, so it cannot be ferroelectric; but incipient ferroelectricity is fully compatible with centrosymmetry: see the classical incipient ferroelectrics such as strontium titanate (SrTiO3), which are simple cubic perovskites. In contrast to a true ferroelectric state, where the microscopic polarization is linked to a structural distortion with translational symmetry, the incipient ferroelectricity in beryl corresponds to transient alignments of the water molecules due to their mutual interactions. Consequently, the earlier symmetry analyses of beryl, performed by X-ray diffraction, inherently neglect the tiny time-dependent structural deformations due to the fast-moving water molecules. Note that the average long-range symmetry of water-containing and water-free beryl is certainly the same within the accuracy of XRD. It cannot differ, since the water molecules are dynamically disordered and therefore their effect on the symmetry must be nil in the time average.
We have added to the manuscript (p. 6/7) a few sentences clarifying this issue to readers who are not well familiar with the incipient ferroelectricity.
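For readers unfamiliar with incipient ferroelectrics, the static permittivity of such systems (including the Curie–Weiss behaviour and the low-temperature quantum saturation reported in the abstract) is conventionally described by the Curie–Weiss law at higher temperatures and by a Barrett-type expression at low temperatures. The formulas below are the standard textbook forms, quoted only as an illustration; C, T_C and T_1 denote the Curie constant, the extrapolated Curie temperature and the quantum-crossover temperature, and are not the manuscript's fitted values.

\varepsilon(T) \approx \varepsilon_\infty + \frac{C}{T - T_C},
\qquad
\varepsilon(T) \approx \varepsilon_\infty + \frac{C}{\frac{T_1}{2}\coth\!\left(\frac{T_1}{2T}\right) - T_C}.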
Reviewer 1:
A recent theoretical paper [8] on the possibility of ferroelectric ice XI stated that "the depolarization field ... dominates the energetics, making the existence of ferroelectric ice unlikely." The authors estimated that the depolarization field in ice XI should be "E = 7.24 × 10^9 V/m, which is an electric field that cannot be supported by any material: even a small crystal ... will develop enormous depolarization fields." Hence, this quantitative estimate shows that a ferroelectric state of water in beryl is practically impossible.
Our reply. The suppression of the ferroelectric order by depolarization fields is crucial for conventional ferroelectric thin films or small particles (submicrometer sizes), because the origin of the depolarization fields is closely connected to the geometry of the object. This field comes from unscreened charges that are generated at the surfaces or interfaces. The effect is nicely treated by P. Parkkinen et al. [8] with regard to water ice XI. The corresponding structure considered in Ref. [8] is essentially two-dimensional, and it is shown that the ferroelectric alignment of the water dipoles in the ice slab will be suppressed by a depolarization field amounting to 7.24 × 10^9 V/m, as correctly stated by the Reviewer. However, this large field is generated by a potential drop of 4.64 V across a very thin (6.54 Å) slab of ice XI. Obviously, the geometrical arrangement of partly disordered (single molecules, dimers, trimers, etc.) water molecules in the channels of beryl has nothing in common with the two-dimensional molecular arrangement of water in ice XI and, correspondingly, the above estimate of the depolarization field cannot be applied to the case of water-containing beryl.
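As a quick numerical cross-check of the figures quoted above (a minimal sketch added here for the reader; the input values are those cited from Ref. [8]):

```python
# Sanity check of the depolarization-field estimate quoted from Ref. [8]:
# for a uniform field across a thin slab, E = (potential drop) / (thickness).
potential_drop_V = 4.64        # volts, value quoted in the reply
thickness_m = 6.54e-10         # 6.54 Angstrom slab of ice XI, in metres

E_field = potential_drop_V / thickness_m
print(f"E = {E_field:.2e} V/m")   # ~7.1e9 V/m, of the order of the quoted 7.24e9 V/m
```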
Furthermore, as we write two paragraphs above, we are considering incipient ferroelectricity, whereas Ref. [8] denies the existence of true ferroelectricity with a quasi-static polarization. In contrast, in the case of incipient ferroelectric dynamics, the dipolar field oscillates with the soft mode (proton hopping) frequency (a few THz) and cannot be fully compensated by the beryl lattice, whose inertia is much higher. The dipolar field might be partly compensated along the channels by some defect charges, if they were available, but the dynamics of the defects are much slower, so this cannot completely suppress the transient alignments of the water molecules. It could somewhat stiffen the measured soft-mode frequency and decrease the extrapolated negative Curie temperature. This was not considered in our model calculations (it can hardly be included), but it can help to understand why the soft-mode energy is somewhat higher than the hopping barrier in Fig. 6 of our calculations, which also partly answers the 2nd additional question of the Reviewer, see below.
Reviewer 1: Other comments: 1) The orientation of water I in beryl shown in Fig. 1 is incorrect; the water H-H direction should be almost parallel to the crystal c-axis, while it is shown as lying in the ab-plane.
Our reply. The Reviewer is right, this was an error in preparation of the figure. We have corrected the drawing in the revised version. We thank the Reviewer for pointing it out.
Our reply. We are dealing with a system of coupled dipoles rotating above a six-well potential of depth A, and this potential acts as a kind of friction for the dipolar rotation. Clearly, the frequency (energy) of the resultant collective soft mode should depend both on the depth of the wells and on the coupling strength. It is not straightforward to quantitatively compare the energy of the soft mode with the well depth and the coupling energy. The task was analyzed by Nakajima and Naya [38] for the case of coupled dipoles rotating above a two-well potential. However, their analysis does not allow for a simple quantitative comparison of the above values. We have extended the theory of Nakajima and Naya to the case of a six-well potential relief, and after going through all steps of the analytical and numerical analyses, we obtained the value A = 1.41 meV that best describes the experimental spectra. In addition, this value is independently confirmed by our ab initio DFT analysis of the system "water molecule in beryl", where we obtained A = 0.83 meV, which is quite close to 1.41 meV taking into account the known approximations of the Nakajima/Naya theory when extended to six minima and applied to water in beryl. For these reasons, we are convinced that this objection of the Reviewer does not reflect any incoherence in our paper.
Our reply. We thank the Reviewer for pointing out the poor wording, which can lead to confusion and misunderstanding. We have changed the figure caption, which now reads in total: "Fig. 6. Squared-cosine potential U(φ) = −A cos²(3φ) (A > 0) used to model the dynamics of interacting water-I molecules that are located within the cages of the hexagonal beryl crystal lattice and can rotate around the crystallographic c-axis. The molecule rotates freely at temperatures well exceeding the potential energy barriers (k_B T >> A), but can only librate within one minimum or tunnel between the minima at low temperatures (k_B T << A)."
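For readers who wish to visualise the potential quoted in the caption, the following minimal sketch (our own illustration, reusing the fitted value A = 1.41 meV discussed above) evaluates U(φ) = −A cos²(3φ) over one full rotation and confirms its six equivalent minima separated by barriers of height A:

```python
import numpy as np

A_meV = 1.41                                   # well depth obtained from the fits (see reply above)
phi = np.linspace(0.0, 2.0 * np.pi, 6000, endpoint=False)
U = -A_meV * np.cos(3.0 * phi) ** 2            # six-fold potential for rotation about the c-axis

barrier = U.max() - U.min()                    # equals A: maxima at U = 0, minima at U = -A
minima = phi[np.isclose(U, U.min(), atol=1e-9)]
print("barrier height (meV):", round(barrier, 3))
print("number of minima over one full turn:", len(minima))   # 6, located at phi = k*pi/3
```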
Replies to remarks of the Second Reviewer
Reviewer 2: The manuscript examines the ferroelectricity of water molecules confined to nano-channels of beryl through experiments. To avoid the formation of hydrogen bonds between water molecules, the authors experimentally placed water molecules in the channels of a beryl crystal, and incipient ferroelectricity is observed. The authors attribute the ferroelectricity of the confined water molecules to the disruption of the hydrogen bond network and to the intermolecular dipole-dipole interactions. This work is of great interest and importance.
Our reply. We thank the Reviewer for his high appreciation of our results and positive evaluation of our manuscript.
Reviewer 2: I support publication of this work in Nature Communications after the authors address the following comments: Experimental evidence of placing water molecules in the structural channels of a beryl crystal must be provided in the supporting information.
Our reply. As is written in the Methods section of the manuscript, this evidence is provided by the chemical analysis of the studied crystal performed before the spectroscopic experiments. We have added to the section that the presence of water molecules in the channels is evidenced by the known experimental material on beryl crystals, where the relatively large water molecule can find enough space only within the channels' cages. It is also clearly demonstrated in our experiments by the observation of the intramolecular ν1, ν2 and ν3 modes and of their ν1 + ν2 overtone; the values of the frequencies are only slightly shifted relative to those of a free H2O molecule, evidencing its weak coupling to the crystal lattice, which can be realized only when the molecule is within the cage.
Reviewer 2:
Discussion on how to put water molecules into the nano-channels of beryl since the bottleneck is very narrow (~2.8 Å).
Our reply. This was specified on page 3 of the manuscript as: "Crystals grown in an aqueous environment contain water trapped in the framework of the crystal lattice in such a way that single H2O molecules reside within the cages". To make it clearer, we have completed the sentence with "the molecules are captured into the cages during the growth process, i.e., during the formation of the cages".
Our reply. We thank the Referee for pointing out that we missed important recent literature. We have added the citation to this work (Ref. [20]).
Reviewer 2:
The orientation profile of water molecules confined to nano-channels should be provided.
Our reply. In our manuscript we cite a large body of literature where ample experimental results are given about the well-known orientation of the water molecules in beryl. The orientational profiles of the water molecules in our samples are evidenced also by our polarization dependent spectroscopic data. We now state that in the text as: "The two orientational profiles of the molecules of both types in our crystals are unambiguously verified by our observations of H 2 O intramolecular vibrational modes ν 1 , ν 2 and ν 3 that couple to the probing radiation strictly differently for the two orientations of the molecule, see Figure 3 in [28]".
Reviewer 3:
The authors in this work report incipient ferroelectricity of water molecules confined to nanochannels of beryl crystals, in which the intermolecular hydrogen coupling is strongly weakened or completely absent. Based on the theoretical and experimental investigation of the properties of this water, the authors reveal that it is the intermolecular electric dipole-dipole interactions, rather than the hydrogen bonds as in bulk water or solid ice, that make the key contribution to the incipient ferroelectricity of the confined water. Hence, the work provides new insight into the ferroelectricity of water. Owing to the ferroelectricity of confined water playing a significant role in various phenomena and areas of the natural sciences, I'd like to recommend publication of the work in Nature Communications after major revision. 1) The authors claim in the work that "there is no firm experimental evidence so far for ordering within the subsystem of water molecules". In fact, there are some examples that experimentally demonstrate the existence of ferroelectric and antiferroelectric water in confined systems. Because both are related to the ordering of water molecules, the claim should be revised and the related references should be cited.
Our reply. First of all, we also want to acknowledge the positive assessment of our work by the Reviewer. We agree with the first comment of the Reviewer and have described the actual situation in some more detail in our revised manuscript. We have added the following: "As to the experimental realization of the dipolar ordering, the situation is not so unambiguous and clear. On the one hand, there seem to be firm indications towards ordered (ferroelectric or antiferroelectric) arrangements of the water molecules (ice nanotubes) within one-dimensional channels of carbon nanotubes or molecular organic structures [20, 21] or on two-dimensional surfaces (ice slabs) [9, 22, 23]. On the other hand, either the fraction of the polarized dipoles can be very low, of the order of 1% or even smaller [22], limited to only a few surface layers, or the reliability of the obtained results is debated [8, 19]".
Reviewer 3:
2) How about the crystal size? I'd like to suggest the authors provide the picture of the crystal.
Our reply. We thank the Reviewer for these useful suggestions. Accordingly, we added some words on the crystal size to the Methods section; Fig. 1 was amended with a photograph of a beryl single crystal.
Reviewer 3:
3) From a statistical point of view, there is no doubt that each cage in the compound contains, on average, 0.3 water molecules. The actual situation may be that no water exists in part of the cages, since the water molecules can form doublets, triplets, etc., disposed in adjacent cages separated by the bottlenecks inside the channels, as mentioned by the authors. Considering that the diameter of the cage is about 5.1 Å, is it possible to encapsulate one water molecule into the cage? If possible, I'd like to suggest that the authors prepare the compound and investigate the properties of the confined water.
Our reply. We appreciate the insight of the Reviewer and his/her comments and suggestions, as they exactly describe what we have done in our experiments. We have grown a crystal of beryl that contained single water molecules in some cages with the filling factor of about 30%. These molecules are isolated, or grouped in dimers, trimers, etc., according to our statistical analysis (Methods section). Some cages do not contain water molecules at all; note however, that the cavity space is not sufficient to host two molecules at the same time. This is the system we have actually studied.
We hope that the above explanation did answer the pertinent questions of the Reviewers. We are grateful for their comments and suggestions which we have incorporated into the corrected manuscript. We are confident that the present manuscript should be published in Nature Communications.
On behalf of the authors with the best regards,
Martin Dressel
Reviewer 1: I would like to thank the authors for their response to my comments, and I accept some of them. Nevertheless, I still have a question regarding the observed soft modes at about 20 and 40 cm-1 (2.5-5 meV) at 5 K, which increase their values to ~60 cm-1 (7.5 meV) at 300 K. In addition, at low temperatures the authors observed sharp peaks in the range 1.2-1.5 THz (4.8-6 meV) which were explained as twist-like modes of the H2O molecules. Recently, a paper on an inelastic neutron scattering (INS) study of water in beryl was published (Kolesnikov et al., Phys. Rev. Lett. 116 (2016) 167802), which presented the observation of multiple tunneling peaks in the INS spectra in the energy range 2-118 cm-1 (0.27-14.7 meV) at low temperatures; these peaks strongly decrease in intensity as the temperature increases, but without noticeable change in their positions. Unfortunately, the peaks observed in that paper are not visible in the manuscript under review, and the peaks in the manuscript at 20-40 cm-1 are completely missing in the INS spectra. Another related comment concerns the estimated depth (A = 1.41 meV) of the water molecule rotational potential well. In the above-mentioned paper on the INS study of water in beryl the observed maximum splitting of the ground state is 14.7 meV, which means that the depth of the potential well should be larger than that value, so the value A = 1.41 meV in the manuscript is at least one order of magnitude smaller than that obtained from the INS study. It is well known that INS spectra are very sensitive to the vibrational modes of hydrogen due to the anomalously large neutron scattering cross-section of hydrogen, and INS spectra are directly related to the density of vibrational states. Based on this, it should be clarified or discussed in the manuscript why the observed soft modes (as well as the twist-like modes) are completely missing in the INS spectra of water in beryl, and why there is a big discrepancy in the value of the depth of the potential well. Could it be that the peaks observed in the manuscript are not related to the vibrational modes of water in beryl? Maybe water influences the beryl cage vibrations, which were observed in the manuscript? Therefore, I need to have answers to these questions before I can make the recommendation for publication.
Our replies. 1. The relatively intense mode around 90 cm-1 (11 meV) is clearly detected in both our THz and INS spectra, and its origin is firmly identified by us and by Kolesnikov et al. c) The intensities of most INS modes absent from our THz-IR spectra are considerably lower than that of the translational mode at around 11 meV clearly observed in both THz-IR and INS data. Some of these weak modes with higher frequencies (12.7 and 14.7 meV) could be "hidden" within the set of resonances in our spectra above 100 cm-1 (above 12 meV) [Gorshunov et al., J. Phys. Chem. Lett. 4, 2015 (2013)].
3. As stated in our manuscript, the modes we observe in the range 1.2-1.5 THz are related to the librations of the heavy oxygen. Since INS spectroscopy is "sensitive primarily to the motion of individual hydrogen atoms", as is rightly written in the paper of Kolesnikov et al. and as is well known, it is not surprising that these terahertz modes are not observed in the INS spectra.
4. The dynamics of the terahertz soft excitation we observe for the perpendicular geometry (the peak that softens from 60 cm-1 at 300 K down to 20 cm-1 at 2 K, Fig. 5a in the manuscript) correspond quite well to the results of the INS measurements, namely to the broad wing below 5 meV (Fig. 1a in the paper of Kolesnikov et al.). In the physics of ferroelectrics such a wing centered around zero frequency in the inelastic scattering spectra is called a central peak, and the temperature evolution of its half-width closely traces the behavior of the overdamped soft excitations that typically accompany structural phase transitions. The response seen in Fig. 1a below 5 meV resembles quite closely the response of our soft mode. At 40 K, the THz soft mode is located near 20 cm-1 (2.5 meV) and its width (damping) is about the same (Fig. 3b of our manuscript). At T = 2 K, the mode shifts down to ~12.5 cm-1 (1.55 meV), its damping staying at about 20 cm-1 (2.5 meV), so the mode becomes overdamped. A basically identical response is seen in the INS spectra in Fig. 1a: at T = 45 K there is a bump at >2 meV with a width of 2-3 meV that gets narrower and more intense on cooling, similar to what we observe in the THz spectra. Unfortunately, Kolesnikov et al. did not analyze this interesting low-energy behavior in their paper.
5. The final issue is the discrepancy in the potential well depth. Our value of (1.41 ± 0.05) meV was obtained based on the model of Nakajima and Naya, which we have modified to the case of a six-well potential and applied to our experimental spectra. A similar value of 0.83 meV was obtained independently by our ab initio DFT analysis carried out with the VASP package with the HSE03 hybrid functional. Both these values are also in good agreement with the characteristic energy of the process that inhibits the ferroelectric phase transition, as determined from the fit using Barrett's formula (Figs. 2, 7 and Eqs. 10, 11 of the manuscript), yielding k_B T_1 = 1.7 meV (where k_B is the Boltzmann constant). This indicates that the dipole moments cannot align themselves macroscopically within the channels since they fall into different potential minima separated by barriers of about 1.7 meV. According to the general statistical principles applied to a system with a multi-minimum potential [Y. Onodera, Progr. Theor. Phys.
44, 1477 (1970)], the Barrett-like leveling off of the soft-mode frequency during order-disorder or displacive phase transitions happens when the ratio of the Curie temperature to the potential well depth gets close to unity (Fig. 11 in [Y. Onodera, Progr. Theor. Phys. 44, 1477 (1970)]). This again points to the value of 1.7 meV, based on the Curie temperature |T_C| = 20 K which we obtained experimentally from the temperature variation of the dielectric permittivity. Another confirmation of the validity of our DFT estimate of the potential well depth is provided by the closeness of the obtained value of 1.33 meV for the dipole-dipole coupling energy to the energy k_B T_C = 1.7 meV characterizing the strength of the dipole coupling. Finally, our DFT analysis has provided the frequencies and relative intensities of the infrared translational and librational modes of the trapped water molecules and of the isotopic shifts (when H2O is replaced by D2O); these values agree well quantitatively with our experimental data, as written in the manuscript. In contrast, the energy barriers of 48 meV and 56 meV stated by Kolesnikov et al. seem to be rather large and might imply quite a strong coupling of the H2O molecules to the cages' walls. Such a coupling could, in turn, lead to a blue shift of the intramolecular modes ν1, ν2 and ν3 relative to their positions in the free molecule. It is well known, however, that the intramolecular vibration frequencies of H2O within beryl are very close to those of free molecules; see, for example, [L. M. Anovitz, E. Mamontov, P. ben Ishai, and A. I. Kolesnikov, Phys. Rev. E 88, 052306
| 6,392.4 | 2016-09-30T00:00:00.000 | [
"Physics",
"Chemistry"
] |
2D Material Liquid Crystals for Optoelectronics and Photonics
The merging of the materials science paradigms of liquid crystals and 2D materials promises superb new opportunities for the advancement of the fields of optoelectronics and photonics. In this paper, we summarise the development of 2D material liquid crystals by two different methods: dispersion of 2D materials in a liquid crystalline host and the liquid crystal phase arising from dispersions of 2D material flakes in organic solvents. The properties of liquid crystal phases that make them attractive for optoelectronics and photonics applications are discussed. The processing of 2D materials to allow for the development of 2D material liquid crystals is also considered. An emphasis is placed on the applications of such materials; from the development of films, fibers and membranes to display applications, optoelectronic devices and quality control of synthetic processes.
Introduction
Two-dimensional (2D) nanocomposite materials with dynamically tunable liquid crystalline properties have recently emerged as a highly-promising class of novel functional materials, opening new routes within a wide variety of potential applications from the deposition of highly uniform layers and heterostructures, to novel display technologies. Here, we will introduce the underlying concepts that underpin this recent technological advance; provide an overview of the synthetic routes towards such 2D nanocomposite materials; and review recent advances in the application and applicability of these materials within the fields of optoelectronics and photonics.
Since the advent of graphene in 2004, 1 there has been an explosion in the investigation of a wide range of atomically thin (two-dimensional) materials. In addition to graphene (exfoliated from graphite), materials that can be reduced to monolayer size have been shown to include: graphene oxide (from graphite oxide); transition metal dichalcogenides (TMDCs) -for example
MoS2, WSe2 and MoTe2; and hexagonal boron nitride (h-BN) amongst countless others. The possibilities for applications of these materials are almost limitless, owing to the diverse properties that they exhibit. However, adoption of these materials in novel optoelectronics and photonics applications is often limited by challenges surrounding scalability, the cost of production processes or limited device tunability. Recently, two paradigms of significant interest for the development of novel functional materials, where dynamic reconfigurability is delivered through the exploitation of liquid crystalline properties and 2D materials, have emerged. Firstly, 2D material particles can be dispersed in a conventional liquid crystal host. 2,3 Alternatively, 2D materials dispersed in specific solvents have been shown to display lyotropic liquid crystalline phases within certain ranges of 2D material concentration. 4,5
Liquid crystals
The liquid crystal phase is a phase of matter that exists for a variety of molecules and materials, depending on their geometric and chemical properties, with characteristics intermediate to those of a conventional crystalline solid and a liquid. 6,7 Liquid crystals (LCs) have found use in a variety of applications through the years (Fig. 1). The liquid crystal phase was initially described by the Austrian botanist Friedrich Reinitzer in 1888 when looking at the properties of cholesterol derivatives, 8,9 although some credit also goes to Julius Planer, who reported similar observations 27 years prior. 10,11 This new and distinct state of matter was then identified as the "liquid crystal phase" by Otto Lehmann in 1890, and in 1904 the first commercially available LCs were produced by Merck-AG. 12 Over the following 18 years, scientists established the existence of three distinct liquid crystalline phases (nematic, smectic and cholesteric) 12 but, with no applications of note forthcoming, the study of LCs was halted. For the next 30 years, the scientific community ignored LC materials, considering them an interesting curiosity. However, following a renaissance in liquid crystal science in the 1950s, what was previously a scientific curiosity has become a ubiquitous part of the modern technology landscape.
During the 1950s, the invention of the first cholesteric LC temperature indicators, as well as advances in analytical metrology, cancer diagnostics and non-destructive material testing methods drove a new era in liquid crystal science. By 1962, liquid crystals were already finding applications in state-of-the-art laser devices, despite the relative youth of laser science. However, the most important technological innovation came in 1965 with the development of the first LC displays (LCDs). [13][14][15] Subsequently, twisted nematic LCDs (1969-1971) advanced the field further. [13][14][15] Significant breakthroughs in the evolution of liquid crystal technologies occurred in the 1980-1990s and continue to have a profound impact on day-to-day life: the miniaturisation of display technologies facilitated the development of portable PCs, mobile telephones and countless other innovations. [13][14][15] Since the start of the new millennium, LCs and recently discovered 2D material LCs have come into demand as optoelectronic and photonic materials. 3,[16][17][18][19] The possibility of the existence of a liquid crystal phase stems principally from the geometric structure of the molecules in the material as well as the functional groups present in the molecule. In lyotropic liquid crystals, mesogens are dispersed in a host solvent (typically water but other organic solvents can be used depending on the molecule). 6,7,20 Lyotropic liquid crystals exhibit a liquid crystal phase within a certain range of temperatures but also require a concentration of the active mesogens that falls within a certain range. In the lyotropic phase, the fluidity of the material is induced by the solvent molecules rather than being intrinsic to the mesogens themselves. The mesogens contain immiscible solvophilic and solvophobic parts separated at opposing 'ends' or facets of the molecule, making them amphiphilic.
As one end has a preferential interaction with the host solvent, ordering of the amphiphilic molecules occurs to maximise the solvophilic 'head' interaction with the host solvent while minimising that for the solvophobic 'tail'. The structures formed by the mesogens are dependent on the relative volumes of the 'head' and 'tail' as well as the concentration of the molecules within the solvent.
At very low concentrations, there will be no ordering of the amphiphilic molecules dispersed in the solvent. 6,7,20 As the concentration is increased, there will be a critical concentration at which micelles are spontaneously formed - however, the micelles do not order themselves, so this still does not represent a liquid crystal phase. At higher concentrations, the micelles must order themselves as the inter-micelle interactions become energetically important above a critical micellar concentration within the solvent. Typically, a hexagonal columnar phase is formed where long cylindrical rods of amphiphilic mesogens arrange themselves into a hexagonal lattice structure, but other structures are possible depending on the mesogen. As the concentration increases further, a lamellar phase will form, with layers of the mesogens separated by thin layers of solvent. In lyotropic liquid crystals, it is the objects formed by the aggregation of amphiphiles that can then be ordered in the same ways as observed for thermotropic liquid crystals. Lyotropic liquid crystals possess significant tunability as the structural properties are highly sensitive to changes in concentration. For example, within the hexagonal columnar phase, the lattice parameters can be varied by varying the solvent volume in the mixture.
Liquid crystals are of particular interest due to their inherent ordering while in the liquid phase and for the ability to align the director along an external field. 21,22 Permanent electric dipoles can exist in the individual liquid crystal molecules when one part of the mesogen carries a positive charge while another carries a negative charge. When an external electric field is applied to the liquid crystal, the dipoles orient along the direction of the field as the field exerts a torque on them. Some liquid crystal molecules, however, do not possess a permanent dipole but can still be influenced by an electric field. The shape anisotropy of many liquid crystal mesogens means that they are highly polarisable, and as such an applied electric field can induce a dipole by redistributing the electron density within the molecule. While not as strong as for permanent dipoles, orientation of the induced dipoles with the external field still occurs. The effects of magnetic fields on liquid crystal molecules are analogous to those of electric fields, with the molecules aligning with or against the magnetic field.
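As a hedged illustration of the energetics behind this field alignment (our addition, drawn from standard continuum liquid crystal theory rather than from the works cited above), the orientational coupling of the director n̂ to an electric field E is usually written through the dielectric anisotropy:

$$ f_{E} = -\tfrac{1}{2}\,\varepsilon_{0}\,\Delta\varepsilon\,(\mathbf{E}\cdot\hat{\mathbf{n}})^{2}, \qquad \Delta\varepsilon = \varepsilon_{\parallel}-\varepsilon_{\perp}. $$

For Δε > 0 this energy is minimised when the director lies parallel to the field, and for Δε < 0 when it lies perpendicular; the magnetic case is analogous, with the diamagnetic susceptibility anisotropy Δχ playing the role of Δε.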
2D materials
Whereas, in the past, 2D materials have typically been produced by either a mechanical cleavage method 1 or by vapour deposition, [23][24][25][26] recently liquid phase exfoliation has attracted significant interest due to the inherent scalability of the process. Liquid phase exfoliation (Fig. 2) is a method where a bulk material is dispersed in a solvent and then the layers are broken apart. [27][28][29][30][31] In most cases, the layers are broken apart using ultrasonication, where high frequency sound waves are transmitted through the solution. [27][28][29][30][32][33][34][35] The sound waves induce the formation of bubbles and cavities between layers which break the layers apart as they expand. However, they also cause strains in the material which cause intralayer cleavage of the particles, reducing the size of the particles obtained after exfoliation. Other than ultrasonication, other methods have been developed for liquid phase exfoliation, including strong-acid-induced oxidation reactions causing cleavage 36 and freezing of water-intercalated layered structures, where the expansion of water as it freezes causes interlayer cleavage. 37 Following exfoliation, particles of specific sizes can be isolated by centrifugation of the dispersion, 38,39 solvent-induced selective sedimentation 40 or pH-assisted selective sedimentation, 41 amongst others. Materials of interest for optoelectronics and photonics that can be reduced to few-layer or monolayer form by means of liquid phase exfoliation encompass a broad range; from graphene and its derivatives to transition metal dichalcogenides (TMDCs), metal oxides and hexagonal boron nitride (h-BN) amongst many others. Liquid phase exfoliated 2D materials are of significant interest for the production of 2D material liquid crystal composites as the exfoliating solvent can be used as the fluid host for spontaneous liquid crystal phase self-assembly, 4,27,[42][43][44][45][46][47][48][49][50][51][52][53] or to allow combination with conventional liquid crystals. 3 Amongst the materials discussed further here, graphene can be exfoliated from bulk graphite owing to the weak van der Waals interactions between the layers in graphite. 54 Graphene is an allotrope of carbon consisting of a two-dimensional hexagonal lattice with a single carbon atom at each vertex. The carbon atoms in graphene are sp2 hybridised in-plane, with these sp2 electrons forming three carbon-carbon bonds. The final p orbital is unhybridised and directed out of the plane. For a graphene sheet these out-of-plane p orbitals hybridise to form the delocalised π and π* bands which are responsible for graphene's exceptional electronic properties; these exceptional properties make graphene of significant interest as a material for forming electrical contacts, films and fibers.
Graphene oxide is the 2D material produced by the exfoliation of graphite oxide. 55 Maximum oxidation of graphite results in a carbon to oxygen ratio between 2.1 and 2.9. Graphite oxide retains the layered structure of graphite, but the interlayer spacing is increased and no longer regular for bulk graphite oxide. The oxidation of graphite introduces three types of oxygen-containing functional groups to the structure: epoxy bridges (oxygen bridging between two carbons on the surface of a graphitic sheet), hydroxyl groups (on either the surface or the edges) and carboxyl groups (on the edges of the graphitic sheets). 56 Graphene oxide can be exfoliated from bulk graphite oxide analogously to graphene from graphite. 54,55 However, the intercalation of the graphitic carbon sheets by oxygenated functional groups results in graphene oxide being more readily exfoliated. This means that graphene oxide can be exfoliated to few layers and even monolayer in large quantities without the use of additional surfactant molecules. 33,35 Graphene oxide possesses nonlinear optical properties of significant interest for applications in ultrafast photonics and optoelectronics. The saturable absorption can be used for pulse compression, mode locking and Q-switching of laser systems. 57 The large observed Kerr effect introduces possibilities in all-optical switching and signal regeneration, and hence in optical communications devices. 58 The nonlinear optical properties of graphene oxide can be tuned by controlling the carbon to oxygen ratio of the material; 59 this tuning has been achieved using laser irradiation to reduce the material.
Transition metal dichalcogenides (TMDCs) are a class of material where transition metal atoms are connected by bridging group 16 (chalcogen) elements with a stoichiometry of 1 : 2 to form layers. The layers are held together by weak van der Waals interactions and therefore present an ideal candidate for reduction to few-layer or monolayer materials. Cleavage to monolayer is typically achieved using mechanical exfoliation methods, but few-layer material can be readily attained using liquid-phase exfoliation methods. Many different TMDCs have been synthesised. A common example, molybdenum disulfide (MoS2), consists of layers of molybdenum atoms bound to six sulfide ligands in a trigonal prismatic coordination sphere. 60,61 MoS2 is an indirect bandgap semiconductor with a band gap of 1.23 eV in its bulk form, 62 but the monolayer form has a direct bandgap of 1.8 eV, 63 so it can be used in switchable transistors and photodetection devices. 60 MoS2 can also emit light, opening applications in in situ light generation devices. 60
2D material liquid crystals
It has been shown that, by dispersing nanoparticles or molecules in a liquid crystal host, the ordering of the liquid crystal mesogens can impart ordering to the dispersed particles. [64][65][66] The nanoparticles have been shown theoretically 67 and experimentally 3,68-72 to align with the disclinations of the liquid crystal due to the energetic favourability of such an alignment. More recently, the imparting of ordering from a liquid crystal host has also been shown for dispersed 2D material particles. 3,73 Additionally, dispersions of graphene oxide in water have been shown to have a lyotropic liquid crystal phase within a specific range of concentrations of dispersed graphene oxide particles (Fig. 3), where the dispersed discotic graphene oxide particles are either stacked in the columnar manner typical of discotic liquid crystals or exhibit ordering analogous to a nematic phase. 45,47,48 The liquid crystal phase of the graphene oxide dispersions arises due to the competition between the long-range electrostatic repulsion between particles, originating from ionised functional groups at the edges of the particles, and the weak attractive interactions originating from the unoxidised graphitic domains on the surface. 74,75 The liquid crystallinity is therefore dependent on the particle size; more precisely, on the ratio of the surface area to the circumference (and the number of layers), as this determines the balance of the attractive and repulsive forces. 75 Most dispersions of liquid phase exfoliated graphene oxide will consist of particles of differing sizes and therefore the polydispersity of the particles becomes an important factor. 51 Additionally, this balance is affected by the degree of oxidation - the carbon to oxygen ratio of the material. 75 The stability of the liquid crystal phase can also be strongly affected by the ionic content of the solvent, as this determines the degree of ionisation of the oxygen-containing functional groups on graphene oxide. 43,75 The pH of the solvent also affects the critical concentration for the onset of liquid crystalline behaviour. 76 By tuning these separate parameters, it is possible to observe either a nematic phase or a columnar phase of the graphene oxide dispersion (Fig. 4). The different liquid crystalline phases can be observed using photoluminescence measurements as there is a strong polarisation dependence of the photoluminescence for ordered mesophases in graphene oxide dispersions. 77 Similarly, this liquid crystal phase has been observed in a range of other organic solvents including acetone, dimethylformamide, ethanol, cyclohexylpyrrolidone and tetrahydrofuran 33,78 (Fig. 5). The concentration of particles required to give rise to the liquid crystal phase is different for each solvent, but there is also some discrepancy between the threshold concentrations observed for the same solvent due to the effect of the size, shape and polydispersity of the graphene oxide particles in the solution. A liquid crystal phase has also been observed for graphene exfoliated and dispersed in chlorosulfuric acid. 27 A similar phase has been observed in other solvents for graphene and small graphitic particles, although only with the addition of either stabilising surfactants 4,5,79 or polymer coatings. 80 Dispersions of graphene in water have been reported to show an extrinsic chirality associated with a cholesteric liquid crystal phase. 4
More recently, a liquid crystal phase has been observed for dispersions of molybdenum disulfide at high concentration in water, 50 suggesting the possibility of liquid crystalline phases existing for a far greater range of dispersions of 2D materials. 79
Films, fibers, membranes and inks
The self-assembling nature of liquid crystalline materials has led to the use of graphene oxide dispersions for the formation of well-ordered layers and stacks of 2D materials. Behabtu et al. demonstrated that graphite spontaneously exfoliates into single-layer graphene in chlorosulfonic acid, and spontaneously forms liquid-crystalline phases at high concentrations. Transparent, conducting films were produced from the liquid crystalline dispersions. Jalili et al. 78 showed that self-assembly of graphene oxide sheets is possible in a wide range of organic solvents. The prepared dispersions were employed to achieve self-assembled, layer-by-layer, multifunctional 3D hybrid architectures comprising SWNTs and GO with promising mechanical properties (Fig. 6). More recently, the same group has shown that similar self-assembly can be achieved using liquid crystalline dispersions of molybdenum disulfide. 50 Layers of these materials have been combined with other materials for a variety of diverse applications such as photovoltaics 81 and improving the mechanical properties of composite materials; 82 the more homogeneous layers produced from liquid crystalline dispersions are of significant interest for applications of this nature. The use of liquid crystalline dispersions of graphene oxide to produce uniform layers has also been employed as a precursor to forming similarly uniform structures of graphene through the reduction of the graphene oxide. 4,49 Akbari et al. demonstrate that the discotic nematic phase of GO can be shear aligned to form highly ordered, continuous films of multi-layered GO on a supporting membrane. The highly ordered graphene sheets in the plane of the membrane form organized channels and give greater permeability. The nanoporous membranes may find application in a variety of filtering applications. 83 Fu et al. demonstrate that graphene oxide liquid crystals can be applied as composite inks for the formation of electrodes in 3D printing applications, 84 since their intrinsic self-assembly means that the ordering of the GO platelets is retained on drying of the solvent.
The development of fibers formed from graphene, GO or reduced GO is a widely reviewed, maturing area of investigation, [85][86][87][88] with many proposed applications such as in conducting wires, energy storage and conversion devices, actuators, field emitters, catalysis and optoelectronic and photonic devices. One of the most promising developments in this field, and of particular interest here, has been the use of the liquid crystal phase to improve the homogeneity and ordering of the fibers produced; numerous examples exist where fibers comprised of 2D materials have also been produced by the wet-spinning of liquid crystalline solutions. 4,47,49,50,89 Xu and Gao 4 developed a method by which aqueous graphene oxide liquid crystals were continuously spun into metres of macroscopic graphene oxide fibres; subsequent chemical reduction gave the first macroscopic neat graphene fibres with high conductivity and good mechanical performance (Fig. 7). Jalili et al. demonstrate a method for one-step continuous spinning of graphene fibers where the need for post-treatment processes is eliminated by the use of basic coagulation baths for reduction of GO during the spinning process, 49 as well as the applicability of wet-spinning to the formation of fibers of other 2D materials. 50
Optoelectronics
Liquid crystalline nanocomposites incorporating 2D material particles show great promise for optoelectronic applications due to their field-induced tunability and enhanced functionality stemming from the plethora of properties displayed by the range of exfoliatable materials. For example, dispersions of liquid crystalline graphene oxide have been shown to undergo electro-optical switching with low threshold voltage requirements. 90 Kim et al. show that GO LCs possess an extremely large Kerr coefficient, making them attractive for low-power-consumption optoelectronic devices. By stabilising a suspension of reduced GO using surfactants, they demonstrated increased time stability and drastically improved electro-optic properties, with an induced birefringence twice as large at the same field strength as that of an unreduced GO suspension.
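For context on the Kerr coefficient used here as a figure of merit (our addition, quoting only the standard electro-optic Kerr relation rather than anything specific to the cited studies), the field-induced birefringence in the Kerr regime is

$$ \Delta n_{\mathrm{ind}} = \lambda\, K\, E^{2}, $$

where λ is the probe wavelength, K the Kerr coefficient and E the applied electric field; at a fixed field strength, a doubling of the induced birefringence therefore corresponds to a doubling of K.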
Zhu et al. 91 have shown that the preparation of poly(N-isopropylacrylamide)/GO nanocomposite hydrogels with macroscopically oriented LC structures, after polymerisation, can be readily achieved with the assistance of a flow field induced by vacuum degassing. Nanocomposites prepared with a GO concentration of 5.0 mg mL-1 exhibit macroscopically aligned LC structures, which endow the gels with anisotropic optical properties. Furthermore, they show that the oriented LC structures are not damaged during switching of the hydrogels, and hence their behaviour undergoes reversible changes. Additionally, they show that the oriented LC structures in the hydrogels can be permanently maintained after drying the nanocomposite samples. The liquid crystalline properties of such nanocomposites facilitate their applicability to switching in optoelectronic devices.
Kim et al. 92 have demonstrated significant improvement of the electro-optic performance of a polymer-stabilized liquid crystalline blue phase using a reduced graphene oxide (RGO) enriched polymer network. The conductivity of the nanocomposite system is increased by the inclusion of the RGO. Furthermore, reductions in the operational voltage (~32%), response time (~51%) and hysteresis (~53%) compared to that of a conventional polymer-stabilized BPLC signify great potential for the use of 2D materials in enhancing novel electro-optic device applications of conventional LC systems.
Recently, Hogan et al. proposed that by tuning the liquid crystal director by means of an applied field, one could induce the formation of metastructures formed of the dispersed 2D material particles as they are repositioned. In particular, they show that nanocomposites of nematic phase liquid crystals with dispersed graphene oxide particles can be integrated with CMOS photonics devices as a back-end process as part of microfluidic systems and that the integrated nanocomposites can be readily controlled by use of either an electric field or laser light to reposition and rearrange the dispersed particles 3 (Fig. 8). They present a novel characterisation method based on Raman spectroscopy to allow determination of the spatial positioning of the integrated 2D material particles, allowing precise monitoring of metastructure formation.
Displays
2D material liquid crystals can be used in back-illuminated liquid crystal display applications as they exhibit electro-optic switching. The large Kerr coefficient of graphene oxide liquid crystals observed by Shen et al., 53 for example, facilitates this application. However, the slow switching times reported by Kim and Kim (41 s) 93 must be considered, although Ahmad et al. 94 report that this can be improved by approximately an order of magnitude by careful selection of the size of graphene oxide mesogens.
More promisingly, 2D material liquid crystals have also been proposed for application in liquid crystal displays - particularly in so-called 'e-ink' displays - without requiring the polarising optics typically necessary for these applications. 52,76 He et al. 52 demonstrate a process by which graphene oxide liquid crystals can be used for reflective displays without the need for polarizing optics (Fig. 9). By using flow-induced mechanical alignment, they prepared graphene oxide in different orientational orders and demonstrated that the ordered graphene oxide liquid crystals can be used as a rewritable display medium. The surface of the graphene oxide liquid crystal can be switched from a bright, reflective state to a dark, transmissive state using, for example, a wire to manually draw patterns on the surface. They explain that the contrast between the two states arises from the anisotropic response of the flakes, owing to the inherent high aspect ratio of the 2D material.
Quality control
Inducing the onset of a liquid crystal phase in a dispersion of graphene oxide has been used for size selection of the graphene oxide particles. 95 Lee et al. introduce a method for facile size selection of large graphene oxide particles by exploiting liquid crystallinity. They show that in a biphasic graphene oxide dispersion, where both isotropic and liquid crystalline phases are in equilibrium, large GO flakes (>20 μm) are spontaneously concentrated within the liquid crystalline phase. Selectivity of large flake sizes without the need for filtering presents several advantages for photonics and optoelectronics applications; primarily, larger flakes allow for greater uniformity of device characteristics over wider areas and can help to increase the uniformity of depositions.
Outlook
2D materials encompass a fascinating range of diverse properties with a myriad of possible applications in optoelectronics and photonics. The development of liquid crystalline nanocomposite materials incorporating 2D materials represents a significant advance in the opportunities for the integration and exploitation of 2D materials within these fields. However, there remain a large number of questions that demand further investigation before 2D material liquid crystals can find wider application. Primarily, there remain many candidate 2D materials for which a liquid crystal phase is theoretically possible but not yet demonstrated; the discovery of further 2D material liquid crystals would broaden the range of utilisable properties available. Similarly to what has been observed for graphene oxide, observation of this liquid crystallinity should require a combination of careful solvent selection, tuning of the 2D material particle sizes and control of the concentration of the particles. Additionally, the use of surfactant molecules may be necessary to stabilise the liquid crystalline phase of the dispersions by maximising the aligning forces acting on the dispersed particles. However, this raises the additional question of the exploration - both theoretical and experimental - of the conditions required for the existence of the liquid crystal phase, an area in which little work has so far been done for the specific systems of interest here. A significant part of such work remains to be done in the comparison of the different synthetic routes towards the LC phase, and in how the synthesis can affect the observed properties.
Additionally, the dispersion of 2D materials in conventional liquid crystal host fluids presents superb new possibilities in optofluidic systems; from light generation to dynamic sensing applications. This is owing to the dramatic improvements that can be observed in the operational parameters of the nanocomposite systems in comparison to the conventional LC systems currently used in optoelectronics and photonics. Such nanocomposites can not only improve properties such as switching times and threshold voltages, but can also add further functionality, for example by the metastructuring of nanoparticle dispersions. For these nanocomposite systems, the most important advances to be made are in the fundamental understanding of the basis for the improvements in their intrinsic properties, and in the prediction of metastructuring as well as its experimental observation.
Overall, the existence of liquid crystal phase 2D material dispersions presents fantastic opportunities in the exploration of novel optoelectronic and photonic systems, allowing new highly-scalable production processes for thin film integration and novel fiber systems amongst numerous other applications. | 6,687.8 | 2017-11-09T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Compressive Video Coding: A Review of the State-Of-The-Art
Video coding and its related applications have advanced quite substantially in recent years. Major coding standards such as MPEG [1] and H.26x [2] are well developed and widely deployed. These standards are developed mainly for applications such as DVDs where the compressed video is played over many times by the consumer. Since compression only needs to be performed once while decompression (playback) is performed many times, it is desirable that the decoding/decompression process can be done as simply and quickly as possible. Therefore, essentially all current video compression schemes, such as the various MPEG standards as well as H.264 [1, 2] involve a complex encoder and a simple decoder. The exploitation of spatial and temporal redundancies for data compression at the encoder causes the encoding process to be typically 5 to 10 times more complex computationally than the decoder [3]. In order that video encoding can be performed in real time at frame rates of 30 frames per second or more, the encoding process has to be performed by specially designed hardware, thus increasing the cost of cameras.
Introduction
Video coding and its related applications have advanced quite substantially in recent years. Major coding standards such as MPEG [1] and H.26x [2] are well developed and widely deployed. These standards are developed mainly for applications such as DVDs where the compressed video is played over many times by the consumer. Since compression only needs to be performed once while decompression (playback) is performed many times, it is desirable that the decoding/decompression process can be done as simply and quickly as possible. Therefore, essentially all current video compression schemes, such as the various MPEG standards as well as H.264 [1,2] involve a complex encoder and a simple decoder. The exploitation of spatial and temporal redundancies for data compression at the encoder causes the encoding process to be typically 5 to 10 times more complex computationally than the decoder [3]. In order that video encoding can be performed in real time at frame rates of 30 frames per second or more, the encoding process has to be performed by specially designed hardware, thus increasing the cost of cameras.
In the past ten years, we have seen substantial research and development of large sensor networks where a large number of sensors are deployed. For some applications such as video surveillance and sports broadcasting, these sensors are in fact video cameras. For such systems, there is a need to re-evaluate conventional strategies for video coding. If the encoders are made simpler, then the cost of a system involving tens or hundreds of cameras can be substantially reduced in comparison with deploying current camera systems. Typically, data from these cameras can be sent to a single decoder and aggregated. Since some of the scenes captured may be correlated, computational gain can potentially be achieved by decoding these scenes together rather than separately. Decoding can be simple reconstruction of the video frames or it can be combined with detection algorithms specific to the application at hand. Thus there are benefits in combining reduced-complexity cameras with flexible decoding processes to deliver modern applications which were not anticipated when the various video coding standards were developed.
Compressed Sensing (CS) shows that, for signals which possess some "sparsity" properties, the sampling rate required to reconstruct these signals with good fidelity can be much lower than the lower bound specified by Shannon's sampling theorem. Since video signals contain substantial amounts of redundancy, they are sparse signals and CS can potentially be applied. The simplicity of the encoding process is traded off against a more complex, iterative decoding process. The reconstruction process of CS is usually formulated as an optimization problem, which potentially allows one to tailor the objective function and constraints to the specific application. Even though practical cameras that make use of CS are still in their very early days, the concept can be applied to video coding. A lower sampling rate implies less energy required for data processing, leading to lower power requirements for the camera. Furthermore, the complexity of the encoder can be further simplified by making use of distributed source coding [21,22]. The distributed approach provides ways to encode video frames without exploiting any redundancy or correlation between video frames captured by the camera. The combined use of CS and distributed source coding can therefore serve as the basis for the development of camera systems where the encoder is less complex than the decoder.
We shall first provide a brief introduction to Compressed Sensing in the next Section. This is followed by a review of current research in video coding using CS.
Compressed sensing
Shannon's uniform sampling theorem [7,8] provides a lower bound on the rate by which an analog signal needs to be sampled in order that the sampled signal fully represents the original. If a signal $x(t)$ contains no frequencies higher than $W$ radians per second, then it can be completely determined by samples that are spaced $T = \pi/W$ seconds apart. $x(t)$ can be reconstructed perfectly from these samples by
$$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{sinc}\!\left(\frac{t-nT}{T}\right), \qquad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.$$
The uniform samples of $x(t)$ may be interpreted as coefficients of basis functions obtained by shifting and scaling of the sinc function. For high bandwidth signals such as video, the amount of data generated based on a sampling rate of at least twice the bandwidth is very high. Fortunately, most of the raw data can be thrown away with almost no perceptual loss. This is the result of lossy compression techniques based on orthogonal transforms. In image and video compression, the discrete cosine transform (DCT) and wavelet transform have been found to be most useful. The standard procedure goes as follows. The orthogonal transform is applied to the raw image data, giving a set of transform coefficients. Those coefficients that have values smaller than a certain threshold are discarded. Only the remaining significant coefficients, typically only a small subset of the original, are encoded, reducing the amount of data that represents the image. This means that if there is a way to acquire only the significant transform coefficients directly by sampling, then the sampling rate can be much lower than that required by Shannon's theorem.
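A minimal numerical sketch of the reconstruction formula above (our own illustration; the test signal, bandwidth and sampling grid are arbitrary choices, not taken from the text):

```python
import numpy as np

W = 2.0 * np.pi * 5.0              # the signal contains no frequencies above W rad/s
T = np.pi / W                      # sampling interval T = pi/W from the theorem (0.1 s here)

x = lambda t: np.sin(2*np.pi*3.0*t) + 0.5*np.cos(2*np.pi*4.5*t)   # strictly bandlimited test signal

n = np.arange(-100, 201)           # generous sample range to limit truncation of the infinite sum
samples = x(n * T)

t = np.linspace(0.0, 1.0, 1001)    # reconstruct on a fine grid inside the sampled interval
# x(t) = sum_n x(nT) * sinc((t - nT)/T); numpy's sinc is the normalised sinc(u) = sin(pi u)/(pi u)
x_rec = sum(s * np.sinc((t - k*T) / T) for k, s in zip(n, samples))

# residual error is limited only by truncating the (formally infinite) interpolation sum
print("max reconstruction error:", np.max(np.abs(x_rec - x(t))))
```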
Emmanuel Candes, together with Justin Romberg and Terence Tao, developed the theory of Compressed Sensing (CS) [9], which can be applied to signals, such as audio, images and video, that are sparse in some domain. This theory provides a way, at least theoretically, to acquire signals at a rate potentially much lower than the Nyquist rate given by Shannon's sampling theorem. CS has already inspired more than a thousand papers from 2006 to 2010 [9].
Key Elements of compressed sensing
Compressed Sensing [4-6,10] is applicable to signals that are sparse in some domain. Sparsity is a general concept expressing the idea that the information rate, or significant content, of a signal may be much smaller than its bandwidth suggests. Most natural signals are redundant and therefore compressible in some suitable domain. We first define the two principles, sparsity and incoherence, on which the theory of CS depends.
Sparsity
Sparsity is important in Compressed Sensing as it determines how efficiently one can acquire signals non-adaptively. The most common definition of sparsity used in compressed sensing is as follows. Let x ∈ R^n be a vector representing a signal that can be expanded in an orthonormal basis Ψ = [ψ1 ψ2 … ψn] as x = Σ_i s_i ψ_i. Here, the coefficients are s_i = <x, ψ_i>. In matrix form, this expansion becomes x = Ψs. When all but a few of the coefficients s_i are zero, we say that x is sparse in a strict sense. If S denotes the number of non-zero coefficients, with S ≪ n, then x is said to be S-sparse. In practice, most compressible signals have only a few significant coefficients while the rest have relatively small magnitudes. If we set these small coefficients to zero, in the way that it is done in lossy compression, then we obtain a sparse signal.
Incoherence
We start by considering two different orthonormal bases, Φ and Ψ, of R^n. The coherence between these two bases is defined in [10] as μ(Φ, Ψ) = sqrt(n) · max_{k,j} |<φ_k, ψ_j>|, which gives the largest correlation between any two elements of the two bases. It can be shown that μ(Φ, Ψ) always lies between 1 and sqrt(n). Sparsity and incoherence together quantify the compressibility of a signal. A signal is more compressible if it has higher sparsity in some representation domain Ψ that is less coherent with the sensing (or sampling) domain Φ. Interestingly, random matrices are largely incoherent with any fixed basis [18].
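The coherence is straightforward to compute numerically. The sketch below evaluates it for the spike (identity) basis against an orthonormal DCT basis; the dimension and the choice of bases are purely illustrative.

```python
# Coherence mu(Phi, Psi) = sqrt(n) * max_{k,j} |<phi_k, psi_j>| between two bases.
import numpy as np
from scipy.fft import dct

n = 64
Phi = np.eye(n)                                # sensing basis: spikes
Psi = dct(np.eye(n), norm="ortho", axis=0)     # representation basis: DCT columns

mu = np.sqrt(n) * np.max(np.abs(Phi.T @ Psi))
print(f"coherence = {mu:.3f}  (always between 1 and sqrt(n) = {np.sqrt(n):.0f})")
```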
Random sampling
Let x ∈ R^n be a discrete-time signal. Consider a general linear measurement process that computes m < n inner products y_k = <x, φ_k> between x and a collection of vectors {φ_k}, k = 1, …, m. Let Φ denote the m × n matrix with the measurement vectors φ_k as rows. Then the measurement vector is given by y = Φx = ΦΨs. If Φ is fixed, then the measurements are non-adaptive: they do not depend on the structure of the signal [6]. The minimum number of measurements needed to reconstruct the original signal depends on the matrices Φ and Ψ.
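A minimal sketch of this non-adaptive measurement process, with an i.i.d. Gaussian Φ and an S-sparse signal in the canonical basis, is shown below; the sizes are arbitrary illustrative choices.

```python
# Non-adaptive linear measurements y = Phi x with m < n.
import numpy as np

rng = np.random.default_rng(1)
n, S, m = 256, 8, 64

x = np.zeros(n)                                        # S-sparse signal
x[rng.choice(n, size=S, replace=False)] = rng.normal(size=S)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)             # i.i.d. Gaussian measurement matrix
y = Phi @ x                                            # m inner products <x, phi_k>
print("number of measurements:", y.size, "signal length:", n)
```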
Theorem 1 [11]. Let x ∈ R^n have a coefficient sequence s in the basis Ψ, and let s be S-sparse. Select m measurements in the Φ domain uniformly at random. Then if m ≥ C · μ²(Φ, Ψ) · S · log n for some positive constant C, then with high probability x can be reconstructed by solving the convex optimization program min ||s'||_1 subject to y_k = <φ_k, Ψs'> for all k ∈ M, where M denotes the index set of the randomly chosen measurements.
This is an important result and provides the requirement for successful reconstruction. It has the following three implications [10]:
i. The role of the coherence in the bound above is transparent: the smaller the coherence between the sensing and representation bases, the fewer measurements are needed.
ii. There is no information loss incurred by measuring just m such coefficients, which may be far fewer than the original signal size.
iii. The signal can be exactly recovered without assuming any knowledge about the locations of the non-zero coordinates of s or their amplitudes.
CS Reconstruction
The reconstruction problem in CS involves using the m measurements y to reconstruct the length-n signal x that is S-sparse, given the random measurement matrix Φ and the basis Ψ. Since m < n, this is an ill-conditioned problem. The classical approach to solving ill-conditioned problems of this kind is to minimize the l2 norm, i.e. to solve min ||s'||_2 subject to ΦΨs' = y. However, it has been proven that this minimization will never return an S-sparse solution. Instead, it can only produce a non-sparse solution [6]. The reason is that the l2 norm measures the energy of the signal, and signal sparsity properties cannot be incorporated in this measure.
The l0 norm counts the number of non-zero entries and therefore allows us to specify the sparsity requirement directly. The optimization problem using this norm can be stated as min ||s'||_0 subject to ΦΨs' = y. There is a high probability of obtaining a solution using only m = S + 1 i.i.d. Gaussian measurements [10]. However, the solution produced is numerically unstable [6]. It turns out that optimization based on the l1 norm is able to exactly recover S-sparse signals with high probability using only m ≥ C · S · log(n/S) i.i.d. Gaussian measurements [4,5]. The convex optimization problem is given by min ||s'||_1 subject to ΦΨs' = y, which can be reduced to a linear program. Algorithms based on Basis Pursuit [12] can be used to solve this problem with a computational complexity of O(n³) [4].
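Basis Pursuit itself is usually solved with a linear-programming or specialised solver; as a simple self-contained stand-in, the sketch below runs iterative soft-thresholding (ISTA) on the closely related LASSO relaxation, which recovers the sparse vector under similar conditions. All parameter values are illustrative.

```python
# l1-flavoured recovery via ISTA on 0.5*||y - Phi s||_2^2 + lam*||s||_1.
import numpy as np

def ista(Phi, y, lam=0.005, iters=1000):
    L = np.linalg.norm(Phi, 2) ** 2                    # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = s - Phi.T @ (Phi @ s - y) / L              # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return s

rng = np.random.default_rng(2)
n, S, m = 256, 8, 64
x = np.zeros(n)
x[rng.choice(n, size=S, replace=False)] = rng.normal(size=S)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
s_hat = ista(Phi, Phi @ x)
print("recovery error:", float(np.linalg.norm(s_hat - x)))
```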
Compressed Video Sensing (CVS)
Research into the use of CS in video applications has only started very recently. We shall now briefly review what has been reported in the open literature.
The first use of CS in video processing was proposed in [13]. Their approach is based on the single-pixel camera [14]. The camera architecture employs a digital micro-mirror array to perform optical calculations of linear projections of an image onto pseudo-random binary patterns; it directly acquires random projections. They assume that the image changes slowly enough across a sequence of snapshots, which together constitute one frame. The video sequence is acquired using a total of M measurements, which are either 2D or 3D random measurements. For 2D frame-by-frame reconstruction, 2D wavelets are used as the sparsity-inducing basis; for 3D joint reconstruction, 3D wavelets are used. The Matching Pursuit reconstruction algorithm [15] is used for reconstruction.
Another implementation of CS video coding is proposed in [16]. In this implementation, each video frame is classified as a reference or non-reference frame. A reference frame (or key frame) is sampled in the conventional manner, while non-reference frames are sampled using CS techniques. The sampled reference frame is divided into non-overlapping blocks, each of N × N pixels, to which the discrete cosine transform (DCT) is applied. A compressed sensing test is applied to the DCT coefficients of each block to identify the sparse blocks in the non-reference frame. This test involves comparing the number of significant DCT coefficients against a threshold. If the number of significant coefficients is small, then the block concerned is a candidate for CS to be applied. The sparse blocks are compressively sampled using an i.i.d. Gaussian measurement matrix and an inverse DCT sensing matrix.
The remaining blocks are sampled in the traditional way. A block diagram of the encoder is shown in Figure 1.
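A rough sketch of the block-classification test just described is given below; the block size, coefficient threshold and count threshold are placeholder values rather than those used in [16].

```python
# Flag blocks whose DCT representation is sparse enough to be compressively sampled.
import numpy as np
from scipy.fft import dctn

def sparse_blocks(frame, B=8, coef_thresh=5.0, tau=10):
    H, W = frame.shape
    flags = np.zeros((H // B, W // B), dtype=bool)
    for i in range(0, H - B + 1, B):
        for j in range(0, W - B + 1, B):
            c = dctn(frame[i:i + B, j:j + B], norm="ortho")
            significant = np.count_nonzero(np.abs(c) > coef_thresh)
            flags[i // B, j // B] = significant < tau    # sparse -> CS candidate
    return flags

frame = np.add.outer(np.arange(64.0), np.arange(64.0))   # smooth ramp, DCT-sparse
print("blocks flagged for CS:", int(sparse_blocks(frame).sum()), "of 64")
```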
Signal recovery is performed by the OMP algorithm [17]. In reconstructing compressively sampled blocks, all sampled coefficients with an absolute value less than some constant are set to zero. Theoretically, if there are N − K non-significant DCT coefficients (i.e. K significant ones), then at least m = K + 1 samples are needed for signal reconstruction [10], and the threshold is set consistently with this bound. The choice of values for the threshold and the measurement parameters depends on the video sequence and the size of the blocks. They showed experimentally that up to 50% savings in video acquisition is possible with good reconstruction quality [16]. Another technique, which uses motion compensation and estimation at the decoder, is presented in [18]. At the encoder, only random CS measurements are taken, independently from each frame, with no additional compression. A multi-scale framework is proposed for reconstruction, which iterates between motion estimation and sparsity-based reconstruction of the frames. It is built around the LIMAT method for standard video compression [19].
LIMAT [19] uses second-generation wavelets to build a fully invertible transform. To exploit temporal redundancy, LIMAT adaptively applies motion-compensated lifting steps. Let the k-th frame of the video sequence be x_k, with k ∈ {1, 2, …}. The lifting transform partitions the video into even frames {x_even} and odd frames {x_odd} and attempts to predict the odd frames from the even ones using a forward motion-compensation operator P_v. If an even frame and the following odd frame differ by a 3-pixel shift that is captured precisely by a motion vector v, then x_odd = P_v(x_even) exactly.
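The lifting "predict" step can be sketched as follows; the motion operator here is a plain integer-pixel shift standing in for the full forward motion-compensation operator, so this only illustrates the idea rather than the LIMAT implementation.

```python
# Motion-compensated lifting: predict an odd frame from an even frame.
import numpy as np

def predict_step(even, odd, motion_vector):
    dy, dx = motion_vector
    compensated = np.roll(np.roll(even, dy, axis=0), dx, axis=1)   # P_v(even)
    return odd - compensated           # detail signal; near zero if motion is captured

even = np.zeros((32, 32)); even[10:14, 10:14] = 1.0
odd = np.roll(even, 3, axis=1)         # the object moved 3 pixels to the right
print("residual energy:", float(np.abs(predict_step(even, odd, (0, 3))).sum()))   # ~0
```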
The algorithm proposed in [18] uses block matching (BM) to estimate motion between a pair of frames. The BM algorithm divides the reference frame into non-overlapping blocks. For each block in the reference frame, the most similar block of equal size in the destination frame is found and the relative location is stored as a motion vector. This approach improves on earlier work such as [13], where the reconstruction of a frame depends only on that frame's sparsity without taking any temporal motion into account. It also performs better than using inter-frame differences [20], which are insufficient for removing temporal redundancies.
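A generic full-search block-matching routine of the kind described might look as follows; the SAD cost, block size and search range are illustrative choices, not the settings of [18].

```python
# Full-search block matching: return the motion vector of one reference block.
import numpy as np

def block_motion(ref, dst, top, left, B=8, search=4):
    block = ref[top:top + B, left:left + B]
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= dst.shape[0] - B and 0 <= x <= dst.shape[1] - B:
                cost = np.abs(dst[y:y + B, x:x + B] - block).sum()   # SAD criterion
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv

ref = np.zeros((32, 32)); ref[8:16, 8:16] = 1.0
dst = np.roll(ref, (2, -3), axis=(0, 1))          # scene shifted down 2, left 3
print("estimated motion vector:", block_motion(ref, dst, 8, 8))
```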
Distributed Compressed Video Sensing (DCVS)
Another video coding approach that makes use of CS is based on the distributed source coding theory of Slepian and Wolf [21] and of Wyner and Ziv [22]. Source statistics are exploited, partially or totally, only at the decoder, not at the encoder as is done conventionally. Two or more statistically dependent sources are encoded by independent encoders. Each encoder sends a separate bit-stream to a common decoder, which decodes all incoming bit-streams jointly, exploiting the statistical dependencies between them.
In [23], a framework called Distributed Compressed Video Sensing (DISCOS) is introduced. Video frames are divided into key frames and non-key frames at the encoder. A video sequence consists of several GOPs (groups of pictures), where a GOP consists of a key frame followed by some non-key frames. Key frames are coded using conventional MPEG intra-coding. Every frame is both block-wise and frame-wise compressively sampled using structurally random matrices [25]. In this way, the more efficient frame-based measurements are supplemented by block-based measurements to take advantage of temporal block motion.
At the decoder, key frames are decoded using a conventional MPEG decoder. For the decoding of non-key frames, the block-based measurements of a CS frame, along with the two neighboring key frames, are used to generate a sparsity-constrained block prediction. The temporal correlation between frames is efficiently exploited through the inter-frame sparsity model, which assumes that a block can be sparsely represented by a linear combination of a few temporally neighboring blocks. This prediction scheme is more powerful than conventional block-matching, as it enables a block to be adaptively predicted from an optimal number of neighboring blocks, given its compressed measurements. The block-based prediction frame is then used as the side information (SI) to recover the input frame from its measurements. The measurement vector of the prediction frame is subtracted from that of the input frame to form a new measurement vector of the prediction error, which is sparse if the prediction is sufficiently accurate. Thus, the prediction error can be faithfully recovered. The reconstructed frame is then simply the sum of the prediction error and the prediction frame.
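The measurement-domain prediction step can be sketched as below. This is a structural simplification of the DISCOS decoder, not its exact algorithm: the recovery routine is left as a parameter, and any CS solver (such as the ISTA sketch earlier) could be plugged in.

```python
# Recover a frame as prediction + sparse prediction error, all in the measurement domain.
import numpy as np

def residual_measurements(y_frame, prediction, Phi):
    y_pred = Phi @ prediction.ravel()       # measure the side-information prediction
    return y_frame - y_pred                 # measurements of the prediction error

def reconstruct_frame(y_frame, prediction, Phi, recover_sparse):
    y_err = residual_measurements(y_frame, prediction, Phi)
    error = recover_sparse(Phi, y_err)      # sparse if the prediction is accurate
    return prediction + error.reshape(prediction.shape)
```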
Another DCVS scheme is proposed in [24]. The main difference from [23] is that both key and non-key frames are compressively sampled and no conventional MPEG/H.26x codec is required. However, key frames have a higher measurement rate than non-key frames.
The measurement matrix Φ is the scrambled block Hadamard ensemble (SBHE) matrix [28]. SBHE is essentially a partial block Hadamard transform followed by a random permutation of its columns. It provides near-optimal performance, fast computation, and memory efficiency, and it outperforms several existing measurement matrices, including the i.i.d. Gaussian matrix and the sparse binary matrix [28]. The sparsifying matrix used is derived from the discrete wavelet transform (DWT) basis.
At the decoder, the key frames are reconstructed using the standard Gradient Projection for Sparse Reconstruction (GPSR) algorithm. For the non-key frames, in order to compensate for their lower measurement rates, side information is first generated to aid the reconstruction. The side information can be generated by motion-compensated interpolation from neighboring key frames. In order to incorporate the side information, GPSR is modified with a special initialization procedure, and appropriate stopping criteria are incorporated (see Figure 3). The convergence of the modified GPSR has been shown to be faster, and the reconstructed video quality better, than with the original GPSR, two-step iterative shrinkage/thresholding (TwIST) [29], or orthogonal matching pursuit (OMP) [30].
Dictionary based compressed video sensing
In dictionary based techniques, a dictionary (basis) is created at the decoder from neighbouring frames for successful reconstruction of CS frames.
A dictionary based distributed approach to CVS is reported in [32]. Video frames are divided into key frames and non-key frames. Key frames are encoded and decoded using conventional MPEG/H.264 techniques. Non-key frames are divided into non-overlapping blocks of pixels. Each block is then compressively sampled and quantized. At the decoder, key frames are MPEG/H.264 decoded while the non-key frames are dequantized and recovered using a CS reconstruction algorithm with the aid of a dictionary. The dictionary is constructed from the decoded key frame. The architecture of this system is shown in Figure 4.
Two different coding modes are defined. The first is the SKIP mode, used when a block in the current non-key frame does not change much from the co-located block of the decoded key frame. Such a block is skipped during decoding. This comes at the cost of increased complexity at the encoder, since the encoder has to estimate the mean squared error (MSE) between the decoded key-frame block and the current CS frame block. If the MSE is smaller than some threshold, the decoded block is simply copied into the current frame, so the decoding complexity is minimal. The other coding mode is the SINGLE mode: the CS measurements for a block are compared with the CS measurements in a dictionary using the MSE criterion, and if the error is below some pre-determined threshold, the block is marked as decoded. The dictionary is created from a set of spatially neighboring blocks of previously decoded neighboring key frames. A feedback channel is used to inform the encoder that the block has been decoded and that no more measurements are required. For blocks that are not handled by either the SKIP or the SINGLE mode, normal CS reconstruction is performed.
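A toy version of the mode decision might look as follows; the thresholds, the MSE test on measurements, and the split of work between encoder and decoder are simplified placeholders for the actual scheme in [32].

```python
# Simplified SKIP / SINGLE / full-CS mode decision for one block.
import numpy as np

def choose_mode(cs_meas, key_meas, dictionary_meas, skip_thresh=1e-3, single_thresh=1e-3):
    # SKIP: the block barely changed from the co-located decoded key-frame block.
    if np.mean((cs_meas - key_meas) ** 2) < skip_thresh:
        return "SKIP"
    # SINGLE: the measurements closely match one entry of the dictionary.
    errors = np.mean((dictionary_meas - cs_meas) ** 2, axis=1)
    if errors.min() < single_thresh:
        return "SINGLE"
    return "CS_RECONSTRUCT"                 # fall back to normal CS reconstruction

rng = np.random.default_rng(4)
meas = rng.normal(size=32)
print(choose_mode(meas, meas + 1e-4 * rng.normal(size=32), rng.normal(size=(10, 32))))
```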
Another dictionary-based approach is presented in [33]. The authors propose the idea of using an adaptive dictionary: a dictionary learned from a set of blocks extracted globally from previously reconstructed neighboring frames, together with the side information generated from them, is used as the basis for each block in a frame. In their encoder, frames are divided into key frames and CS frames. For key frames, frame-based CS measurements are taken, and for CS frames, block-based CS measurements are taken. At the decoder, the reconstruction of a frame or a block is formulated as an l1-minimization problem and solved using the sparse reconstruction by separable approximation (SpaRSA) algorithm [34]. A block diagram of this system is shown in Figure 5.
Adjacent frames in the same scene of a video are similar; therefore a frame can be predicted from side information generated by interpolating its neighboring reconstructed frames. At the decoder in [33], for a CS frame, the side information is generated from the motion-compensated interpolation (MCI) of its previous and next reconstructed key frames. To learn the dictionary from these frames, training patches are extracted: for each block in the three frames, 9 training patches are taken, namely the 8 nearest blocks overlapping that block plus the block itself. After that, the K-SVD algorithm [35] is applied to the training patches to learn the dictionary D.
D is an overcomplete dictionary containing the learned atoms. Using the learned dictionary D, each block in the CS frame can be sparsely represented by a coefficient vector. This learned dictionary provides a sparser representation of the frame than a fixed-basis dictionary. The same authors have extended their work in [36] to dynamic measurement-rate allocation by incorporating a feedback channel in their dictionary-based distributed video codec.
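The sketch below illustrates the dictionary idea in a heavily simplified form: the atoms are random vectors standing in for vectorised patches from neighbouring frames (the K-SVD training stage of [35] is skipped), and a block is sparse-coded over them with a basic orthogonal matching pursuit.

```python
# Sparse-code one block over a patch dictionary with plain OMP.
import numpy as np

def omp(D, x, sparsity):
    residual, idx = x.copy(), []
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))       # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)     # refit on chosen atoms
        residual = x - D[:, idx] @ coef
    out = np.zeros(D.shape[1]); out[idx] = coef
    return out

rng = np.random.default_rng(5)
atoms = rng.normal(size=(100, 64))                        # stand-ins for training patches
D = (atoms / np.linalg.norm(atoms, axis=1, keepdims=True)).T   # 64 x 100 dictionary
block = 2.0 * D[:, 7] + 0.5 * D[:, 42]                    # a block that is 2-sparse in D
print("selected atoms:", np.nonzero(omp(D, block, sparsity=2))[0])
```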
Summary
CS is a new field and its application to video systems is even more recent. There are many avenues for further research, and thorough quantitative analyses are still lacking. The key encoding strategies adopted so far include:
• Applying CS measurements to all frames (both key frames and non-key frames), as suggested by [24].
• Applying conventional coding schemes (MPEG/H.264) to key frames and acquiring local block-based and global frame-based CS measurements for non-key frames, as suggested in [23,32].
• Splitting frames into non-overlapping blocks of equal size; reference frames are sampled fully, and after sampling a compressive sampling test is carried out to identify which blocks are sparse [16].
Similarly, the key decoding strategies include:
• Reconstructing the key frames by applying CS recovery algorithms such as GPSR, and reconstructing the non-key frames by incorporating side information generated from the recovered key frames [24].
• Decoding key frames using conventional image or video decompression algorithms and performing sparse recovery with decoder side information for prediction-error reconstruction; the reconstructed prediction error is added to the block-based prediction frame for the final frame reconstruction [23].
• Using a dictionary for decoding [32], where the dictionary is used for comparison and prediction of non-key frames; similarly, a dictionary can be learned from neighboring frames for the reconstruction of non-key frames [33].
These observations suggest that there are many different approaches to encoding video using CS. To achieve a simple encoder design, a conventional MPEG-type encoding process should not be adopted; otherwise there is little point in using CS, which would only add overhead. We believe that the distributed approach, in which each key frame and non-key frame is encoded by CS, is able to utilise CS more effectively. While spatial-domain compression is performed by CS, temporal-domain compression is not fully exploited, since no motion compensation or estimation is performed at the encoder. A simple but effective inter-frame compression scheme therefore still needs to be devised; in the distributed approach, this is equivalent to generating effective side information for the non-key frames.
| 5,679.8 | 2012-03-23T00:00:00.000 | [ "Computer Science" ] |
Sexual selection for males with beneficial mutations
Sexual selection is the process by which traits providing a mating advantage are favoured. Theoretical treatments of the evolution of sex by sexual selection propose that it operates by reducing the load of deleterious mutations. Here, we postulate instead that sexual selection primarily acts through females preferentially mating with males carrying beneficial mutations. We used simulation and analytical modelling to investigate the evolutionary dynamics of beneficial mutations in the presence of sexual selection. We found that female choice for males with beneficial mutations had a much greater impact on genetic quality than choice for males with low mutational load. We also relaxed the typical assumption of a fixed mutation rate. For deleterious mutations, mutation rate should always be minimized, but when rare beneficial mutations can occur, female choice for males with those rare beneficial mutations could overcome a decline in average fitness and allow an increase in mutation rate. We propose that sexual selection for beneficial mutations could overcome the ‘two-fold cost of sex’ much more readily than choice for males with low mutational load and may therefore be a more powerful explanation for the prevalence of sexual reproduction than the existing theory. If sexual selection results in higher fitness at higher mutation rates, and if the variability produced by mutation itself promotes sexual selection, then a feedback loop between these two factors could have had a decisive role in driving adaptation.
Sexual selection for males with beneficial mutations Gilbert Roberts 1* & Marion Petrie 2
Mutation is the source of variation, and the overwhelming majority of mutations are deleterious (i.e. they have a negative impact on fitness) 1 . Because of this, most models have ignored beneficial mutations (i.e. those with a positive impact on fitness), deeming them too rare to be of interest (see 2 for an exception). Nevertheless, adaptation depends upon those rare occasions when mutations have a beneficial impact on fitness, especially in changing environments 3,4 . Here, we investigate the role of sexual selection in favouring beneficial mutations. Sexual selection can be a powerful process resulting in strongly biased mating success 5 . This can even allow modifiers of the mutation rate ('mutator genes' , such as factors controlling DNA repair 6,7 ) to persist because female choice selects for those males with beneficial mutations 8,9 . We postulate here that female choice can be so potent that it not only promotes the maintenance of genetic variation, but it allows beneficial mutations, despite their comparative rarity, to have a marked impact on fitness. This means that the two-fold cost of sex may be overcome not just by the lower reduction in fitness caused by deleterious mutation load 10,11 , but by the increase in fitness resulting from fixation of new beneficial mutations. Our focus in this paper is therefore on how sexual selection has a more important effect than has previously been considered in favouring beneficial mutations and driving adaptation, as opposed to the widely considered effect of decreasing deleterious mutational load. We further postulate that female choice can result in higher fitness at higher mutation rates, despite the decline expected from the predominance of deleterious mutations, because female choice is effective in selecting those males with an increased number of beneficial mutations.
Any explanation for the evolution of sexual reproduction within groups must show how the production of males, even when they do not care for offspring, increases the genetic quality of sexually produced offspring, to overcome the numerical reduction of reproducing offspring, the 'two-fold cost of sex' 12,13 . Sexual reproduction can entail variance in mating success, especially where females can choose between males of varying quality and where a male can mate with more than one female 14,15 . This differential mating success is integral to the process of sexual selection, and it has previously been suggested that this process could contribute to the maintenance of sex by the selective removal of low quality males from the breeding population 10,11 . Sexual selection could thereby reduce the risk of population extinction 2 , as has been demonstrated in flour beetles Tribolium castaneum 16 . According to this theory, females can pick those males with the lowest load of deleterious mutations and those males that do not contribute to the breeding population are effectively a sink for deleterious mutations.
We simulated the 'genetic quality' of individuals by examining the evolution of deleterious and beneficial mutations in asexual and in sexual populations, and by varying the degree of female choice in the latter. Implicit in our model is the assumption that, in addition to determining survival, the mutations that a male accumulates determine the condition of some trait, which is used by female subjects in mate choice. Thus, we assume that a male's genetic quality is revealed in the trait and the female subjects use that information to select the best male. Choice thereby functions to identify the highest quality mates 3,17 . This is supported by the literature on 'good genes' effects in sexual selection 18,19 and by the demonstration that sexual traits can reveal genetic quality [20][21][22] . It is also consistent with other sexual selection models 23 . We compared sexual and asexual populations in the presence of varying levels of sexual selection and a modifier of mutation rate. We hypothesized that increasing mutation rate above a baseline would leverage the effects of sexual selection, making it more likely for sexual types to have higher genetic quality than asexual types. The rationale for this is that an increase in mutation rate feeds variation in genetic quality (and hence attractiveness), and that this variation promotes choice among males 15 . We consider that sexually reproducing individuals will vary in mating success 11 and that a key driver of this variation is female choice for high quality and attractive mates that will provide females with high viability and attractive offspring. We propose that females can actually get 'good genes' rather than 'fewer bad genes' as in the models of how sexual selection facilitates the evolution of sex 10,11 . As such, we predicted that variability in mutation rate should facilitate the effect of sexual selection in overcoming the two-fold cost of sex.
Methods
We simulated the 'genetic quality' of individuals by examining the evolution of deleterious and beneficial mutations in asexual and in sexual populations, and by varying the degree of female choice in the latter. A simplified flow diagram of the simulations is given in Fig. S6 and a Visual C program is provided as supplementary information. Our simulation methods were based on models of the evolution of mutation rate 6,8 . As in those simulations, the parameters reflected literature estimates where available 24 , subject to the constraint that the model was intended as a simple abstraction of reality and not an attempt to simulate an entire genome. The simulations began by setting up a population of P individuals. In simulations of sexual populations, exactly half were male and half female. Individuals were given a pair of homologous 'chromosomes' (i.e., we assumed diploidy) each bearing a 'mutator gene' and an associated string of 10 'viability genes' . We did not assume that these genes constituted the entire genome; only that there were no interactions between the genes of interest and those at other sites and that for the purposes of comparison between simulations, all other things were equal.
Each viability gene was subjected to a mutation process which could introduce deleterious and/or beneficial mutations. By default, deleterious mutations occurred with probability 10 -3 per gene per generation and beneficial mutations at 10 -6 per gene per generation, so deleterious mutation occurred at a rate 1000 × that of beneficial mutations. This implemented both the principle that there are many more ways to introduce faults in a complex organism than there are to improve it; and the empirical finding that beneficial mutations are much rarer than deleterious ones. We note that estimates of the per nucleotide mutation rate for sexually reproducing species tend to be around 1 × 10 -8 per individual per generation [24][25][26] ). However, if we were to use literature mutation rates we would also need to use realistic numbers of genes because selection acts at the level of the individual carrying those genes. This would then effectively mean we were trying to simulate entire genomes. We cannot do this, nor is this what the model is for. Instead, what we try to do is provide a 'proof of concept' by modelling a scenario where individuals are subject to selection based on differences in numbers of mutations. We assumed that each mutation had a sexually concordant effect. A mutator gene increased the rates of beneficial and of deleterious mutation by a factor M. We assumed that the mutator gene affected DNA repair only in a relatively small region of the genome 27 , namely the set of viability genes referred to above; that it was adjacent to the first of the row of viability genes; and that the crossover rate between the mutator and the first viability gene was the same as between each other viability gene.
Individuals were subjected to a mortality process, whereby their probability of survival was a function of their genetic quality. Deleterious mutations were assumed to have larger phenotypic effects than beneficial mutations: deleterious mutations reduced the wild type fitness of 1 by 0.005; beneficial mutations increased it by 0.002. The model assumed co-dominance with additive fitness effects, so the effects of mutations were summed to give 'genetic qualities' , which determined individual survivorship in a mortality process.
Surviving individuals reproduced, either by asexual or by sexual reproduction, the latter with or without female choice. Reproduction replaced the population by selecting females at random as parents, each such selection producing one offspring, with or without a male, which was either selected at random or chosen from a set of n. Asexual reproduction was implemented by selecting an individual at random and copying its chromosomes into an individual in the next generation. Sexual reproduction without female choice involved selecting a male and a female subject at random. In the case of female choice, a female subject was selected at random, and a set of F male subjects was selected at random. The female then bred with the male of highest genetic quality from that set of F males. This 'best of n' rule is the most widely used rule in modelling female choice 28,29 and has some empirical support from lekking species 30 . For each sexual mating, one offspring was produced by bringing together chromosomes contributed by both parents. This was carried out by selecting a chromosome at random from each parent and allowing crossover between each adjacent gene with probability 0.01. The process of selecting parents and producing offspring was repeated until the population was replaced by a new generation of P individuals.
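A highly condensed sketch of one generation of this procedure is given below. The per-gene rates and effect sizes follow the defaults quoted above, but the mutator gene, diploidy, crossover and the explicit mortality step are omitted, so this illustrates only the 'best of n' choice loop, not the full model.

```python
# Toy version of the simulation loop: mutation plus 'best of n' female choice.
import numpy as np

rng = np.random.default_rng(6)
P, GENES, F = 1000, 10, 5          # population size, viability genes, males per choice

def mutate(quality):
    d = rng.random((quality.size, GENES)) < 1e-3    # deleterious hits
    b = rng.random((quality.size, GENES)) < 1e-6    # beneficial hits (1000x rarer)
    return quality - 0.005 * d.sum(axis=1) + 0.002 * b.sum(axis=1)

def next_generation(male_q, female_q):
    offspring = np.empty(P)
    for i in range(P):
        mother = female_q[rng.integers(female_q.size)]
        father = male_q[rng.integers(male_q.size, size=F)].max()   # 'best of n' choice
        offspring[i] = 0.5 * (mother + father)                     # additive, co-dominant
    return mutate(offspring)

males, females = np.ones(P // 2), np.ones(P // 2)
for _ in range(100):
    kids = next_generation(males, females)
    males, females = kids[:P // 2], kids[P // 2:]
print("mean genetic quality after 100 generations:", float(kids.mean()))
```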
Each scenario was simulated 10 times, allowing for uncertainty estimates to be computed across runs which each produced different results due to the stochastic processes in the models. Unless otherwise stated, parameters used were as given in Table S1.
Results
Considering first the results with the default mutation rates, if we compare Fig. 1a and b, we can see that female choice had a marked effect in decreasing the frequency of deleterious mutations (means and standard errors across 10 simulations at generation 100 were 1.7526 ± 0.4349 and 0.04200 ± 0.06530 with no female choice and with female choice between two males respectively) and in increasing beneficial mutations (from 0.0003 ± 0.0055 to 0.2000 ± 0.1897). We can approximate the relative effects of female choice on beneficial mutations as increasing them by a factor of 0.2000/0.0003 = 667, as compared to decreasing deleterious mutations by a factor of 1.7526/0.042 = 42. Figure 1 shows that this difference continued to increase markedly with the number of generations.
Looking at the relationship between mutation rate and deleterious mutations, we found that in populations with asexual reproduction and in those with sexual reproduction but no sexual selection, deleterious mutation load increased dramatically when mutation rate increased (Fig. 2a). However, female choice, even between just two males, reduced deleterious mutations to very low levels. Female choice was so effective that even with a high mutation rate, the numbers of deleterious mutations were reduced to substantially below those found in asexual populations or in sexual populations lacking female choice. Conversely, numbers of beneficial mutations were low, even with a ten-fold increase in mutation rate in asexual populations and sexual ones without female choice (Fig. 2b). However, with female choice, beneficial mutations were much more common, increasing from 12.49 ± 1.52 through 15.47 ± 1.57 to 17.19 ± 1.48 with female choice of 1, 2 and 5 males respectively; and an increase to ten times the mutation rate accentuated this effect (with 3.29×, 5.87× and 6.38× the numbers of beneficial mutations with female choice of 1, 2 and 5 males respectively). Summing these effects, overall genetic quality decreased with increasing mutation rate in asexual populations and sexual ones lacking female choice, yet it increased with increasing mutation rate in populations with female choice (Fig. 2c). Thus, sexual selection was so powerful that it overcame the 1000-fold disadvantage (see Methods) of the beneficial mutation rate and allowed an increase in genetic quality with increased mutation rate. This increase in genetic quality occurred because sexual selection was effective at keeping numbers of deleterious mutations low despite an increase in mutation rate, yet was also effective in selecting for beneficial mutations, and could do this most effectively when mutation rate was high. We further investigated the interaction between female choice and mutation rate (Fig. 3). Genetic quality increased with mutation rate and female choice to an intermediate maximum before declining rapidly. High levels of female choice improved genetic quality; but there came a point where even the highest level of female choice shown was not powerful enough to withstand the surge in deleterious mutations caused by a substantially increased mutation rate.
We investigated the robustness of our results by varying key parameters whilst maintaining a moderate degree of female choice (F = 5). First, we varied the ratio of deleterious to beneficial mutation around the default ratio of 1000:1. As intuitively expected, genetic quality increased as beneficial mutations became relatively more common, especially when this factor combined with a higher mutation rate (Fig. S1). Looking at the effect of each beneficial mutation on genetic quality, we can again see that the results are intuitive (Fig. S2): increasing the effect of beneficial mutations increases genetic quality, especially when mutation rate is high. Varying the cost of female choice within the range shown has no effect on genetic quality in either mutation rate condition (Fig. S3). Increasing the population size, P, increased the effect size, as expected from the increased potential for genetic change across more individuals (Fig. S4). Varying recombination rate had a small quantitative effect, with increased recombination allowing an increase in genetic quality, presumably by allowing favourable genetic combinations (Fig. S5). All in all, varying key parameters suggested that the main effect we describe is robust.
The benefit of simulations is that they can help us predict processes that are difficult to comprehend intuitively or mathematically; the corollary is that they can be hard to interpret. To better understand the processes in the model, we hypothesized that female choice selects for a breeding population that results in offspring that are s standard deviations above the mean of the underlying population. To test whether males that were chosen to breed were indeed of higher genetic quality than the population from which they were drawn, we aggregated across the first 1000 generations for all 1000 reproducing pairs and calculated the mean difference in genetic quality between females and their chosen males. Where female choice was absent and mutation was at the default rate, the mean difference was 0.0001 ± <0.00005 (standard error of the mean); with females choosing the best of 10 males and mutation rate at 10× the default, the difference was 0.0022 ± <0.00005. Therefore, females were choosing males that had a genetic quality equivalent to approximately 1 beneficial mutation above the average (where each beneficial mutation had an effect of 0.002 on fitness, as represented by the 10 simulated genes). In this way, sexual selection appears to be able to exploit the increased variation caused by an increased mutation rate and actually produce an increase in genetic quality out of a background that one would otherwise expect to be dominated by increased deleterious mutation load.
As a simple analytical approximation, consider that female choice between males results in offspring that are s standard deviations above the mean genetic quality x. For these offspring to be of a quality that overcomes the two-fold cost of sex we need x + sσ > 2x, and therefore sσ > x (Eq. 1). This inequality will most readily be satisfied when the standard deviation of genetic quality is high relative to the mean. That is, there needs to be high variability within the population. Variability results from mutation, and we can show when an increase in mutation rate can be favoured. This can occur when the decrease in the mean genetic quality of the population that necessarily results from an increase in mutation rate (assuming there is a strong predominance of deleterious over beneficial mutations) is more than compensated for by the effect of female choice in selecting for males of high genetic quality. If subscripts 1 and 2 indicate the means and standard deviations before and after the change in mutation rate, then we require x2 + sσ2 > x1 + sσ1, so s(σ2 − σ1) > x1 − x2 (Eq. 2). This analytical approach has the benefit that it considers the whole genome, something we do not attempt in our simulations, which allow us to include stochastic factors at the level of the gene. As a hypothetical numerical example, consider that increasing the mutation rate lowers the mean genetic quality from 1 to 0.9 whilst increasing the variance in genetic quality from 0.2 to 0.4. Consider also that female choice selects for males that produce offspring 1 standard deviation above the mean. Entering these hypothetical values into Eq. 2 demonstrates that an increase in mutation rate combined with female choice can indeed lead to an increase in the genetic quality of offspring. To aid understanding, this example is illustrated in Fig. S6. Whilst simplified, this makes the point that an increase in mutation rate can be favoured under sexual selection and can thereby contribute to overcoming the two-fold cost of sex.
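As a quick numerical check of the hypothetical example above, the lines below evaluate the offspring quality x + sσ before and after the change in mutation rate, treating 0.2 and 0.4 as variances and taking square roots to obtain standard deviations.

```python
# Check the illustrative numbers in the text against the reconstructed inequality.
mean1, var1 = 1.0, 0.2          # before the increase in mutation rate
mean2, var2 = 0.9, 0.4          # after the increase in mutation rate
s = 1.0                         # female choice worth one standard deviation

before = mean1 + s * var1 ** 0.5
after = mean2 + s * var2 ** 0.5
print(f"offspring quality before: {before:.3f}, after: {after:.3f}")   # after > before
```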
Discussion
We have shown here that sexual selection is a potent force not just in reducing deleterious mutation load but in leveraging the spread of beneficial mutations. Several empirical studies have found that sexual selection over relatively short periods of time (typically under 100 generations) resulted in increased average fitness or reduced extinction 31 . Intuitively, it seems more plausible that these results are observed because sexual selection is acting on the already present standing variation, which largely comprises deleterious alleles. However, our result shows that sexual selection for beneficial mutations can actually be the more important effect, even over moderate numbers of generations and even when beneficial mutations were 1000 times rarer than deleterious ones and when they each had only 40% of the impact on fitness. We suggest that rather than males being effectively a sink for bad genes 10,11 they can be a vehicle for good genes.
The power of sexual selection was also apparent when we allowed for an increase in mutation rate. We found that sexual selection for beneficial mutations could result in an increase in genetic quality with increased mutation rate. This is counterintuitive given that any increase in mutation rate brings 1000 times as many deleterious as beneficial mutations in our model. An increase in mutation rate can never be adaptive in a model that considers only deleterious mutations; it is only by allowing for rare beneficial mutations that we can discern this effect. The result highlights just how powerful the effect of female choice can be, even when just choosing between 2 males. Not only can sexual selection combined with increased mutation rate provide a solution to the paradox of how variation can be maintained 8,32 , but it can also lead to an increase in genetic quality.
We believe this is the first time that this effect has been reported. Its significance is that if mutation rate is elevated above a typically postulated minimum level, then this would increase the benefits of sex relative to asexual reproduction, helping to overcome what has been termed the 'two-fold cost of sex' 12,13 . It therefore seems that the role of sexual selection in the promotion of heritable genetic variation is key to understanding the predominance of sexual reproduction.
Our model employs 'best of n' female choice as a convenient way of modelling sexual selection. However, this is just one way of implementing variance in mating success in which some individuals take a disproportionate share of mating opportunities. Active choice by females is clearly something that must have evolved after sexual reproduction itself, but in line with theoretical treatments that have focussed on deleterious rather than beneficial mutations 10,11 , we posit that some variance in mating success is inevitable and that this could have contributed to the invasion of sexual reproduction in an asexual population in the first instance. Current thinking suggests that anisogamy has evolved from isogamy 33,34 . The evolution of anisogamy needs to be accompanied by an effect on mating success. Only when an increase in mating success of smaller male gametes overcomes the cost of loss of a male resource contribution to the zygote will this system be evolutionarily stable. Where mating success is determined by the level of indirect genetic benefits, the male phenotype will evolve to reveal their underlying genetic quality and only when the genetic quality differences are large and constantly maintained by mutation will this system be evolutionarily stable 8,15,35 . Signalling of genetic quality by males must therefore have evolved either concurrently or very early on in the evolution of anisogamy.
Our model assumes that mutation can be controlled genetically. One mechanism for this is through selection on those genes that are responsible for DNA repair 6 , but there are other possible genetic mechanisms 36 . We predict a greater mutation rate in genes that influence sexually selected phenotypes and in more sexually selected species. Across non-human species, particularly in birds, which differ in within population variance in mating success 37 , and thus the level of sexual selection, there is some evidence of a positive correlation with the rate of mutation, measured by variance in minisatellite mutation rate 38,39 . Given that sexual selection can often result in greater variance in mating success in males, our results are also consistent with male-biased mutation rates 40 .
Our results may also be consistent with the findings of experimental work on seed beetles 41 : interestingly, an experiment reported that not only did sexually selected males pass on a lower mutation load, but that they had fewer de novo mutations suggesting that sexual selection interacts with mutation rates. We have shown here that sexual selection can cause fitness to be higher when mutation rate is higher; but we have not shown that this can dynamically drive an increase in mutation rate through individual level selection. To do this would require an understanding of how much individuals gain themselves from having a high mutation rate and how much they gain from others having a high mutation rate. These fitness benefits depend in turn on with whom individuals interact, which means considering population size and structure. Hence to limit the complexity of the analysis presented we have here confined ourselves to considering fitness benefits at the population level from fixed mutation rates.
Our results suggest a positive feedback effect whereby a high mutation rate could favour sexual reproduction over asexual reproduction, while sexual reproduction with sexual selection could favour a high mutation rate. We speculate that this could create a ratchet effect in that once sexual reproduction is established, it would become harder to switch back to asexual reproduction. We predict that the transition from sexual reproduction to asexuality should be particularly rare in populations with strong sexual selection and that parthenogenetic females could only invade a population of sexually reproducing females when there is little or no genetic variation in male genetic quality. This could occur in small, isolated inbred populations that become genetically depauperate. There is some evidence that parthenogenetic populations can occur on islands 42 . Interestingly, some island populations of birds are also known to lose the sexually selected signals that are characteristic of mainland populations 43 .
Natural selection will tend to favour a low mutation rate 1 , yet adaptation requires mutation. Our model shows how a faster rate of adaptation can occur with sexual selection. This is consistent with the finding that sexual selection (measured as the degree of polygyny) interacts with the rate of molecular evolution and with body mass to predict species richness at the genus level 44 . We suggest that evolvability itself is under selection 45 , and that through sexual reproduction with sexual selection, evolution can lead to greater evolvability.
| 6,238.4 | 2021-05-26T00:00:00.000 | [ "Biology" ] |
The impact of temperature vertical structure on trajectory modeling of stratospheric water vapour
Lagrangian trajectories driven by reanalysis meteorological fields are frequently used to study water vapour (H2O) in the stratosphere, in which the tropical cold-point temperatures regulate the amount of H2O entering the stratosphere. Therefore, the accuracy of temperatures in the tropical tropopause layer (TTL) is of great importance for understanding stratospheric H2O abundances. Currently, most reanalyses, such as the NASA MERRA (Modern Era Retrospective-Analysis for Research and Applications), only provide temperatures with ~1.2 km vertical resolution in the TTL, which, it has been argued, misses finer vertical structure at the tropopause and therefore introduces uncertainties in our understanding of stratospheric H2O. In this paper, we quantify this uncertainty by comparing the Lagrangian trajectory prediction of H2O using MERRA temperatures on standard model levels (traj.MER-T) to those using GPS temperatures in
Trajectory Model and Temperatures Used
Stratospheric water vapour (H2O) and its feedback play an important role in regulating the global radiation budget and the climate system (e.g., Holton et al., 1995; Randel et al., 2006; Solomon et al., 2010; Dessler et al., 2013). It has been known since Brewer's seminal work on stratospheric circulation that tropical tropopause temperature is the main driver of stratospheric H2O concentration (Brewer, 1949). As parcels approach and pass through the cold-point tropopause (the altitude at which the air temperature is coldest), condensation occurs and ice falls out, thereby regulating the parcel's H2O concentration to the local saturation level (e.g., Fueglistaler et al., 2009, and references therein). This is the dehydration process. The role of tropopause temperature variation in tropical dehydration is most apparent in the annual variation of tropical stratospheric H2O, also known as the "tape recorder" (Mote et al., 1996). When air crosses the tropical tropopause layer (TTL), it experiences multiple dehydrations as it encounters lower temperatures, and the final stratospheric H2O mixing ratio is established after the air passes through the coldest temperature along its path, which sets the strong relation between the cold-point tropopause and the entry-level H2O (e.g., Holton and Gettelman, 2001; Randel et al., 2004, 2006). The details of the transport and dehydration process can be understood by performing Lagrangian trajectory simulations, which track the temperature history of a large number of individual parcels. Unlike chemical tracers, whose modeling depends strongly on the transport imposed (Ploeger et al., 2011; Wang et al., 2014), the simulation of H2O is primarily constrained by tropopause temperatures. Dehydration thus primarily depends on the air parcel temperature history, and stratospheric H2O simulations ultimately require accurate analyses of temperatures, particularly at the tropopause (e.g., Mote et al., 1996; Fueglistaler et al., 2005, 2009; Liu et al., 2010; Schoeberl and Dessler, 2011; Schoeberl et al., 2012, 2013).
In this paper, we use a forward, domain-filling trajectory model to study the detailed dehydration behavior of the humidity of air parcels entering the tropical lower stratosphere. Previous analyses have demonstrated that this model can accurately simulate many aspects of the observed stratospheric H2O (Schoeberl and Dessler, 2011; Schoeberl et al., 2012, 2013). Despite the good agreement with observations, there are clear areas of uncertainty arising from, for instance, the accuracy of the circulation fields ... atmosphere (Kursinski et al., 1997). The GPS radio occultation (RO) technique makes the data accuracy independent of the platform, so the biases among different RO payloads can be as low as 0.2 K in the tropopause and stratosphere (Ho et al., 2009). Therefore, to compensate for the relatively low horizontal resolution (relative to that of the reanalysis), we include GPS RO data from all platforms. These include the Constellation Observing System for Meteorology ..., the Meteorological Operational Satellite-A (MetOp-A), the Satelite de Aplicaciones Cientificas-C (SAC-C) satellite (Hajj et al., 2004), and TerraSAR-X. There are ~2000-3500 profiles per day, mostly from COSMIC, with ~700-1100 of these in the tropics. Each day, GPS temperature profiles are binned to 200-m vertical resolution. Horizontally, we grid the data onto 2.5 x 1.25 degree (longitude by latitude) grids with 2-D Gaussian-function weighting. This gridded dataset has been successfully used to diagnose many detailed features of the tropopause inversion layer (Gettelman and Wang, 2015) ... finer vertical structure induced by waves (see Fig. 3 in Kim and Alexander, 2013). The trajectory simulation using this temperature dataset is denoted traj.MER-Twave. Note that we only consider the vertical structure issue, since it is by far the limiting factor in representing waves in the TTL. A large portion of the TTL wave spectrum has horizontal and temporal scales much larger and longer than the reanalysis resolution; therefore, temperature behaves almost linearly between model horizontal and temporal grid points. However, temperature does not behave linearly in the vertical, because a significant portion of TTL waves have vertical wavelengths shorter than ~4 km (see Figure S4 in the supporting information of Kim and Alexander, 2015), which can make wave-induced disturbances under-represented at the ~1.2 km vertical resolution of the reanalyses.
The wave scheme produces both positive and negative perturbations to the MERRA temperature profiles, depending on the phase of the waves. Overall, the change in temperature induced by the waves is less than 2 K (Fig. 3), although in rare cases it can reach 5-7 K. Importantly, however, about 80% of the changes in cold-point temperature are negative, with the wave scheme lowering the average cold-point temperature by ~0.35 K. It is this reduction in cold-point temperature that is responsible for the reduction in ... exists, but the mean temperatures are more accurate. In contrast, MER-Twave has better variability but a less accurate mean, since it is designed to have variability similar to radiosondes but with the mean preserved from the original MER-T. In summary, the mean temperature is closer to reality in GPS than in MER-T and MER-Twave, but the temperature variability is closer to reality in MER-Twave than in MER-T and GPS. In addition, MER-Twave is a general technique that could be applied in situations where GPS temperatures are not available (e.g., reanalyses before 2006, climate models).
Interpolation scheme
In our studies, we use linear interpolation to estimate the temperature between the fixed levels of the temperature datasets. However, some previous analyses have used higher-order interpolations, such as cubic splines (e.g., Liu et al., 2010), to account for the strong curvature of temperature profiles around the cold-point tropopause. In order to determine which approach is superior, we sample GPS tropical temperature profiles at MERRA vertical levels and then use the two interpolation schemes to reconstruct the full GPS resolution. We then compare the minimum saturation mixing ratio from the recovered profiles to the minimum calculated from the full-resolution GPS profiles. Fig. 4a shows the probability distribution of the differences between the minimum saturation mixing ratio in the full-resolution GPS profile and in the two interpolation schemes. On average, the linear interpolation performs better (the RMS difference is 0.18 and 0.25 ppmv for the linear and cubic-spline schemes, respectively). Fig. 4b shows the corresponding probability distribution of the difference in the pressure of this minimum, and the linear interpolation does better for this metric too (RMS differences of 5.2 and 7.2 hPa for the linear and cubic-spline interpolation, respectively). We have also tested higher-order spline interpolations and find that none produce lower RMS errors than linear interpolation. Overall, cubic-spline interpolation tends to underestimate the cold-point temperature, making the implied H2O too dry, as noted by Liu et al. (2010). Thus, in our studies we adopted the linear interpolation scheme for the three trajectory runs. ... entering the stratosphere. We define "parcels entering the stratosphere" as parcels that underwent final dehydration between 45°N and 45°S (thus ignoring polar dehydration) and that had already been at altitudes higher (pressures lower) than 90 hPa for at least six months since the last time they were dehydrated (FDP). This guarantees that the parcels have already crossed the cold-point tropopause (~380 K, or ~100-94 hPa) and have indeed experienced the coldest temperature along their ascending paths. Averaging over 7 years minimizes the effects of interannual variability. ... In other words, the bimodal FDP distribution from the MERRA run (Fig. 5a) could be even more peaked if a smaller integration step were chosen in our trajectories. There are two reasons why we did not choose a smaller time step: 1) the wind and temperature data are only available 6-hourly or even daily (GPS), so a much smaller time step introduces more uncertainty through additional interpolation; and 2) the balance between model efficiency and computational resources. ... traj.MER-Twave dries by ~0.2-0.3 ppmv (Fig. 7a-b), accounting for at most ~2.5% and 7.5% changes given typical stratospheric H2O abundances of ~4 ppmv, respectively. However, despite the differences in H2O abundances, the interannual variability (the residual from the mean annual cycle) exhibits virtually no differences, due to the strong coupling between the interannual changes of stratospheric H2O and tropical cold-point tropopause temperatures (Fig. 8). Therefore, for studying the interannual changes of stratospheric H2O, we argue that reanalysis temperatures are more useful because of their long-term availability.
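The linear-versus-spline comparison described above can be illustrated with a synthetic sounding; the profile below is an arbitrary stand-in for a GPS RO profile (not real data), and the coarse levels mimic the ~1.2 km spacing of reanalysis model levels.

```python
# Compare linear and cubic-spline recovery of a cold point from coarse levels.
import numpy as np
from scipy.interpolate import CubicSpline

z_fine = np.arange(14.0, 20.0, 0.2)                     # km, ~200-m resolution
t_fine = 200.0 + 2.5 * (z_fine - 17.0) ** 2             # sharp cold point near 17 km
t_fine += 1.5 * np.sin(np.pi * z_fine)                  # wave-like fine structure

z_coarse = np.arange(14.0, 20.1, 1.2)                   # ~1.2-km "model levels"
t_coarse = np.interp(z_coarse, z_fine, t_fine)

t_lin = np.interp(z_fine, z_coarse, t_coarse)           # linear reconstruction
t_spl = CubicSpline(z_coarse, t_coarse)(z_fine)         # cubic-spline reconstruction

print("true cold point:     %.2f K" % t_fine.min())
print("linear recovery:     %.2f K" % t_lin.min())
print("cubic-spline:        %.2f K" % t_spl.min())
```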
Looking at the locations of FDP, we find a bimodal distribution when using standard MERRA temperatures on model levels. This is caused by the fact that the cold-point tropopause is constrained to be near the two MERRA model levels (100.5 and 85.4 hPa) that bracket the cold-point tropopause (Fig. 5d-f). When using the temperatures with finer vertical structures, the resultant FDP patterns appear to be more physically reasonable (Figs. 5a-c and Fig. 6).
In this paper we perform linear interpolation for all trajectory runs. Other analyses have used cubic spline interpolation owing to the strong curvature of the temperature profile around the cold-point tropopause. We investigate the performance of both schemes using GPS temperature profiles (Sect. 2.2.3) and find that, while the cubic spline introduces additional information through its assumption about the shape of the temperature profile around the tropopause, it tends to generate unrealistically low cold-point temperatures because of the cubic fitting. The results are therefore not necessarily realistic, and linear interpolation is overall more accurate (Fig. 4). | 2,560.6 | 2015-03-31T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
A Hybrid Mean Value Involving Dedekind Sums and the Generalized Kloosterman Sums
In this paper, we use the mean value theorem of Dirichlet L-functions and the properties of Gauss sums and Dedekind sums to study the hybrid mean value problem involving Dedekind sums and the general Kloosterman sums and give an interesting identity for it.
Introduction
Let q be a natural number and h be an integer coprime to q. The classical Dedekind sum S(h, q) = ∑_{a=1}^{q} ((a/q))((ah/q)), where ((x)) = x − ⌊x⌋ − 1/2 if x is not an integer and ((x)) = 0 otherwise, describes the behaviour of the logarithm of the η-function (see [1, 2]) under modular transformations. There are many papers written on their various properties (see the examples in [3][4][5][6][7][8][9][10] and [11]). In particular, Zhang and Liu [12] studied the hybrid mean value problems related to Dedekind sums and Kloosterman sums, where q ≥ 3 is an integer, ∑′_{a=1}^{q} denotes the summation over all 1 ≤ a ≤ q with (a, q) = 1, e(y) = e^{2πiy}, and ā denotes the multiplicative inverse of a mod q. They proved the following results: Theorem 1. Let p be an odd prime; then one has the identity where h_p denotes the class number of the quadratic field Q(√−p).
Theorem 2. Let p be an odd prime; then one has the asymptotic formula: where exp(y) = e^y.
It is natural to ask what happens for the general Kloosterman sums K(m, n, χ; p) = ∑′_{a=1}^{p} χ(a) e((ma + nā)/p): does there exist an identity similar to Theorem 1? Here, χ denotes any Dirichlet character mod p.
The main purpose of this paper is to answer these questions. That is, we shall use the mean value theorem of Dirichlet L-functions and the properties of Gauss sums and Dedekind sums to prove the following. Theorem 3. Let p be an odd prime with p ≡ 1 mod 4. Then, for any Dirichlet character χ mod p, we have the identity: Theorem 4. Let p be an odd prime with p ≡ 3 mod 4. Then, for any Dirichlet character χ mod p, we have the identity: where χ_2 = (∗/p) denotes the Legendre symbol and h_p denotes the class number of the quadratic field Q(√−p). It is clear that if χ = χ_0, then K(a, 1, χ; p) = K(a, 1; p). Note that χ_0(−1) = 1; from Theorems 3 and 4, we may immediately deduce Theorem 1 in [12], so our results are a generalization of [12].
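As a purely illustrative aside (not part of the paper), the two objects appearing in these theorems can be evaluated numerically for a small prime straight from their standard definitions, with the Legendre symbol standing in as an example character. The sketch below follows those standard definitions only; it is not a verification of the identities above.

```python
# Numerical sketch of the standard definitions (small prime only, illustration).
from math import floor
import cmath

def sawtooth(x):
    # ((x)) = x - floor(x) - 1/2 for non-integral x, and 0 otherwise.
    return 0.0 if x == floor(x) else x - floor(x) - 0.5

def dedekind_sum(h, q):
    # Classical Dedekind sum S(h, q).
    return sum(sawtooth(a / q) * sawtooth(a * h / q) for a in range(1, q))

def legendre(a, p):
    # Example Dirichlet character mod p: the Legendre symbol (a/p).
    if a % p == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def kloosterman(m, n, chi, p):
    # General Kloosterman sum K(m, n, chi; p) = sum_a chi(a) e((m*a + n*a_bar)/p).
    e = lambda y: cmath.exp(2j * cmath.pi * y)
    return sum(chi(a, p) * e((m * a + n * pow(a, -1, p)) / p) for a in range(1, p))

p = 13
print(dedekind_sum(5, p))              # S(5, 13)
print(kloosterman(3, 1, legendre, p))  # K(3, 1, chi; 13) with chi the Legendre symbol
```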
Several Lemmas
In this section, we shall give several simple lemmas, which are necessary for the proofs of our theorems. Hereafter, we shall use many properties of character sums and Gauss sums, all of which can be found in reference [13]. First, we have the following.
Lemma 1. Let p > 3 be a prime, χ be any fixed Dirichlet character mod p.
Then, for any nonprincipal character χ_1 mod p with χχ_1 ≠ χ_0, we have the identity: where χ_0 denotes the principal character mod p and τ(χ) denotes the Gauss sum defined as τ(χ) = ∑_{a=1}^{p} χ(a)e(a/p). On the other hand, from the properties of Gauss sums, we have Combining (10) and (11), we may immediately deduce the identity: This proves Lemma 1.
Lemma 2.
Let p be a prime with p ≡ 1 mod 4 and let χ be any odd character mod p. Then, we have the identity: Proof. Since p ≡ 1 mod 4 and χ is an odd character mod p, we know that χ is not the Legendre symbol and χ² ≠ χ_0. Note that χ(−1) = −1; from (10), we have This proves Lemma 2.
Lemma 4.
Let q > 2 be an integer. Then, for any integer a with (a, q) = 1, we have the identity: where L(1, χ) denotes the Dirichlet L-function corresponding to the character χ mod d.
Proof of the Theorems
In this section, we will complete the proofs of our theorems. First we prove Theorem 3. From Lemma 4 and the definition of S(a, p), we have and (with a = 1) Since p ≡ 1 mod 4, we know that the Legendre symbol (∗/p) = χ_2 is an even character mod p. Note that, for any nonprincipal character χ mod p, |τ(χ)| = √p. So, if χ is an even character mod p, then from Lemma 1, (18), and (19), we have If χ is an odd character mod p, then note the identity: · L 1, χ 1 2 + p · π − 2 p − 1 χ 1 mod p | 1,078 | 2021-03-30T00:00:00.000 | [
"Mathematics"
] |
LapTrack: linear assignment particle tracking with tunable metrics
Abstract Motivation Particle tracking is an important step of analysis in a variety of scientific fields and is particularly indispensable for the construction of cellular lineages from live images. Although various supervised machine learning methods have been developed for cell tracking, the diversity of the data still necessitates heuristic methods that require parameter estimations from small amounts of data. For this, solving tracking as a linear assignment problem (LAP) has been widely applied and demonstrated to be efficient. However, there has been no implementation that allows custom connection costs, parallel parameter tuning with ground truth annotations, and the functionality to preserve ground truth connections, limiting the application to datasets with partial annotations. Results We developed LapTrack, a LAP-based tracker which allows including arbitrary cost functions and inputs, parallel parameter tuning and ground-truth track preservation. Analysis of real and artificial datasets demonstrates the advantage of custom metric functions for tracking score improvement from distance-only cases. The tracker can be easily combined with other Python-based tools for particle detection, segmentation and visualization. Availability and implementation LapTrack is available as a Python package on PyPi, and the notebook examples are shared at https://github.com/yfukai/laptrack. The data and code for this publication are hosted at https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisation. Supplementary information Supplementary data are available at Bioinformatics online.
Introduction
Automated tracking of particles in time-lapse images is important in a wide range of fields in science and is especially crucial in creating large datasets of cell lineages in biological studies. Recently there has been considerable development in tracking algorithms, with methods based on probabilistic modeling (Bise et al., 2011; Bove et al., 2017; Chen, 2021; Chenouard et al., 2009, 2014; Meijering et al., 2009; Ulicna et al., 2021; Ulman et al., 2017) and supervised machine learning (Ben-Haim and Riklin-Raviv, 2022; Chen, 2021; Lou and Hamprecht, 2011; Ulman et al., 2017) increasingly being developed. The diverse nature of live imaging tasks, however, frequently requires tracking without an underlying model or large-scale ground-truth annotations, emphasizing the need for a robust tracking algorithm with a small number of parameters that can be tuned from manual annotations.
Defining and optimizing a global cost function to appropriately penalize wrong connections is a common approach in robust tracking methods. If the cost function is a linear sum of the costs associated with the connections, we can employ efficient algorithms (Jonker and Volgenant, 1987; Kuhn, 1955) to solve the global optimization problem called the linear assignment problem (LAP). The LAP-based tracking method has proven to be accurate and robust, especially for data with higher particle density. To deal with particle splitting (by division or oversegmentation) or merging (by undersegmentation), which is common in live cell data, Jaqaman et al. (2008) further developed a two-stage LAP method, with the second stage dedicated to the connection of splitting and merging branches. The cost function in their case was the squared Euclidean distance between the positions of the objects, with additional intensity-associated costs for splitting and merging.
Tools have been developed to provide similar LAP-based algorithms with splitting and merging detection; TrackMate (Ershov et al., 2022; Tinevez et al., 2017), for example, provides a distance-based LAP tracker with a particle detection and segmentation workflow and a method to conduct manual correction, all within the Java-based framework in ImageJ (Schindelin et al., 2012; Schneider et al., 2012). Cell-ACDC (Padovani et al., 2022), which was originally designed for yeast analysis, also implements an overlap-based LAP tracker with splitting detection, as well as various functions ranging from image alignment to manual correction that support the entire analysis workflow in Python. In addition, TracX (Cuny et al., 2022) employs a multi-round tracking and correction workflow using a LAP tracker and a mistracking detector that matches image features.
Although other highly accurate methods have been proposed to work for the tracking problem with cell divisions, no single tracking algorithm will be perfect for all the diverse experimental situations (Ulman et al., 2017). To obtain near-perfect segmentation and tracking for specific data, users must still optimize the segmentation and tracking steps, automatically or manually. In this regard, the LAP-based algorithm that robustly works with a small number of parameters continues to play a key role in generating the initial tracking data without large-scale manual annotation.
An adaptive improvement to the original LAP-based tracking with distance can be made by using additional features taken from the cell images. For example, we can extract the morphology of each cell, such as its shape and size, from typical live cell images, as well as the signal levels from multiple fluorescent channels. The consistency of cell shape and fluorescent signals across time frames is useful when tracking is conducted by human eyes, especially when the frame rate of the data is not high enough. Therefore, it is desirable to be able to implement arbitrary inputs and cost functions in the LAP-based tracking scheme, as well as to tune the parameters using partial ground-truth annotations.
These requirements motivated us to build a tool that recapitulates the LAP algorithm (Jaqaman et al., 2008;Tinevez et al., 2017) with additional flexibility and modularity; LapTrack is designed as a simple intermediate in the entire tracking pipeline that takes the positions and features of particles and returns LAP-optimized tracks. The three unique features of LapTrack are (i) arbitrary tunable cost functions for particle connection, (ii) integrability with other Python tools and (iii) the functionality to preserve ground-truth (annotated) connections. Within this framework, we can implement user-defined cost functions for connections that can take an arbitrary number of inputs. The tracking function is modularized and documented as an application programming interface (API) so that it can be integrated into any custom workflow in Python, allowing parallel parameter optimization as well as visualization of results in easy steps.
In this article, we demonstrate how this pipeline can be used not only to optimize the tracking in a supervised manner, but how it is also useful for efficient manual correction of the tracks when combined with visualization tools such as napari (Sofroniew et al., 2022).
Datasets
We here describe the data that we used to demonstrate the use cases of LapTrack: live cell images with ground truth segmentation and tracking (mouse paw epidermis dataset, cell migration dataset, Yeast Image Toolkit dataset and C2C12 dataset) and simulated data (colored particles) provided in https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisation. We also used high-density vesicles, yeast and 3D Drosophila data to show that the tracking pipeline works for a wide range of applications.
Mouse paw epidermis dataset
The segmentation data and the ground truth tracking result collected and analyzed in Mesa et al. (2018); Yamamoto et al. (2022) were used as a reference. The dataset contains 236 to 327 cells in the observation area and has 15 frames.
Cell migration dataset
Images, segmentation data for a portion of frames and the ground truth tracking result were downloaded from Zenodo (Pylvänäinen et al., 2022). Segmentation was conducted by Cellpose (Stringer et al., 2021) and manually corrected in napari (Sofroniew et al., 2022). The ground-truth tracking result was also manually validated and corrected. The dataset contains 218 to 434 cells in the 648.95 mm × 648.95 mm observation area and has 86 frames.
Yeast image toolkit dataset
The dataset was downloaded from the Yeast Image Toolkit website http://yeast-image-toolkit.org/ (Versari et al., 2017). The data included the ground-truth cell positions at each time frame, which were used for the tracking in the benchmark (Section 3.2).
C2C12 dataset
The dataset (Ker et al., 2018) was downloaded from the public repository (Ker, 2017). We used the first 780 frames of sequence 9 with the 'BMP2' condition for the benchmark (Section 3.2), since it included the annotation for all cells in the field. We manually validated the dataset and removed duplicated annotations on a single cell.
Colored particles
We simulated the Brownian motion of 400 particles with colors in a 2D box of size 20 × 20 with periodic boundary conditions. The particles were split into two species, a and b, where the interaction between the particles was set as harmonic repulsion with the spring constants set as 1 for a and a pairs, 1.2 for a and b pairs, and 1.4 for b and b pairs. The dynamics was simulated with the simulate.brownian routine in Jax-MD (Schoenholz and Cubuk, 2020) with the parameters kT = 0.1 and dt = 0.001, where the mass and friction coefficient were set to the default values, 1 and 0.1. For each particle, labeled by i, a random integer n_i between 0 and 7 is assigned. The feature vector c_i ∈ R³, corresponding to RGB colors, of each particle at each time step is then assigned as where n_i^k is the kth digit of n_i in the binary representation and R(x) = δ_{x,0} N(2, 0.5) + δ_{x,1} N(6, 0.5), where N(μ, σ) is a normal random variable with mean μ and standard deviation σ. When used for the tracking benchmark, particles crossing the boundary are regarded as disconnected and belong to different tracks.
For Figure 1c, the Yeast Image Toolkit data in IT-Benchmark2/TestSet4/RawData were segmented by Cellpose 0.7.2 (Stringer et al., 2021) with the parameters model_type='cyto', net_avg=True, and diameter=30 in the eval function. The centroids of each segmented region were tracked by LapTrack with the default metric and track_cost_cutoff=100, splitting_cost_cutoff=2500.
Tracking implementation
The implemented particle tracking algorithm follows the method proposed in Jaqaman et al. (2008), with modifications following TrackMate (Ershov et al., 2022;Tinevez et al., 2017) and additional flexibility, as we describe in the following sections.
Frame-to-frame LAP
In the first step, the points in successive frames are connected by solving a LAP, generating tracks without splits and merges (Fig. 1a, left top). Specifically, for every pair of points with properties (such as Euclidean coordinates) x_i and x_j at frames t and t+1, the costs l_ij = l(x_i, x_j) are computed using a user-definable metric function l. The costs d and b are then assigned to the particles not connected to any of the particles in the next and previous timesteps, respectively. The optimal assignment is found by minimizing the cost (Jaqaman et al., 2008): where C is the set of all connected index pairs, B and D are the numbers of points which do not have a connection to the previous and next timesteps, respectively, and l_0 = min(l_ij, d, b) (see Supplementary Material for algorithm details). In the default setting, d and b are calculated as 1.05 × c_90%, where c_90% is the 90% percentile value of all the finite entries in {l_ij}_ij (Jaqaman et al., 2008). The default metric for l is set to the squared Euclidean distance l(x_i, x_j) = ||x_i − x_j||² (Ershov et al., 2022; Jaqaman et al., 2008; Tinevez et al., 2017), with which the cost-minimizing association can be interpreted as the maximum log-likelihood solution for Brownian particles when we ignore splitting and merging (Crocker and Grier, 1996).
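As an illustration of the frame-to-frame step just described (a sketch, not LapTrack's own implementation), the augmented cost matrix can be assembled with numpy and solved with scipy's linear_sum_assignment: the top-left block holds the linking costs l_ij, the two diagonal blocks hold the "no link" costs d and b, and the auxiliary bottom-right block lets unmatched rows and columns pair off at negligible cost. The cutoff value and point coordinates below are invented for the example.

```python
# Minimal sketch of the frame-to-frame LAP with a squared Euclidean metric,
# following the augmented-matrix construction described above.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def frame_to_frame_links(xs_t, xs_t1, cutoff=15.0**2):
    """Return index pairs (i, j) linking points in frame t to points in frame t+1."""
    n, m = len(xs_t), len(xs_t1)
    FORBIDDEN = 1e12                       # large finite stand-in for an infinite cost
    link = cdist(xs_t, xs_t1, "sqeuclidean")
    link[link > cutoff] = FORBIDDEN

    finite = link[link < FORBIDDEN]
    d = b = 1.05 * np.percentile(finite, 90) if finite.size else 1.0
    l0 = finite.min() if finite.size else 0.0

    cost = np.full((n + m, m + n), FORBIDDEN)
    cost[:n, :m] = link                                          # linking costs l_ij
    cost[:n, m:] = np.where(np.eye(n), d, FORBIDDEN)             # "no link" for frame-t points
    cost[n:, :m] = np.where(np.eye(m), b, FORBIDDEN)             # "no link" for frame-(t+1) points
    cost[n:, m:] = np.where(link.T < FORBIDDEN, l0, FORBIDDEN)   # auxiliary block

    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if i < n and j < m]

# Example: two points drift slightly between frames; a third jumps beyond the cutoff.
t0 = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
t1 = np.array([[0.5, 0.2], [5.2, 4.9], [30.0, 30.0]])
print(frame_to_frame_links(t0, t1))  # -> [(0, 0), (1, 1)]
```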
Segment-connecting LAP
In the second step, another LAP is solved to predict splitting, merging and gap closing (Fig. 1a, left bottom). Gap closing connects free segment ends while allowing frame skips. The gap closing cost g_ab = g(x_a, x_b) is calculated by a user-definable metric g for all possible connections between free ends up to a specified frame difference, and the splitting and merging costs s_ab = s(x_a, x_b) and m_ab = m(x_a, x_b) are calculated for all possible connections between a free end and a track midpoint by the user-definable metrics s and m. The metrics g, s and m default to the squared Euclidean distance. Then, the optimal assignment is calculated by minimizing the overall cost:
Freezing annotated tracks
We implemented an option to specify partial tracks within the data to be fixed as ground-truth verified connections (Fig. 1d). Fixing the correct tracks is especially useful when performing manual corrections using visualization tools such as napari. As we demonstrate (https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisa tion) (Fig. 1e), track connections can be specified to be fixed by annotating the cell regions before rerunning the LAP-based tracking. The resulting track preserves the training data tracks due to the masking scheme (Fig. 1d).
Parameter optimization
In practice, we introduce cut-offs for the costs l_ij, g_ab, s_ab and m_ab, above which those values are regarded as infinity. The values of the cut-offs can affect the performance as demonstrated in Section 3.1, but it is difficult to optimize those values due to the non-differentiability of the LAP algorithm (Xu et al., 2020) and the high computational cost of repeating the tracking routine. We therefore used non-gradient optimization methods to optimize the specified sets of parameters in parallel using the package Ray Tune (Moritz et al., 2018) with the Optuna optimizer (Akiba et al., 2019) and random search. We selected the parameters that achieved the highest connection Jaccard index value or true positive rate, depending on the type of the training data (Section 2.3.1).
Analysis pipeline
LapTrack is written in Python with explicit API documentation and can be integrated with, for example, particle detectors in scikit-image and deep learning-based segmentation packages such as Cellpose (Stringer et al., 2021) (Fig. 1b and c). The output data is a networkx (Hagberg et al., 2008) directed graph, which can be analyzed using the network analysis functions in the package. We also implemented utilities to convert data into pandas dataframes (pandas development team, 2020; Wes McKinney, 2010) and shorthand functions to track coordinates organized in a dataframe. In this paper, we used the ground-truth segmentation for each dataset as the input and analyzed the resulting tracks with networkx and pandas. Python scripts for tracking and analysis are provided at https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisation.
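A sketch of what such an integrated pipeline might look like, under explicit assumptions: the constructor parameters track_cost_cutoff and splitting_cost_cutoff are taken from the text, but the predict method, its return type (a networkx graph over (frame, index) nodes) and the use of False to disable splitting are assumptions made for illustration rather than the documented API.

```python
# Hypothetical end-to-end sketch: centroid detection with scikit-image, LAP tracking
# with LapTrack, and conversion of the resulting graph into a tidy dataframe.
# Names other than the documented cut-off parameters are assumptions.
import numpy as np
import pandas as pd
import networkx as nx
from skimage.measure import label, regionprops
from laptrack import LapTrack

def centroids(binary_mask):
    # One (N, 2) array of centroid coordinates per frame.
    return np.array([r.centroid for r in regionprops(label(binary_mask))])

# Toy input: three frames with a single square object drifting across the image.
frames = [np.zeros((64, 64), dtype=bool) for _ in range(3)]
for t, (y, x) in enumerate([(10, 10), (12, 11), (14, 13)]):
    frames[t][y - 2:y + 2, x - 2:x + 2] = True
coords = [centroids(f) for f in frames]

lt = LapTrack(track_cost_cutoff=15.0**2,   # documented cut-off parameter
              splitting_cost_cutoff=False)  # assumed way to disable splitting
tree = lt.predict(coords)                   # assumed: networkx graph of (frame, index) nodes

# Flatten the track graph into one row per detection for downstream analysis.
records = []
for track_id, component in enumerate(nx.connected_components(nx.Graph(tree))):
    for frame, index in sorted(component):
        y, x = coords[frame][index]
        records.append({"track_id": track_id, "frame": frame, "y": y, "x": x})
print(pd.DataFrame(records))
```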
Metrics for the tracking results
To measure the performance of tracking, we employed the following metrics, which can also be calculated within LapTrack.
Overall tracking scores
To measure the overall track consistency, we calculated the target effectiveness (TE) and track purity (TP) (Bise et al., 2011; Chen, 2021), which penalize the false negative and the false positive detections, respectively. Let us denote the set of ground truth tracks by {T^g_j}_j and the predicted tracks by {T^p_j}_j. TE for a single ground truth track T^g_j is calculated by finding the predicted track T^p_k that overlaps with T^g_j in the largest number of frames and then dividing the overlap frame count by the total frame count for T^g_j. The TE for the total dataset is calculated as the mean of the TEs for all ground truth tracks, weighted by the length of the tracks. TP is defined analogously, with T^g_j and T^p_j swapped in the definition. We also measured the mitotic branching correctness (Bise et al., 2011; Chen, 2021), defined as the fraction of the number of correctly detected divisions over the total number of divisions.
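A small sketch (not code from the package) computing TE and TP exactly as defined above, with each track represented as a set of (frame, detection) pairs:

```python
# Target effectiveness / track purity as defined above; illustrative only.
def target_effectiveness(gt_tracks, pred_tracks):
    num, den = 0, 0
    for gt in gt_tracks:
        best_overlap = max((len(gt & pred) for pred in pred_tracks), default=0)
        num += best_overlap          # overlap with the best-matching predicted track
        den += len(gt)               # weighted by ground-truth track length
    return num / den

def track_purity(gt_tracks, pred_tracks):
    return target_effectiveness(pred_tracks, gt_tracks)  # same definition, roles swapped

gt = [{(0, "a"), (1, "a"), (2, "a")}, {(0, "b"), (1, "b")}]
pred = [{(0, "a"), (1, "a")}, {(2, "a"), (0, "b"), (1, "b")}]
print(target_effectiveness(gt, pred), track_purity(gt, pred))  # 0.8 0.8
```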
Overlap between predicted and ground truth connections
During the parameter optimization, we used less computationally expensive quantities, the Jaccard index and the true positive rate of the connections, to measure how well the predicted connections overlap with the ground truth. These quantities are defined by |E_p ∩ E_g| / |E_p ∪ E_g| and |E_p ∩ E_g| / |E_g|, respectively, where we denote the sets of predicted and ground-truth connections by E_p and E_g, respectively, and the size of a set E by |E|. In the benchmark of the Yeast Image Toolkit dataset (Section 3.2), we additionally calculated the F-score of the assignment, 2|E_p ∩ E_g| / (|E_p| + |E_g|), to compare the performance with previously reported results.
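These connection-level scores are simple set operations; a sketch, with each connection represented as a pair of (frame, detection) endpoints:

```python
# Connection-level overlap scores as defined above; illustrative only.
def connection_scores(pred_edges, gt_edges):
    pred, gt = set(pred_edges), set(gt_edges)
    inter = pred & gt
    return {
        "jaccard": len(inter) / len(pred | gt),
        "true_positive_rate": len(inter) / len(gt),
        "f_score": 2 * len(inter) / (len(pred) + len(gt)),
    }

gt_edges = {((0, 0), (1, 0)), ((1, 0), (2, 0)), ((0, 1), (1, 1))}
pred_edges = {((0, 0), (1, 0)), ((0, 1), (1, 1)), ((1, 1), (2, 0))}
print(connection_scores(pred_edges, gt_edges))
```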
Distance cut-off points can be optimized to increase performance
We first investigated the performance against various cost cut-off points in the simplest cases where the costs for connecting, gap closing and splitting are the squared Euclidean distance between the centroids. Specifically, we varied the maximum distance allowed for frame-to-frame particle association (max_distance) and for splitting and gap-closing association (splitting_max_distance), which define the cut-offs for l_ij and s_ab (g_ab), respectively, and investigated how the overall performance changes. In the mouse epidermis dataset (Fig. 2a), we performed a grid search over the parameters max_distance and splitting_max_distance. We found that there exists a maximum in the TE around some finite length scale, suggesting that optimization is useful for performance improvement even for the cut-off parameters (Fig. 2b). We also found that the correlation of the tracking scores between mouse epidermis data from different regions is high upon changing the parameters [r = 0.96 (r = 0.90) for TE (TP) using data with TE > 0.75 (TP > 0.75), respectively (Supplementary Fig. S1)], meaning that the optimized parameters are transferable within similar data.
Distance-only LAP tracker can achieve comparable performance to data-specific methods
We then benchmarked the tracking performance of the simple distance-only LAP tracker with the Yeast Image Toolkit dataset. Since the published benchmark results in Versari et al. (2017) do not include divisions, we tracked ground truth segmentation positions without splitting, with different cut-off points max_distance and gap_closing_max_distance (the cut-off point for g_ab). We then calculated the TE, the assignment F-score (tracking F-score), and the F-score for the assignments between the first and the last frames (long-term tracking F-score). We used the Evaluation Platform software (Versari et al., 2017) to calculate the F-scores.
[Figure 2 caption, continued: (f) TE score for the colored particles dataset with different frame intervals, with or without the feature difference term in the metric; the error bar indicates the standard deviation of five trials. (g) Mitotic branching correctness score for the mouse epidermis dataset, tracked with the centroid distances (centroid) or the overlap ratio (overlap); the error bar indicates the standard deviation of five trials.]
Supplementary Figure S2 shows that this simple tracker achieves TE higher than 0.9 for all the datasets, and the F-scores are comparable to or higher than most published methods (Versari et al., 2017), except for the long-term tracking F-score for TestSets 3 and 4, which have frames with large cell displacements. Note that the previous methods track the cells after their segmentation pipeline, whereas we started with the ground-truth segmentation, which can be advantageous. Nevertheless, this result suggests that the distance-based LAP tracker can generate tracks with accuracy comparable to data-specific tracking methods, as long as we start with sufficiently accurate segmentation.
We also performed a similar benchmark with the C2C12 dataset and found that the distance-only tracker yields a maximum TE of 0.998 when starting from ground-truth segmentation (Supplementary Fig. S3). This is higher than the score from a cutting-edge graph neural network-based tracking method (0.976), which was obtained from a test that included segmentation and used a larger dataset (Ben-Haim and Riklin-Raviv, 2022).
Tunable cost function improves tracking performance
We next investigated if variable cost functions help improve the tracking score for different datasets.
In Figure 2c, we show a snapshot of the cell migration dataset. Here, the cells are moving collectively toward the upper open region. Due to this drift, LAP-based tracking based solely on the Euclidean distance fails with large frame intervals, as demonstrated in Figure 2d using datasets with skipped frames. This situation can be easily fixed by adding a drift term with the drift parameter d ∈ R² to the Euclidean distance cost and defining g and s analogously (Fig. 2d, Supplementary Fig. S4). We used 5% of the non-dividing and dividing connections to tune d as well as the cut-offs so that they optimize the true positive rate of the connections. The details are summarized in the Supplementary Material.
In real experimental data, particles may have features that help to identify species, such as the size, shape and fluorescent intensities of genetic labels. In those cases, we can use those features in addition to the Euclidean distances to improve the performance. To illustrate this, we measured the tracking performance for simulated particles with eight species, characterized by different sets of feature values corresponding to RGB colors (Fig. 1e, see Section 2.1.5 for details). We then defined the cost function with a feature-difference term weighted by a parameter w, where c_i, c_j ∈ R³ are the feature vectors. We tuned the parameter w as well as the distance cut-off using the training data with 100 frames so that the tracking result maximizes the connection Jaccard index. We then measured the tracking scores for an independent dataset with 100 frames. As shown in Fig. 2f, with the features used in the metric, the target effectiveness with a large frame interval remains above 0.8, while it drops to ~0.4 when only the Euclidean distance is in the metric (w = 0), illustrating the performance improvement from including the particle features. We also observed an improvement in the other scores (Supplementary Fig. S5).
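The exact functional forms of these two metrics are elided in the extracted text, so the sketch below uses plausible reconstructions rather than the paper's definitions: the squared Euclidean distance after subtracting an expected drift d, and the squared distance plus a feature-difference term weighted by w.

```python
# Hedged reconstructions of the drift-corrected and feature-weighted metrics
# discussed above (the exact forms are elided in the extracted text).
import numpy as np

def drift_metric(drift):
    # Squared Euclidean distance after subtracting an expected displacement d in R^2.
    def metric(xi, xj):
        return float(np.sum((np.asarray(xj) - np.asarray(xi) - drift) ** 2))
    return metric

def feature_metric(w):
    # First two entries: coordinates; remaining entries: feature vector (e.g. RGB).
    def metric(zi, zj):
        zi, zj = np.asarray(zi), np.asarray(zj)
        pos = np.sum((zi[:2] - zj[:2]) ** 2)
        feat = np.sum((zi[2:] - zj[2:]) ** 2)
        return float(pos + w * feat)
    return metric

m = drift_metric(np.array([0.0, 2.0]))   # cells drifting +2 along y per frame
print(m([0.0, 0.0], [0.1, 2.1]))         # small cost despite the 2-unit displacement

f = feature_metric(w=5.0)
print(f([0, 0, 1.0, 0.2, 0.1], [0.5, 0.5, 0.9, 0.25, 0.1]))
```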
For segmented images, we can also use the overlap between segmented regions to calculate the cost (Chalfoun et al., 2010; Ershov et al., 2022; Padovani et al., 2022). The flexible implementation allows us to integrate the overlap metric in addition to the distance in the LAP framework. We define l (with g and s analogously) as the negative logarithm of an overlap measure between L_i and L_j, where L_i and L_j are the sets of pixel coordinates of the segmentation areas for particles i and j and A is a parameter. By comparing the tracking performance for the mouse epidermis dataset with the squared centroid Euclidean distance cases, we found that replacing the metric improves the mitotic branching correctness by ~10% (Fig. 2g).
Conclusion
In this article, we showed how the LAP-based tracking pipeline with additional flexibility and optimizability can be useful in improving tracking performance in certain situations, and easily combined with visualization tools to conduct manual corrections. LapTrack, in large part, is complementary to TrackMate (Ershov et al., 2022), which has a useful graphical user interface, support for including feature value differences, and its own optimization pipeline. Compared with TrackMate, LapTrack can take arbitrary inputs and cost functions and is flexible in its output, making it easier to connect with other upstream and downstream analysis pipelines. Trackpy (Allan et al., 2021) provides a tracking routine based on the algorithm by Crocker and Grier (1996) in Python, as well as functions for particle detection, analysis and data input/output. One major difference is LapTrack's ability to detect splitting and merging particles, which makes it more suitable for cell tracking. The tracking function in LapTrack is designed to help make accurate and validated tracks quickly and efficiently, with the hope to increase the amount of ground-truth data that can be used in training more sophisticated tracking methods.
With a sufficient amount of manually annotated ground-truth data, machine learning-based approaches will likely outperform the current parameter optimization strategy of simple affinity metrics. Due to its flexibility, our package can be easily combined with strategies such as one-to-one association affinity learning (Emami et al., 2020;Li et al., 2009), structured learning (Lou and Hamprecht, 2011), and the metric learning approach combined with graph neural networks (Weng et al., 2020), serving as a reusable platform for implementation. | 5,455.8 | 2022-10-07T00:00:00.000 | [
"Computer Science"
] |
Minimally Supervised Number Normalization
We propose two models for verbalizing numbers, a key component in speech recognition and synthesis systems. The first model uses an end-to-end recurrent neural network. The second model, drawing inspiration from the linguistics literature, uses finite-state transducers constructed with a minimal amount of training data. While both models achieve near-perfect performance, the latter model can be trained using several orders of magnitude less data than the former, making it particularly useful for low-resource languages.
Introduction
Many speech and language applications require text tokens to be converted from one form to another. For example, in text-to-speech synthesis, one must convert digit sequences (32) into number names (thirty-two), and appropriately verbalize date and time expressions (12:47 → twelve forty-seven) and abbreviations (kg → kilograms) while handling allomorphy and morphological concord (e.g., Sproat, 1996). Quite a bit of recent work on SMS (e.g., Beaufort et al., 2010) and text from social media sites (e.g., Yang and Eisenstein, 2013) has focused on detecting and expanding novel abbreviations (e.g., cn u plz hlp). Collectively, such conversions all fall under the rubric of text normalization (Sproat et al., 2001), but this term means radically different things in different applications. For instance, it is not necessary to detect and verbalize dates and times when preparing social media text for downstream information extraction, but this is essential for speech applications.
While expanding novel abbreviations is also important for speech (Roark and Sproat, 2014), numbers, times, dates, measure phrases and the like are far more common in a wide variety of text genres. Following Taylor (2009), we refer to categories such as cardinal numbers, times, and dates, each of which is semantically well-circumscribed, as semiotic classes. Some previous work on text normalization proposes minimally-supervised machine learning techniques for normalizing specific semiotic classes, such as abbreviations (e.g., Chang et al., 2002; Pennell and Liu, 2011; Roark and Sproat, 2014). This paper continues this tradition by contributing minimally-supervised models for normalization of cardinal number expressions (e.g., ninety-seven). Previous work on this semiotic class includes formal linguistic studies by Corstius (1968) and Hurford (1975) and computational models proposed by Sproat (1996, 2010) and Kanis et al. (2005). Of all semiotic classes, numbers are by far the most important for speech, as cardinal (and ordinal) numbers are not only semiotic classes in their own right, but knowing how to verbalize numbers is important for most of the other classes: one cannot verbalize times, dates, measures, or currency expressions without knowing how to verbalize that language's numbers as well.
One computational approach to number name verbalization (Sproat, 1996; Kanis et al., 2005) employs a cascade of two finite-state transducers (FSTs). The first FST factors the integer, expressed as a digit sequence, into sums of products of powers of ten (i.e., in the case of a base-ten number system). This is composed with a second FST that defines how the numeric factors are verbalized, and may also handle allomorphy or morphological concord in languages that require it. Number names can be relatively easy (as in English) or complex (as in Russian; Sproat, 2010), and thus these FSTs may be relatively easy or quite difficult to develop. While the Google text-to-speech (TTS) (see Ebden and Sproat, 2014) and automatic speech recognition (ASR) systems depend on hand-built number name grammars for about 70 languages, developing these grammars for new languages requires extensive research and labor. For some languages, a professional linguist can develop a new grammar in as little as a day, but other languages may require days or weeks of effort. We have also found that it is very common for these handwritten grammars to contain difficult-to-detect errors; indeed, the computational models used in this study revealed several long-standing bugs in handwritten number grammars.
The amount of time, effort, and expertise required to produce error-free number grammars leads us to consider machine learning solutions. Yet it is important to note that number verbalization poses a dauntingly high standard of accuracy compared to nearly all other speech and language tasks. While one might forgive a TTS system that reads the ambiguous abbreviation plz as plaza rather than the intended please, it would be inexcusable for the same system to ever read 72 as four hundred seventy two, even if it rendered the vast majority of numbers correctly.
To set the stage for this work, we first ( §2-3) briefly describe several experiments with a powerful and popular machine learning technique, namely recurrent neural networks (RNNs). When provided with a large corpus of parallel data, these systems are highly accurate, but may still produce occasional errors, rendering it unusable for applications like TTS.
In order to give the reader some background on the relevant linguistic issues, we then review some cross-linguistic properties of cardinal number expressions and propose a finite-state approach to number normalization informed by these linguistic properties ( §4). The core of the approach is an algorithm for inducing language-specific number grammar rules. We evaluate this technique on data from four languages.
Figure 1: The neural net architecture for the preliminary Russian cardinal number experiments. Purple LSTM layers perform forward transitions and green LSTM layers perform backward transitions. The output is produced by a CTC layer with a softmax activation function. Input tokens are characters and output tokens are words.
Preliminary experiment with recurrent neural networks
As part of a separate strand of research, we have been experimenting with various recurrent neural network (RNN) architectures for problems in text normalization. In one set of experiments, we trained RNNs to learn a mapping from digit sequences marked with morphosyntactic (case and gender) information to their expression as Russian cardinal number names. The motivation for choosing Russian is that the number name system of this language, like that of many Slavic languages, is quite complicated, and therefore serves as a good test of the abilities of any text normalization system. The architecture used was similar to a network employed by Rao et al. (2015) for grapheme-to-phoneme conversion, a superficially similar sequence-to-sequence mapping problem. We used a recurrent network with an input layer, four hidden feed-forward LSTM layers (Hochreiter and Schmidhuber, 1997), and a connectionist temporal classification (CTC) output layer with a softmax activation function (Graves et al., 2006). Two of the hidden layers modeled forward sequences and the other two backward sequences. There were 32 input nodes (corresponding to characters) and 153 output nodes (corresponding to predicted number name words). Each of the hidden layers had 256 nodes. The full architecture is depicted in Figure 1.
The system was trained on 22M unique digit sequences ranging from one to one million; these were collected by applying an existing TTS text normalization system to several terabytes of web text. Each training example consisted of a digit sequence, gender and case features, and the Russian cardinal number verbalization of that number. Thus, for example, the system has to learn to produce the feminine instrumental form of 60. Examples of these mappings are shown in Table 1, and the various inflected forms of a single cardinal number are given in Table 2. In preliminary experiments, it was discovered that short digit sequences were poorly modeled due to undersampling, so an additional 240,000 short-sequence samples (of three or fewer digits) were added to compensate. 2.2M examples (10%) were held out as a development set.
The system was trained for one day, after which it had a 0% label error rate (LER) on the development data set. When decoding 240,000 tokens of held-out test data with this model, we achieved very high accuracy (LER < .0001). The few remaining errors, however, are a serious obstacle to using this system for TTS. The model appears to make no mistakes applying inflectional suffixes to unseen data. Plausibly, this task was made easier by our positioning of the morphological feature string at the end of the input, making it local to the output inflectional suffix (at least for the last word in the number expression). But it does make errors with respect to the numeric value of the expression. For example, for 9801 plu.ins. (девятью тысячами восьмьюстами одними), the system produced девятью тысячами семьюстами одними (9701 plu.ins.): the morphology is correct, but the numeric value is wrong. 2 This pattern of errors was exactly the opposite of what we want for speech applications. One might forgive a TTS system that reads 9801 with the correct numeric value but in the wrong case form: a listener would likely notice the error but would usually not be misled about the message being conveyed. In contrast, reading it as nine thousand seven hundred and one is completely unacceptable, as this would actively mislead the listener.
It is worth pointing out that the training set used here-22M examples-was quite large, and we were only able to obtain such a large amount of labeled data because we already had a high-quality handbuilt grammar designed to do exactly this transduction. It is simply unreasonable to expect that one could obtain this amount of parallel data for a new language (e.g., from naturally-occurring examples, or from speech transcriptions). This problem is especially acute for low-resource languages (i.e., most of the world's languages), where data is by definition scarce, but where it is also hard to find highquality linguistic resources or expertise, and where a machine learning approach is thus most needed.
In conclusion, the system does not perform as well as we demand, nor is it in any case a practical solution due to the large amount of training data needed. The RNN appears to have done an impressive job of learning the complex inflectional morphology of Russian, but it occasionally chooses the wrong number names altogether.
Number normalization with RNNs
For the purpose of more directly comparing the performance of RNNs with the methods we report on below, we chose to ignore the issue of allomorphy and morphological concord, which appears to be "easy" for generic sequence models like RNNs, and focus instead on verbalizing number expressions in whatever morphological category represents the language's citation form.
Data and general approach
For our experiments we used three parallel data sets where the target number name was in citation form (in Russian, nominative case):
• A large set consisting of 28,000 examples extracted from several terabytes of web text using an existing TTS text normalization system
• A medium set of 9,000 randomly-generated examples (for details, see Appendix A)
• A minimal set of 300 examples, intended to be representative of the sort of data one might obtain from a native speaker when asked to provide all the essential information about number names in their language
In these experiments we used two different RNN models. The first was the same LSTM architecture as above (henceforth referred to as "LSTM"), except that the numbers of input and output nodes were 13 and 53, respectively, due to the smaller input and output vocabularies.
The second was a TensorFlow-based RNN with an attention mechanism (Mnih et al., 2014), using an overall architecture similar to that used in a system for end-to-end speech recognition (Chan et al., 2016). Specifically, we used a 4-layer pyramidal bidirectional LSTM reader that reads input characters, a layer of 256 attentional units, and a 2-layer decoder that produces word sequences. The reader is referred to Chan et al., 2016 for further details. Henceforth we refer to this model as "Attention".
All models were trained for 24 hours, at which point they were determined to have converged.
Results and discussion
Results for these experiments on a test corpus of 1,000 random examples are given in Table 3.
The RNN with attention clearly outperformed the LSTM in that it performed perfectly with both the medium and large training sets, whereas the LSTM made a small percentage of errors. Note that since the numbers were in citation form, there was little room for the LSTM to make inflectional errors, and the errors it made were all of the "silly" variety, in which the output simply denotes the wrong number. But neither system was capable of learning valid transductions given just 300 training examples. 4 We draw two conclusions from these results. First, even a powerful machine learning model known to be applicable to a wide variety of problems may not be appropriate for all superficially-similar problems. Second, it remains to be seen whether any RNN could be designed to learn effectively from an amount of data as small as our smallest training set. Learning from minimal data sets is of great practical concern, and we will proceed to provide a plausible solution to this problem below. We note again that very low error rates do not ensure that a system is usable, since not all errors are equally forgivable.
Number normalization with finite-state transducers
The problem of number normalization naturally decomposes into two subproblems: factorization and verbalization of the numeric factors. We first consider the latter problem, the simpler of the two. Let λ be the set of number names in the target language, and let ν be the set of numerals, the integers denoted by a number name. Then let L : ν * → λ * be a transducer which replaces a sequence of numerals with a sequence of number names. For instance, for English, L will map 90 7 to ninety seven. In languages where there are multiple allomorphs or case forms for a numeral, L will be non-functional (i.e., one-to-many); we return to this issue shortly. In nearly all cases, however, there are no more than a few dozen numerals in ν, 5 and no more than a few names in λ for the equivalent numeral in ν. Therefore, we assume it is possible to construct L with minimal effort and minimal knowledge of the language. Indeed, all the information needed to construct L for the experiments conducted in this paper can be found in English-language Wikipedia articles.
The remaining subproblem, factorization, is responsible for converting digit sequences to numeral factors. In English, for example, 97000 is factored as 90 7 1000. Factorization is also language-specific. In Standard French, for example, there is no simplex number name for '90'; instead this is realized as quatre-vingt-dix "four twenty ten", and thus 97000 (quatre-vingt-dix-sept mille) is factored as 4 20 10 7 1000. It is not a priori obvious how one might go about learning language-specific factorizations. For inspiration, we turn to a lesser-known body of linguistics research focusing on number grammars. Hurford (1975) surveys cross-linguistic properties of number naming and proposes a syntactic representation which directly relates verbalized number names to the corresponding integers. Hurford interprets complex number constructions as arithmetic expressions in which operators (and the parentheses indicating associativity) have been elided. By far the two most common arithmetic operations are multiplication and addition. In French, for example, the expression dix-sept, literally 'ten seven', denotes 17, the sum of its terms, and quatre-vingt(s), literally 'four twenty', refers to 80, the product of its terms. These may be combined, as in quatre-vingt-dix-sept. To visualize arithmetic operations and associativities, we henceforth write factorizations using s-expressions, pre-order serializations of k-ary trees with numeral terminals and arithmetic operator non-terminals. For example, quatre-vingt-dix-sept is written (+ (* 4 20) 10 7).
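As a small illustration (not from the paper), such an s-expression can be checked by evaluating the elided arithmetic directly, applying the '+' or '*' non-terminal to its children:

```python
# Evaluate a number-grammar s-expression such as (+ (* 4 20) 10 7) -> 97.
import operator

def evaluate(sexpr):
    if isinstance(sexpr, int):
        return sexpr
    op = operator.add if sexpr[0] == "+" else operator.mul
    total = evaluate(sexpr[1])
    for child in sexpr[2:]:
        total = op(total, evaluate(child))
    return total

print(evaluate(("+", ("*", 4, 20), 10, 7)))   # quatre-vingt-dix-sept -> 97
print(evaluate(("*", ("+", 90, 7), 1000)))    # ninety seven thousand -> 97000
```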
Within any language there are cues to this elided arithmetic structure. In some languages, some or all addends are separated by a word translated as and. In other languages it is possible to determine whether terms are to be multiplied or summed depending on their relative magnitudes. In French (as in English), for instance, an expression XY is usually interpreted as a product if X < Y, as in quatre-vingt(s) '80', but as a sum if X > Y, as in vingt-quatre '24'. Thus the problem of number denormalization, that is, recovering the integer denoted by a verbalized number, can be thought of as a special case of grammar induction from pairs of natural language expressions and their denotations (e.g., Kwiatkowski et al., 2011).
FST model
The complete model consists of four components:
1. A language-independent covering grammar F, transducing from integers expressed as digit sequences to the set of possible factorizations for that integer
2. A language-specific numeral map M, transducing from digit sequences to numerals
3. A language-specific verbalization grammar G, accepting only those factorizations which are licit in the target language
4. A language-specific lexical map L, transducing from sequences of numerals (e.g., 20) to number names (already defined)
As the final component, the lexical map L, has already been described, we proceed to describe the remaining three components of the system.
Finite-state transducer algorithms
While we assume the reader has some familiarity with FSTs, we first provide a brief review of a few key algorithms we employ below.
Our FST model is constructed using composition, denoted by the • operator. When both arguments to composition are transducers, composition is equivalent to chaining the two relations described. For example, if A transduces string x to string y, and B transduces y to z, then A • B transduces from string x to string z. When the left-hand side of composition is a transducer and the right-hand side is an acceptor, then their composition produces a transducer in which the range of the left-hand side relation is intersected with the set of strings accepted by the righthand side argument. Thus if A transduces string x to strings {y, z}, and B accepts y then A • B transduces from x to y.
We make use of two other fundamental operations, namely inversion and projection. Every transducer A has an inverse denoted by A⁻¹, which is the transducer such that A⁻¹(y) → x if and only if A(x) → y. Any transducer A also has input and output projections denoted by π_i(A) and π_o(A), respectively. If the transducer A has the domain α* and the range β*, then π_i(A) is the acceptor over α* which accepts x if and only if A(x) → y for some y ∈ β*; output projection is defined similarly. The inverse, input projection, and output projection of an FST (or a pushdown transducer) are computed by swapping and/or copying the input or output labels of each arc in the machine. See Mohri et al. (2002) for more details on these and other finite-state transducer algorithms.
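The same algebra can be illustrated outside any FST library on finite relations represented as sets of string pairs; a real system would use a weighted FST toolkit, but the semantics of composition, inversion, and projection are the same. The example relations below are invented.

```python
# Toy illustration of composition, inversion, and projection on finite relations
# (sets of (input, output) pairs); real systems use FST libraries such as OpenFst.
def compose(A, B):
    return {(x, z) for (x, y1) in A for (y2, z) in B if y1 == y2}

def invert(A):
    return {(y, x) for (x, y) in A}

def project_input(A):
    return {x for (x, _) in A}

def project_output(A):
    return {y for (_, y) in A}

# A transduces digit strings to numeral factorizations; B verbalizes factorizations.
A = {("97", "90 7"), ("90", "90")}
B = {("90 7", "ninety seven"), ("90", "ninety")}
print(compose(A, B))      # {('97', 'ninety seven'), ('90', 'ninety')}
print(invert(B))          # maps verbalizations back to factorizations
print(project_input(A), project_output(compose(A, B)))
```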
Grammar inference
Let M : (μ ∪ Δ)* → ν* be a transducer which deletes all markup symbols in μ and replaces sequences of integers expressed as digit sequences with the appropriate numerals in ν. Let D(l) = π_i(M • L • l), which maps from a verbalization l to the set of all s-expressions which contain l as terminals. For example, D(4 20 10 7) contains:
(+ 4 20 10 7)
(+ 4 20 (* 10 7))
(+ (* 4 20) 10 7)
…
Then, given (d, l), where d ∈ Δ* is an integer expressed as a digit sequence and l ∈ λ* is d's verbalization, their intersection (2) will contain the factorization(s) of d that verbalize as l. In most cases, E will contain exactly one path for a given (d, l) pair. For instance, if d is 97000 and l is ninety seven thousand, E(d, l) is (* (+ 90 7) 1000). We can use E to induce a context-free grammar (CFG) which accepts only those number verbalizations present in the target language. The simplest possible such CFG uses '*' and '+' as non-terminal labels, and the elements in the domain of L (e.g., 20) as terminals. The grammar will then consist of binary productions extracted from the s-expression derivations produced by E. Table 4 provides a fragment of such a grammar.
Table 4: A fragment of an English number grammar which accepts factorizations of the numbers {7, 90, 97, 7000, 90000, 97000}. S represents the start symbol, and '|' denotes disjunction. Note that this fragment is regular rather than context-free, though this is rarely the case for complete grammars.
S → (7 | 90 | * | +)
* → (7 | 90 | +) 1000
+ → 90 7
With this approach, we face the familiar issues of ambiguity and sparsity. Concerning the former, the output of E is not unique for all outputs. We address this either by applying normal form constraints on the set of permissible productions, or by ignoring ambiguous examples during induction. One case of ambiguity involves expressions with addition of 0 or multiplication by 1, both identity operations that leave the identity element (i.e., 0 or 1) free to associate either to the left or to the right. From our perspective, this ambiguity is spurious, so we stipulate that identity elements may only be siblings to (i.e., on the right-hand side of a production with) another terminal. Thus an expression like one thousand one hundred can only be parsed as (+ (* 1 1000) (* 1 100)). But not all ambiguities can be handled by normal form constraints. Some expressions are ambiguous due to the presence of "palindromes" in the verbalization string. For instance, two hundred two can be parsed either as (+ 2 (* 100 2)) or as (+ (* 2 100) 2). The latter derivation is "correct" insofar as it follows the syntactic patterns of other English number expressions, but there is no way to determine this except with reference to the very language-specific patterns we are attempting to learn. Therefore we ignore such expressions during grammar induction, forcing the relevant rules to be induced from unambiguous expressions. Similarly, multiplication and addition are associative, so expressions like three hundred thousand can be binarized either as (* (* 3 100) 1000) or as (* 3 (* 100 1000)), though both derivations are equally "correct". Once again we ignore such ambiguous expressions, instead extracting the relevant rules from unambiguous expressions.
Since we only admit two non-terminal labels, the vast majority of our rules contain numeral terminals on their right-hand sides, and as a result, the number of rules tends to be roughly proportional to the size of the terminal vocabulary. Thus it is common that we have observed, for instance, thirteen thousand and fourteen million but not fourteen thousand or thirteen million, and as a result, the CFG may be deficient simply due to sparsity in the training data, particularly in languages with large terminal vocabularies. To enhance our ability to generalize from a small number of examples, we optionally insert preterminal labels during grammar induction to form classes of terminals assumed to pattern together in all productions. For instance, by introducing 'teen' and 'power_of_ten' preterminals, all four of the previous expressions are generated by the same top-level production. The full set of preterminal labels we use here are shown in Table 5.
In practice, obtaining productions using E is inefficient: it is roughly equivalent to a naïve algorithm which generates all possible derivations for the given numerals, then filters out all of those which do not evaluate to the expected total, violate the aforementioned normal form constraints, or are otherwise ambiguous. This fails to take advantage of top-down constraints derived from the particular structure of the problem. For example, the naïve algorithm entertains many candidate parses for quatre-vingt-dix-sept '97' where the root is '*' and the first child is '4', despite the fact that no such hypothesis is viable, as 4 is not a divisor of 97.
We inject arithmetic constraints into the grammar induction procedure as follows. The inputs to the modified algorithm are tuples of the form (T, ν_0, …, ν_n), where T is the numeric value of the expression and ν_0, …, ν_n are the n + 1 numerals in the verbalization. Consider a hypothesized numeric value of the leftmost child of the root, T_{0…i}, which dominates ν_0, …, ν_i, where i < n. For this to be viable, it must be the case that T_{0…i} ≤ T. And, if we further hypothesize that the root node is '+', then the remaining children must evaluate to T − T_{0…i}. Similarly, if we hypothesize that the root node is '*', then the remaining children must evaluate to T/T_{0…i}, and this quantity must be integral.
This approach can be implemented with a backtracking recursive descent parser enforcing the aforementioned normal form constraints and propagating the top-down arithmetic constraints. In practice, however, we implement the search using a straightforward dynamic programming algorithm. The algorithm proceeds by recursively generating all possible leftmost children of the tree and then using these top-down constraints to prune branches of the search space which have no viable completion (though our implementation does not fully propagate these constraints). While the number of left subtrees is exponential in the length of the verbalization, our implementation remains feasible since real-world examples tend to have verbalizations consisting of relatively few terminals. Pseudocode for our implementation is provided in Appendix B.
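A sketch of this constrained search, restricted to binary trees and without the normal-form constraints or memoization of the actual implementation (so it is illustrative rather than the Appendix B algorithm): the leftmost child is enumerated, and the addition and divisibility constraints described above prune the search over the remaining numerals.

```python
# Sketch of the arithmetically constrained factorization search (binary trees only,
# no normal-form constraints or memoization); illustrative, not the Appendix B algorithm.
def enumerate_subtrees(nums):
    """All (value, s-expression) pairs over the numeral sequence, kept in order."""
    if len(nums) == 1:
        return [(nums[0], nums[0])]
    out = []
    for i in range(1, len(nums)):
        for lv, lt in enumerate_subtrees(nums[:i]):
            for rv, rt in enumerate_subtrees(nums[i:]):
                out.append((lv + rv, ("+", lt, rt)))
                out.append((lv * rv, ("*", lt, rt)))
    return out

def parses(nums, target):
    """S-expressions over nums evaluating to target, pruned by the arithmetic constraints."""
    if len(nums) == 1:
        return [nums[0]] if nums[0] == target else []
    out = []
    for i in range(1, len(nums)):
        for left_value, left_tree in enumerate_subtrees(nums[:i]):
            if left_value <= target:                      # viable addend
                for right_tree in parses(nums[i:], target - left_value):
                    out.append(("+", left_tree, right_tree))
            if left_value and target % left_value == 0:   # viable factor
                for right_tree in parses(nums[i:], target // left_value):
                    out.append(("*", left_tree, right_tree))
    return out

# quatre-vingt-dix-sept: numerals 4 20 10 7 with value 97; both results are
# binarizations of the flat s-expression (+ (* 4 20) 10 7).
print(parses([4, 20, 10, 7], 97))
```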
Grammar compilation
Once productions have been collected, they are used to specify a recursive transition network (Woods, 1970) which is then compiled into a pushdown acceptor (Allauzen and Riley, 2012) over ν * , henceforth G. An example is shown in Figure 2.
Final model and remaining issues
Then, the verbalization for d is given by the composition of these components. As noted above, the lexicon transducer L is non-functional when there are multiple number names for a single numeral, as may arise in number systems with allomorphy or morphological concord. When this is the case, we compose the lattice produced by V with a language model (LM) of verbalized numbers (over λ*) and then decode using the shortest path algorithm. Note that whereas the construction of G requires parallel data, the LM requires only "spoken" data. While it is not common in most languages to write out complex cardinal numbers in their verbalized form, it is nonetheless possible to find a large sample of such expressions at web scale (Sproat, 2010); such expressions can be identified by matching against the unweighted π_o(F • M • G • L).
Materials and methods
The FST-based verbalizer V was constructed and evaluated using four languages: English, Georgian, Khmer, and Russian (the latter targeting citation forms only). The medium and minimal sets are used for all four languages; in Russian, we also reuse the large data set (see §3.1). In all cases the test sets consisted of 1,000 randomly generated examples, the same examples as in previous experiments.
The size of V varied by language, with the smallest, English, consisting of roughly 8,000 states and arcs, and the largest, Russian, measuring roughly 80,000 states and arcs and comprising approximately a megabyte of uncompressed binary data.
No LM was required for either English or Khmer, as both have a functional L. However, the Georgian and Russian L are both ambiguous, so the best path through the output lattice was selected according to a trigram language model with Witten-Bell smoothing. The language models were constructed using the medium training set. Results are given in Table 4. (In the pushdown acceptor of Figure 2, arc labels that contain parentheses indicate "push" and "pop" stack operations, respectively, and must balance along a path.)
Results and discussion
The results were excellent for all four languages. There were no errors at all in English, Georgian, or Khmer with either data set. While there were a few errors in Russian, crucially all were agreement errors rather than errors in the factorization itself, exactly the opposite error pattern to the one we observed with the LSTM model. For example, 70,477,170 was rendered as семьдесят миллион четыреста семьдесят семь тысяч сто семьдесят ('seventy million four hundred seventy-seven thousand one hundred seventy'); the second word should be миллионов, the genitive plural form. More surprisingly, verbalizers trained on the 300 examples of the minimal data set performed just as well as ones trained with two orders of magnitude more labeled data.
Discussion
We presented two approaches to number normalization. The first used a general RNN architecture that has been used for other sequence mapping problems, and the second an FST-based system that uses a fair amount of domain knowledge. The RNN approach can achieve very high accuracy, but with two caveats: it requires a large amount of training data, and the errors it makes may result in the wrong number. The FST-based solution on the other hand can learn from a tiny dataset, and never makes that particularly pernicious type of error. The small size of training data needed and the high accuracy make this a particularly attractive approach for low-resource scenarios.
In fact, we suspect that the FST model could be made to learn from fewer examples than the 300 that make up the "minimal" set. Finding the minimum number of examples necessary to cover the entire number grammar appears to be an instance of the set cover problem, which is NP-complete (Karp, 1972), but it is plausible that a greedy algorithm could identify an even smaller training set. The grammar induction method used for the FST verbalizer is close to the simplest imaginable such procedure: it treats rules as well-formed if and only if they have at least one unambiguous occurrence in the training data. More sophisticated induction methods could be used to improve both generalization and robustness to errors in the training data. Generalization might be improved by methods that "hallucinate" unobserved productions (Mohri and Roark, 2006), and robustness could be improved using manual or automated tree annotation (e.g., Klein and Manning, 2003; Petrov and Klein, 2007). We leave this for future work.
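For illustration, a greedy selection of that kind can be sketched as follows (our toy code; the example-to-production mapping shown is invented rather than drawn from the training data):

```python
def greedy_cover(examples):
    """Greedy set cover: `examples` maps an example id to the set of
    grammar productions it covers; repeatedly pick the example that
    covers the most still-uncovered productions."""
    uncovered = set().union(*examples.values())
    chosen = []
    while uncovered:
        best = max(examples, key=lambda e: len(examples[e] & uncovered))
        gained = examples[best] & uncovered
        if not gained:            # remaining productions cannot be covered
            break
        chosen.append(best)
        uncovered -= gained
    return chosen

toy = {
    "97":   {"sum(90, 7)", "prod(9, 10)"},
    "907":  {"sum(900, 7)", "prod(9, 100)"},
    "9007": {"sum(9000, 7)", "prod(9, 1000)"},
}
print(greedy_cover(toy))   # selects a small subset covering all toy productions
```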
Above, we focused solely on cardinal numbers, and specifically their citation forms. However, in all four languages studied here, ordinal numbers share the same factorization and differ only superficially from cardinals. In this case, the ordinal number verbalizer can be constructed by applying a trivial transduction to the cardinal number verbalizer. However, it is an open question whether this is a universal or whether there may be some languages in which the discrepancy is much greater, so that separate methods are necessary to construct the ordinal verbalizer.
The FST verbalizer does not provide any mechanism for verbalization of numbers in morphological contexts other than citation form. One possibility would be to use a discriminative model to select the most appropriate morphological variant of a number in context. We also leave this for future work.
One desirable property of the FST-based system is that FSTs (and PDTs) are trivially invertible: if one builds a transducer that maps from digit sequences to number names, one can invert it, resulting in a transducer that maps number names to digit sequences. (Invertibility is not a property of any RNN solution.) This allows one, with the help of the appropriate target-side language model, to convert a normalization system into a denormalization system, that maps from spoken to written form rather than from written to spoken. During ASR decoding, for example, it is often preferable to use spoken representations (e.g., twenty-three) rather than the written forms (e.g., 23), and then perform denormalization on the resulting transcripts so they can be displayed to users in a more-readable form (Shugrina, 2010;Vasserman et al., 2015). In ongoing work we are evaluating FST verbalizers for use in ASR denormalization.
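To illustrate the invertibility point in the simplest possible terms, here is a toy sketch (ours; the pairs are illustrative and not generated by G): a verbalizer viewed as a relation between written and spoken forms is inverted simply by swapping the order of each pair, turning normalization into denormalization.

```python
# Toy relation between written digit strings and spoken forms.
verbalizer = {("23", "twenty three"), ("20", "twenty"), ("3", "three")}

def invert(relation):
    """Swap input and output, as FST inversion does for a transducer."""
    return {(y, x) for (x, y) in relation}

denormalizer = dict(invert(verbalizer))
print(denormalizer["twenty three"])   # -> "23"
```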
Conclusions
We have described two approaches to number normalization, a key component of speech recognition and synthesis systems. The first used a recurrent neural network and large amounts of training data, but very little knowledge about the problem space. The second used finite-state transducers and a learning method totally specialized for this domain, but which requires very little training data. While the former approach is certainly more appealing given current trends in NLP, only the latter is feasible for low-resource languages, which most need an automated approach to text normalization.
To be sure, we have not demonstrated that RNNs, or similar models, are inapplicable to this problem, nor does it seem possible to do so. However, number normalization is arguably a sequence-to-sequence transduction problem, and RNNs have been shown to be viable end-to-end solutions for similar problems, including grapheme-to-phoneme conversion (Rao et al., 2015) and machine translation (Sutskever et al., 2014), so one might reasonably have expected them to perform better without making the "silly" errors that we observed. Much of the recent rhetoric about deep learning suggests that neural networks obviate the need for incorporating detailed knowledge of the problem to be solved; instead, one merely needs to feed in pairs consisting of inputs and the required outputs, and the system will self-organize to learn the desired mapping (Graves and Jaitly, 2014). While that is certainly a desirable ideal, for this problem one can achieve a much more compact and data-efficient solution by exploiting knowledge of the domain.
"Computer Science"
] |
Electrically driven frequency blue-chirped emission in Fabry–Perot cavity quantum cascade laser at room temperature
We present a method to produce a fast frequency-swept laser emission from a monolithic mid-infrared laser. A commercially available Fabry–Pérot cavity quantum cascade laser (QCL) operating at a wavelength of 8.15 µm was electrically driven by a current pulse with a 10 µs duration and a slow front rising time of 2 µs. Due to the switching of the lasing emission from the vertical to the diagonal transition in the QCL and a strong quantum-confined Stark effect energy shift of the diagonal transition, the frequency of the emitted light was blue-shifting as the injection current continued to rise above threshold. The temporal evolution of the laser spectrum was measured by a high-resolution step-scan Fourier transform infrared spectrometer. The blue-chirped emission was strongly influenced by the heatsink temperature due to the high thermal sensitivity of the threshold current and slope efficiency. By carefully optimizing the QCL operating temperature and the amplitude of the current pulse, we demonstrate a high-speed self-sweeping laser emission under room-temperature operating conditions, reaching a spectral tuning range of 25 cm⁻¹ within 1.8 µs.
Over the last decade, significant progress has been made in the development of wavelength-swept laser sources in the mid-infrared (mid-IR) spectral range due to their various applications. 1-3 A particularly promising application relies on spectroscopy of strong and distinct rovibrational molecular absorption features in the mid-IR region, which give unique fingerprints for the detection of low-concentration gas species even in complex mixtures. 4 For instance, mid-IR spectroscopic sensing systems have been implemented worldwide for disease diagnosis by breath analysis or for detection of hazardous chemical threats in air.
To date, various mid-IR laser-based techniques have been proposed for spectroscopy applications, but all of them suffer from a trade-off between the covered spectral range and the spectral resolution. Among them, optical parametric oscillator (OPO), 5 differencefrequency generation (DFG), 6 and optical frequency comb (OFC) techniques can provide a wide wavelength range with high detection sensitivity. However, due to operational complexity and high cost, these approaches are practically limited to academic research applications. The fiber-based supercontinuum light sources are capable of providing multiple octave-spanning spectra in the mid-IR fingerprint-rich range. 7,8 Yet, their power spectral density still requires improvement and the complexity of the detection schemes for resolving the spectra is the main obstacle preventing their widespread deployment. Dual-comb spectroscopy systems 9 greatly simplify the detection apparatus but require complex schemes for the frequency stabilization of the two combs. Tunable external-cavity quantum cascade lasers (QCLs) provide a wide spectral range and power but spectral scanning is implemented mechanically. Therefore, they are relatively slow and expensive, which prevents their routine application.
Nowadays, pulsed distributed feedback (DFB) QCLs provide the most practical spectroscopic technique for atmospheric gas monitoring or gas analysis sensors. [10][11][12][13][14] The pulsed current driving leads to a transient self-heating process in the active region of the DFB QCL. This is the key mechanism for spectral tuning. However, rapid tuning and spectral measurements on the 0.1–1 µs timescale are possible only over a limited spectral band of ~2 cm⁻¹ because of the moderate wavelength thermal sensitivity of ~−0.1 cm⁻¹/K. For tuning over a wider spectral band, the laser operating temperature must be changed. For instance, tuning over a 10 cm⁻¹ spectral band would require a temperature scan of over 100 °C. This is a slow process, making wideband spectroscopy impractical. In addition, such spectrometer systems rely on DFB QCLs, which remain expensive to fabricate.
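As a back-of-envelope check using only the sensitivity quoted above:

```latex
\Delta T \approx \frac{\Delta\nu}{\left|d\nu/dT\right|}
        = \frac{10~\mathrm{cm^{-1}}}{0.1~\mathrm{cm^{-1}/K}}
        = 100~\mathrm{K}.
```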
In this paper, we propose a wide-range frequency-swept mid-IR laser source using a commercially available FP QCL. The principle of frequency tuning relies on the generally high sensitivity of a diagonal inter-subband transition to the quantum-confined Stark effect (QCSE). 15 However, instead of engineering a specific QCL structure operating on the diagonal transition, which may only provide broadband emission (e.g., of ~20–30 cm⁻¹ FWHM in Ref. 15), or a QCL incorporating an independently biased refractive-index modulation layer, which enabled an electrically controlled tuning over 0.15 cm⁻¹, 16 or other alternative approaches for rapid electrical tuning of QCLs and ICLs that can be found in a review, 17 we use the mechanism of lasing-wavelength switching and locking from the main broadband vertical transition in the active-region QWs to the narrowband diagonal transition from the injector to the lower laser level. 18 By carefully tuning the QCL injection current and the temperature of its active region, we made the narrowband unsaturated gain curve of the diagonal transition spectrally superimpose on the broadband (saturated) vertical-transition gain clamped at threshold. This results in the lasing wavelength being defined by the diagonal transition, along the lines of the lasing-wavelength tuning phenomenon discussed in the model of Ref. 19. In our experiment, this interplay of saturated and unsaturated gain coefficients shows up as the lasing level switching from the vertical to the diagonal transition. Due to the strong sensitivity of the diagonal transition energy to the bias field, by applying a carefully optimized pump-current waveform, we achieve a transient narrowband emission of ~2 cm⁻¹ FWHM, which is blue-chirped over a spectral band as large as 25 cm⁻¹. In our realization, the wavelength sweeping time is limited by our pump source to a few microseconds, but in general it can be made much faster, even on the nanosecond scale. The proposed scheme is a demonstration of high-speed spectral tuning of a narrowband emission in the FP QCL. This technique can overcome limitations of temperature-tuned pulsed DFB QCLs or EC QCLs and provide a cost-effective, compact, and wide frequency-swept light source in the mid-IR range.
A standard InGaAs/InAlAs FP QCL (sample #14556) operating at a wavelength of 8.15 µm, manufactured by Alpes Lasers, is used to demonstrate the frequency blue-chirped emission. This laser employs the double-LO-resonance epitaxial design N655 from Ref. 20. It is processed into a buried-heterostructure waveguide with an active region width of 9 µm and a cavity length of 3 mm, with an estimated FP cavity mode spacing of ~0.5 cm⁻¹. The laser cavity has a metal-coated back facet with a high reflectivity of ~98%, while the front facet is left uncoated with a reflectivity of ~27%, resulting in a low threshold modal gain. The laser chip is mounted on an AlN submount on a copper baseplate whose temperature is controlled with a Peltier cooler. In the experiments reported here, the laser is driven in either CW or pulsed regimes.
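As a consistency check (ours), the quoted mode spacing matches the free spectral range of a 3 mm cavity if one assumes a group index of roughly 3.3, a typical value we assume here since it is not stated above:

```latex
\Delta\nu_{\mathrm{FSR}} = \frac{1}{2\,n_g L}
  \approx \frac{1}{2 \times 3.3 \times 0.3~\mathrm{cm}}
  \approx 0.5~\mathrm{cm^{-1}}.
```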
In order to optimize the laser driving parameters for level switching, the laser CW spectrum is measured as a function of injection current at various copper-baseplate temperatures. An FTIR spectrometer is used to capture high-resolution lasing spectra of the QCL as a function of its operating conditions. Figure 1 shows the CW lasing spectrum evolution with current at three different baseplate temperatures: −20 °C, 0 °C, and 20 °C.
As expected, the baseplate temperature has a strong impact on the lasing threshold of the QCL, measured to be 0.54, 0.66, and 0.80 A at the three selected operating temperatures. In all cases, the spectrum is nearly single mode at the lasing threshold, but already at ~50 mA excess above threshold it suddenly turns into broadband multimode emission and continues to broaden gradually with the injection current. In agreement with the estimated free spectral range of the laser cavity, the measured adjacent mode spacing is ~0.5 cm⁻¹. The multimode spectrum, broadened over a >20 cm⁻¹ spectral range, spans more than 40 lasing modes centered at the initial single-mode lasing frequency. Such spectral broadening of the initial lasing mode at small excess above the lasing threshold can be attributed to the multimode Risken–Nummedal–Graham–Haken (RNGH) instability eased by spatial hole burning in the FP QCL. [21][22][23][24] This spectral broadening pattern is determined by the buildup of Rabi-oscillation sidebands around the initial lasing mode. 23,24 However, the measured spectral maps in Fig. 1 do not always follow the expected behavior of gradual spectral broadening of the symmetric sidebands as a square root of the pump excess above the instability point of the initial lasing mode. [FIG. 1 caption: Normalized CW lasing spectrum of the QCL #14556 as a function of current for three different temperatures: −20 °C (a), 0 °C (b), and 20 °C (c). The spectral profile at each current is normalized by its maximum spectral power density. The spectral resolution is 0.2 cm⁻¹ and the current is changed in steps of 5 mA.] In the spectral map of Fig. 1(b), measured at the baseplate temperature of 0 °C, starting from
the pump current of ~0.77 A, the center frequency of the sidebands reveals a blue shift with the pump current (and bias field) of ~6.2 cm⁻¹/mA up to a current of 0.87 A. A similar blue-chirped emission with current variation is seen in Fig. 1(a) for a temperature of −20 °C and for currents above ~0.85 A. However, in this case, the spectral broadening vanishes abruptly and the lasing modes become spectrally confined within a narrow band. The laser spectrum now moves toward higher frequency with the current (and quadratically with the bias field) at a rate of ~1.53 cm⁻¹/mA. Such spectral behavior has much in common with the energy shift due to the quantum-confined Stark effect, which in QCLs is usually observed for diagonal intersubband transitions. 15 However, in our case, the active QW region is designed for lasing on the vertical transition. The possibility of tunneling resonances and intersubband transition switching with the applied bias field has been recognized since the pioneering work of Kazarinov and Suris. 25 Based on band-structure modeling of the N655 epitaxial design from Ref. 18, the experimentally observed lasing behavior can be attributed to the interplay between the optical gain on the main transition, clamped at threshold, and the unsaturated gain of the diagonal transition between the injector state and the lower lasing level. The spectral overlap is possible due to the combination of two conditions: (i) the vertical transition is almost insensitive to the QCSE; (ii) at the same time, the diagonal transition is sensitive to it and its frequency varies over a wide range. In addition, the sensitivity of the vertical transition to the bias field is reduced due to photon-assisted transport 26 as well as clamping of the gain-transition "picture" (populations, scattering rates, etc.) in the configuration encountered at the lasing threshold. 15 The QCL behavior when two gain spectra start to overlap as a result of the tunneling resonance between the injector and the upper lasing subband in the active QW region can be considered along the lines of the model discussed in Ref. 19 on the interplay of two gain media in a laser. In our case, the interaction of the two optical gains is defined by the pump rate at each transition, the gain saturation and photon-assisted transport on the vertical transition, and the excess of the small-signal gain on the diagonal transition. This interplay defines the resulting lasing wavelength. 19 What is also important in our case is that it should define the net gain relaxation time T_1 as well. According to Ref. 18, the gain relaxation time on the diagonal transition could be half as long as that of the vertical one, which has a direct impact on the occurrence of the multimode RNGH instability and the strong spectral broadening. 24 At a higher temperature [0 °C in Fig. 1(b)], the lasing threshold is reached at a higher current, while in general the voltage drop decreases with temperature. Therefore, within the bias-field range leading to spectral overlap of the unsaturated gain on the diagonal transition and the clamped gain on the vertical transition, the small-signal gain on the diagonal transition is not as strong as at −20 °C. Correspondingly, its impact on the lasing behavior is weaker. On the contrary, at the baseplate temperature of −20 °C, the contribution of the diagonal transition is stronger and therefore significantly impacts the spectral properties, as well as the effective relaxation time T_1 of the net gain.
Since the gain relaxation time T_1 on the diagonal injector-to-lower-active-level transition is short, 18 it is harder to reach the RNGH multimode instability point. [22][23][24] These simple considerations allow us to speculate about a possible origin of the narrowband blue-chirped emission at −20 °C in Fig. 1(a) and the broadband blue-chirped emission at 0 °C in Fig. 1(b), when both vertical and diagonal transitions contribute to lasing. A comprehensive model for the transition switching will be developed elsewhere.
In the rest of this Letter, we focus on a practical application of the narrowband spectral tuning behavior in the quasi-CW operated FP QCL, to realize a high-speed, wide-spectral-range frequency-swept mid-IR laser source. All scattering time constants relevant to the gain in QCLs are on the picosecond scale or shorter. 21 Therefore, sweeping the driving current over the blue-chirped CW emission range on a microsecond or even nanosecond scale allows one to realize a frequency-swept source. The key point is to find quasi-CW operating conditions under pulsed current pumping such that the lasing frequency inherently blue-shifts in time along the smooth rising slope of the injection current pulse.
Under pulsed current operation, the average temperature rise in the active region of the QCL differs from that under CW operating conditions and is strongly affected by the duty cycle of the pump pulse train. To simplify tuning and to independently adjust the current pulse amplitude, its waveform, and the baseplate temperature, a pulse train with a low duty cycle is used, producing negligible (on average) self-heating in the active region over the pulse-train period.
Following the evolution of individual FP cavity modes in Fig. 1(b) with the current, we extract the heat-spreading thermal resistance (R_T) 27 and the temperature rise in the active region of our QCL. Using the broadband multimode emission regime as in Fig. 1(b) and the separately measured thermal wavelength coefficient of 0.65 nm/K, which is comparable to the value reported in Ref. 28, the extracted R_T of our laser is 8.2 K/W. The estimated active-region temperature rise is about 70 °C in the CW spectral tuning range of interest. Therefore, a baseplate temperature of 50 °C under pulsed pump current and a low duty cycle should allow one to reproduce the CW spectral behavior seen in Fig. 1(b).
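For orientation, the ~70 °C rise is consistent with the extracted thermal resistance if one assumes a dissipated electrical power of roughly 8 to 9 W at the CW operating points of interest (our assumption; the exact drive power is not stated above):

```latex
\Delta T_{\mathrm{AR}} \approx R_T \, P_{\mathrm{diss}}
  \approx 8.2~\mathrm{K/W} \times 8.5~\mathrm{W}
  \approx 70~\mathrm{K}.
```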
As a driver, we use a home-made current pulser producing a pulse train at a 100 Hz repetition rate with 10 µs FWHM duration and ~2 µs rise time on the front edge, resulting in a duty cycle of 0.1%. An example of the current pulse and voltage-drop waveforms can be found in Fig. 2(c). The transient spectral behavior of the QCL is characterized by a step-scan FTIR (Vertex 70, Bruker) with the output monitored on an external infrared detector. In this study, we use a DC-coupled detector PVI-4TE-10.6 from VIGO System SA and a preamplified IR detector module Q-MACS IRDM-1GA from neoplas control GmbH.
The time-resolved spectrum is measured with 2.5 ns time sampling (the detector response time is 4.5 ns) and 1.0 cm⁻¹ spectral resolution (insufficient to resolve cavity modes). The time-resolved data were acquired over a 12 µs window encompassing the 10 µs excitation pump pulse with an advance of 1 µs. Figure 2 shows the spectrochronograms measured with a pulse amplitude of 1050 mA at three different copper-baseplate temperatures (0 °C, 20 °C, and 50 °C). As expected, the time-resolved spectra are significantly influenced by the operating temperature. At all set temperatures, the laser spectrum starts with a strong single mode at threshold and becomes broader above threshold, in agreement with the trends seen in the CW spectra of Fig. 1. The interplay of the diagonal and vertical transitions yielding a blue-chirped emission is visible at all temperatures; however, its appearance differs. When the two transition energies overlap throughout the current sweep, as in Fig. 2(c), a promising linear chirp is observed on the rising front of the current pulse, as expected for the operating temperature of 50 °C. At low temperatures, because the voltage drop and bias field are lower, a higher current is needed to sweep entirely across the region of spectral overlap in Fig. 2(b) and, especially, in Fig. 2(a). These considerations are in agreement with the temperature trend seen from the comparison of the CW spectra in Figs. 1(a) and 1(b). However, in Figs. 2(a) and 2(b), the maximum current pulse amplitude is insufficient to compensate for the temperature dependence of the spectral overlap. As a result, at the intermediate temperature of 20 °C, a frequency blue shift is observed at a larger delay, closer to the leveling-off of the pulse waveform, and with a very limited spectral sweeping range. At the lowest temperature of 0 °C, the emission shows only a slight inclination towards a blue shift, which occurs at an even longer delay, while the instantaneous spectrum is relatively broad during the entire duration of the current pulse. The spectral properties of the lasing modes in the blue-chirped emission regime of the pulsed FP QCL are further characterized in Fig. 3. This figure shows time-resolved measurements made with a resolution of 0.2 cm⁻¹ and at a copper-baseplate temperature of 40 °C, for which the largest spectral sweeping range and the narrowest instantaneous spectral width are obtained. As in the previous measurements, 2.5 ns time sampling is used, while the temporal resolution is limited by the detector response time of 4.5 ns. To aid the comparison with the normalized CW spectrum from Fig. 1(a), the instantaneous spectrum at each acquisition time point in Fig. 3(a) is normalized to its maximum spectral power density. The evolution of the time-resolved spectrum under pulsed current operation reproduces the main trends of the CW spectrum evolution with pump current. Within the initial 1 µs stage of the current-pulse rise (corresponding to the time-axis labels from 1 µs to 2 µs in Fig. 3), the laser exhibits a broadband emission caused by the multimode RNGH instability. The lasing modes spread over a ~50 cm⁻¹ spectral range, from approximately 1200 cm⁻¹ to 1250 cm⁻¹, corresponding to excitation of about 100 longitudinal modes. Then, at the time-axis label of 2.88 µs, the multimode RNGH instability is swiftly switched off and the lasing spectrum turns into a narrowband emission.
The emission wavelength is highly sensitive to the applied bias field, as seen in Fig. 3(b). As discussed above, we attribute this spectral behavior to the diagonal transition from the injector to the lower laser level, which has a very short gain relaxation time and, as a consequence, requires a high pump current for the occurrence of the multimode RNGH instability. 18,[22][23][24] In contrast, the vertical transition in the active QWs is characterized by a longer gain relaxation time and, hence, by a significantly eased excitation of the broadband multimode RNGH emission. Yet another difference between the two lasing transitions is in their apparent sensitivity to the QCSE energy shift. In Fig. 3(a), within the time window from 2.88 µs to 4.68 µs, the emission spectrum is linearly blue-chirped in time, shifting from 1216 cm⁻¹ to 1241 cm⁻¹ with just a few visible mode hops, thus performing a wide spectral sweep of 25 cm⁻¹ over 1.8 µs. For the rest of the pump pulse, starting at the time-axis label of 4.68 µs, right after the end of the self-frequency tuning, the laser returns to the steady-state regime of broadband multimode emission. The dominant lasing mechanism switches back to the usual vertical transition with its relatively long gain relaxation time and, hence, low excitation current for the multimode RNGH instability. 18,[22][23][24] However, such multimode emission is useless for spectroscopy applications; it can be eliminated by shortening the applied current pulse. Figure 3(b) shows the evolution of the peak frequency of the lasing-mode distribution during the blue-chirped emission, plotted versus the QCL bias. The right axis of the figure displays the corresponding instantaneous current, indicated by the blue curve. The red curve depicts a second-order polynomial fit of the laser frequency versus the voltage drop, showing good agreement with the behavior expected from the QCSE energy shift. Figure 3(c) shows several instantaneous spectral snapshots taken during the blue-chirped emission. Several modes are present simultaneously in the lasing spectrum. We expect that the number of modes can be minimized by varying the length of the gain chip and the lasing threshold conditions. For this particular QCL sample, we observe a spectral width equivalent to four longitudinal modes on average [see Fig. 3(d)], corresponding to a spectral FWHM of ~2 cm⁻¹. This is sufficient for mid-IR spectroscopy of congested spectra in aqueous solutions and high-pressure gases, or of low-pressure gases with vibrational line separations of more than 2 cm⁻¹. The instantaneous optical power during the blue-chirped emission grows continuously with the pump current from 120 mW to 150 mW, providing attractively high spectral power density for various spectroscopic applications.
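As a rough figure of merit (our back-of-envelope estimate, not a number quoted above), the average chirp rate and the dwell time per cavity-mode spacing are

```latex
\frac{d\nu}{dt} \approx \frac{25~\mathrm{cm^{-1}}}{1.8~\mathrm{\mu s}} \approx 14~\mathrm{cm^{-1}/\mu s},
\qquad
t_{\mathrm{mode}} \approx \frac{0.5~\mathrm{cm^{-1}}}{14~\mathrm{cm^{-1}/\mu s}} \approx 36~\mathrm{ns},
```

so each ~0.5 cm⁻¹ mode spacing is traversed over a time comfortably longer than the 4.5 ns detector response.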
Figure 4 reports on the blue-chirped emission in another nominally identical QCL sample. The tuning range of 22 cm⁻¹ and the FWHM of ~3 cm⁻¹ reproduce very closely the results of the first sample (Fig. 3). The standard deviation of the wavelength and the FWHM over 20 repeated scans is much smaller than the average FWHM, while the standard deviation of the FWHM itself is smaller than the cavity mode separation (0.5 cm⁻¹). Such close reproducibility of the blue-chirped emission in different, although nominally identical, QCL samples, and the repeatability of the spectra with scanning, attest to the feasibility of practical application of this effect in spectroscopy. The observed slight difference in wavelength indicates the necessity of selecting and calibrating individual QCLs. Concerning the temperature and current stability requirements and the aging of the thermal resistance, these are no more critical than in commercial pulsed DFB QCLs. Interestingly, in contrast to DFBs, where device aging and temperature instabilities cause changes of the mode-hopping current over the tuning range, rendering the DFB device unsuitable for spectroscopy, for blue-chirped FP QCLs the mode hopping is not devastating, as it is always continuous (between adjacent modes) and is permanently present during the spectral scan. Shifting of the mode-hopping currents by one cavity mode has no impact on the wavelength scan range or spectral FWHM (see Fig. 4).
We have described a method to achieve a narrow-line blue-chirped emission from a standalone FP-cavity QCL with a double-LO-resonance design operating at a wavelength of 8.15 µm at room temperature. The threshold gain, the current, and the operating temperature of the FP QCL play an important role in activating the efficient switching of the lasing frequency from the vertical transition to the QCSE-sensitive diagonal transition. Once the lasing frequency is locked on the diagonal transition, the peak frequency of the laser is self-swept over a spectral range of ~25 cm⁻¹ within a 1.8 µs time interval when the laser is driven by a current pulse train. Preliminary modeling results indicate that the lasing-frequency locking range on the diagonal transition and, respectively, the spectral tuning range of the FP QCL can be increased by lowering the cavity losses (to be published elsewhere). To avoid confusion, we stress that although almost all QCL designs use injector-to-active-region tunneling, not all of them are capable of demonstrating such spectral features. Presently, we observe the blue-chirped emission for one design, and the effect is reproducible in about half of the QCL samples. Important future work should be devoted to the optimization of doping, interface roughness, and operating temperature to achieve similar spectral behavior in other QCL designs.
We believe that the proposed frequency-swept mid-IR light source based on the QCSE energy shift in cost-effective FP QCLs has large potential for a wide range of industrial applications, such as real-time trace-gas sensing and monitoring of semiconductor manufacturing processes. This approach dramatically reduces the power consumption, complexity, size, and cost of a frequency-swept mid-IR light source, even in comparison to DFB and EC QCLs. Offering competitively high spectral power density for spectroscopic applications, it combines some advantages of both DFB and EC QCLs. Like DFB QCLs, it provides a rapid spectral frequency sweeping capability, which enables in situ monitoring. Like the EC QCL, it allows spectral tuning over a few tens of wavenumbers. However, with an instantaneous spectral width of ~2 cm⁻¹, it is not possible to resolve the structure of a single rovibrational absorption line of a molecule in a low-pressure gas cell as is possible using DFB and EC QCLs, in particular for lightweight molecules. Thus, for some molecules (e.g., carbon tetrafluoride or sulfur dioxide), the challenge would be to measure a congested spectrum, ideally the entire vibrational band. On the other hand, there are many other gas molecules exhibiting main vibrational line separations of a few wavenumbers. Therefore, in comparison to narrow-band DFB QCLs offering rapid tuning over a range of only 2 cm⁻¹, our FP QCL, rapidly swept over a spectral band of ~25 cm⁻¹, may be more attractive for sensing gas species with congested spectra distributed over a range of more than 2 cm⁻¹, or for detection of vibrational molecular spectra in high-pressure gas cells or aqueous solutions. In this way, the proposed technique may offer an attractive alternative to DFB and EC QCLs in various spectroscopic applications.
"Physics",
"Engineering"
] |
Magnetic field-assisted solidification of W319 Al alloy qualified by high-speed synchrotron tomography
Magnetic fields have been widely used to control solidification processes. Here, high-speed synchrotron X-ray tomography was used to study the effect of magnetic fields on solidification. We investigated vertically upward directional solidification of an Al-Si-Cu based W319 alloy without and with a transverse magnetic field of 0.5 T while the sample was rotating. The results revealed the strong effect of a magnetic field on both the primary α-Al phase and the secondary β-Al 5 FeSi intermetallic compounds (IMCs). Without the magnetic field, coarse primary α-Al dendrites were observed with a large macro-segregation zone. When a magnetic field is imposed, much finer dendrites with smaller primary arm spacing were obtained, while macro-segregation was almost eliminated. Segregated solutes were pushed out of the fine dendrites and piled up slightly above the solid/liquid interface, leading to a gradient distribution of the secondary β-IMCs. This work demonstrates that rotating the sample under a transverse magnetic field is a simple yet effective method to homogenise the temperature and composition distributions, which can be used to control the primary phase and the distribution of iron-rich intermetallics during solidification.
Introduction
Compared with primary aluminium production from bauxite ore, secondary aluminium ingots produced from recycled aluminium scrap, including end-of-life automotive parts and used beverage cans, can save up to 95% of the energy [1]. However, impurities such as iron and silicon accumulate during the recycling process [2]. Iron in particular is expensive to remove. Further, iron has a very low solid solubility (max 0.05 wt%) in aluminium [3] and can form hard and brittle Fe-rich intermetallic compounds (IMCs) such as Al 13 Fe 4 [4,5] and β-Al 5 FeSi [6]. Their morphologies, sizes and distribution, if not controlled, can negatively impact castability [7] and reduce the mechanical and corrosion properties of the final components [8,9]. IMCs in aluminium alloys can be controlled via their crystal structures and the solidification conditions, such as cooling rates, temperature gradients and external fields (e.g. mechanical shearing, ultrasonic processing, or magnetic stirring) [10][11][12][13][14].
Synchrotron X-ray radiography has been widely used to reveal the growth process of various IMCs in Al alloys during solidification, for instance, α-Al(FeMnCr)Si in Al-Si-Cu alloys [15] and Al 13 Fe 4 in Al-3%Fe alloys [4,16]. Recently, high-speed synchrotron X-ray tomography has been developed, allowing microstructures to be followed in 4D (3D plus time) [17][18][19][20][21][22], and the approach has been applied to study Fe-rich intermetallics in Al alloys. Terzi et al. [19] reported the growth of irregular α-Al/β-Al 5 FeSi eutectic. Cai et al. [20] studied the coupled growth of the primary α-Al and secondary β-phase in a W319 alloy and found that the growth rates and sizes of the β-phase were restricted by the available inter-primary-dendritic space. Puncreobutr et al. [21] showed that the secondary β-phase strongly blocks liquid flow and decreases the permeability of semi-solid Al-Si-Cu alloys. Cao et al. [22] revealed the growth process of primary Fe-rich IMCs with and without a weak magnetic field (0.07 T) in an Al-Si-Fe alloy. These examples show the power of high-speed synchrotron tomography coupled with advanced image processing and numerical simulation for studying the growth dynamics and kinetics of Fe-rich IMCs in Al alloys.
Magnetic fields have been widely used to alter fluid flows during solidification processes [23][24][25][26][27][28], making use of physical effects including electromagnetic damping [29,30], electromagnetic stirring [31,32] and thermoelectric magnetohydrodynamics (TEMHD) [33,34]. Previous studies showed that when a rotating magnetic field is applied, both the temperature and solute distributions can be homogenised during solidification, leading to structural refinement of the primary α-Al phase [26,35]. However, several questions remain unanswered. Will the modified morphologies of the solidification microstructure influence the permeability of the mushy zone? In Al-Si-Cu based alloys, as the β-intermetallic is the secondary phase following the primary α-Al phase [20], will the growth behaviour of the β-IMCs be influenced by the application of magnetic fields? In view of these questions, the present study aims to reveal the growth dynamics of both the primary α-Al phase and the secondary β-IMC during solidification of W319 (Al-Si-Cu based) alloys under a constant transverse magnetic field of 0.5 T while the sample is rotating, using high-speed synchrotron tomography. We then calculated the absolute permeability of the solidified structures using image-based simulation, with and without the presence of the β intermetallic compounds, under the different solidification conditions. The results can be used to validate simulation models, especially of the growth behaviour of secondary IMCs during the casting of aluminium alloys. This work also demonstrates that a magnetic field can be used to control the distribution of iron-rich intermetallics.
Materials and methods
The Al-Si-Cu alloy W319 (Al-5.50Si-3.40Cu-0.87Fe-0.27Mg, in weight per cent) was provided by Ford Motor Company. The alloy is frequently used in engine components such as engine blocks and cylinder heads [36]. Cylindrical specimens with diameters of 1.8 mm and lengths of 100 mm were cut via wire electrical discharge machining. Each sample was placed into an alumina tube with a 2 mm inner diameter and 3 mm outer diameter. A bespoke temperature-gradient furnace (MagDS) [20,37] was used to perform the solidification experiments. The MagDS furnace consists of a small bespoke temperature-gradient stage and a magnet yoke, which was used to control the solidification conditions (cooling rate (CR), temperature gradient (TG) and the strength of the magnetic field (B)). Two experiments were performed. In the first experiment, the sample was heated up until melting. The specimen was then held in the molten state for 20 min before it was cooled at a constant rate (CR) of 0.1 °C/s until fully solidified [26]. A temperature gradient (TG) of 2.5 °C/mm was applied to the specimen (the top part was set at a higher temperature). In the second experiment, the magnet yoke was installed close to the furnace, producing a transverse magnetic field of B = 0.5 T. The solidification experiment was carried out using the same heating and cooling conditions. Thereby, solidification condition I is B = 0 T, CR = 0.1 °C/s, TG = 2.5 °C/mm, whereas solidification condition II is B = 0.5 T, CR = 0.1 °C/s, TG = 2.5 °C/mm. In both conditions, the samples were rotated continuously during cooling for tomography.
The in situ solidification experiments were performed at the ID19 beamline of the European Synchrotron Radiation Facility (ESRF), with a 31 keV pink X-ray beam [38]. A high-resolution, high-speed detector (PCO.dimax) was used, achieving a pixel size of 2.2 µm and a field of view (FOV) of 2.2 × 2.2 mm². During solidification, rapid tomographic images were acquired while the sample was rotating continuously at a speed of π rad/s. Each tomogram required a collection time of 1 s and was composed of 1000 projections (radiographs) collected over 180°. A further 15 s of waiting time was needed to download the tomograms between two consecutive scans.
For absorption-contrast synchrotron X-ray tomography, phases with higher density (higher absorption coefficients) have higher grey values [39,40], allowing the distributions of density/composition before and during solidification to be mapped. Using this concept, we have plotted the 2D vertical slices with an artificial colour map based on 16-bit images. In this work, the regions with low attenuation values are shown in blue and green (e.g. in Fig. 1-a), which normally indicate a higher concentration of light elements such as Al and Si in this experiment. Conversely, yellow to red regions should contain heavier elements such as Fe and Cu, which strongly attenuate the X-rays. Avizo 2020.1 (Thermo Fisher, U.S.) was used to segment and quantify the phases and to perform the absolute permeability simulation. For the primary α-Al phase, 3D anisotropic diffusion was chosen to reduce noise [41], followed by interactive thresholding with threshold values based on the Otsu method implemented in ImageJ [42]. To segment the secondary β-Al 5 FeSi IMCs, 3D anisotropic diffusion and morphological Laplacian filters were employed. More details about the image processing can be found in the supplementary note. Absolute permeability was calculated by solving the Stokes equations, as shown below, using the Avizo XlabSuite Extension toolbox [43].
∇ · V_L = 0,   µ_L ∇²V_L − ∇P_L = 0,

where V_L is the velocity of the fluid, µ_L is the dynamic viscosity of the fluid, ∇· is the divergence operator, ∇ is the gradient operator, ∇² is the Laplacian operator and P_L is the pressure of the fluid. Eight groups of tomographic data from each experiment were chosen. Sub-volumes of 882 × 882 × 882 µm³ from the bottom centre of the FOV were cropped out for the simulation. Inlet flow along the vertical direction provided the permeability for fluid flowing parallel to the primary dendritic arms, as the dendrites were aligned with the vertical direction of the samples. The flow simulations were set with an input pressure of 1.3 bar and an output pressure of atmospheric pressure by default. The fluid viscosity was set to 0.001 Pa·s [6,21].
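For readers unfamiliar with how a single permeability value is obtained from such a simulation, the sketch below shows the standard Darcy-law post-processing step. It is a minimal illustration under our own assumptions (in particular, the flow rate Q is an invented placeholder), not the Avizo implementation.

```python
# Minimal sketch of extracting an absolute permeability from a simulated
# single-phase flow through a cubic sub-volume via Darcy's law:
#   K = Q * mu * L / (A * dP)
def absolute_permeability(Q, mu, L, A, dP):
    """Q: volumetric flow rate [m^3/s], mu: dynamic viscosity [Pa s],
    L: sample length along the flow [m], A: cross-section [m^2],
    dP: pressure drop across the sample [Pa]."""
    return Q * mu * L / (A * dP)

edge = 882e-6                    # edge of the cropped sub-volume [m]
area = edge ** 2                 # flow cross-section [m^2]
mu = 1e-3                        # viscosity used in the simulations [Pa s]
dP = 1.3e5 - 1.013e5             # inlet (1.3 bar) minus atmospheric outlet [Pa]
Q = 1.0e-6                       # assumed simulated volumetric flow rate [m^3/s]

K = absolute_permeability(Q, mu, edge, area, dP)
print(f"K = {K * 1e12:.1f} um^2")   # 1 um^2 = 1e-12 m^2; here ~39.5 um^2
```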
α-Al phase
Under solidification condition I, the primary α-Al dendrites grew further as the temperature decreased, forming well-developed dendritic structures (Figs. 1-b4 and b5). For solidification condition II, with the application of a transverse 0.5 T magnetic field, Cu and Fe elements were shown to be concentrated at the bottom centre of the FOV before solidification. Figures 5-a and 5-b show the growth orientation of α-Al dendrites in solidification conditions I and II, respectively. We manually selected the primary arms of the dendrites and represented them as cylinders, which allows us to measure the tilting angle (φ), i.e. the angle between the cylinder axis and the vertical direction (Z) of the sample. Without the magnetic field, 5 out of 6 dendrites grew almost parallel to the vertical direction of the sample, with tilt angles between 3° and 17° (Fig. 5a). The remaining dendrite grew with a large tilting angle of 37°. Under the magnetic field, we selected 41 dendrites as representatives. They grew upwards with tilting angles (Fig. 5-b1) ranging from 6° to 37°. In Fig. 5-b2, the dendrites appear to have grown with a pattern: some in the centre of the specimen in a clockwise direction, as indicated by red arrows, and some at the periphery in an anti-clockwise direction, as indicated by blue arrows. This behaviour might be attributed to the solute flow caused by the application of the magnetic field and the sample rotation.
Supplementary material related to this article can be found online at doi:10.1016/j.jallcom.2022.168691. The horizontal cross-sections (Figs. 1-b5 and 2-b5) show a significant difference in dendrite size between the two solidification conditions. The primary dendritic arm spacing can be quantified by Eq. 1 [44]:

λ₁ = c (A/n)^(1/2)    (1)

where c is 0.5 for a random array of points, A is the cross-section area, which is 4.54 mm² (Fig. 1-b5) and 4.72 mm² (Fig. 2-b5), respectively, and n is the number of dendrites, which is 6 (Fig. 1-b5) and 60 (Fig. 2-b5) for the two conditions, respectively. Therefore, the primary dendrite arm spacing without and with the magnetic field is 435 µm and 140 µm, respectively.
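The two quoted spacings follow directly from Eq. (1); a quick numerical check (ours, not from the paper):

```python
# Check of the primary dendrite arm spacing, lambda_1 = c * sqrt(A / n),
# using the cross-section areas and dendrite counts quoted above.
from math import sqrt

def pdas_um(area_mm2, n_dendrites, c=0.5):
    """Primary dendrite arm spacing in micrometres (1 mm^2 = 1e6 um^2)."""
    return c * sqrt(area_mm2 * 1e6 / n_dendrites)

print(round(pdas_um(4.54, 6)))    # ~435 um, no magnetic field
print(round(pdas_um(4.72, 60)))   # ~140 um, 0.5 T field with sample rotation
```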
β-Al 5 FeSi intermetallic compounds (IMCs)
Fig. 3-a shows the growth process of β-Al 5 FeSi IMCs at different timestamps under solidification condition I. t_a1 is the time when β-Al 5 FeSi IMCs first appeared in the FOV. The bright, thin-plate structures in Fig. 3-a2 are β-Al 5 FeSi, which is the secondary phase after the primary α-Al phase forms in W319 alloys [20]. The light grey regions between the primary α-Al dendrites are liquid, whereas the dark grey is the α-Al phase. 3D volume renderings of the IMCs after segmentation are shown in Figs. 3-c and 3-d. The IMCs (in red) nucleated in the inter-dendritic space of the primary α-Al phase (in green). β-Al 5 FeSi has very high lateral growth rates and low thickening rates, leading to impingement at the later stage of solidification. It can also be seen that the intermetallic compounds grew within the available spacing between the arms of the dendrites. Fig. 3-d presents the 3D volume of β-Al 5 FeSi with the primary α-Al phase rendered transparent. The IMCs grew vertically, towards the direction of the applied temperature gradient; few IMCs grew horizontally.
2D vertical slices and 3D rendered volumes of β-Al 5 FeSi at different timestamps under the magnetic field (solidification condition II) are shown in Fig. 4 and video-4. t_b1 is the time when β-Al 5 FeSi IMCs first appeared in the FOV. The growth orientation of the IMCs was quantified based on principal component analysis [20]. The angle θ was defined as the difference between the through-thickness direction of the platelet intermetallics and the vertical direction of the sample (the schematic diagram is shown in the insert of Fig. 5-c; the z-axis is the vertical direction of the sample). Without the magnetic field, 65% of the IMCs grew between 60° and 90° (Fig. 5-a1), showing that β-Al 5 FeSi preferably grew towards the vertical direction of the sample. This preferred growth towards the temperature gradient can be further enhanced under a higher temperature gradient of 10 °C/mm [20]. When the magnetic field is on, a higher percentage of β-Al 5 FeSi IMCs (around 79%) grew between 60° and 90° (Fig. 5-a2). These results indicate that the application of the magnetic field might affect the growth orientation of the IMCs.
The volume fractions of β-phase versus time are plotted in Fig. 6a. Without the magnetic field, the volume fraction of the β-phase gradually increased during solidification, with an average growth rate of about 0.007% per second; in 272 s, the volume fraction of β-phase reached 1.9%, which is consistent with a previous study [20]. With the magnetic field, the volume fraction of IMCs grew at a rate of about 0.0088% per second, slightly higher than in the sample solidified without a magnetic field, and increased to 2.4% in 272 s. Under the magnetic field, we noticed in Fig. 4-b4 that the number of IMCs growing in the bottom region of the FOV is lower than in the top region. We then quantified the volume fractions of the IMCs at the bottom (0-0.554 mm) and the top (1.662-2.216 mm) of the FOV as a function of time, as shown in Figs. 6-b and 6-c. Fig. 6-b shows that the average growth rates (the slopes of the curves) of the IMCs in the bottom and top regions under solidification condition I are very close (about 0.0020% per second and 0.0018% per second, respectively). However, with the application of the magnetic field, the average growth rate of the IMCs in the bottom region is 0.0013% per second, much slower than at the top (0.0024% per second). The top region reached a higher volume fraction (0.64%) than the bottom one (0.34%). A gradient volume distribution of the β-phase formed along the vertical direction of the FOV, suggesting that the concentration of Fe might be higher in the top region than at the bottom. This behaviour is different from the sample solidified without the magnetic field.

Permeability estimation

Fig. 7 presents the simulated absolute permeabilities of the semi-solid W319 alloys during solidification without and with the magnetic field, based on solving the Stokes equations. Without the magnetic field, the solid volume fraction of α-Al dendrites increased from 39.6% to 43.3% within the chosen 8 tomographic scans. The absolute permeability decreased from 77.6 µm² to 41.6 µm². The permeability decreased monotonically with the increase of the solid volume fraction of α-Al dendrites, as expected. The permeabilities are significantly lower in the presence of intermetallics, as was found by Puncreobutr et al. [21]. At a solid volume fraction of α-Al dendrites of 43.3%, the absolute permeability was 41.6 µm² without IMCs but was reduced to 27.3 µm² by the intermetallics. As the intermetallics nucleate and grow in the inter-dendritic spacing [19,20,45], they can block the liquid flow through the channels.
For solidification condition II, the permeability shows the same tendency. It decreased from 63.6 µm² to 32.5 µm² while the solid volume fraction increased from 36.5% to 41.2%. Permeability loss due to the presence of intermetallics also occurred. At a solid volume fraction of α-Al dendrites of 41.2%, the absolute permeability was 32.5 µm² without IMCs but was reduced to 25.6 µm² by the IMCs.
However, at the same solid volume fraction of the α-Al phase of around 40.7%, the permeability is 60.5 µm² for the sample without the magnetic field, higher than for the one with the magnetic field (36.0 µm²). The permeability is not only monotonically related to the volume fraction but can also be influenced by the inter-dendrite-arm spacing. Under the magnetic field, much finer dendrites were formed, resulting in narrower liquid channels. This impedes melt flow through the semi-solid structures, leading to a lower absolute permeability. The reduction of the primary dendrite arm spacing of the α-Al phase has a greater impact on the reduction of the absolute permeability than the blocking effect of the intermetallics. Fig. 8 shows the schematic diagram of the solidification process under both solidification conditions, without and with the magnetic field. The presence of a magnetic field changed the morphology of the primary α-Al phase from coarse to fine dendritic structures. The distribution of the β-Al 5 FeSi IMCs was also altered.
Discussion
Under solidification condition I, without the magnetic field, the primary α-Al initially formed near the surface of the sample (Fig. 8-a1). Solute elements such as Cu, Fe and Si were ejected into the melt. Due to gravity, heavy elements such as Cu and Fe settled down to the bottom of the FOV, forming a macro-segregation zone that retards the upward growth of the primary α-Al into this region. Therefore, a concavely curved solid/liquid interface was formed. As solidification progressed, the primary dendrite arms continued to grow upwards, while the secondary arms branched into the central region during cooling. The macro-segregation became more severe because more Cu and Fe were ejected and accumulated in this region during solidification, and the solid/liquid interface became more concave (Fig. 8-a2). A curved interface due to solute accumulation was observed previously in Al-Cu alloys [46] by examining the solidified sample, which demonstrated for the first time that fluid flow driven by gravitational force can change the solid/liquid interface shape. In situ X-ray radiography was later used by Bogno et al. [47] to reveal the initial transient of the solid/liquid interface due to solute segregation. As solidification proceeded, the secondary β-IMCs started to nucleate and grow in the inter-dendritic spacing of the primary α-Al dendrites (Fig. 8-a3). The size of the secondary β-IMCs is constrained by the available spacing [20], and their growth orientation is strongly controlled by the temperature gradient. Under the magnetic field, the segregated heavy elements Cu and Fe initially also accumulated in the centre region of the sample to form a curved solid/liquid interface (Fig. 2-a2; shown schematically in Fig. 8-b1). A rotating magnetic field can induce a strong azimuthal flow arising from the convection damping effect [48,49]. Here, the sample was rotating under a transverse magnetic field. During solidification, the liquid flow was impeded due to the convection damping effect, while the solid α-Al dendrites were still rotating at an angular velocity of π rad/s. Due to the speed difference between the liquid and the solid α-Al dendrites, a rotational flow can be generated (red circle) in the mushy zone (Fig. 8-b1) in the direction of rotation.
Furthermore, due to the different electrical conductivities of the primary α-Al phase and the melt, current loops can be generated at the solid/liquid interface, as shown in Fig. 8-b1 (in green); this is known as the Seebeck effect [22,33,50]. The interaction between the Seebeck current loops and the rotating magnetic field can induce Lorentz forces acting on both the dendrites and the melt in the mushy zone, which can produce a meridional flow ahead of the solid/liquid interface [51]. Wang et al. [52] observed, using in-situ radiography, that a transverse magnetic field changed the solid/liquid interface in Al-Cu alloys from tilted to flat, and a corresponding 3D simulation confirmed that the change was caused by the induced thermoelectric magnetohydrodynamic (TEMHD) flows [53]. In their work the sample was static, so TEMHD alone was identified as the main mechanism changing the solid/liquid interface. In our previous study on Al-15 wt%Cu alloys, we observed the formation of a helical structure of the α-Al phase, in which the screw structure arose from a helical channel enriched with Cu solute [26], suggesting that rotation inside a magnetic field can induce a strong solute flow. In the present experiment the sample was also rotating inside a magnetic field. The combination of the two effects, convection damping and TEMHD, might homogenise the temperature distribution via solute mixing [26] and ease the macro-segregation, allowing more primary dendrites to grow into this region (Fig. 8-b2).
The solid/liquid interface thus transformed from curved to flat during solidification. As a result of these combined magnetohydrodynamic effects, the number of dendrites increased and the PDAS was reduced. A smaller PDAS leads to narrower liquid channels, which reduces the absolute permeability [21]. The refined primary dendrites also caused the IMCs to grow preferentially along the vertical direction of the sample. Finally, segregated solute was pushed above the solidified structures (Fig. 8-b3). This leads to a higher solute concentration in the top region of the sample, which promotes the formation of more β-Al₅FeSi IMCs there, resulting in a graded volume distribution of IMCs [51]. This study demonstrates that applying a static magnetic field to a rotating sample can not only reduce the PDAS but also alter the distribution of IMCs. It provides a cost-effective method for controlling iron-rich intermetallics in recycled aluminium alloys and thus improving mechanical properties and corrosion resistance.
Conclusions
High-speed synchrotron X-ray tomography was used to reveal the growth dynamics of both the primary α-Al phase and the secondary β-Al₅FeSi IMCs in W319 alloys directionally solidified without and with a transverse magnetic field. This work reveals that applying a transverse magnetic field during solidification can effectively change the morphology of the primary α-Al phase and subsequently alter the distribution of the secondary β-Al₅FeSi IMCs. The following conclusions can be drawn from this study:
1. Without the magnetic field, mapping the solute distribution revealed a high concentration of heavy elements such as Cu and Fe at the periphery of the sample. Under the magnetic field, this peripheral macro-segregation was removed. A concave solid/liquid interface was observed at the initial stage of solidification under both conditions; however, the interface transformed from curved to flat when the magnetic field was applied.
2. Without the magnetic field, the primary α-Al phase formed coarse, well-developed dendritic structures with a large primary dendrite arm spacing of about 435 µm. Under the magnetic field, the primary α-Al phase formed fine dendritic structures with a smaller arm spacing of about 140 µm.
3. Absolute permeabilities were obtained by image-based simulation on the tomographic data. The simulations show that β-Al₅FeSi IMCs reduce the absolute permeability of the semi-solid structure by blocking flow under both solidification conditions. In addition, under the applied magnetic field, the fine dendritic structures further reduce the permeability through the formation of narrower liquid channels.
4. Without the magnetic field, the volume fraction and growth rate of the β-Al₅FeSi IMCs are nearly the same in different regions of the sample. Under the magnetic field, however, the β-Al₅FeSi IMCs in the top region of the sample have higher growth rates and volume fractions than those at the bottom.
Emerging Technology (CiET1819/10). The authors gratefully acknowledge the European Synchrotron Radiation Facility for Beamtime MA2989 and the ID19 team and collaborators who helped with the experiments. | 5,616.8 | 2022-12-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Surface change assessment of Co-Cr alloy subjected to immersion in denture cleansers
Choosing the right chemical cleanser for removable partial dentures is a challenge, because they have both an acrylic and a metallic portion, which should be cleaned without being damaged. Aim: The aim of this study was to assess surface changes of cobalt-chromium alloys immersed in different cleanser solutions: 0.05% sodium hypochlorite, 4.2% acetic acid, 0.05% sodium salicylate, sodium perborate (Corega Tabs®) and 0.2% peracetic acid. Material and Methods: One hundred and twenty circular specimens (10 mm in diameter) of two commercially available Co-Cr alloys were tested: GM 800® (Dentaurum) and Co-Cr® (DeguDent). The samples were randomly divided into ten experimental groups (n=10), according to the alloy brand and the cleanser solution in which they were immersed, and two control groups, in which samples of the two alloys were immersed in distilled water. Evaluations were performed by roughness measurement (Surftest SJ 211 rugosimeter, Mitutoyo), visual evaluation with a stereomicroscope (Stereo Discovery 20, Carl Zeiss) and scanning electron microscopy (JSM-6360 SEM, JEOL), at experimental times T0 (before immersion), T1 (after one immersion) and T2 (after 90 immersions). Intergroup comparison of the effect of immersion in the different cleanser agents was evaluated by ANOVA/Tukey tests (p≤0.05). The effect of immersion time on each alloy was evaluated by the paired t-test (p≤0.05), and the two alloys were compared using Student's t-test. Results: The analysis of roughness and microscopy showed that surface changes were significantly greater in the groups submitted to 0.05% sodium hypochlorite after 90 immersions (T2). When the two alloys were compared, similar roughness behaviour was observed for the cleaning agents. However, alloy GM 800® showed a statistically significant difference in roughness variation between experimental times (Δ1 and Δ2) when immersed in 0.05% sodium hypochlorite. The number of exposures of the alloys to the cleaning agents had a negative influence only when sodium hypochlorite solution was used. Conclusions: It can be concluded that 0.05% sodium hypochlorite caused the greatest apparent damage to the alloy surface.
Introduction
Despite advances in materials and techniques in dental rehabilitation, removable partial dentures (RPD) remain an important tool for public health, because they are a less costly option 1 . Upon installation of the denture in the patient's oral cavity, it is the dentist's duty to instruct the patient about hygiene 2 in order to avoid the accumulation of biofilm, an etiological factor of oral diseases such as caries and stomatitis. Patients can make use of mechanical and chemical cleaning methods, and their association has been reported in the literature as the best choice 2-7 , especially for special-needs and geriatric patients, who find it difficult to brush their dentures properly 1 . Techniques and materials should be effective in cleaning and should not affect the components of the prosthesis.
Of the chemical cleansers used for full dentures, sodium hypochlorite solutions deserve special attention, since they degrade mucin and allow deeper removal of bacterial biofilm 2,8 . Solutions of sodium salicylate, sodium perborate, and peracetic acid are also used due to their antimicrobial potential [8][9][10] . A home-made option is vinegar (4.2% acetic acid), which is capable of reducing the number of bacteria on the surface 11 .
Removable partial dentures, however, present metallic components in their composition, normally a cobalt-chromium alloy 12 . Choosing the right cleanser is a challenge, because solutions containing hypochlorites can cause corrosion, staining, and even loss of physical properties 13,14 .
Thus, the aim of this study was to assess surface changes in cobalt chromium alloys subjected to immersion in different cleanser solutions: 0.05% sodium hypochlorite, 4.2% acetic acid, 0.05% sodium salicylate, sodium perborate (Corega Tabs®) and 0.2% peracetic acid.
Immersion in cleansers
The cleanser solutions were prepared and 15 ml were poured into test tubes (Pyrex No. 9820, Corning Inc., USA), in which the specimens of each group were fully immersed.
For preparation of the 0.05% sodium hypochlorite solution, 5 ml of 2.5% sodium hypochlorite solution (Q-Boa®, Anhembi S/A, Osasco, São Paulo, Brazil) was diluted in 200 ml of distilled water 6 . The immersion time for this solution was 10 min per cycle 15 . The 4.2% acetic acid solution consisted of pure white vinegar (WMS Supermercados do Brasil S/A, Porto Alegre, RS, Brazil); its immersion time was also 10 min 16 .
Effervescent sodium perborate solution was prepared by diluting one tablet of Corega Tabs® (Stafford-Miller Ind., Rio de Janeiro, RJ, Brazil) in 150 ml of water at 45°C, as recommended by the manufacturer. Its immersion time was 15 minutes 17 .
The 0.2% peracetic acid solution was prepared by diluting 13.4 ml of 15% peracetic acid (Sigmasul, Cachoeirinha, RS, Brazil) in one liter of distilled water. Immersion time for this solution was 15 minutes.
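The solution concentrations after dilution follow from a simple mass balance (C_stock·V_stock = C_final·V_total). The snippet below is a minimal check of that arithmetic, assuming additive volumes; it reproduces, for instance, the ~0.2% peracetic acid concentration from the stated stock volume.

```python
def diluted_concentration(stock_pct, stock_ml, diluent_ml):
    """Final concentration (%) after mixing a stock solution with a diluent,
    assuming volumes are additive: C_final = C_stock * V_stock / (V_stock + V_diluent)."""
    return stock_pct * stock_ml / (stock_ml + diluent_ml)

# 13.4 ml of 15% peracetic acid in one litre of distilled water -> about 0.2%
print(round(diluted_concentration(15.0, 13.4, 1000.0), 2))  # ~0.2
```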
Surface assessments were performed before immersion (T0), after one immersion (T1), and after 90 immersion cycles (T2), simulating the daily use of these solutions for three months. Between immersions, the samples were rinsed with a spray of distilled water and dried on an absorbent sheet, and the interval between consecutive immersions was only the time necessary to wash and dry the specimens.
Surface Roughness
Surface roughness was measured using a rugosimeter (Surftest SJ 211, Mitutoyo Corp., Kanagawa, Japan), with 6 readings (cut-off 0.25 mm) per specimen: 3 along the x-axis and 3 along the y-axis. The Ra parameter, which expresses the arithmetic mean of peaks and valleys, was assessed at all experimental times (T0, T1 and T2), using the centre of the sample.
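As a reference for the Ra parameter used here, Ra is the arithmetic mean of the absolute deviations of the profile from its mean line, and a per-specimen value can be taken as the average of the six readings. The sketch below only illustrates that definition on hypothetical profile data; it does not reproduce the instrument's filtering.

```python
import numpy as np

def ra(profile_um):
    """Arithmetic-mean roughness: mean absolute deviation from the mean line."""
    z = np.asarray(profile_um, dtype=float)
    return np.mean(np.abs(z - z.mean()))

def specimen_ra(readings):
    """Average Ra over the 6 readings (3 along x, 3 along y) of one specimen."""
    return float(np.mean([ra(r) for r in readings]))

# Hypothetical profiles (um) for one specimen:
readings = [np.random.normal(0.0, 0.1, 250) for _ in range(6)]
print(f"Ra = {specimen_ra(readings):.3f} um")
```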
Microscopic assessment
Two microscopic analyses were performed. Initially, a stereomicroscope with a magnification of 8.5x was used to assess the samples at the 3 experimental times. A surface damage index was created, where 0 indicates the absence of any sign of change; 1, loss of brightness and light surface deposition; 2, the occurrence of spots on more than two thirds of the specimen surface; and 3, total darkening of the specimen. After that, Scanning Electron Microscopy (SEM) was performed at a magnification of 500x, in order to view the topographic appearance of the alloy surfaces. With the use of X-ray energy dispersive spectroscopy (EDS), it was possible to determine which chemical elements were present on the surface.
Statistical analysis
Data were tabulated and statistically analyzed using SPSS (Statistical Package for Social Sciences, version 13.0). Normality was verified by the Shapiro-Wilk test. Surface roughness after the application of the cleaning protocols at times T1 and T2, as well as the differences obtained by subtracting the initial roughness (baseline) from the roughness after immersion at each time, were compared between the different experimental groups by analysis of variance and Tukey's multiple comparison test (p≤0.05). Roughness data after immersion in the cleanser solutions were compared to baseline by the paired t-test (p≤0.05). The two alloys were compared with respect to roughness under the various protocols using Student's t-test (p≤0.05).
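The parametric workflow described above (one-way ANOVA with Tukey's post-hoc test, paired t-test against baseline, and Student's t-test between alloys) can be sketched with standard Python statistics libraries. The data below are hypothetical placeholders, not the study's measurements, and SPSS was the tool actually used.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical Ra values (um) at T2 for three cleanser groups of one alloy
groups = {"hypochlorite": rng.normal(0.45, 0.05, 10),
          "peracetic": rng.normal(0.15, 0.05, 10),
          "water": rng.normal(0.15, 0.05, 10)}

# One-way ANOVA across groups, followed by Tukey's multiple comparisons
print(stats.f_oneway(*groups.values()))
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Paired t-test of one group against its baseline (T0) readings
baseline = rng.normal(0.15, 0.05, 10)
print(stats.ttest_rel(groups["hypochlorite"], baseline))

# Student's t-test comparing the two alloys under the same protocol
print(stats.ttest_ind(rng.normal(0.45, 0.05, 10), rng.normal(1.2, 0.2, 10)))
```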
The captured images of the sample surface changes were visually assessed twice under an optical stereomicroscope by one observer, yielding a Kappa coefficient of 0.87. The visual-change scores underwent rank transformation and were then compared between the different experimental groups by analysis of variance and Tukey's multiple comparison test (p≤0.05). The two alloys were compared with respect to the visual-change scores under the different protocols by the Mann-Whitney test (p≤0.05).
Surface Roughness
Results showed no statistically significant difference between the cleaning methods with respect to roughness after the first immersion (T1). After 90 immersions (T2), the mean Ra (µm) values in the groups submitted to 0.05% sodium hypochlorite were significantly higher (Table 1). The other cleansers did not change the surface roughness of the alloys over time (Table 1). When the two alloys were compared, similar roughness behaviour was found for the cleansers. However, alloy GM 800® showed a statistically significant difference between Δ1 and Δ2 when immersed in 0.05% sodium hypochlorite (Table 2).
Microscopic Assessment
After one immersion (T1), no clear visual change was noted on the surface of any group. However, after 90 immersions (T2), the two alloys submitted to 0.05% sodium hypochlorite presented the highest scores, indicating the greatest changes (Table 3). Further analysis by SEM revealed the occurrence of sharpening, suggesting a slight texturing of the surface of the Co-Cr® alloy after the first immersion; after the ninetieth immersion, widespread surface change was noticed, with protruding clusters and occasional depressions (Figure 1). For the GM 800® alloy, minimal superficial change was noticed after the first immersion, whereas after the ninetieth immersion the image shows more abrupt changes in the structure of the sample (Figure 2). EDS surface analysis of both alloys immersed in 0.05% sodium hypochlorite showed the presence of oxygen and chlorine, which indicates corrosion. Iron and tungsten, not reported by the manufacturer, were also found in the composition of alloy GM 800®. In the figures, the stereomicroscope image is shown above (magnification 8.5x) and the SEM images below (magnification 500x); T1 shows little change in surface brightness (score 1), while at T2 darkening of the surface and abrupt changes in relief are observed.
Discussion
When choosing a metal to face different challenges in hostile environments, its corrosion behavior is the most important factor to be considered 18 . Thus, this work compared the Co-Cr® and GM 800® cobalt-chromium alloys after immersion in 5 cleansers.
Regarding the cleanser comparison, the 0.05% sodium hypochlorite solution caused the most obvious changes to the alloys, generating higher roughness values and higher scores in the stereomicroscope analysis. Roughness analysis after immersion of the alloys in 4.2% acetic acid, 0.05% sodium salicylate, sodium perborate (Corega Tabs®) and 0.2% peracetic acid showed no statistically significant difference between experimental periods, with no increase in roughness over time. It was also observed that the water-immersed alloys (control) had scores of 0 and 1, with the slight loss of brightness (score 1) visible only under the microscope and not to the naked eye; these were therefore considered as having no damage. Thus, 4.2% acetic acid, 0.05% sodium salicylate, sodium perborate (Corega Tabs®) and 0.2% peracetic acid did not cause visible damage to the alloys at the different experimental times.
When the alloys were compared with regard to roughness, the nominal Ra (µm) values were higher for GM 800®, but without statistically significant difference. However, a statistically significant difference was found for Δ1 and Δ2 for this alloy. Under the stereomicroscope, clearer changes were observed for alloy GM 800® after 90 immersions (T2), and the SEM evaluation confirmed the greatest surface changes for this alloy at T2. The Co-Cr® alloy showed superficial changes similar to those produced when a superficial electrochemical attack with acid solution is performed for metallographic analysis 19 , with protruding beads visible on the surface of the alloy. The GM 800® alloy showed a greater degree of change, with images suggestive of detachment of surface oxidation plates. It is believed that the difference observed between the two alloys at T2 may be related to the fact that GM 800® contained iron and tungsten, identified by EDS, since the presence of other metals in the alloy can modify its corrosion resistance and increase the etching rate 20 .
It is believed that the surface roughness of an alloy reaches clinical significance at 0.2 µm, above which it favors the adhesion of biofilm; values higher than this are therefore not clinically acceptable 21 . In this study, the two alloys exceeded this cut-off point after 90 immersions in 0.05% sodium hypochlorite (Co-Cr® = 0.446 µm; GM 800® = 1.202 µm), which suggests that 0.05% sodium hypochlorite may damage the Co-Cr alloys used in RPD, in agreement with the literature 2,13,14,22 . Although sodium hypochlorite has fungicidal 8,16,23 and bactericidal effects, and is able to penetrate up to 3 mm into the resin, eliminating not only surface bacteria but also those in depth when allowed to act for ten minutes at a concentration of 0.525% 15 , its use on RPD should be cautious due to the deleterious effects on the metal framework. Recent studies have demonstrated the damaging effect of sodium hypochlorite on Co-Cr alloys through weight and ion loss 22 , and through reductions in the modulus of elasticity and ultimate strength; in the latter study, however, the bending property was still found to satisfy ADA specification No. 14 24 .
With respect to the number of immersions over time, only the groups exposed to 0.05% sodium hypochlorite showed obvious changes beyond the first immersion. Comparing the SEM evaluation after the first exposure to hypochlorite (T1) with the evaluation after 90 exposures (T2), it is clear that there was real deterioration of the surface of the two alloys, which was greater for the GM 800® alloy. The visual evaluation showed scores of 2 and 3 after 90 immersion cycles, while after the first immersion the score was 0, in agreement with the results of previous studies 25,26 .
With the exception of the groups submitted to 0.05% sodium hypochlorite solution, no surface damage to the alloys occurred. Therefore, removable partial denture cleaning can be performed with most of the solutions used in this study. However, further studies are needed to evaluate the mechanical properties of the alloys, as well as larger numbers of immersions.
Conclusion
The 0.05% sodium hypochlorite solution produced significant surface changes suggestive of corrosion, while the other solutions did not present such deleterious effects. Both alloys showed similar surface changes after 90 immersion cycles for the different cleansers. Increased contact with the cleansers caused greater surface changes on the alloys only when the 0.05% sodium hypochlorite solution was used.
"Medicine",
"Materials Science"
] |
Search for weakly decaying $\bar{\Lambda\mathrm{n}}$ and $\Lambda\Lambda $ exotic bound states in central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
We present results of a search for two hypothetical strange dibaryon states, i.e. the H-dibaryon and the possible $\bar{\Lambda\mathrm{n}}$ bound state. The search is performed with the ALICE detector in central (0-10%) Pb-Pb collisions at $ \sqrt{s_{\rm{NN}}} = 2.76$ TeV, by invariant mass analysis in the decay modes $\bar{\Lambda\mathrm{n}} \rightarrow \bar{\mathrm{d}} \pi^{+} $ and H-dibaryon $\rightarrow \Lambda \mathrm{p} \pi^{-}$. No evidence for these bound states is observed. Upper limits are determined at 99% confidence level for a wide range of lifetimes and for the full range of branching ratios. The results are compared to thermal, coalescence and hybrid UrQMD model expectations, which describe correctly the production of other loosely bound states, like the deuteron and the hypertriton.
Introduction
Particle production in Pb-Pb collisions at the Large Hadron Collider (LHC) has been extensively studied [1][2][3].The observed production pattern is rather well described in equilibrium thermal models [4][5][6][7].Within this approach, the chemical freeze-out temperature T chem , the volume V and the baryo-chemical potential µ B are the only three free parameters.Even loosely bound states such as the deuteron and hypertriton and their anti-particles have been observed [8][9][10] and their rapidity densities are properly described [11][12][13][14][15][16][17].Consequently other loosely bound states 1 such as the H-dibaryon and the Λn are expected to be produced with corresponding yields.
The discovery of the H-dibaryon or the Λn bound state would be a breakthrough in hadron spectroscopy as it would imply the existence of a six-quark state and provide crucial information on the Λ-nucleon and Λ-Λ interaction.We consequently have started the investigation on the possible existence of such exotic bound states in pp and Pb-Pb collisions at the LHC.Searches for Λ-nucleon bound states in the Λp and Λn channels have been carried out (see references [18][19][20]).The H-dibaryon, which is a hypothetical bound state of uuddss (ΛΛ), was first predicted by Jaffe using a bag model approach [21].Experimental searches have been undertaken since then, but no evidence for a signal was found (see [22,23] and references therein).Recently, the STAR collaboration investigated the Λ-Λ interaction through the measurement of ΛΛ correlations [24]; this and a theoretical analysis of these data [25] did not reveal a signal.Many theoretical investigations of the possible stability of the H-dibaryon have been carried out, but predicting binding energies in the order of MeV for masses of around 2 GeV/c 2 is extremely difficult and challenging [26][27][28][29].
Our approach is to search for such bound states in central Pb-Pb collisions at LHC energies where rapidity densities can be well predicted by thermal [16,17,30] and coalescence [31] models.The model predictions for rapidity densities of these particles are used and tested against the experimental results.
In this paper the analysis strategies for the searches for the Λn → dπ + bound state and the H-dibaryon → Λpπ − are presented. The analysis focuses on the Λn bound state because production of anti-particles in the detector material is strongly suppressed, and thus secondary contamination of the signal is reduced. For the H-dibaryon, both the Λ and the p originate from secondary vertices, where knock-out background is less likely. No search for the anti-H is performed yet; although it is assumed to be produced with an equal yield, its measurement depends strongly on the absorption correction. We begin with a short introduction to the ALICE detector and a description of the particle identification technique used to identify the decay daughters and reconstruct invariant mass distributions. To assess the possible existence of these states, we compare the experimental distributions with the model predictions.
Detector setup and data sample
The ALICE detector [32] is specifically designed to study heavy-ion collisions.The central barrel comprising the two main tracking detectors, the Inner Tracking System (ITS) [33] and the Time Projection Chamber (TPC) [34] is housed in a large solenoidal magnet providing a 0.5 T field.The detector pseudorapidity coverage is |η| ≤ 0.9 over the full azimuth.An additional part of the central barrel are detectors in forward direction used mainly for triggering and centrality selection.The VZERO detectors, two scintillation hodoscopes, are placed on either side of the interaction point and cover the pseudorapidity regions of 2.8 < η < 5.1 and −3.7 < η < −1.7.The centrality selection is based on the sum of the amplitudes measured in both detectors as described in [35] and [36].The ITS consists of six cylindrical layers of three different types of silicon detectors.The innermost part comprises two silicon pixel (SPD) and two silicon drift detector (SDD) layers.The two outer layers are double-sided silicon microstrip detectors (SSD).Due to the precise space points provided by the ITS a high precision determination of the collision vertex is possible.Therefore, primary and secondary particles can be well separated, down to 100 µm precision at low transverse momentum (p T ≈ 100 MeV/c).The TPC is the main tracking detector of ALICE and surrounds the ITS.It has a cylindrical design with a diameter of ≈ 550 cm, an inner radius of 85 cm, an outer radius of 247 cm and an overall length in the beam direction of ≈ 510 cm.The 88 m 3 gas volume of the TPC is filled with a mixture of 85.7% Ne, 9.5% CO 2 and 4.8% N 2 .When a charged particle is travelling through the TPC, it ionizes the gas along its path and electrons are released.Due to the uniform electric field along the z-axis (parallel to the beam axis and to the magnetic field) the electrons drift towards the end plates, where the electric signals are amplified and detected in 557568 pads.These data are used to calculate a particle trajectory in the magnetic field and thus determine the track rigidity p z (the momentum p of the particle divided by its charge number z).The TPC is also used for particle identification via the energy deposit dE/dx measurement (see section 3).A complete description of the performance of the ALICE sub-detectors in pp, p-Pb and Pb-Pb collisions can be found in [37].
The searches carried out and reported here are performed by analysing the data set of Pb-Pb collisions from 2011.In the described analyses we use 19.3 × 10 6 events with a centrality of 0-10%, determined by the aforementioned VZERO detectors from the previously mentioned campaign.
Particle identification
The precise Particle IDentification (PID) and continuous tracking from very low pT (100 MeV/c) to moderately high pT (20 GeV/c) is a unique feature of the ALICE detector at the LHC. The PID used in the analysis described in this letter takes advantage of two different techniques. The energy deposit (dE/dx) and rigidity are measured with the TPC for each reconstructed charged-particle trajectory. This allows the identification of all charged stable particles, from the lightest (electron) to the heaviest ones (anti-alpha). The energy-deposit resolution of the TPC in central Pb-Pb collisions (investigated here) is around 7%. The corresponding particle separation power is demonstrated in Fig. 1. This technique was used in the following to identify the deuterons, protons and pions. The second method makes use of specific topologies from weak decays, which result in typical V0 decay patterns. This is used here for the detection of the Λn bound state and the two V0 decay patterns of the ΛΛ, namely for the Λ identification and the proton-pion decay vertex.
Analysis
The strategies of investigation for the two exotic bound states discussed here are quite similar.They both require the detection of a secondary vertex, which in one case is a pure V 0 and in the second a double V 0 decay pattern.We discuss them separately in the following sub-sections.First we describe briefly the common aspects of both analyses.The tracks used in the analyses have to fulfil a set of selection criteria to ensure high tracking efficiency and dE/dx resolution.Each track was required to have at least 70 of up to 159 clusters in the TPC attached to it, with the (rather loose) requirement, that the χ 2 of the momentum fit is smaller than 5 per cluster.Tracks with kinks due to weak decays of kaons and pions are rejected.To achieve final precision the accepted tracks are refit while the track finding algorithm is run inwards, outwards and inwards again (for more details on the ALICE tracking see [37] and section 5 of [41]).V 0 decays are determined by two (or more) tracks which are emitted from a secondary vertex and which might come close to each other (the minimum distance is called Distance-of-Closest-Approach DCA) while each of the tracks has a certain minimum distance (DCA of the track to a vertex) to the primary vertex.A powerful selection criterion for detecting proper V 0 candidates is the restriction of the pointing angle, namely the angle between the reconstructed flight-line and the reconstructed momentum of the V 0 particle.More details of the secondary vertex reconstruction can be found in [3,37,41], where also the clear and effective identification of Λ baryons is displayed using the aforementioned technique.The selection criteria, described below, are optimised using a Monte Carlo set where the simulated exotic bound states are assumed to live as long as a free Λ baryon.This is a reasonable assumption for all strange dibaryons, which are expected to live around 2-4×10 −10 s [42][43][44] in the regions of binding energies investigated here.
Λn bound state
In analogy to recent hypertriton measurements [8,9], we focus here on the expected two-body decay Λn → dπ + . For the data analysis the following strategy is used: first, displaced vertices are identified using ITS and TPC information. In a second step, the negative track of the V0 candidate is identified as an anti-deuteron via the TPC dE/dx information. If the second daughter is identified as a pion, the invariant mass of the pair is reconstructed. Both particles are required to lie within a band of 3 standard deviations (σ) around the expected Bethe-Bloch lines of the corresponding particle species. To identify the secondary vertex, the two daughter tracks have to have a DCA smaller than 0.3 cm. Another condition is that the maximum pointing angle is smaller than 0.045 rad (see description above). Deuterons are cleanly identified in the rigidity region from 400 MeV/c to 1.75 GeV/c. To limit contamination from other particle species, the dE/dx has to be above 110 units of the TPC signal, as shown in Fig. 1. The selection criteria are summarised in Table 1. The resulting invariant mass distribution, reflecting the kinematic range of the identified daughter tracks, is displayed in Fig. 2.
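The pair invariant mass used in this search follows from relativistic kinematics, m_inv² = (E_d + E_π)² − |p_d + p_π|². A minimal sketch of that reconstruction is shown below; the track momenta are hypothetical and PDG masses are used for the daughters.

```python
import numpy as np

M_DEUTERON = 1.875613  # GeV/c^2
M_PION = 0.139570      # GeV/c^2

def invariant_mass(p1, m1, p2, m2):
    """Invariant mass of a two-track candidate from the daughters' 3-momenta (GeV/c)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.sqrt(p1 @ p1 + m1**2)
    e2 = np.sqrt(p2 @ p2 + m2**2)
    ptot = p1 + p2
    return np.sqrt((e1 + e2)**2 - ptot @ ptot)

# Hypothetical daughter momenta (GeV/c) for one anti-deuteron + pi+ candidate
print(invariant_mass([0.9, 0.2, 0.1], M_DEUTERON, [0.15, -0.05, 0.02], M_PION))
```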
H-dibaryon
The search for the H-dibaryon is performed in the decay channel H → Λpπ − , with a mass lying in the range 2.200 GeV/c 2 < m H < 2.231 GeV/c 2 (see Fig. 3 below).The analysis strategy for the H-dibaryon is similar as for the Λn bound state described above, except that here a second V 0 -type decay particle is involved.One V 0 candidate originating from the H-dibaryon decay vertex has to be identified as a Λ decaying into a proton and a pion.In addition another V 0 decay pattern reconstructed from a proton and a pion is required to be found at the decay vertex of the H-dibaryon.First the invariant mass of the Λ is reconstructed and then the candidates in the invariant mass window of 1.111 GeV/c 2 < m Λ < 1.120 GeV/c 2 are combined with the four-vectors of the proton and pion at the decay vertex.A 3σ dE/dx cut in the TPC is used to identify the protons and the pions for both the Λ candidate and the V 0 topology at the H-dibaryon decay vertex.
To cope with the large background caused by primary and secondary pions, additional selection criteria have to be applied. Each track is required to be at least 2 cm away from the primary vertex, and the tracks combined into a V0 are required to have a minimum distance below 1 cm. The pointing angle is required to be below 0.05 rad. All selection criteria are summarised in Table 2. The resulting invariant mass distribution is shown in Fig. 3. The shape of the invariant mass distribution is caused by the kinematic range of the identified daughter tracks.
Systematics and absorption correction
Monte Carlo samples have been produced to estimate the efficiency for the detection of the Λn bound state and the H-dibaryon.The kinematical distributions of the hypothetical bound states were generated uniformly in rapidity y and in transverse momentum p T .In order to deal with the unknown lifetime, different decay lengths are investigated, ranging from 4 cm up to 3 m.The lower limit is determined by the secondary vertex finding efficiency and the upper limit by the requirement that there is a significant probability for decays inside the TPC 2 (the final acceptance×efficiency drops down to 1% for the Λn and 10 −3 for the H-dibaryon).The shape of transverse momentum spectra in heavy-ion collisions is described well by the blast-wave approach, with radial flow parameter β and kinetic freeze-out temperature T kin as in [46].The true shape of the p T spectrum is also not known, therefore it is estimated from the extrapolation of blast-wave fits to deuterons and 3 He spectra at the same energy [10].To obtain final efficiencies, the resulting blast-wave distributions constructed for the exotic bound states are normalised to unity and convoluted with the correction factors (efficiency × acceptance).
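The final correction factor is obtained by weighting the pT-dependent acceptance×efficiency with the (unit-normalised) blast-wave spectrum assumed for the bound state. A minimal sketch of that convolution, with hypothetical binned inputs, is given below.

```python
import numpy as np

def overall_efficiency(spectrum_counts, acc_times_eff):
    """Spectrum-weighted average of acceptance x efficiency over pT bins.
    The assumed pT spectrum is first normalised to unity."""
    w = np.asarray(spectrum_counts, float)
    w = w / w.sum()
    return float(np.sum(w * np.asarray(acc_times_eff, float)))

# Hypothetical blast-wave weights and acceptance x efficiency per pT bin
spectrum = [0.5, 1.0, 0.8, 0.4, 0.1]
acc_eff = [0.01, 0.03, 0.04, 0.05, 0.05]
print(overall_efficiency(spectrum, acc_eff))
```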
Typical values of the final efficiency are of the order of a few percent, assuming the lifetime of the free Λ. The uncertainty in the shape of the pT distributions is the main source of systematic error. Blast-wave fits of deuteron and ³He spectra are employed to explore the range of systematic uncertainties. Analyses of these results lead to a systematic uncertainty in the overall yield of around 25%.
Other systematic uncertainties are estimated by varying the cuts described in Table 1 and Table 2 within the limits consistent with the detector resolution.The contributions of these systematic uncertainties are typically found to be in the percent range.The combination of the different sources leads to a global systematic uncertainty of around 30% for both analyses, when all uncertainties are added in quadrature.
For the Λn bound state analysis the possible absorption of the anti-deuterons and the bound state itself when crossing material has to be taken into account.For this, the same procedure as used for the antihypertriton analysis [9] is utilised.The absorption correction ranges from 3 to 40% (depending on the lifetime of the Λn bound state, which determines the amount of material crossed) with an overall uncertainty of 7%.
Results
No significant signal in the invariant mass distributions has been observed in either case, as visible from Fig. 2 and Fig. 3. The shape of the invariant mass distribution of dπ + is of purely kinematic origin, reflecting the momentum distribution of the particles used. The selection criteria listed in Table 1 are tuned to select secondary decays. The secondary anti-deuterons involved in the analysis originate mainly from two sources. The first and dominant source are daughters from three-body decays of the anti-hypertriton ( 3 ΛH → dpπ + and 3 ΛH → dnπ 0 ) where the other decay daughters are not detected; the invariant mass spectrum is obtained by combining these anti-deuterons with pions generated in the collision. The second source is due to prompt anti-deuterons which are incorrectly labelled as displaced, because they have such low momenta that the DCA resolution of these tracks is not sufficient to separate primary from secondary particles.
Since no signal is observed in the invariant mass distributions, upper limits are estimated. For the estimation of upper limits on the rapidity density dN/dy, the method discussed in [47] is utilised; in particular, we apply the software package TRolke as implemented in ROOT [48]. This method needs as input the mass and experimental width (3σ) of the hypothetical bound states. The observed counts are therefore compared to a smooth background given by an exponential fit outside the signal region (as indicated by the lines in Fig. 2 and Fig. 3). For both candidates, Λn and H-dibaryon, we assume a binding energy of 1 MeV. The width is determined by the experimental resolution and obtained from Monte Carlo simulations. In addition, the final efficiency discussed in section 5 is required. Further, values of the branching ratios of the assumed bound states are needed; these depend strongly on the binding energy. With a 1 MeV binding energy for the Λn bound state, the branching ratio for the d + π + decay channel is expected to be 54% [49]. The branching ratio for an H-dibaryon bound by 1 MeV or less decaying into Λpπ − is predicted to be 64% [44].
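Once the Rolke procedure returns an upper limit on the number of signal counts, it is converted into an upper limit on dN/dy by dividing by the number of analysed events, the overall efficiency, the branching ratio and the rapidity window. The sketch below shows this arithmetic only; the signal-count limit and rapidity window are hypothetical placeholders, not the values obtained in the analysis.

```python
def dndy_upper_limit(n_signal_ul, n_events, efficiency, branching_ratio, delta_y=1.0):
    """Convert a signal-count upper limit into a rapidity-density upper limit."""
    return n_signal_ul / (n_events * efficiency * branching_ratio * delta_y)

# Hypothetical inputs: 30 counts (99% CL), 19.3e6 events, 2% efficiency, BR = 54%
print(dndy_upper_limit(n_signal_ul=30, n_events=19.3e6, efficiency=0.02,
                       branching_ratio=0.54))
```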
The resulting upper limits, for 99% CL, are shown in Fig. 4 as a function of the different lifetimes; for the Λn bound state in the upper panel and for the H-dibaryon in the lower panel.These upper limits include systematic uncertainties.For the Λn the absorption corrections are also considered in the figure, which causes the upper limits to be shifted upwards.
The obtained upper limits can now be compared to model predictions.The rapidity densities dN/dy from a thermal model prediction for a chemical freeze-out temperature of, for example, 156 MeV, are dN/dy = 4.06 × 10 −2 for the Λn bound state and dN/dy = 6.03 × 10 −3 for the H-dibaryon [16].These values are indicated with the (blue) dashed lines in Fig. 4. For the investigated range of lifetimes the upper limit of the Λn bound state is at least a factor 20 below this prediction.For the H-dibaryon the upper limits depend more strongly on the lifetime since it has a different decay topology and all four final state tracks have to be reconstructed.The upper limit is a factor of 20 below the thermal model prediction for the lifetime of the free Λ and becomes less stringent at higher lifetimes since the detection efficiency becomes small.For a lifetime of 10 −8 s, corresponding to a decay length of 3 m, the difference between model and upper limit reduces to a factor two.
In order to take the uncertainties in the branching ratio into account, we plot in Fig. 5 the products of the upper limit of the rapidity density times the branching ratio together with several theory predictions [16,30,31,50].The curves are obtained using the value for the Λ-lifetime of Fig. 4.
The (red) arrows in the figures indicate the branching ratio from the theory predictions [44,49].The obtained upper limits are a factor of more than 5 below all theory predictions for a branching ratio of at least 5% for the Λn bound state and at least 20% for the H-dibaryon.
Discussion
The limits obtained on the rapidity density of the investigated exotic compound objects are found to be more than one order of magnitude below the expectations of particle production models, when a realistic branching ratio and a reasonable lifetime are used. It has to be noted that, simultaneously, a clear signal was observed for the very loosely bound hypertriton (binding energy < 150 keV), for which production yields have been measured [9]. These yields, along with those of nuclei with A = 2, 3, 4, agree well with the predictions of the thermal model discussed above and decrease by roughly a factor 300 with each additional baryon. One would therefore expect that the yield of the Λn, if such a particle existed, should also be predicted by this model, with a rapidity density about a factor 300 higher than the measured hypertriton yield. Similar considerations hold for the H-dibaryon.
Fig. 5: Experimentally determined upper limits, under the assumption of the lifetime of a free Λ, shown for the Λn bound state in the upper panel and for the H-dibaryon in the lower panel. They include a 30% systematic uncertainty for each particle and, for the Λn bound state, a 6% correction for absorption with an uncertainty of 7%. The theory lines are drawn for different theoretical branching ratios (BR): in blue the equilibrium thermal model from [16] for two temperatures (164 MeV, full line, and 156 MeV, dashed line), in green the non-equilibrium thermal model from [30], and in yellow the predictions of a hybrid UrQMD calculation [50]. The H-dibaryon is also compared with predictions from coalescence models, where the full red line shows the prediction assuming quark coalescence and the dashed red line corresponds to hadron coalescence [31].
Conclusion
A search is reported for the existence of the loosely bound strange dibaryons ΛΛ and Λn, whose possible existence has been discussed widely in the literature. No signals are observed. On the other hand, loosely bound objects with baryon number A = 3, such as the hypertriton, have been measured in the same data sample. The yields of nuclei [10] and of the hypertriton [9] are quantitatively understood within a thermal model calculation. The present analysis provides stringent upper limits at 99% confidence level for the production of the H-dibaryon and the Λn bound state, in general significantly below the thermal model predictions. The upper limits are obtained for different lifetimes. The values are well below the model predictions when realistic branching ratios and reasonable lifetimes are assumed. Thus, our results do not support the existence of the H-dibaryon or the Λn bound state.
Fig. 1:
Fig. 1: TPC dE/dx spectrum for negative particles in a sample of three different trigger types (minimum bias, semi-central and central). The dashed lines are parametrisations of the Bethe-Bloch formula [38][39][40] for the different particle species.
Fig. 2:
Fig. 2: Invariant mass distribution for dπ + for the Pb-Pb data corresponding to 19.3 × 10⁶ central events. The arrow indicates the sum of the masses of the constituents (Λn) of the assumed bound state. A signal for the bound state is expected in the region below this sum. The dashed line represents an exponential fit outside the expected signal region to estimate the background.
Fig. 3:
Fig. 3: Invariant mass distribution for Λpπ − for the Pb-Pb data corresponding to 19.3 × 10⁶ central events. The left arrow indicates the sum of the masses of the constituents (ΛΛ) of the possible bound state. A signal for the bound state is expected in the region below this sum. For the speculated resonant state, a signal is expected between the ΛΛ and the Ξp (indicated by the right arrow) thresholds. The dashed line is an exponential fit to estimate the background.
Fig. 4:
Fig. 4: Upper limit of the rapidity density as a function of the decay length, shown for the Λn bound state in the upper panel and for the H-dibaryon in the lower panel. A branching ratio of 64% was used for the H-dibaryon and 54% for the Λn bound state. The horizontal (dashed) lines indicate the expectation of the thermal model with a temperature of 156 MeV. The vertical line shows the lifetime of the free Λ baryon.
Table 1:
Selection criteria for Λn analysis. | 5,299 | 2015-06-24T00:00:00.000 | [
"Physics"
] |
Nanoparticle-doped electrospun fiber random lasers with spatially extended light modes
Complex assemblies of light-emitting polymer nanofibers with molecular materials exhibiting optical gain can lead to important advances in amorphous photonics and in random laser science and devices. In disordered mats of nanofibers, multiple scattering and waveguiding might interplay to determine localization or spreading of optical modes as well as correlation effects. Here we study electrospun fibers embedding a lasing fluorene-carbazole-fluorene molecule and doped with titania nanoparticles, which exhibit random lasing with sub-nm spectral width and a threshold of about 9 mJ cm^-2 for the absorbed excitation fluence. We focus on the spatial and spectral behavior of optical modes in the disordered and non-woven networks, finding evidence for the presence of modes with very large spatial extent, up to the 100 micrometer scale. These findings suggest emission coupling into integrated nanofiber transmission channels as an effective mechanism for enhancing spectral selectivity in random lasers and correlations of light modes in the complex and disordered material.
Introduction
Random lasers are complex photonic devices which rely on multiple scattering of light in microscopically non-homogeneous media with optical gain [1]. When relevant interference effects are present for the scattered light, some degree of spatial localization can be predicted for optical modes, which reduces inter-mode coupling [2]. These phenomena have also been associated to the presence of very narrow (full width at half maximum, FWHM, down to sub-nm) lasing peaks [3]. Such sharp resonances are largely independent oscillations, typically observed in various systems which spontaneously embed random cavities, such as semiconductor powders [3,4], clusters of colloidal nanoparticles [2], and conjugated polymer films [5]. Methods to achieve so-called resonant feedback random lasing from independent modes include using excitation beams with high directionality, and designing weakly diffusive media as widely studied with solutions of lasing dyes with titania scatterers [2,6]. These aspects have motivated a debate about how much the modes of random lasers spread in the disordered medium, with various possible configurations proposed, ranging from highly localized states with exponentially decaying amplitude to extended modes related to scattering resonances [3,4,[7][8][9].
Correlated emission and non-locality may also arise in random lasers, associated with mode competition and interplay through open cavities leading to sufficiently extended and eventually overlapping light states [2,6,8,10]. In this case, the propagation of light is more likely to be describable as a diffusive process [6,11], with a characteristic photon transport mean free path, in analogy with random walking particles. Mode interaction can then proceed until only cooperative modes survive and condense into a single wavefunction. Such a mode-locking regime is promising for applying these devices to several types of photonic chips, including platforms with all-optical control [2,10].
Random lasing has been achieved in a wide range of three-dimensional (3D) systems, largely using inorganic nanostructures and powders (e.g., ZnO [12,13], GaN [12], BaSO 4 [14] etc.), photonic glasses [15], or dispersions of elastic scatterers in solutions with lasing dyes [2,6,8,10]. Organic crystals [16,17] and epitaxial nanowires [18], biopolymers [19,20], as well as conjugated polymers [5,21], have also been found to efficiently work as random lasing materials. Affordable, large scale and easy processing procedures make organic building blocks, and their nanocomposites at the solid-state [13,22], especially sound for building low-cost and versatile devices, whose possible applications might span from speckle-free imaging schemes [23] to thermal [1], bio-chemical [24], and biological [25] sensing and diagnostics. In addition, some organic nanostructured systems, such as light-emitting polymer nanofibers [26], would allow unexplored aspects of mode condensation to be probed, since in quasi-one dimensional organic filaments a guided way of propagation for the lasing modes is added to diffusion and scattering.
In this work we analyze the spatial mode behavior in random lasing from disordered, non-woven networks of nanoparticle-doped polymer fibers based on a molecular active material with optical gain. This system undergoes lasing with very narrow emission peaks as typical of resonant feedback, and simultaneous presence of spatially extended modes, whose spreading might be supported through emitting regions connected by polymer nanofibers. Detailed space/spectrum cross-correlation Electrospinning is performed in air at ambient conditions, and fibers are collected on quartz substrates placed on top of the metal collector surface. The setup used for SSCC is schematized in Fig. 1. A 250 µm spot from the Nd:YAG laser is obtained by an iris pin-hole, a subsequent 3.3× expansion with a telescope system and a focalization onto fibers (L1 lens in Fig. 1). Part of the excitation is redirected (BS1) and monitored in real-time by a pyroelectric detector (J3-09, Coherent). The radiation emitted by fibers is imaged onto the entrance slit of the monochromator with a 5× magnification through an objective lens (L2) and a focusing lens (L3). The collected radiation is imaged by the Peltier-cooled CCD with a 1024×256 array of 26×26 µm 2 , vertically-binned pixels allowing emission spectra to be collected for each coordinate along the direction defined by slit long axis. This leads to a resolution on the sample surface, parallel to the Y-direction, of about 10 µm (Y S coordinate, calculated by taking into account the magnification applied). Moving the L3 lens along the X direction perpendicular to the direction of the laser beam propagation and to the slit long axis (Y), with steps of 100 µm through a translation stage, leads to a resolution of 20 µm for the corresponding coordinate at the sample surface (X S ). The whole excitation field is reconstructed by the composition of 15 maps which are then shown by using contour lines as color-fill boundary. Image processing involves calculating a set of values by interpolating the slope (steepness) intensity between two adjacent pixels, which is in turn defined as the ratio of rise (difference in the intensity values) and run (pixel width). Spatial information is simultaneously acquired with far-field fluorescence imaging. To this aim a beam splitter (BS2 in Fig. 1) sends a part of the emitted radiation to a 405 nm longpass filter
Results and Discussion
Our devices are built from disordered samples of electrospun PS fibers doped with Fl-Cz-Fl. The molecular structure of Fl-Cz-Fl is shown in Fig. 2(a). Fl-Cz-Fl dispersed in a PS matrix features a broad absorption band at 3.54 eV, a fluorescence spectrum with clearly resolved vibronic bands from 3.12 eV to 2.77 eV, and a correspondingly high quantum yield of 0.86 [28]. In the nanocomposite mat, Fl-Cz-Fl serves as the gain material, whereas additional scattering is provided by doping with TiO₂ nanoparticles. Electrified Fl-Cz-Fl/PS solution jets biased at 13 kV are used to deposit fibers on quartz substrates, which are in turn placed onto the metal collector.
The resulting morphology of a network of randomly oriented Fl-Cz-Fl/TiO₂-doped PS fibers is shown in Fig. 2. The fibers have typical diameters ranging from 2 to 5 µm and form a dense non-woven mat. In the dark-field micrograph in Fig. 2(b), with illumination from the top fibrous surface and the signal collected in back-scattered configuration, the different focusing conditions highlight the 3D character of the network, and light scattering can be appreciated from the bright spots decorating the fibers along their length, which is given by the incorporated particles. These are better inspected by STEM, unveiling clusters distributed in the organic filaments with mutual distances from a few to many tens of µm, as displayed in Fig. 2(c). Whereas lasing from cylindrical or ring-shaped cavities formed during fiber deposition is characterized by well-defined and stable spectral features determined by the cavity geometry (i.e., fiber segment length or ring perimeter), here the overall number and intensity of the lasing modes is found to change from shot to shot, as displayed in Fig. 3 and in Fig. 4(a). In particular, in Fig. 3 we show two pairs of single-shot emission spectra obtained with excitation fluences of 17 mJ cm⁻² and 42 mJ cm⁻², respectively. The inter-mode shot-to-shot variability in the emission is not related to instabilities in the excitation fluence.
Indeed, fluctuations of the pumping laser are below 2% (evaluated as the ratio between standard deviation and average value) over the time interval typical of the spectroscopic measurements (a few minutes), whereas the measured shot-to-shot intensity variations of the fiber emission are in the range 5-20%, similarly to other random laser systems [31]. Such fluctuations have been observed in various solid-state systems, both in resonant-feedback [4,16,32] and in intensity-feedback random lasers [13]. A broader and less spiky spectrum is found upon increasing the excitation area, attributable to a larger number of activated lasing modes. In fact, controlling the spatial shape of the excitation beam constitutes an effective method for tailoring the spectral features of the random lasing emission, and for eventually selecting specific modes by active control [33].
The spatial distribution of modes, and the SSCC [5], is studied in our work by recording the dispersed spectrum with a CCD array, with a resolution on the sample emission plane of ~10 µm/pixel, in binned configuration along the detector vertical direction (Y, parallel to the slit long axis, as detailed in the Experimental Section). A typical map of the spectrally- and Y_S-resolved emission is given in Fig. 5(a), showing that most of the emission comes from the inner part of the imaged region (Y_S ≈ 310 ± 50 µm in Fig. 5(a)), and that brighter lasing spots are appreciable in the intensity map (e.g., at Y_S ≈ 290 µm and 340 µm). The overall spectrum from the excited nanofibers is obtained by the line-to-line sum of the intensity data of Fig. 5(a), performed for each wavelength. This is shown in Fig. 5(b), where we highlight three sample wavelengths (λ₁, λ₂, λ₃) whose optical mode spatial profiles are considered in the following. We then collect the emission from many vertical sections of the entire excited region, according to a detailed SSCC method [5]. This is carried out by translating the collecting lens (L3 in the scheme of Fig. 1) along the X direction; the resulting profiles are composed along X as a function of wavelength (Fig. 5(c)). Here X_S is the Y_S-perpendicular coordinate on the sample. The map evidences that most of the random lasing peaks are distributed over a few tens of µm along X_S, which is of the order of the delocalization measured for conjugated polymer films [5]. Far-field fluorescence micrographs highlight unique features of random lasing nanofibers (Fig. 5(d)). Following patterns much more complex than in clusters of nanoparticles [2,34], in random media with bubble structure [35] and in conjugated polymer films [5], emission is not found to occur from the whole illuminated field, with cavities in competition in the electrospun mat, and emission from the light-emitting fibers is also detected well away from the directly excited area, due to waveguiding of light along the organic filaments. In Fig. 5(d), the bright filaments are clearly visible due to outcoupling of waveguided photons from the lateral surface of the fiber bodies, which is supported by Rayleigh scattering from surface roughness or embedded nanoparticles. As opposed to breaks at the tips of nanofiber waveguides [29,36], here the scatterers might also redistribute a fraction of the radiation forwardly, namely along the longitudinal axis of the organic filaments. The self-absorption of the gain molecule, which could potentially hinder waveguiding, is below 5×10² cm⁻¹ in our fibers due to the significant Stokes shift of Fl-Cz-Fl (about 0.55 eV). In principle, scattering of the incident UV excitation light by the electrospun fibers could also contribute to broadening of the excitation laser spot when it impinges onto the fibrous sample [35]. However, given the micrometer size of the fibers (2-5 µm), which is an order of magnitude larger than the excitation wavelength (0.355 µm), such an effect is not expected to be dominant in our material. For instance, calculating the angular dependence of the light-scattering form factor, f(θ), for cylindrical polymer fibers [37] shows that for ka >> 1 (a is the fiber radius and k = 2π/λ is the wavevector of the incident light) most of the incident light is scattered at forward angles < 60°. Instead, light scattering at angles around 90° (i.e.
within the plane of the fiber network), which can contribute to spatial broadening of the excitation spot on the fibrous samples, becomes relevant for ka ≈ 1, namely for fibers with sizes below 100 nm.
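The size parameter ka quoted above can be checked directly: with λ = 0.355 µm and fiber radii of 1-2.5 µm, ka is of order 20-40, well inside the forward-scattering regime, while a 100 nm-scale radius gives ka near 1. A small numerical check is sketched below.

```python
import numpy as np

wavelength_um = 0.355                 # excitation wavelength
k = 2.0 * np.pi / wavelength_um       # wavevector of the incident light (1/um)
for radius_um in (0.05, 1.0, 2.5):    # 100 nm-scale fiber vs. the 2-5 um fibers used here
    print(f"a = {radius_um} um -> ka = {k * radius_um:.1f}")
```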
Integrating each (λ, Y_S) image (such as that in Fig. 5(a)) over the whole width of the CCD (i.e., over wavelengths), and composing the resulting profiles obtained for each X_S, yields a map in which the entirety of modes (i.e., at all wavelengths and across the whole emitting region) is compiled. Isolated cavities cannot be distinguished in Fig. 6(a). Here, different profiles are found, corresponding to differently shaped cavities, different degrees of delocalization within the excited region, and possibly different coupling extents for resonances across the complex material. The localization length of the mode at λᵢ, ℓ_loc-λi, defined as the length of the portion of the system in which the amplitude of the optical state differs appreciably from zero, can be estimated from each mode map through the inverse participation ratio, IPR_λi (a measure of how many sites the optical state is distributed over), as [10,38]: ℓ_loc-λi = (IPR_λi)^(-1/2), with IPR_λi = ∫I_λi²(X_S,Y_S) dX_S dY_S / [∫I_λi(X_S,Y_S) dX_S dY_S]², where I_λi(X_S,Y_S) is the λᵢ-mode intensity. The IPR_λi values are found to be below 10⁻⁴ µm⁻², indicating spread states. The resulting localization lengths (155 µm, 123 µm, and 148 µm for λ₁, λ₂, and λ₃, respectively), much larger than in clusters of nanoparticles [4,10], lead to the conclusion that the modes extend broadly over the network of fibers. The overall spatial broadening of these modes, partially overlapping in space, is of the order of 100 µm. These observations can be rationalized by taking into account the extra contribution of the fibers to spectral selection, with some wavelengths possibly being better supported by waveguiding and by the formation of loops with relatively lower optical losses [34]. Hence, the corresponding modes might benefit from a longer lifetime, thus lasing from spectral regions where a relatively lower gain is measured for Fl-Cz-Fl/PS nanofibers. Overall, electrospun non-wovens are likely to sustain mode coupling and non-locality in the fiber random laser, selecting specific wavelengths from a background of uncorrelated and spatially decoupled modes, which could be attributed to internal waveguiding along fibers [36].
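The localization length defined above can be computed directly from a measured mode map. The snippet below is a minimal sketch of that estimate for a discretised intensity map I(X_S, Y_S); the sample map is synthetic, not experimental data.

```python
import numpy as np

def localization_length(intensity_map, dx_um, dy_um):
    """Estimate the localization length from the inverse participation ratio:
    IPR = sum(I^2) dA / (sum(I) dA)^2  and  l_loc = IPR**-0.5."""
    I = np.asarray(intensity_map, dtype=float)
    dA = dx_um * dy_um
    ipr = np.sum(I**2) * dA / (np.sum(I) * dA) ** 2
    return ipr ** -0.5

# Synthetic Gaussian mode, ~100 um wide, sampled on a 20 um x 10 um grid
x, y = np.meshgrid(np.arange(0, 1500, 20.0), np.arange(0, 600, 10.0))
mode = np.exp(-((x - 750) ** 2 + (y - 300) ** 2) / (2 * 50.0**2))
print(f"l_loc ~ {localization_length(mode, 20.0, 10.0):.0f} um")
```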
Conclusions
In conclusion, we show electrospun fibers with optical gain and light-scattering properties, which feature random lasing with sub-nm spectral width as well as modes with very high spatial extent, up to the 100 µm scale. The emission patterns allow, in principle, the guiding behavior of the fibers to be linked with their spectral features, active cavities, and morphology. This is an interesting outlook for future experiments.
Many applications might be opened by coupling modes from random lasers, possibly with different degrees of localization, with other sites of the same complex material or with external receivers through efficient transmission channels as provided by integrated nanofibers. Platforms especially benefiting from such architectures would include chemical and biochemical sensors, which would exploit electrospun non-wovens with ultra-high surface-to-volume ratio to detect agents affecting random lasing emission, and promptly transmit information from extended optical modes to coupled detectors. Also, fibers could be used as input components triggering emission, by altering the oscillation frequency of spatially-defined regions in the disordered material [8] in a controlled way. | 3,546.2 | 2017-10-02T00:00:00.000 | [
"Physics"
] |
Digital Education as a System Strategy for Saving the Nation
The immediacy of the research problem is due to the increased attention of the Russian government to education issues, which is reflected in the National Doctrine of Education in the Russian Federation until 2025, where digital education is defined as the general direction of the country's economic and socio-cultural development. The purpose of the article is to identify the role of digital education in supporting the implementation of the strategy of saving the nation. Modern education is viewed as "soft power" from the actor-network and connectivism approaches; as a position in the integration processes of modern society; as a way to achieve the goals of national youth development in the global space of cooperation; as a functional mechanism that develops a digital society and overcomes digital inequality; as a new type of subject-subject interaction, provided by the subject's continuous education in the field of telecommunication technologies, as well as the communication between subjects and social institutions; and as the training of teaching staff for the scientific and technological development of the society, which became digital in the 21st century. The leading method in the study of the problem was the reflection of educational practice, which allows identifying the mechanisms of the diversification and upgrading of digital education; determining the degree of the digital product's influence on the realization of the educational strategy; and evaluating the educational organizations' responsibility for the results and the quality of the subject's education. As a result of the study, the authors determine the cognitive barriers existing in educational practice which actualize the situation "here and now" and which contribute to the appeal to the digitalization phenomenon as a change in the thinking paradigm; to the study of the new technologies and the living projection of the virtual world, with its inherent mutations and contradictions, by the human subject; and to the creation of a practice-oriented educational environment that can form competencies among students which allow the educational subject to be competitive in the labor market and in demand in the global digital world.
Introduction
The reflection of the education growing role in the informational society contributed to the emergence of new pedagogical concepts, determining other projections of the educational development; philosophical approaches, ensuring the implementation systematic strategy for saving the nation; ethical regulations of the subject's activities aimed at increasing the level of modern person's responsibility, at developing decision-making speed, at mastering the skills of designing one's activity in situations of the values diversity and conflict; requirements for the education subject of the XXI century; new format for the professional educational programs development (Lubkov & Karakozov, 2017).
The paradox of modern digital education is the following: in the organization of subject-subject interaction in digital education, people have no advantage over objects or tools; the actor-network approach rightly implies the equality of all network nodes, since both are involved in the action; the relations between people, mediators, and computer programs are completely symmetrical.
Along with the actor-network and connectivist approaches, the essential approach is applied in connection with the concepts ambiguity and eclecticism, introduced into the text of our article. We'd turn to the definition of the concepts.
1. The category "digital material" means the presentation of texts, pictures, maps, etc. in a high-quality digital format to the education subject;
2. The digital education is a complex concept that does not have an unambiguous interpretation nowadays due to its application in various fields of activity. MIPT Professor Platonov considers the digital education as the education in which there are two aspects: digital format (digital process, digital learning, digitally transmitted content, etc.); digital resources, digital management, digital communications, different levels of education, interaction with IT engineering, business, science, society (Platonov, 2004).
Professor Kondakov understands the digital education as a system of opportunities that opens up through the digital technologies application (2018).
According to the education expert, winner of the all-Russian contest "i-teacher" 2018 Pogodin, the digital education is the use of computer tools and information technologies in various educational contexts (2017).
In our article, we will adhere to the last interpretation of the concept, as, in our opinion, it most fully reflects the holographic essence of the phenomenon.
3. An actor is a person or a legal entity, a set of organizations, etc. and the relationship between them.
4. We'd consider the concept of "network". We understand the network as a complex of geographically placed computers, interconnected with data transmission channels and network software. The network is a temporary (for performing one concrete task) or permanent (for long-term work) set of certain agents (people, computers, mechanisms, etc.), interacting in a single space, whose partnership activity is aimed at solving a common problem.
5. The science of networks (the science of connectedness) is a modern scientific discipline that studies the artificial networks general features (informational, social, biological and other networks).
6. We now turn to the meaning of the concept "agent". The Explanatory Dictionary interprets this concept as a) "A person authorized by an institution, enterprise to perform official, business assignments"; b) a person acting in one's interests (Vajndorf-Sysoeva & Subocheva, 2018).
Comparing the etymology and the range of meanings presented in some European languages, it can be confirmed that in German and French the concept of "agent" has a narrower, applied load, and it is used in the meaning of "representative" (Bulyko, 2007). In Russian, the concept of "agent" acquires an additional coloring with the meaning of "cause", which determines the essential meaning, the process origins.
In English, the word "agent" has, in addition to the indicated contact man (contact person), the meanings of man of business (business person) and, the most important for our study, the meaning of medium (storage medium, intermediary) (Krysin, 2005). In English, there is a definition of the concept of "agent" as an "acting force" and in Russian -"reason". In the research interest, we'll focus on the first and the fourth meanings of the concept.
To organize the digital education complex impact on children and youth at the initial stage, it is necessary to "introduce" the subject into the digital atmosphere. This can be done by creating a digital environment.
According to Bykasova, "the educational environment is a structure with the properties of connectivity, integrity, controllability, depending on the richness of its educational, educative, informational and other resources" (2013).
The digital educational environment has its own nuances: it is a set of informational systems meant to solve the spectrum of educational process problems in the logic of its quantification and qualification.
Upon completion of the problem-solving process, the digital educational environment properties may undergo some changes in accordance with the network agents new goal.
The digital educational environment is, in our opinion, the space of the subject's thinking structurally incomplete practice, and therefore it is advisable to consider its genesis and evolution. The digital educational environment genesis and evolution is influenced by many factors: accessibility, unity, openness, usefulness, integration, etc. Automation and artificial intelligence are the digital education key principles, which make it possible to satisfy the population demand for the educational diversity; to develop human resources considerably; to reduce the educational migration through the access to various educational resources over the network; to motivate the subject to learning on the basis of the individual educational trajectories; to get the subject a high-quality education in the place, where he lives, that solves a lot of problems, among which, in modern conditions, the main one is logistics; to monitor the digital education results of children and youth through the network, etc.
The modern education is characterized by a high degree of the procedural foundations mutation of the everyday practices subject understanding. We'd consider that for the educational process successful implementation it is necessary to create an artificial environment. The environment's development has the following pattern: the phenomenon of problems turnover and synchronization, and it occurs due to the following principles: 1. Stereotyping and social inclusion (the subject accepts stereotypes of behavior in society and he is included in the circle of communication without conflict);
2. Subject activity (mastering the existing rules, the "spirit" of recreation, the desire to occupy one's own niche in the microsocium, etc.);
3. Continuity and consistency (the relationship of the sensory and the logical, the rational and the irrational, the conscious and the unconscious in the subject's behavior);
4. Integrativity (in the artificially created environment, the best didactic samples are concentrated, the new techniques are practiced, the innovations are tested);
5. Communicativeness (a specially created environment provides comfort and psychological relaxation, which puts the subject into sacred conversation, removing certain difficulties during socialization);
6. Meta-subject matter (a set of knowledge obtained in the artificial environment in horizontal and vertical structures).
The digital environment creation makes sense because of the communication organization at a qualitatively different level: it is a connection of the students creative activity discursive and intuitive elements in the communication process; theory and practice of the education subject.
Purpose and objectives of the study
The purpose of our study is to identify the role of digital education in supporting the implementation of the nation-saving strategy. To achieve this goal, it is necessary to solve the following tasks: to determine the role of the education subject in the conditions of a prefigurative culture; to identify the conditions of the education subject's existence in the XXI century; to indicate clearly the new competencies that are mastered by generations Y and Z.
Literature review
As part of the digital environment and digital education development, the existing advantages and disadvantages of the network are the subject of our research, which will allow us to make forecasts for the development of various kinds of public, social, physical and other phenomena.
The network research is carried out by many domestic and foreign scientists: the didactic potential of the local computer networks is reflected in the works by Gershunsky and Khutorskoy (2003) (2015) and Vasenkov (2017).
With a certain degree of conditionality, the scientists' works reflection allows us to synthesize and group the network positive aspects: 1. The brain regions development that are quickly switched to different tasks (multitasking); 2. Change in the type of the individual's thinking; 3. Expanding the subject educational opportunities; 4. Using a wide range of teaching aids and narratives sources: interactive posters, virtual boards, intelligence cards, kanban boards, animated videos, etc.
The network negative sides, in our opinion, are: 1. Introduction into the education practice of certain unverified technologies;
2. Loss of writing skills by the learner;
3. Screen dependence of the education subject;
4. Lowering of students' social skills;
5. Problems with speech development and visual impairment in children, adolescents, and young people (Twissell, 2018).
Methodology
The modern education new paradigm as a platform for the individual's socio-cultural development, forming on the basis of the strategic audit of the main pedagogical models, methodological principles, didactic methods, educational ideas developed by the modern pedagogical science and practice, actively involves the education subject in the digitalization, for which the future of economics and education stand.
The purpose of our study is to identify the role of digital education in supporting the implementation of the nation-saving strategy. To achieve this goal, we consider the role of the education subject in the context of a prefigurative culture (Tham & Werner, 2005). The education subject of the XXI century exists in conditions that are qualitatively different from the previous formations. So, for example, generations Y and Z need to master new competencies: network competence (the ability of a person to exist in the digital environment); digital competence (the subject's responsibility for network behavior), etc. These competencies are integral parts of the forming modern network culture (personal self-realization), closely interwoven into a new type of culture, a prefigurative culture, in which education occupies one of the leading positions.
In the format of the modern education practice development, we'll consider the digital education as "soft power", which 1) provides for national security and national savings; 2) increases the priority of the Russian education; 3) accumulates the human resource that is important for the national education competition; 4) improves the society infrastructure; 5) develops the new standards of the modern educational design; 6) optimizes the vital activity and sustainability of the ecosystem, affecting the education subject formation (Pogodin, 2017).
The current situation imperatively and a priori aims the pedagogical community at creating an educational ecosystem by 2024: a combination of efforts for interaction between state, society, business, and science in order to build up human capital intensively and to apply it rationally, humanely, and economically. For the non-conflict functioning of the ecosystem, the quantiums network is expanding; the customization of program material is carried out; automation and robotics are being introduced; the emotional intelligence and cognitive flexibility of the subject are developing; and partnerships and cooperation with other institutions are developing (Bersin, 2017).
The modern pedagogical science focus is changing under the influence of its epistemic potential, which allows teachers to use the latest technology in education; to enhance the subject's receptive ability; to actualize the practice-oriented nature of the education. This can be achieved through the deployment of the local computer networks didactic potential. This potential is the basis for reengineering and forming of the universities' new organizational models: academic virtual universities, industrial virtual universities, regional electronic campuses, etc. (Bykasova, 2016).
The modern education role changes cardinally under the influence of the knowledge transfer and perception methods, alterations in the system of their formation (practice-oriented master classes, distance learning courses, webinars, projects, events, reference and teaching materials, etc.). As a result, the education role in the social development is prominently displayed 1) the education role vector is changing: from the culture of observing the people's activities products, the modern subject of the education is moving to the culture of direct participation in the objects' creation and alteration; 2) the collective creativity of students in crisis decision-making is developing; 3) the subject has an access to the scientific information arrays for application in their activities; 4) the skill of navigation in competencies is forming; 5) the degree of collaborative creative processes is increasing; there is a shift in the technology development (the information services socialization, the network formation is a platform for the joint activities) (Serikov, 2015).
Results
The most important place in digital education as a systematic strategy of saving the nation is occupied by an artificially created environment that promotes moral and aesthetic values, has an ideological and organizational impact on the worldview and social behavior of people, and also protects the sacred landscape of an individual from text (media) manipulation.
The education modern subject develops an independent thinking; it seeks to cultivate an aesthetic taste; to enhance culture; to develop their own experience based on the ethnic group moral values; to adapt flexibly to society; to resist emerging cyber threats and cyber-attacks. Soft competences, acquired by the educational subject, demonstrate "reference points", contributing to updating the education content, which is necessary for the national school in connection with the pedagogical stability emerging risks in creating the digital education (Samojlov & Bykasova, 2014).
In the experience formation of the subject establishment in the digital education, an important role belongs to the teacher. A modern teacher is an architect of the transmedia products whose tasks are as follows: 1. To anticipate the influence of the media text on the education subject. This can be done with the media text preventive analysis and objective assessment by the teacher, who is able to develop the appropriate recommendations to counter manipulation and to make the media text an assistant in the self-identification of the society members, individual's adaptation to life in a rapidly changing world, and person's harmonization. Without harmony, it is difficult for a subject to perceive comprehensively the world around him, to form himself as a personality of the 21st century (Kuznetsov, Vovchenko, Samojlov, & Bykasova, 2018).
2. To build a modern transmedia product. For this, the new soft competencies that are expanding through the use of computer, laptop, TV screen, other gadgets and the media, develop among the students (Gibson, Broadley, Downie, & Wallet, 2018).
3. To reflect the media competencies. It is known that the media competencies are capable of mutating.
This process is expressed in a greater share of the subject's independence: the search for narrative, the use of libraries' electronic resources, the Internet, etc. for creating a real product. The teacher accompanies the process of the education subject's establishment, mastering not only multimedia (several forms and one channel are used to describe one story) and cross-media (one story is broadcast through several channels), but also transmedia (one large-scale topic includes several stories that are transmitted through various forms and numerous channels) (Gleason & Gillern, 2018). 4. To promote the non-conflict entry of the modern education subject into the digital society of the XXI century, since the modern teacher is a "didactic engineer", collaborating with the "digital" students and schoolchildren (Krewsoun, Vovchenko, & Bykasova, 2019).
5. To create long-lasting transmedia products. The education quality and effectiveness increase significantly in the age of informational technology (for example, making presentations in PowerPoint involves the use of not only video clips, drawings, and diagrams, but also animations, etc.) (Ellison, 2007).
The created modern transmedia product has a complex architecture, saturated with the various resources.
This architecture is mobile; the borders are open; its development is multi-vector and controllable; the holistic content is constantly updated (Dholakia, Bagozzi, & Pearo, 2004).
Discussions
According to Boguslavskij (2016), all preparatory work will be carried out to determine the parameters for introducing the education models, the calculation of transfers, the necessary information aggregation, recording risks, and defining the criteria for the universities' effectiveness (to provide the state support).
In the format of the digital education development project, a reflection of the level of the subject's network literacy is supposed. The network literacy is not only a set of skills related to the use of the modern information computer technologies, but also the mastery of the networks science. In the education practice, the networks science is a structured education of the various academic disciplines with the aim of attracting the attention of students to science, engineering, mathematics; mastering the computer technology; expanding the subject's informational literacy; developing the competencies provided by the Federal State Educational Standard, which formation takes place in the integrated learning process on the various subjects (Barinova & Karunas, 2015).
The modern education subject network literacy involves the skills formation, necessary for life in the 21st century: ability to find, to analyze the network patterns in the surrounding systems; using a network approach to overcome the framework of the separate disciplines and to compare the processes, occurring in different fields of knowledge; using a discrete language to be able to display visually the computer programs data; knowledge of the modeling basics; possessing research skills, network competencies, etc. by the digital education (Mardahaev, 2016).
Conclusion
As a result of the review of the relevant retrospections, we identified: the relevance of the National Doctrine of Education in the Russian Federation (until 2025), which defines digital education as one of the main directions of the country's economic and socio-cultural development through significant changes to the entire education system based on its accelerated development and innovative technologies; the importance of digital education, contributing to the educational migration processes; the transformation of the education system, based on the further evolution of innovative technologies; the building of one's own trajectory of educational, cognitive, creative research activity and development by the subject; the intentions of homo cognoscens, a cognitive person living in the 21st century and possessing a set of competencies necessary for a non-conflict existence in the informational society; trends in the informational society, generating changes that significantly update the nature, goals and place of education in society, providing an opportunity for informational and theoretical knowledge to act as a strategic resource in the post-industrial society; and the development factors of modern digital education. The most important factors are: exponential growth in informational volumes; the available, but not fully applied, potential of the Internet; the network as a community of users, the personalization of education, a park of training machines and simulators; the formation of instrumental potential in the informational society; the role of digital education as a systemic strategy of saving the nation: design and experimental activities as a mechanism for crisis management in modeling information polygons; situational analysis as a method of obtaining factual information; the network as a tool for creating the presenters of social reality; and the absolutization of the value of digitalization, modifying the methodological basis of the modern school and making information available in its various forms (textual, sound, visual).
The availability of information leads to the need for the constant search and selection of relevant content and high processing speeds, indicating its complete and, most importantly, qualitative restructuring; the imprinting of the exposing apperception of the signifier as an integral part of the modern education subject's strategy of questioning, design, team, problem-oriented training with a highly developed emotional intelligence; the socialization of the subject's knowledge due to digitalization as a systemic strategy of saving the nation (the translation of informal knowledge into formalized knowledge, the externalization and internalization of knowledge, the combination of knowledge, etc.); the stochastic development of digitalization in the educational system and in society, for which the scientific community needs to develop a modern strategy of institutional pragmatism, where the main regulatory principle will be the development principle: socio-technical systems, the Internet and mobile devices (as fundamental digitalization technologies) of the labor market; the futuristic orientation of the modern education subject, which requires the accelerated development of information processes, the improvement of spiritual culture and the educational process, further informatization, and information support; the meaning of information in the post-industrial society, which consists in the fact that in the XXI century theoretical knowledge is a strategic resource, a technological innovation, a key tool for system analysis and decision making, and the basis of economic and social development, as well as of the conditions of human existence; the dynamism of the heterogeneous forms of existence of matter and actors; and new ethical regulations of human activity and its innovative practices.
"Education",
"Computer Science",
"Political Science"
] |
Radiation Modes in FRB 20220912A Microshots and a Crab PSR nanoshot
A microshot from FRB 20220912A \citep{H23} satisfies the uncertainty relation $\Delta \omega \Delta t \ge 1$ by a factor of only $\lessapprox 3$. A Crab pulsar nanoshot \citep{HE07} exceeds this bound by a similar factor. The number of orthogonal plasma modes contributing to the coherent radiation is also $\approx \Delta \omega \Delta t$, placing constraints on their excitation and growth.
INTRODUCTION
The recent discovery in Westerbork observations of FRB 20220912A (Hewitt et al. 2023) of microshots with temporal width ≤ 31.25 ns in a spectral channel of width 16 MHz leads to the question of how close microshots can approach the uncertainty bound ∆ω∆t = 2π∆ν∆t ≳ 1. The instrumental temporal and spectral resolutions are only upper bounds on the actual pulse duration and bandwidth, so only an upper bound ∆ω∆t ≲ 3 can be set.
This uncertainty bound is inescapable mathematics, although its quantitative value depends on the pulse shape and the definitions of ∆ω and ∆t. For a Gaussian pulse, if ∆ω and ∆t are defined as full widths at 1/e of maximum, then ∆ω∆t = 4/π. (1) The condition is satisfied for a Gaussian pulse if ∆ω and ∆t are defined as the full widths at exp(−π/4) ≈ 0.412 of maximum.
If ∆ω∆t > 1 we consider the natural generalizations of the Gaussian to larger widths, the Gauss-Hermite functions, the eigenstates of the one-dimensional harmonic oscillator with Hamiltonian H = kx²/2 + p²/(2m): G_n(x) ∝ H_n(x) exp(−x²/2), where H_n(x) is the n-th Hermite polynomial. Like the Gaussian (G_0(x)), the Gauss-Hermite functions are their own Fourier transforms (Cincotti, Gori & Santasiero 1992; Horikis & McCallum 2006), as they must be because the harmonic oscillator Hamiltonian is symmetric under the interchange kx² ←→ p²/m. Here we replace √k x by t and p/√m by ω. It is readily seen, either from the properties of the Hermite polynomials or from the fact that the n-th excited state of the harmonic oscillator has energy E_n = (n + 1/2)ℏω and classical width √(2E_n/k), that for these more complex pulse frequency and temporal profiles ∆ω∆t ∼ n. (4) Eq. 4 gives the approximate number of orthogonal eigenmodes whose superposition makes a pulse with ∆ω∆t > 1.
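A minimal numerical illustration (not from the paper) of the scaling in Eq. 4: the sketch below builds Gauss-Hermite profiles and checks that the product of the rms temporal and spectral widths grows as n + 1/2. The rms-width definition is an assumption made for convenience here; the paper's full-width convention would give different numerical prefactors but the same ∆ω∆t ∼ n trend.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def gauss_hermite(n, t):
    """Gauss-Hermite profile H_n(t) * exp(-t^2 / 2) (unnormalized)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(t, c) * np.exp(-t**2 / 2)

t = np.linspace(-20, 20, 4096)
dt = t[1] - t[0]
for n in (0, 1, 2, 3, 5):
    psi = gauss_hermite(n, t)
    I_t = psi**2
    sigma_t = np.sqrt(np.sum(t**2 * I_t) / np.sum(I_t))      # rms temporal width
    spec = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2        # power spectrum
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
    sigma_w = np.sqrt(np.sum(w**2 * spec) / np.sum(spec))     # rms spectral width
    print(f"n={n}: sigma_t * sigma_w = {sigma_t * sigma_w:.2f} (n + 1/2 = {n + 0.5})")
```

Because the Gauss-Hermite functions are (up to phase) their own Fourier transforms, the temporal and spectral widths are equal and their product grows linearly with n, which is the counting of orthogonal modes used in the text.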
THE OBSERVATIONS
The lower panel of Fig. 5 of Hewitt et al. (2023) shows microshots in the 1304 MHz band at approximately 11 µs (with respect to the arbitrary zero time of the plot) whose temporal widths are no more than a single 31.25 ns resolution element and that are almost completely confined to a single 16 MHz wide spectral band. Their intensity in the 1288 MHz band is at least an order of magnitude less than in the 1304 MHz band and their intensity in the 1320 MHz band is a few times less than at 1304 MHz, implying an intrinsic ∆ν less than the band spacing of 16 MHz (the quantitative value depending on the assumed spectral shape). Conservatively taking ∆ν = 16 MHz implies ∆ω∆t ⪅ 3. By Eq. 4, no more than about three modes of the electromagnetic field, and of the plasma waves that coherently radiated it, contributed significantly to the observed microshot. A similar result holds for a ≤ 0.2 ns nanoshot of the Crab pulsar (Hankins & Eilek 2007).
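As a quick arithmetic cross-check (not part of the original text), taking the instrumental limits at face value reproduces the quoted bound:

```python
import numpy as np

delta_nu = 16e6      # Hz, spectral channel width (upper bound on the intrinsic bandwidth)
delta_t = 31.25e-9   # s, temporal resolution element (upper bound on the duration)

product = 2 * np.pi * delta_nu * delta_t   # Δω Δt = 2π Δν Δt
print(f"Δω Δt ≤ {product:.2f}")            # ≈ 3.14, i.e. within a factor of ~3 of the bound
```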
PLASMA PHYSICS
Coherent emission, necessary to explain the extraordinary brightness temperatures of FRBs (Katz 2014) and of pulsars, requires "bunching" of the radiating charges that must result from the exponential growth of plasma waves. The many e-foldings of exponential growth raise the amplitudes of the few fastest growing modes far above those of other modes. An observed brightness temperature of 10³⁶ K requires N ⪆ 50 e-foldings if the initial brightness temperature was m_e c²/k_B (an arbitrary but plausible initial thermal value).
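A hedged back-of-the-envelope check of the quoted number of e-foldings, counting e-folds of the brightness temperature from the assumed thermal seed value m_e c²/k_B (the choice of seed, as the text notes, is arbitrary but plausible):

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant (J/K)
m_e_c2 = 8.1871057769e-14   # electron rest energy (J)

T_seed = m_e_c2 / k_B       # assumed initial thermal brightness temperature (~5.9e9 K)
T_obs = 1e36                # observed brightness temperature (K)

# Number of e-foldings of the brightness temperature needed to reach T_obs.
N = np.log(T_obs / T_seed)
print(f"T_seed ≈ {T_seed:.2e} K, N ≈ {N:.0f} e-foldings")   # ~60, consistent with N ≳ 50
```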
Linear Growth
Assume a plasma instability grows exponentially with a growth rate γ(ζ) that is peaked, with characteristic width ∆ζ, where ζ is a parameter of the plasma wave (perhaps its wavevector). This is a general form that assumes nothing about the specifics of the plasma instability, but it does require that the plasma modes interact only weakly; the governing equations can be linearized so that eigenmodes grow exponentially and essentially independently of each other. From the width of the microshots of FRB 20220912A after N e-folds of growth, the fact that ∆ν/ν ≈ 0.01, and the plausible assumption that dζ/dν ∼ ζ/ν, we estimate the fractional width ∆ζ/ζ of the growth-rate peak (Eq. 6). This is an approximate constraint that can be placed on any linearized theory of the plasma instability.
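The displayed growth-rate expression and the resulting estimate (Eq. 6) did not survive extraction. The sketch below works through the standard gain-narrowing argument the text appears to invoke: for a smoothly peaked growth rate, the amplified bandwidth shrinks roughly as 1/√N after N e-folds, so ∆ν/ν ≈ (∆ζ/ζ)/√N. The specific relation ∆ζ/ζ ≈ √N ∆ν/ν is an assumption consistent with the surrounding statements (in particular, it can be inverted to find N from the spectral width), not necessarily the paper's exact formula.

```python
import numpy as np

N = 50                # e-foldings of plasma-wave growth (from the brightness-temperature argument)
dnu_over_nu = 0.01    # observed fractional bandwidth of the microshot

# Gain narrowing: Δν/ν ≈ (Δζ/ζ) / sqrt(N), assuming dζ/dν ~ ζ/ν.
dzeta_over_zeta = np.sqrt(N) * dnu_over_nu
print(f"Δζ/ζ ≈ {dzeta_over_zeta:.2f}")   # ≈ 0.07: a modestly, not extremely, peaked growth rate

# Inverting the same relation recovers N from an observed fractional bandwidth.
N_inferred = (dzeta_over_zeta / dnu_over_nu) ** 2
print(f"N recovered ≈ {N_inferred:.0f}")
```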
Coupled Waves
In an alternative model, the plasma waves are strongly coupled as they approach saturation, resembling a soliton (Zabusky & Kruskal 1965), rather than being described as a superposition of several weakly interacting eigenmodes. This would suggest that ∆ω∆t ≈ 1 because the radiation is produced by a single nonlinear wave, rather than by the sum of multiple weakly interacting waves; n ≈ 1 would be required.
The observations of FRB microshots (Hewitt et al. 2023) and Crab pulsar nanoshots (Hankins & Eilek 2007) cited here are consistent with this hypothesis, which is supported by the fact that both of these extreme phenomena are described by similar bounds on ∆ω∆t.
BASE-BAND SIGNALS
If base-band voltages are measured in a microshot or nanoshot, they will show ∼ ω∆t ∼ ∆ω∆t(ω/∆ω) ∼ n(ω/∆ω) cycles of oscillation. This is a mathematical consequence of ∆ω∆t ∼ n, and does not depend on a physical model. However, such a direct measurement might provide other illuminating information about the radiation process and the plasma physics that drives it. The dependence of electric field on time may permit distinguishing the weakly and strongly coupled models.
As an example of what might be seen in base-band signals, Fig. 1 shows the time dependence of the electric field for ∆ω∆t = n = 1, 2 and 3, with ∆t = 31.25 ns. Even if the signal-to-noise ratio is insufficient to resolve the base-band oscillations, their envelope may reveal the number of contributing modes n = ∆ω∆t. The signal-to-noise ratio of the envelope is greater than that of the base-band signal by a factor O(√(πω₀/(n∆ω))), which is O(16/√n) for the parameters shown; use of a matched filter would eliminate the factor of 1/√n.
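A rough sketch (not the author's construction) of how baseband pulses like those in Fig. 1 could be synthesized, and of how the quoted envelope signal-to-noise factor evaluates for the parameters in the text; the Gauss-Hermite envelope, the width scaling, and the choice of carrier frequency are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

nu0, dnu, dt_pulse = 1.308e9, 16e6, 31.25e-9   # Hz, Hz, s (values quoted in the text)

t = np.linspace(-3 * dt_pulse, 3 * dt_pulse, 20000)
tau = dt_pulse / 2          # assumed envelope width scaling (illustrative choice)

for n in (1, 2, 3):
    c = np.zeros(n + 1)
    c[n] = 1.0
    envelope = hermval(t / tau, c) * np.exp(-(t / tau) ** 2 / 2)  # Gauss-Hermite envelope
    signal = envelope * np.cos(2 * np.pi * nu0 * t)               # could be plotted to mimic Fig. 1
    cycles = 2 * nu0 * dt_pulse                                   # carrier cycles within |t| < Δt
    snr_gain = np.sqrt(np.pi * nu0 / (n * dnu))                   # envelope SNR factor from the text
    print(f"n={n}: ~{cycles:.0f} carrier cycles in 2Δt, envelope SNR factor ≈ {snr_gain:.1f}")
```

For these parameters the envelope factor evaluates to about 16/√n, matching the figure quoted in the text, and the carrier executes many tens of cycles under the envelope, illustrating why the envelope is far easier to detect than the baseband oscillations themselves.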
DISCUSSION
In a weakly coupled wave model, because N is large a narrowly peaked distribution of amplitudes of modes with ∆ν ≪ ν does not require a narrowly peaked growth rate. Eq. 6 may be inverted to find N from the spectral width of a FRB microshot; spectral narrowness demands many e-folds of exponential growth of the underlying plasma waves. The secondary intensity peaks (in the envelope of the ≈ 1308 MHz oscillation in Fig. 1) are required by values of n = ∆ω∆t ≳ 2. Hence the number n of contributing modes may be inferred from the time dependence of the intensity if it is sufficiently resolved, as well as from direct measurement of ∆ω and ∆t of the central peak. The temporal resolutions of the microshots of Hewitt et al. (2023) and the nanoshot of Hankins & Eilek (2007) were insufficient to determine n, but future observations with better temporal resolution should be able to show this structure and determine the number of contributing modes.
Figure 1. Baseband signals for ∆ω∆t = n = 1, 2, 3 and ∆t = 31.25 ns. The wider bandwidth pulses have narrower main peaks (a mathematical necessity). The secondary peaks may be observable as frequency-averaged functions of intensity vs. time even if noise prevents measurement of the baseband signal. Fine structure is the result of beats between the signal and the discrete resolution of the graphics screen, and is not physical.
"Physics"
] |
Helicobacter Pylori and Gastroduodenal Pathology: New Threats of the Old Friend
The human gastric pathogen Helicobacter pylori causes chronic gastritis, peptic ulcer disease, gastric carcinoma, and mucosa-associated lymphoid tissue (MALT) lymphoma. It infects over 50% of the world's population; however, only a small subset of infected people experience H. pylori-associated illnesses. Associations with disease-specific factors remain enigmatic years after the genome sequences were deciphered. Infection with strains of Helicobacter pylori that carry the cytotoxin-associated antigen A (cagA) gene is associated with gastric carcinoma. Recent studies revealed mechanisms through which the cagA protein triggers oncopathogenic activities. Other candidate genes, such as some members of the so-called plasticity region cluster, are also implicated to be associated with carcinoma of the stomach. Study of the evolution of polymorphisms and sequence variation in H. pylori populations on a global basis has provided a window into the history of human population migration and co-evolution of this pathogen with its host. Possible symbiotic relationships have been debated since the discovery of this pathogen. The debate has been further intensified as some studies have posed the possibility that H. pylori infection may be beneficial in some humans. This assumption is based on increased incidence of gastro-oesophageal reflux disease (GERD), Barrett's oesophagus and adenocarcinoma of the oesophagus following H. pylori eradication in some countries. The contribution of comparative genomics to our understanding of the genome organisation and diversity of H. pylori and its pathophysiological importance to human healthcare is exemplified in this review.
Introduction
Helicobacter pylori is a bacterium that colonizes the harshly acidic milieu of the human stomach. More than half of the world's population carries this infection. Infection rates vary among the developed and developing countries of the world. H. pylori infection is on a steep decline in most of the western countries, mainly due to the success of combination therapies and improved personal hygiene and community sanitation to prevent re-infection. However, the situation is not improving in many of the developing countries due to failure of treatment regimens and emergence of drug resistance. The infection in some cases leads to chronic superficial gastritis, chronic active gastritis, peptic ulcer disease and gastric adenocarcinoma [1][2][3][4]. One of the most distinctive features of H. pylori is the genetic diversity between clinical isolates obtained from different patient populations. Most H. pylori isolates can be discriminated from others by DNA profiling [5][6][7][8] or sequencing of corresponding genes, mainly due to a high degree of sequence divergence between orthologs (3-5%) [9,10]. Also, H. pylori has a panmictic or freely recombining population structure [11] and is naturally competent [12]. These characteristics facilitate inter-strain recombination, due mainly to horizontal exchange of alleles from other strains colonising the same niche, which is extremely common in the H. pylori chromosome. However, such genetic recombinations in the H. pylori genome might not be deleterious because they occur in the plasticity zone, a special cluster of DNA rearrangements that protects the essential complement of genes by acting as a bed for foreign DNA insertion or abrogation. DNA loss and rearrangement are therefore a norm for H. pylori, and flexibility and diversity in gene content may contribute to bacterial fitness in different members of the diverse human host population. Post-genomic analyses have revealed interesting attributes of H. pylori pathogenicity, and novel mechanisms of causation of ulcer disease and cancer have been envisaged. Efforts to know the cause and potential benefits of the genetic diversity of this bacterium have led to some interesting discoveries relating to its coevolution with the human host, microevolution during infection and quasi-species development, virulence determinants and eradication strategies. Recent studies reviewed herein collectively aim at testing the speculation about whether H. pylori may be beneficial to human health in certain circumstances and whether eradication of this organism is always necessary. Epidemiological studies are needed in the context of such intriguing hypotheses. Results obtained from such studies might enable the development of a high-throughput screening system for high-risk groups within the huge population of H. pylori-infected individuals. Recent studies have shown that H. pylori infection protects against gastro-oesophageal reflux and oesophageal carcinoma. It will therefore be important to selectively eradicate H. pylori in people who are at the highest risk of developing gastric carcinoma. Eradication of highly pathogenic H. pylori specifically from high-risk groups would markedly reduce the worldwide incidence of ulcer and gastric cancer.
Epidemiology and Evolution
H. pylori infection is usually acquired during childhood, where transmission occurs predominantly within families [13]. A couple of recent studies demonstrated the possible co-existence of a large array of clonal lineages within H. pylori populations that are evolving in each individual separately from one another [14,10]. It is therefore probable that via this semi-vertical transmission of H. pylori strains, there are distinct sets of H. pylori genotypes colonising different human populations. With different strains evolving separately of one another and the fact that H. pylori is a genetically diverse (panmictic) organism, distinct genotypes have been found to be associated with particular geographic regions [15,16]. For example, the shuffling of variant regions within the vacA gene (a gene encoding a vacuolating cytotoxin) within a local H. pylori population has led to predominant vacA genotypes being characteristic of isolates from different geographic regions. In addition, worldwide studies encompassing H. pylori isolates from many geographic regions have demonstrated weak clonal groupings and geographic partitioning of H. pylori isolates [9,17]. If recombination only occurs between a resident H. pylori population, exchange of genetic sequences can genetically homogenise this population. As H. pylori is naturally competent and recombination occurs frequently [11], specific genotypes associated with different geographic regions occur as a result of this homogenising force.
Introduction of polymorphisms and sequence variants from one H. pylori population from a particular geographic region to another H. pylori population from another geographic region via human migration makes the association of particular genotypes with specific geographic locations more difficult. Although the introduction of new polymorphisms into a particular H. pylori population poses a problem with identifying specific genotypes within certain geographic locales, it may, however, provide information on the ancestry of the hosts in whose stomachs the strains were carried. Studies have been aimed at demonstrating the path of human migration to Latin America, with conflicting results regarding whether European or Asian populations brought H. pylori to South America [16,11]. However, a recent and comprehensive study by Falush et al. [19] demonstrated that sequence analysis of H. pylori isolates recovered from twenty-seven countries displayed geographic partitioning. Thus, polymorphisms within the H. pylori genome can serve as useful markers for studying ancient human migrations. However, a mix-up of H. pylori strains between migrated and native populations can sometimes complicate analysis. Accordingly, the study of migrated populations that have remained isolated from the native populations is essential.
Genome organization, genetic diversity and microevolution
Since its successful isolation in 1983 by Warren and Marshall, H. pylori has been linked to various pathologies, and a strong association with gastric carcinoma and mucosa-associated lymphoid tissue lymphoma [1,3] has been established. However, although H. pylori is definitely responsible for these diseases, fewer than 10% of people colonized with H. pylori display disease symptoms. This suggests that specific H. pylori strains may be responsible for virulence in different hosts. Many studies have shown that certain allotypes of the vacA gene and the presence of a functional cagA gene are associated with an increased risk of peptic ulceration and gastric cancer, respectively [20][21][22]. However, these correlations vary based on the host population studied, and efforts to correlate other H. pylori alleles with clinical diseases have failed.
So how do a few H. pylori strains trigger higher virulence compared to other strains? Current approaches in functional genomics based on protein-protein interaction and microarray-based transcription profiling are helping to decode this mystery. Functional genomics often uses gene-chip-based expression profiling to provide a condition-dependent and time-specific genome-wide profile of an organism's transcriptome [23,24]. In contrast, comparative genomics juxtaposes two or more genome sequences at the level of gene content and organization [25,26]. Both approaches harness extensive computer algorithms and in silico modelling to summarise gene-encoded (or putative) functions. H. pylori was the first prokaryote for which the full genome sequences of two different patient isolates [J99 and 26695] were characterized and compared [27,28]. As H. pylori is a freely recombining or panmictic organism [11], the question of whether the two genome sequences would accurately represent the myriad of genetic diversity found among the strains was posed. Since the two sequenced strains were obtained a decade apart from two different continents and cultured from the lesions of different gastric disorders, it has been widely assumed that their genome sequences will more likely portray the genetic diversity exhibited by clinical isolates. Comparative genomic analysis of the two completely sequenced strains revealed a significant amount of genetic variation between their genomes. For instance, the J99 genome is shorter (1,643,831 bp) than that of strain 26695 (1,667,867 bp) and has 57 fewer predicted ORFs [27,28]. Strains 26695 and J99 contain 110 and 52 strain-specific genes, respectively [29,30], of which more than half reside within a locus termed the plasticity cluster. A recent approach helped revise the annotation and comparison of the two sequenced H. pylori genomes [31] and reclassified the coding sequences. Based on this study, the total number of hypothetical proteins was reduced from 40% to 33%. A large amount of size variation was also discovered between orthologous genes, mostly due to natural polymorphisms arising as a result of natural transformation and free recombination within the H. pylori chromosome. Recombinational events including the presence of insertion elements, pathogenicity islands, horizontally acquired genes (restriction recombination genes), mosaics and chromosomal rearrangements were frequently annotated in subsequent bioinformatics-based attempts. It has been argued that such diversity is a result of a lack of direct competition between strains, even when resident within different individuals within the same community [32]. However, a recent study has demonstrated that integration of foreign gene fragments acquired via natural transformation is often prevented by the well-developed restriction-modification systems in the H. pylori genome [33].
It has been demonstrated that H. pylori has extensive, nonrandomly distributed repetitive chromosomal sequences, and that recombination between identical repeats contributes to the variation within individual hosts [34]. That H. pylori is representative of prokaryotes, especially those with smaller (<2 megabases) genomes, that have similarly extensive direct repeats suggests that recombination between such direct DNA repeats is a widely conserved mechanism to promote genome diversification [33]. In addition, although H. pylori has been termed a panmictic organism [11,35], it is surprising that clonal lineages within H. pylori populations exist [36][37][38][39]. Recent reports demonstrated that H. pylori in some populations shows a clonal descent and suggest that a large array of H. pylori clonal lineages co-exist, which evolve in isolation from one another [14]. Moreover, in certain parts of the world H. pylori isolates have been shown to exhibit little genetic heterogeneity, based on fingerprint profiles [17,40]. Functional genomics, utilising microarray technology, has provided researchers with a powerful tool to investigate the genetic diversity of clinical isolates [41], the transcriptional profiles of isolates grown under different conditions [42], the identification of strain-specific and species-specific genes [29,30] and the diversity between strains giving rise to differing clinical illnesses [43]. One of the interesting findings using microarray-based genotyping has been the discovery that H. pylori isolates undergo 'microevolution' and give rise to sub-species during prolonged colonisation of a single host [44][45][46]. The presence of stable sub-species within a single individual suggests an adaptation of a H. pylori population to specific host niches, facilitated by unknown advantages conferred to them by select plasticity region genes. Bjorkholm et al. [45] demonstrated that several loci differed within two genetically related isolates from the same host, one major difference being the presence of the cag pathogenicity island (cag PAI) in one isolate and not the other. As the cagA gene and cag PAI are principal virulence factors within the strains, the excision or abrogation of the cag PAI within a strain may indicate that attenuating the virulence of a strain could be a favourable adaptation.
The conundrum of strain diversity: how many more genome sequences do we need to understand this bug?
Within bacterial populations, genome content may not be fixed, as changing selective forces favour particular phenotypes; however, organisms well adapted to particular niches may have evolved mechanisms to facilitate such plasticity. The highly diverse H. pylori is a model for studying genome plasticity in the colonization of individual hosts. For H. pylori, neither point mutation, nor intergenic recombination requiring the presence of multiple colonizing strains, is sufficient to fully explain the observed diversity.
The two H. pylori genomes sequenced to date are each from ethnic Europeans, and genomic comparisons modelled on these data are sufficient to identify novel loci from new strains, especially from understudied Asian populations. However, these genome sequences may not be fully representative of the entire diversity of the gene pool. Identification and characterization of such loci which are more abundant in the Asian gene pool may lead to newer insights into the mechanisms of H. pylori colonization, carriage and virulence in the countries of Asia which are more seriously under threat from H. pylori. Therefore, future high throughput efforts involving a large number of strains are clearly needed. Taking the Indian example for instance, according to the Ethnologue database http://www.ethnologue.com, there are about 1683 languages and dialects ('mother tongues') in this country and H. pylori diversity therefore can be assumed to coincide with this figure. So one has to roughly look at the inter-strain genomic diversity contributed by approximately 1683 different strains representing each dialect and or a community.
Nonetheless, genotypic data from each geographic area or a community is extremely vital and might constitute a missing piece of a large, biologic jigsaw puzzle.
Natural competence and transformation
Independent of the other two pathogenesis-associated type IV systems, H. pylori harbours a dedicated type IV apparatus, the comB gene cluster [47], linked to natural transformation and competence. The comB gene cluster is essential for the bacterium to take up plasmid and chromosomal DNA during natural transformation. To identify the genes essential for natural transformation competence in H. pylori, a genetic approach of transposon shuttle mutagenesis was used and the comB locus was located, consisting of orf2 and comB1-comB3 [48,49]. This cluster contains four tandemly arranged genes, ORF2, comB1, comB2 and comB3, as a single transcriptional unit. Subsequently, the components of the comB cluster, namely Orf2, comB1, comB2 and comB3, were renamed (according to homology with the Agrobacterium tumefaciens type IV secretory apparatus) as comB7, comB8, comB9 and comB10, respectively (Figure 1). Another ORF in HP26695, HP0017, was found to be homologous to the virB4 gene of the Agrobacterium tumefaciens type IV secretory apparatus and was named comB4 [50]. From this study it also appeared that each of the gene products of ORFs comB8 to comB10 was absolutely essential for the development of natural transformation competence. It appears that the comB transformation apparatus has evolved conservatively and is typically present in all the strains. This conservation is interestingly in agreement with the need for genomic fluidity in H. pylori, where deletions and rearrangements due to natural transformation and transposition are the norm. This is therefore necessary for the pathogen to keep the gene content flexible and as diverse as possible, probably to acclimatise itself to diverse host niches during the process of infection. Both these systems, the cag-PAI encoded type IV export system and the transformation-associated type IV system, seem to act completely independently, since the deletion of one system from the chromosome apparently does not affect the function of the other system.
Pathogenic apparatuses
The cag pathogenicity island
Molecular analysis of bacterial transport has been attempted in several bacterial pathogens. Among such transport systems, type IV secretion systems have been described in greater detail in diverse bacteria. In H. pylori, 3 different kinds of type IV secretion apparatuses have been identified. The first such secretion system identified in H. pylori was the one comprised of 29 genes encoding the cag pathogenicity island (cag-PAI). One of the principal virulence factors of H. pylori, the cagA antigen, is contained in the 40 kb cag-PAI. The tyrosine-phosphorylated cagA protein is translocated to the epithelial cells by the type IV secretion system (forming a sort of syringe-like structure) [51][52][53]. Upon tyrosine phosphorylation, the cagA protein elicits growth-factor-like stimuli in epithelial cells (hummingbird phenotype) coupled with interleukin-8 induction for the recruitment of neutrophils. Mutations in several genes of the cag-PAI interfere with tyrosine phosphorylation and induction of interleukin-8 secretion [54]. In recent studies, in order to analyse which genes of the cag-PAI are essential for cagA translocation and/or interleukin-8 induction, a complete mutagenesis of the cag-PAI was performed [55]. In general, it appears that most of the cag genes are involved in assembly and arrangement of the secretory apparatus. Five of these genes, namely HP0524 (virD4), HP0525 (virB11), HP0527 (virB10), HP0528 (virB9) and HP0544 (virB4/cagE), constitute the main apparatus of the type IV secretory system of H. pylori [56]. All these genes except HP0524 are associated with IL-8 production [55]. However, the presence of strains eliciting IL-8 responses irrespective of intactness of the cag-PAI underlines the fact that it's 'not' the only factor linked to IL-8 secretion [57].
Very recently, and for the first time, ultrastructure analysis of the surface of H. pylori 26695 has revealed a sheathed surface organelle, coded by the cag-PAI genes HP0527 (forms the sheath around the pilus needle) and HP0532/cagT (forms the base of the pilus) [58]. This structure, although uncommon, could be a special adaptation of H. pylori to the host niches, and it might mediate biological as well as transport functions of the cag-PAI encoded proteins. Computational analyses to predict the macro-molecular assemblies of such apparatuses are needed to have a more simplified understanding of the entire model of the H. pylori type IV secretion mechanism.
Link with cancer: the oncogenic cagA protein
A large-scale prospective study revealed that the risk for development of gastric carcinoma was much greater in the H. pylori-infected population than in the H. pylori-uninfected population [59]. The cagA gene of H. pylori is assumed to be partially responsible for eliciting signaling mechanisms that lead to the development of gastric adenocarcinoma. Based on the carriage of a functional cagA as a marker for the cag PAI, the H. pylori species is divided into cagA-positive and cagA-negative strains. The cagA-positive strains are associated with higher grades of gastric or duodenal ulceration and are more virulent than the cagA-negative strains [60]. Some epidemiological studies have demonstrated roles of cagA-positive H. pylori in the development of atrophic gastritis, peptic-ulcer disease and gastric carcinoma [61,62]. The cagA gene product, cagA, is translocated to the gastric epithelial cells to undergo tyrosine phosphorylation by SRC family kinases [63]. Tyrosine phosphorylation is known to occur at the EPIYA motifs on the cagA. The cagA protein upon phosphorylation binds and activates a SHP2 phosphatase that acts as a human oncoprotein. As SHP2 transmits positive signals for cell growth and motility, deregulation of SHP2 by cagA is an important mechanism by which cagA-positive H. pylori promotes gastric carcinogenesis.
Figure 1. Different outcomes of H. pylori infection. Some studies argue that eradication of H. pylori might trigger some of the worst forms of heartburn and increased acidity, leading ultimately to oesophageal cancer and/or GERD.
CagA is noted for its variation at the SHP2 binding site and, based on the sequence variation in this region, H. pylori strains might be involved in the determination of the type and severity of disease. As discussed above, East-Asian and Western forms of cagA possess the distinctly structured tyrosine phosphorylation/SHP2-binding sites, EPIYA-D and EPIYA-C, respectively [64]. Notably, the grades of inflammation, activity of gastritis, and atrophy are significantly higher in patients with gastritis who were infected with the East-Asian cagA-positive strain than in patients infected with the cagA-negative or Western cagA-positive strain [65]. Furthermore, the prevalence of the East-Asian cagA-positive strain is associated with the mortality rate of gastric cancer in Asia. Therefore, populations infected with East-Asian cagA-positive H. pylori are at greater risk for gastric cancer than those infected with Western cagA-positive strains.
Among Western CagA species, the number of EPIYA-C sites directly correlates with levels of tyrosine phosphorylation, SHP2-binding activity and morphological transformation [64]. Furthermore, molecular epidemiological studies have shown that the number of EPIYA-C sites is associated with the severity of atrophic gastritis and gastric carcinoma in patients infected with Western CagA-positive strains of H. pylori [66].
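Since the number of EPIYA motifs can be read directly from a CagA protein sequence, a simple motif count is often the first step in such analyses. The sketch below is a minimal, hypothetical Python example that counts EPIYA occurrences in a toy sequence; distinguishing EPIYA-C from EPIYA-D segments additionally requires inspecting the sequence flanking each motif, which is not attempted here.

```python
import re

def count_epiya(protein_sequence: str) -> int:
    """Count EPIYA (Glu-Pro-Ile-Tyr-Ala) motifs in a protein sequence."""
    return len(re.findall(r"EPIYA", protein_sequence.upper()))

# Toy fragment standing in for the C-terminal region of a CagA protein;
# this is NOT a real CagA sequence, only an illustration.
toy_cagA_fragment = "KVNKKKAGQAASPEEPIYAQVAKKVNAKIDRLNEPIYATIDDLGEPIYATIDFDE"
print(count_epiya(toy_cagA_fragment))  # -> 3
```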
The number-2 virulence determinant: vacuolating cytotoxin (VacA) of H. pylori
H. pylori has a single copy of the vacA gene. Screening of H. pylori chromosomal fragments permitted the identification of a 3864-base pair open reading frame (vacA) that encoded the vacuolating cytotoxin [67]. The sequence of the vacA gene includes a 33 amino acid signal sequence.
With the exception of a hydrophobic region at the N terminus, the mature 90-kDa protein (amino acids 34 to 842) is mainly hydrophilic [67]. The cytotoxic activity of VacA has been shown to increase substantially under acidic conditions. The VacA protein, a secreted 95 kDa polypeptide, varies between H. pylori strains in its signal sequence (alleles s1a, s1b, s1c, s2) and/or its middle region (alleles m1, m2). The different combinations of s and m regions determine the production of cytotoxic activity. Strains with the s1/m1 genotype produce high levels of vacuolating cytotoxin in vitro, strains with the s2 genotype produce an inactive toxin, and strains with the m2 genotype produce a toxin with a target cell specificity different from that of the m1 genotype. Genotypic variations in vacA gene structure specific to a geographic locale have been recognised: the vacA m1a allele is specific for European strains [16], the vacA m1b genotype is typical of Asian strains [68], and yet another signal region genotype, s1c, is characteristic of East Asian strains [69]. Among other functions, VacA selectively inhibits the invariant chain (Ii)-dependent pathway of antigen presentation mediated by MHC class II and might induce apoptosis in epithelial cells. VacA, so far mainly regarded as a cytotoxin of the gastric epithelial cell layer, now turns out to be a potent immunomodulatory toxin targeting the adaptive immune system. Thus, in addition to its well-known vacuolating activity, VacA has been reported to induce apoptosis in epithelial cells, to affect B lymphocyte antigen presentation, to inhibit the activation and proliferation of T lymphocytes, and to modulate the T cell-mediated cytokine response.
The plasticity region cluster
A fascinating genomic landmark, identified in the post-genomic era in both sequenced strains, is the region that harbours 48% and 46% of the strain-specific genes of J99 and 26695, respectively. This region is called the 'plasticity zone' [28]. Genome sequence comparisons have revealed that nearly half of the strain-specific genes fall in this zone. Recently, a new type IV secretion apparatus has been located in this plasticity zone [70]. This type IV cluster comprises 7 genes homologous to the virB operon of A. tumefaciens, carried in a 16.3 kb genomic fragment called tfs3 (Figure 1). The cluster was discovered by Kersulyte et al. through subtractive hybridization, chromosome walking and sequence homology analysis. They also tested the conservation of this island in clinical isolates and found that full-length and partially disrupted tfs3 occur in 20% and 19% of strains, respectively, from Spain, Peru, India and Japan. Although no definite role has been assigned to this cluster, it might be an unusual transposon linked to the many deletion events occurring in the plasticity region that contribute to bacterial fitness in diverse host populations by exercising flexibility in gene content and gene order. The plastic nature of H. pylori, together with the evidence of horizontal transfer of genes from other H. pylori isolates and bacterial species, could explain the ability of this organism to persist in a changing environment and why only a subset of clinical isolates exert an adverse effect on patients.
Link with cancer: are plasticity region genes involved?
The plasticity region as a whole displays certain characteristics of pathogenicity islands [71], with a relatively low G+C content (35%) compared to the rest of the genome (39%). This region is about 45 kb long in J99 and 68 kb long in strain 26695. Genomic analysis revealed the region to be highly mosaic, with a majority of the genes being transcribed, suggesting a functional role. Many of its genes also show protein-level homology to recombinases, integrases and topoisomerases [72], consistent with natural transformation and recombination.
In addition to these, many ORFs were identified as differentially expressed (JHP0927-JHP0928-JHP0931 and JHP-042-JHP0944-JHP0945-JHP0947-JHP0960). They share the same chromosomal orientation and therefore potentially represent a bacterial operon. It would be interesting to study the expression or suppression of these ORFs in strains linked to different clinical conditions. Recent studies have raised the possibility of new pathogenicity markers in the plasticity zone, although the functions of most of the putatively encoded proteins in this cluster are unknown. They are nonetheless thought to play a role in increasing the virulence capacity of H. pylori strains, either directly or by encoding factors that could lead to variance in the clinical outcome of the infection. More interestingly, it has also been noted that some of the genes of the plasticity region were co-inherited along with cagA. However, their co-association with disease status or with the severity of gastric inflammation was not established, either due to small sample size or a lack of clinical information [73]. Interestingly, a novel pathogenicity marker, JHP947, has been detected within the plasticity zone [74]. Many genes putatively linked to the development of gastric cancer have been assigned to the plasticity zone [72]. Researchers have looked for genetic markers in H. pylori strains isolated from patients with gastric extranodal marginal zone B cell lymphoma (MZBL) of the mucosa-associated lymphoid tissue (MALT) type and strains from age-matched patients with gastritis only [75]. Two ORFs were significantly associated with gastric MZBL strains over gastritis strains: JHP950 (74% v 49%) and JHP1462 (26% v 3%). JHP950 proved specific for gastric MZBL when tested against strains from patients with duodenal ulcer and patients with adenocarcinoma, in which its prevalence was 49% and 39%, respectively, and is therefore a candidate marker for gastric MZBL. Interestingly, the candidate ORF JHP950 is located in the plasticity region of the J99 genome [75,76]. In view of such findings, it can be speculated that some members of the plasticity region cluster provide a selective advantage that allows some strains to adapt to changing host niches and become more invasive. How such an advantage is gained remains to be discovered.
Do we need to eradicate H. pylori from this earth?
How long humans have carried H. pylori is still controversial. However, it is accepted that this organism has colonized humans possibly for many thousands of years, and the successful persistence of H. pylori in the human stomach for such a long period may be because the organism offers some advantages to the host. Unfortunately, H. pylori infection is in steep decline in the Western world. This is mainly due to the success rate of combination therapies and the subsequent prevention of re-infection thanks to improvements in sanitation and personal hygiene. This may seem good news to many gastroenterologists around the world, but carrying an H. pylori infection may be advantageous. A study has shown that H. pylori produces a cecropin-like peptide (an antibacterial peptide) with high antimicrobial activity [77]. A German study revealed that children infected with H. pylori were less likely to have diarrhoea than children without an infection [78], implying that H. pylori may have beneficial properties for its human hosts. Interestingly, there was a marked decline in the incidence of peptic ulcer disease and gastric cancer in the 20th century. Concurrent with this is a dramatic increase in the incidence of gastro-oesophageal reflux disease (GERD), Barrett's oesophagus and adenocarcinoma of the oesophagus in Western countries [79]. This observation led to the speculation that H. pylori may in some way be associated with these diseases and perhaps capable of preventing their onset. Studies have also shown that cagA-positive H. pylori strains have a more protective effect than cagA-negative strains [80]. The presence of cagA-positive H. pylori strains can reduce the acidity of the stomach, and it is believed that the raising of the pH by H. pylori prevents GERD, Barrett's oesophagus and adenocarcinoma of the oesophagus (Figure 1). Conversely, it has been argued that, although H. pylori may prevent these reflux-associated diseases, the risks of acquiring gastric cancer via H. pylori infection far outweigh any possible benefits it may provide [81]. However, it has been stated that, if H. pylori does provide protection from GERD, restricting anti-H. pylori treatment to only a few indications (peptic ulcer disease and MALT lymphoma) could be justified [82]. In spite of this controversy, recent reports have demonstrated a protective role for H. pylori in erosive reflux oesophagitis [83][84][85]. However, as safe and potent antisecretory drugs to prevent gastro-oesophageal reflux are available [86], it seems unjustified to rely on a dangerous organism that has been associated with outcomes as serious as carcinoma.
On the other hand, eradication is not the ultimate answer either. Some ulcers recur even after successful eradication of H. pylori in non-users of non-steroidal anti-inflammatory drugs (NSAIDs). In addition, the incidence of H. pylori-negative, non-NSAID peptic ulcer disease (idiopathic PUD) is reported to increase with time. Moreover, it appears that H. pylori-positive ulcers are not always H. pylori-induced ulcers, because there are two paradoxes of the H. pylori myth: first, the existence of H. pylori-positive non-recurring ulcers and, second, ulcers that recur after cure of H. pylori infection. To summarise, H. pylori is not the only cause of peptic ulcer disease. Therefore, it is still necessary to seriously consider the need for eradication in all cases of PUD, which may persist even after the elimination of H. pylori.
Conclusion and expert opinion
In our opinion, the intricacies of the role of H. pylori in health and disease may be fully ascertained only if we analyse the genetic diversity of the pathogen juxtaposed with the diversity of the host and the environment (food and dietary habits). A possible working hypothesis (one that we are currently nurturing) is that, among the ocean of molecular host-pathogen interactions that could potentially occur during the micro-evolution of this bacterium over long-term colonization, some could prove advantageous, with the bacterium and the host negotiating a nearly 'symbiotic' and balanced relationship. Such a 'friendship' might have taken thousands of years to develop, which may explain why this bacterium has survived for such a long time. Microbes that have long persisted in humans may be less harmful than recently emerged microbes, such as the human immunodeficiency viruses (HIV). This suggests that colonization may either be beneficial or of low biological cost to the host. In addition to the characterization of bacterial virulence apparatuses that are clearly linked to disease outcome, host responses to such factors must also be examined hand in hand, to completely ascertain the mechanisms that lead to gastroduodenal disease. For instance, polymorphisms in host immune genes such as IL-1β, TNF-α and IL-10 contribute to an elevated proinflammatory response; these polymorphisms increase the risk of atrophic gastritis and distal gastric adenocarcinoma among H. pylori-infected persons. Cancer of the stomach is a highly lethal disease, and the establishment of H. pylori as a risk factor for this malignancy calls for approaches to identify persons at increased risk; however, infection with this organism is extremely common and most colonized persons never develop cancer. Thus, screens to identify high-risk subpopulations must use high-resolution biological markers. Fortunately, this task appears greatly simplified by the availability of biological tools that could not have been imagined in the past. Genome sequences (H. pylori, human, C. elegans), quantitative phenotypes (cagA phosphorylation, oipA frame status, vacA allele status) and practical animal models (Mongolian gerbils) can be harnessed to decipher the molecular basis of H. pylori-associated malignancies, which should have direct clinical applications. It is important to gain more insight into the pathogenesis of H. pylori-induced gastric adenocarcinoma, not only to develop more effective diagnostics and treatment for this common cancer, but also to validate the role of chronic inflammation in the genesis of other tumours of the alimentary tract. | 7,583.8 | 0001-01-01T00:00:00.000 | [ "Medicine", "Biology" ] |
Cooperative Entrepreneurship Model for Sustainable Development
The main objective of this research is to contribute to the economic literature on cooperative entrepreneurship as a model for sustainable development, taking into account the special alignment of the cooperative principles (ICA) with the UN Sustainable Development Goals (SDGs). It offers new empirical evidence from Spain, based on Stakeholder Theory, about the differences between cooperatives (Coops) and Capitalist Firms (CFs) in relation to the distribution of economic value between the different stakeholders. For this purpose, panel data was analysed using the Correlated Random Effects approach. The results reveal that cooperative firms generate value for some of the stakeholders analysed, specifically for their partners and creditors, but no significant differences have been found with CFs in terms of workers and the state. In both cases, it can be inferred that the period analysed has influenced the results, since it has been found that, first, cooperatives adjust wages downward rather than dismiss workers during a recession, which is in line with previous research, and second, that their tax contribution to the state is lower because they are subject to a more favourable tax system in Spain.
Introduction
Cooperatives are values-driven and principles-based [1] enterprises, and sustainable development is therefore part of their nature. Their principles and values, such as equity, solidarity, democratic management and commitment to the environment, constitute a series of guidelines that value human beings over capital [2]. They are aligned with the Sustainable Development Goals (SDGs) of the UN 2030 Agenda [3], which address the environmental, political and economic challenges facing our society, among others; SDG 8 promotes continuous, inclusive and sustainable economic growth, full and productive employment and decent work for all, and SDG 10 aims to reduce inequalities. The main role played by cooperatives in fulfilling the SDGs has recently been recognised, in the institutional context, by the United Nations Task Force on Social and Solidarity Economy and the International Co-operative Alliance's Cooperatives Europe. In addition, it has also been demonstrated in the economic literature that cooperatives are particularly aligned with the SDGs [4,5].
Cooperative firms are configured as an optimal business alternative to meet these challenges. They are business organisations whose management is designed to benefit all stakeholders. In recent years, various studies have highlighted the value of these companies as a vehicle for improving the business sector in local areas, boosting economic development in these areas [6]. It has also been found that the very nature of cooperative firms implies socially responsible behaviour [7].
Their objectives (meeting the needs of their partners), their democratic governance (one partner, one vote) and their ownership and control (mostly belonging to the partners-workers-users) together result in cooperative firms being a model of sustainable economic development. They have a people-centred approach that differs from that of conventional capitalist firms, which seek to maximise shareholder value, are owned and controlled by capitalist investors and lack democratic governance.
These differences between the two legal structures lead to the conclusion that the economic performance of cooperative firms is more focused on the search for value for all their stakeholders, and not only shareholder value. They are different, therefore, from conventional capitalist firms, that aim to maximise their value for shareholders-although, in keeping with the growing interest in corporate social responsibility and business ethics, many do so without neglecting other stakeholders [8].
The objective of this study is to test whether cooperative firms generate value for both their shareholders and the rest of their stakeholders. With this aim, a sample of worker and service cooperative firms (Coops) has been compared with a sample of Limited Liability Companies (LLCs). In this way, unconventional capitalist companies, from the social economy or third sector, represented by cooperatives (Coops), are compared with conventional Capitalist Firms (CFs) represented by LLCs. In this study, these firm structures were defined in accordance with Spanish legislation (Capital Societies Act for Limited Liability Companies and the Co-operative Act for cooperative firms).
To achieve this objective, the work has been structured as follows: the next section contains a review of the economic literature comparing the two types of company, discussing their main economic differences; the section after that sets out a description of the methodology used for the panel data and the panel data itself; to conclude, the results and their practical implications are discussed.
Literature Review
In the economic literature, specific differences between cooperatives and conventional firms have been debated in relation to various aspects, such as their objectives as a firm, the use of surpluses and the capital structure [2], but the results of this research are not conclusive for all business aspects. On different occasions, it has been found that the cooperative values and principles that make cooperative firms different from other business structures guarantee that the interests of all stakeholders are considered.
Cooperatives are organized to engender and sustain multiple benefits for the involved stakeholders and members, while contributing to local sustainable development [9,10]. Cooperatives can play an important role in the implementation of SDGs [11], because the seven cooperative principles contribute to it. Voluntary and Open Membership (First Principle) can contribute to the elimination of poverty (SDG 1) and enforces gender equality (SDG 5); Democratic Member Control (Second Principle) helps reduce inequalities (SDG 10); Member Economic Participation (Third Principle) facilitates the reduction of inequalities (SDG 10) and creates decent work and economic growth (SDG 8); Education, Training and Information (Fifth Principle) can contribute to improving education (SDG 4) by allocating resources endowed with the obligatory reserves destined for this purpose, and Concern for Community (Seventh Principle) works for the sustainable development of communities (SDG 11).
Bretos and Marcuello [12] conduct a critical review of the literature that concludes with the possible strengths of cooperatives, based on their principles, as strategic elements to achieve sustainable economic development and greater social cohesion.
There are different types of cooperatives, such as consumer, credit and education cooperatives, but we segment the sample into service cooperatives and worker cooperatives. Both follow similar strategies and in both the partners mainly contribute with their own work.
Therefore, cooperatives are models that incorporate a desire to serve the stakeholders in their mission [13]. As a result, this study aims to test whether the cooperative model can represent a factor in value creation for the four stakeholders: shareholders, workers, state and creditors.
In relation to the objective of the firm, the most widely held view [14,15] is that cooperative firms pursue the maximisation of net income per worker, while CFs aim to maximise profit. This difference in behaviour has been questioned [16] by different authors. Park et al. [17], for example, consider that worker ownership of capital is associated with higher productivity, but that in reality the higher survival rate of these companies is due to greater employment stability rather than to high productivity, financial strength or flexible compensation. According to Dow [18], in the short term, worker-managed firms demonstrate inefficient behaviour by establishing an average income per partner-worker that is not in keeping with that offered in the labour market, and this influences worker productivity and the firm's competitiveness.
As recognised by Burdín [19], the most appropriate definition of the goals pursued by cooperative firms is still a controversial issue. The results of this research support the idea that worker-managed firms pursue mixed objectives, placing importance on both employment and income per worker.
However, the traditional business model, the CF, where the ultimate strategic objective is to maximise the profit or value creation for the shareholder, is being replaced by a socio-economic model in which the objective is to create value for all stakeholders. So, the firm's objective in relation to maximising value for all its stakeholders, "despite its significant analytical contribution, remains fairly vague and difficult to measure" [20]. Nevertheless, the stakeholder approach is a valuable explanatory tool to address how firms can generate a broader positive impact at the social level [13].
In relation to creditors, different studies [21,22] have shown that cooperatives tend to be more indebted than CFs, for different reasons, as pointed out by Parliament et al. [23]: because they have more problems with funding themselves and resort more to external financing; because they do not have access to the financial markets to capitalise their securities; and because they are more likely to incur moral hazard derived from common responsibility and risk sharing.
However, recent analyses such as that of Atienza et al. [24], which compare cooperatives with conventional capitalist firms, have found no significant differences in their financial solvency ratios nor greater indebtedness for cooperatives. The empirical evidence is therefore inconclusive.
In relation to the contribution to the state, this depends significantly on the tax policy of each country, so the institutional environment is decisive when measuring the effects of this impact. Thus, in the case of Spain, cooperative firms enjoy tax benefits to which CFs do not have access, and this applies to different taxes, including the Corporation Tax, with a lower tax rate, tax exemptions and reduction of the social security payable. These are regulated in Spanish Law 20/1990 on the tax regime for cooperatives. This implies that the tax contribution of cooperatives to the generation of value is usually less than that of the comparable CFs.
Beyond the differences between the two types of firm, the literature has widely recognised that cooperatives have a positive influence in socio-economic terms on the people involved in the business and the region where they are located [2]. Different institutions (World Bank, UN) have highlighted the benefits of the cooperative firm as a model of sustainable development [25]. However, the issue of whether the value generated by the cooperative for all its stakeholders differs significantly from that generated by a conventional capitalist firm has not been empirically tested.
Data Collection
The data have been obtained from the Orbis database [26] for the period 2013-2017 by extracting two samples, one of worker and service cooperative firms and another of Limited Liability Companies. In both cases, companies in the primary sector have been eliminated (due to their limited number), along with those involved in banking and insurance activities (due to their special characteristics). The samples have been cleaned to retain only the firms with all data available in the reference years, eliminating those with negative value added (value added is the difference between total sales revenue and the total costs incurred to obtain that revenue, excluding depreciation, interest, taxes and salary costs). In terms of age, both the CFs and the Coops have been operating on average for more than twenty years, so most can be considered mature companies in their sector. Finally, since most of the cooperatives had fewer than 50 workers, we have used this criterion to standardise the two samples. The final sample therefore includes 512 firms, split into n = 393 CFs and n = 119 Coops. Table 1 shows the firms classified according to sector of economic activity; in both cases the majority belong to the services sector rather than the industrial sector. Table 2 shows the distribution of companies by size. In both samples, micro-enterprises with fewer than ten workers make up the majority, compared to small companies, which account for 28.3% of the CFs and 41.2% of the Coops.
Dependent Variables
According to Poulain-Rehm and Lepers [20], the distribution of value added in the accounting sense was adopted as a dependent variable because "it is possible to measure the value allotted using financial documents (as employees, creditors, the State and shareholders) … despite the bias concerning the measured objective of the created value". VA is considered to be stakeholder value [27] because it is distributed to the employees, state, creditors and shareholders and used for the self-financing of the firm. Specifically, for shareholders (DIVVA), the log of the ratio of dividends over value added is used; for employees (PCVA), the log of the ratio of staff costs to value added is used; for creditors (INTVA), the log of the ratio of interest and similar payments over value added is used; and for the state (TAXVA), the log of the ratio of tax over value added is used.
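As an illustration, these dependent variables can be derived from standard accounting items roughly as in the sketch below. It is a minimal pandas example with hypothetical column names (dividends, staff_costs, interest, tax, value_added), not the authors' actual data-preparation script.

```python
import numpy as np
import pandas as pd

# Hypothetical firm-year panel; column names are illustrative only.
df = pd.DataFrame({
    "firm_id":     [1, 1, 2, 2],
    "year":        [2013, 2014, 2013, 2014],
    "dividends":   [10.0, 12.0, 0.5, 0.8],
    "staff_costs": [60.0, 62.0, 30.0, 31.0],
    "interest":    [5.0, 4.5, 2.0, 2.2],
    "tax":         [8.0, 9.0, 1.0, 1.2],
    "value_added": [100.0, 110.0, 40.0, 42.0],
})

# Share of value added allotted to each stakeholder, in logs,
# following the DIVVA, PCVA, INTVA and TAXVA definitions above.
for name, col in [("DIVVA", "dividends"), ("PCVA", "staff_costs"),
                  ("INTVA", "interest"), ("TAXVA", "tax")]:
    df[name] = np.log(df[col] / df["value_added"])

print(df[["firm_id", "year", "DIVVA", "PCVA", "INTVA", "TAXVA"]])
```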
Explanatory Variables
The following were selected as explanatory variables on the basis that previous research has found them to be relevant [20,[28][29][30]. The main independent variable is a dichotomous variable that takes the value 0 if the firm is a CF and 1 if it is a Coop. Financial and economic variables: financial risk (Leverage) and investment policy (Inv), both also measured in logarithmic form. Control variables: size and sector are included as dummy variables (see Table 3). Table 4 summarises the main descriptive statistics for the numeric variables used in the models to be assessed. All the economic variables analysed are higher on average for the CFs than for the Coops, except for the INTVA and DIVVA variables.
Hypotheses and Methods
To test the purpose of this research, namely that the VA generated by the cooperative for all its stakeholders differs significantly from that generated by a conventional capitalist firm, two hypotheses are proposed:
Hypothesis 1 (H1). Coops positively influence shareholder value added.
Hypothesis 2 (H2). Coops positively influence stakeholder value added.
To do this, four models have been proposed (see Figure 1): the first, the DIVVA model, aims to test the first hypothesis, while the other three models, PCVA, TAXVA and INTVA, aim to test the second hypothesis. To do so, the study uses the panel data methodology known as Correlated Random Effects (CRE). This is an alternative to Fixed Effects (FE) that still allows unobserved effects to be correlated with the observed explanatory variables [31,32]. CRE is considered a good method when the response variable is a fraction or proportion [33]. CRE offers some advantages, given that "a decomposition within and between effects in a single model increases flexibility in model setup because it combines advantages of FE and RE models" [34]. The result of the Hausman test in the four models proposed rejects the null hypothesis, thus indicating that we should use the FE estimator. However, the models include three time-invariant variables, Type, Sector and Size, and FE cannot estimate the effects of these variables. Following Wooldridge [31], the CRE approach provides a way to include time-constant explanatory variables in what is effectively a fixed effects analysis.
We therefore use CRE as a technique allowing us to include the average of the time-varying explanatory variables (MLeverage and Minv) in a regression model with random effects (RE) in such a way that the coefficients associated with the variables are a consistent estimate of the fixed effects, while the coefficients of the averages control the correlation between the error term and each of the explanatory variables [31].
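To make the estimation strategy concrete, the sketch below shows one way a CRE (Mundlak-type) specification can be set up in Python: firm-level means of the time-varying regressors are added to a random-intercept model, so that the coefficients on the original regressors mimic fixed-effects estimates while time-invariant variables such as Type remain estimable. This is a minimal illustration on synthetic data with hypothetical variable names, omitting the Size and Sector dummies used in the paper; it is not the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 50, 5

# Hypothetical balanced firm-year panel (illustrative data only).
df = pd.DataFrame({
    "firm_id": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2013, 2013 + n_years), n_firms),
    "Type": np.repeat(rng.integers(0, 2, n_firms), n_years),   # 0 = CF, 1 = Coop
    "Leverage": rng.normal(size=n_firms * n_years),
    "Inv": rng.normal(size=n_firms * n_years),
})
df["DIVVA"] = 0.3 * df["Type"] + 0.2 * df["Leverage"] + rng.normal(scale=0.5, size=len(df))

# Mundlak device: firm-level means of the time-varying regressors.
df["MLeverage"] = df.groupby("firm_id")["Leverage"].transform("mean")
df["MInv"] = df.groupby("firm_id")["Inv"].transform("mean")

# Random-intercept (random effects) model including those means: the CRE
# specification, whose Leverage/Inv coefficients mimic fixed-effects estimates
# while the time-invariant Type variable remains estimable.
cre = smf.mixedlm("DIVVA ~ Type + Leverage + Inv + MLeverage + MInv",
                  data=df, groups=df["firm_id"]).fit()
print(cre.summary())
```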
Results and Discussion
The CRE results for the four models are shown in Table 5. It can be observed that H1 is accepted (model 1 DIVVA) and H2 is rejected, except for the creditors group (model 4 INTVA). The four models have shown a good fit and explanatory power. Some results are similar to those in the empirical evidence to date [30,35]. The results of model 1 show that cooperatives have a positive and significant influence on the DIVVA variable, therefore allowing us to accept Hypothesis 1: Coops positively influence shareholder value added. However, we can only partially accept the second hypothesis, because the results detect a significant and positive influence of cooperatives on the creditors group (model 4 INTVA) but not on the rest of the stakeholders analysed, that is, employees (PCVA) and the state (TAXVA).
In the case of model 2, the results may be due to the fact that the period analysed includes years of economic recession and, as previous empirical evidence has shown, cooperatives adjust wages downward rather than dismiss workers [24]. The specific nature of cooperatives leads these firms to prioritise the preservation of jobs over the maintenance of high profits, which is why they tend to reduce salaries and working hours instead of dismissing workers [2,36]. Burdín and Dean [37] demonstrate that capitalist firms and worker cooperatives use different wage and employment adjustment mechanisms: the crisis negatively affected both wages and employment, but the employment adjustment was larger in capitalist firms than in worker cooperatives.
In the face of economic shocks, employees are more willing to accept changes in their working conditions and to find ways to reduce costs and enhance their relationship with their customers. These measures are necessary to increase the cooperatives' chances of survival and to maintain their employment levels [38]. The greater flexibility of cooperatives to adjust hours worked rather than employee numbers confers greater labour stability [36,39]. Cooperatives are more selective at the time of hiring and more reluctant at the time of firing, which reduces the incentive to expand operations simply to take advantage of the opportunities in expansive phases, but also makes them less vulnerable to contractions [40].
The results of model 3 are consistent with the fact that cooperative firms in Spain are fiscally protected and, therefore, when their earnings are equal to those of their conventional capitalist equivalents, they pay less Corporation Tax. In relation to this issue, it is also worth noting that Marín-Sánchez [41] states that Spanish legislation makes it possible for cooperatives to treat their financial information in such a way that the tax benefits resulting from alternating financial years with losses and financial years with profits are maximised. In this way, the increase in cooperatives with losses during the recession seen in the study period could be managed with accounting and fiscal mechanisms that make alternating profits and losses a viable and legal strategy.
In relation to model 4, we observe a positive and significant influence of cooperative firms on the INTVA variable. Cooperative firms have traditionally used external funding to finance their investments, in part due to the lack of implicit incentives to finance themselves with their own resources. This arises as a result of their specific nature, from factors such as: the role of partner-workers; limits on the raising of capital by equity partners; the inability to sell shares; and limits on the recovery of the obligatory reserve funds that characterise this legal structure in Spanish legislation.
Finally, the size variable has a negative influence in models 1, 3 and 4, in favour of SMEs compared to micro-enterprises, and similar results were found for Employee Owned Firms in previous studies [30]. Similarly, the activity sector was significant in models 1 and 4, in favour of the service sector, and in model 3, in favour of the industrial sector. The impact of investment policy appears significant and positive in models 2 and 4, which implies that greater investment in assets has positive effects both on the generation of value for workers, via higher remuneration, and on financial interest, via greater indebtedness, leaving only a residual for taxes and shareholders in the short term, because the effects of investment policy on shareholders are usually observed in the long term [20]. Meanwhile, the Leverage variable has a negative influence on taxes and a positive influence on both workers (model 2) and creditors (model 4).
Conclusions
The traditional business model, the CF, where the ultimate strategic objective is to maximise profit and create shareholder value, is being replaced by a socio-economic model where the objective is to create value for all stakeholders. This results in direct effects on the society in which the company operates. In this sense, the cooperative model becomes of great importance for sustainable development, in accordance with its cooperative values and principles, and becomes a benchmark for social innovation-not only because at the economic level it contributes to income generation, the democratisation of ownership and efficiency in the use of resources through economies of scale, but also because it is important for rural sustainable development and the survival of local territories, becoming an instrument for the empowerment of the population as managers of their own progress and development.
According to Ferruzza et al. [42], SDG 8 best represents the idea of a need for a new development model that combines economic growth while ensuring inclusion and fairness in the distribution of economic resources and guaranteeing decent working conditions. Cooperatives must therefore adopt a sustainable development strategy that can represent an effective alternative to the dominant model, capable of comprehensively responding to today's challenges [43]. Cooperative values and principles, that make Coops different from other enterprises, contribute to the achievement of some SDGs, ensuring that the interests of all stakeholders are taken into account. Moreover, as their members assume multiple stakeholder roles (partners, suppliers, customers, workers, etc.), Coops will pursue the needs of these stakeholders and even count on their active participation [4].
In this study we have tested whether the cooperative model may represent a factor that supports value creation for the stakeholders analysed: shareholders, workers, state and creditors. We have done so by offering new empirical evidence about the differences with Capitalist Firms. The results do not confirm that the business behaviour of cooperative firms positively influences the distribution of value in favour of all the stakeholder groups analysed. They do demonstrate its positive influence on shareholders and creditors, but this does not apply to workers and the state. In the latter case, the results are consistent with cooperative firms in Spain being fiscally protected and, therefore, when their earnings are equal to those of their conventional capitalist equivalents, they generate less value via taxation. The model's results, which reflect the distribution of value in favour of the workers, refute some of the theories to date [16]. However, one explanation for this can be found in the economic cycle analysed (2013-2017), that was partly affected by the economic crisis. As previous empirical evidence has shown, cooperatives adjust wages downward rather than dismiss workers [24]. With this in mind, this analysis provides one important insight: cooperatives operate in line with their purpose and in keeping with the legal and fiscal institutional structure.
However, the results of this study should be treated with caution, because they are subject to some limitations. In relation to the methodology, the database is not balanced and the data is based solely on accounting figures, although, as Harris and Fulton [44] point out, some of the benefits of cooperatives are not reflected directly in the business accounts. Therefore, as Lazcano et al. [45] suggested, there is a need to standardise the social accounting to demonstrate and understand the value of social economy companies. Similarly, Parliament et al. [23] argue that this approach does not include non-market dimensions and objectives that are inherent to cooperatives in line with their own principles. To detect social benefits for a wide spectrum of stakeholders, other items should have been evaluated through interviews or questionnaires, and it is expected that these will be developed in future research. | 5,619.6 | 2020-07-07T00:00:00.000 | [ "Economics", "Business" ] |
Suberoylanilide Hydroxamic Acid Treatment Reveals Crosstalks among Proteome, Ubiquitylome and Acetylome in Non-Small Cell Lung Cancer A549 Cell Line
Suberoylanilide hydroxamic acid (SAHA) is a well-known histone deacetylase (HDAC) inhibitor and has been used as a practical therapy for breast cancer and non-small cell lung cancer (NSCLC). It has previously been demonstrated that SAHA treatment can extensively change the profile of the acetylome and proteome in cancer cells. However, little is known about the impact of SAHA on other protein modifications and about the crosstalk among different modifications and the proteome, hindering a deep understanding of SAHA-mediated cancer therapy. In this work, using the SILAC technique, antibody-based affinity enrichment and high-resolution LC-MS/MS analysis, we investigated the quantitative proteome, acetylome and ubiquitylome, as well as the crosstalk among the three datasets, in A549 cells in response to SAHA treatment. In total, 2968 proteins, 1099 acetylation sites and 1012 ubiquitination sites were quantified in response to SAHA treatment. With the aid of intensive bioinformatics, we revealed that the proteome and ubiquitylome were negatively related upon SAHA treatment. Moreover, the impact of SAHA on the acetylome resulted in 258 up-regulated and 99 down-regulated acetylation sites at a threshold of 1.5-fold. Finally, we identified 55 common sites with both acetylation and ubiquitination, among which the ubiquitination level at 43 sites (78.2%) was positively related to the acetylation level.
Histone deacetylases (HDACs) are well known for their important functions in chromatin remodeling, cell cycle progression, suppression of cell migration and epigenetic regulation, achieved by turning over histone lysine acetylation in various pathophysiological conditions. Moreover, HDACs are considered important targets for cancer therapy. Therefore, HDAC inhibitors (HDACi) have emerged as practical therapies for different cancer types 1,2. Furthermore, HDACi have also been found to have potential therapeutic functions in cardiac conditions, arthritis and malaria 3. As a consequence, HDACi have drawn increasing attention in the past decade 1,2,4-6, and a variety of HDACi have been investigated, including suberoylanilide hydroxamic acid (SAHA), depsipeptide (Romidepsin) 7,8, panobinostat (LBH589) 9 and so on. Among them, SAHA is the most studied and was the first approved by the Food and Drug Administration (FDA) as an HDACi drug for the treatment of refractory cutaneous T-cell lymphomas (CTCL) 10. In addition, its activities against other solid tumor cancers such as non-small cell lung cancer (NSCLC) [11][12][13][14], breast cancer [14][15][16] and ovarian cancer 17,18 have also been confirmed.
It has previously been reported that SAHA can suppress tumor cell proliferation and differentiation and can also induce cell apoptosis and cytotoxicity 1,4,6,19,20; its therapeutic effect has therefore been well studied both as a single treatment and in combination with other small molecule inhibitors 21,22. To elucidate the effect of SAHA treatment on proteins, the expression levels of the transcriptome and proteome in response to SAHA induction have been studied. Lee et al. observed that SAHA changes microRNA expression profiles in NSCLC A549 cells and breast cancer cell lines 12. Sardiu and coworkers established a human histone deacetylase protein interaction network under SAHA treatment 23. In our previous study, the impacts of SAHA on the proteome and histone acetylome in NSCLC A549 cells were investigated, which demonstrated that SAHA altered the profile of the whole proteome of NSCLC cells and markedly increased the level of histone lysine acetylation, in keeping with its intrinsic role as an HDAC inhibitor in epigenetic regulation 13. More recently, Xu et al. found that SAHA regulates histone acetylation, butyrylation and protein expression in neuroblastoma 24. In their study, 28 histone lysine acetylation sites and 18 histone lysine butyrylation sites were detected, most of which were up-regulated upon SAHA treatment.
Despite the extensive reports of SAHA in cancer therapy and the critical alterations that SAHA treatment causes in the proteome and histone acetylome, the underlying mechanisms are poorly understood. Previously, we found that the expression levels of the global proteome and the histone lysine acetylome were both regulated by SAHA treatment, and that the alteration of the proteome may be partially attributed to the histone lysine acetylome 13. Moreover, ubiquitination, a well-known PTM, is also closely related to changes in proteome level, given its important function in protein degradation 25 and the known crosstalk between lysine acetylation and ubiquitination 26. Therefore, to reveal the relationship between ubiquitination and SAHA treatment, the global proteome, ubiquitylome and acetylome in response to SAHA treatment should all be studied. In this work, we established an integrated system combining SILAC labeling, affinity enrichment by antibodies and high-resolution LC-MS/MS for the quantitative comparison of the proteome, ubiquitylome and acetylome of A549 cells before and after SAHA treatment (Figure 1). Moreover, the crosstalk between the global proteome and the ubiquitylome, and between the ubiquitylome and the acetylome, is also studied, which may largely deepen our understanding of SAHA-dependent NSCLC therapy.
Results
Integrated strategy for quantitative proteome, ubiquitylome and acetylome. SAHA is a well-studied HDAC inhibitor (HDACi) and is considered a meaningful therapy for cancers. It has been reported that SAHA can induce changes in the expression level of the whole proteome and increase the histone acetylation level of human NSCLC A549 cells 12,13. However, alterations of the non-histone lysine acetylome have seldom been explored 27. Moreover, our results also showed that SAHA-induced proteins are closely related to ubiquitin E3 ligase protein complexes 13, which indicates that ubiquitination may also be regulated upon SAHA treatment. Therefore, the quantitative comparison of the SAHA-induced proteome, ubiquitylome and acetylome is of considerable biological significance.
In this work, we combined stable isotope labeling by amino acids in cell culture (SILAC), HPLC fractionation, highly specific pan-antibody enrichment, high-resolution Orbitrap mass spectrometry and bioinformatic analysis for the systematic quantification of the proteome, lysine ubiquitylome and lysine acetylome upon SAHA treatment in A549 cells. The integrated workflow includes 6 key steps, as shown in Figure 1: (1) stable isotope labeling of A549 cells by SILAC; (2) purification and digestion of the cell lysate; (3) HPLC separation of the extracted proteins into fractions; (4) affinity enrichment and purification of lysine-ubiquitylated and -acetylated peptides; (5) separation and analysis of the enriched peptides using nano-LC-MS/MS; (6) interpretation of the collected MS data and analysis of the proteome, lysine ubiquitylome and acetylome in terms of protein functions, pathways, interaction networks and crosstalks.
SAHA treatment changes proteome profile in A549 cells. The microRNA expression levels in different cancer cells have been observed to change after SAHA treatment 12. Furthermore, the alteration of the whole proteome of A549 cells in response to SAHA stimulation was also demonstrated in our previous study 13. In this work, 4302 proteins were identified and 2968 proteins were quantified by comparing cells with and without SAHA treatment. Among the 2968 quantifiable proteins, 1279 were changed by more than 1.3-fold (598 up-regulated and 681 down-regulated) and 817 were changed by more than 1.5-fold (365 up-regulated and 452 down-regulated). All these data are listed in Supplementary Information Table S1.
To elucidate the functional differences between down-regulated and up-regulated proteins, the quantified proteins were subjected to four types of enrichment-based clustering analyses: Gene Ontology (GO) enrichment-based clustering analysis, protein domain enrichment-based clustering analysis, KEGG pathway enrichment-based clustering analysis and protein complex enrichment-based clustering analysis (Figure S1A-F). As changing ratios of 1.3 and 1.5 have both been used as significance thresholds in many previous lung tumor or cancer proteomics studies 28-30, we performed the above clustering analyses by dividing all significantly changed proteins into four quantiles (Q1-Q4) according to their L/H ratios (Q1: <0.67, Q2: 0.67-0.77, Q3: 1.3-1.5, Q4: >1.5), to examine the biological functions of the proteins with large changing ratios (>1.5 or <0.67) or with relatively small changing ratios (1.3-1.5 or 0.67-0.77) upon SAHA treatment.
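As a rough illustration of this binning step (not the authors' actual analysis code), the quantile assignment can be expressed in a few lines of Python; the column names are hypothetical.

```python
import pandas as pd

def assign_quantile(lh_ratio: float) -> str | None:
    """Bin a SILAC L/H ratio into the Q1-Q4 quantiles described above."""
    if lh_ratio < 0.67:
        return "Q1"          # strongly down-regulated
    if 0.67 <= lh_ratio <= 0.77:
        return "Q2"          # moderately down-regulated
    if 1.3 <= lh_ratio <= 1.5:
        return "Q3"          # moderately up-regulated
    if lh_ratio > 1.5:
        return "Q4"          # strongly up-regulated
    return None              # not significantly changed

# Hypothetical table of quantified proteins with their L/H ratios.
proteins = pd.DataFrame({"protein": ["P1", "P2", "P3", "P4"],
                         "lh_ratio": [0.45, 0.70, 1.40, 2.10]})
proteins["quantile"] = proteins["lh_ratio"].map(assign_quantile)
print(proteins)
```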
GO enrichment-based clustering covering cellular compartment, biological process and molecular function was performed first (Figure S1A-C). Significant differences were observed among the different quantiles. In the cellular component category (Figure S1A), the up-regulated proteins were highly enriched in mitochondria and the endoplasmic reticulum (ER), while the down-regulated proteins were enriched in the nucleus, chromosome, ribosome, spliceosomal complex and transcriptional repressor complex, a response to SAHA treatment, which suppresses transcription. Moreover, proteins of the histone methyltransferase complex and methyltransferase complex were both enriched among the down-regulated proteins, suggesting that SAHA may decrease lysine methylation in cells. In the biological process category (Figure S1B), processes related to the regulation of the cell cycle and gene expression were enriched among proteins with low L/H ratios. The analysis of molecular functions (Figure S1C) showed that proteins involved in cofactor and coenzyme binding and catalytic activity were enriched upon SAHA treatment. Therefore, despite the differential proteome pattern, the enriched nuclear, chromosome and DNA-related terms accounted for a relatively large proportion, and these terms are extensively associated with histone lysine acetylation, reflecting the intrinsic role of SAHA as an HDAC inhibitor.
Specific domain structure is one of the major functional features in proteins. As a consequence, we next analyzed the enriched domain of those up-and down-regulated proteins induced by SAHA treatment ( Figure S1D). We observed that protein domains involved in nucleic acid-binding, OB-fold, chromo and so on were enriched in downregulated proteins, and glycoside hydrolase, EF-hand domain, linker histone H1/H5 domain H15 were enriched in up-regulated quantiles.
To identify metabolic pathways regulated by SAHA treatment, we performed a pathway enrichment-based clustering analysis by using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database ( Figure S1E). The results showed that lysosome, synaptic vesicle cycle, mucin type O-glycan biosynthesis, and fatty acid metabolism were the most prominent pathways enriched in quantiles with increased protein level in SAHA-treated cells, suggesting a role of SAHA in these pathways. In contrast, protein expression in the cellular pathways of DNA replication, RNA degradation, spliceosome, SNARE interactions in vesicular transport, and RNA transport was decreased in response to SAHA treatment.
By using the manually curated CORUM database, we performed an enrichment analysis on protein complexes (Figure S1F). Altogether, we obtained 15 complexes with significant enrichment in Q1 and 14 complexes enriched in Q4. These can be considered SAHA-regulated core complexes. For example, the anti-HDAC2 complex and the MTA1-HDAC core complex were enriched in Q1, showing the intrinsic role of SAHA as an HDAC inhibitor. Moreover, the expression levels of proteins belonging to the spliceosome, ribosome, 60S ribosomal subunit and 40S ribosomal subunit were significantly down-regulated in response to SAHA treatment. These complexes are closely associated with ribosome biogenesis and protein translation, and the decreased level of proteins in these complexes suggests non-HDACi roles of SAHA in the regulation of protein expression.
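The enrichment-based clustering described above boils down, for each annotation term and each quantile, to a contingency-table test against the background. The sketch below illustrates the principle with Fisher's exact test from SciPy on hypothetical counts; it is not the pipeline used in the paper.

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one GO term and one quantile (illustrative only):
# Q1 proteins annotated with the term, Q1 proteins without it,
# background proteins with the term, background proteins without it.
term_in_q1, q1_without_term = 18, 182
term_in_background, background_without_term = 120, 2848

table = [[term_in_q1, q1_without_term],
         [term_in_background, background_without_term]]

# One-sided test for over-representation of the term in Q1.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```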
SAHA treatment changes ubiquitylome profile in A549 cells.
Ubiquitination or ubiquitylation is one of the most important post-translational modifications (PTMs) and is formed by the covalent attachment of ubiquitin to its target proteins. It is well known for its function in targeting proteins for degradation by the proteasome 25. Moreover, it also plays critical roles in cell signaling, the immune system and tumor suppression [31][32][33]. In our previous study, ubiquitin E3 ligases were found to be closely related to SAHA treatment 13; therefore, the alteration of the protein ubiquitylome after SAHA treatment was investigated in this work.
Proteome-wide enrichment of ubiquitination is based on its distinctive di-glycine remnant (K-ε-GG). In this work, we combined SILAC, immunoaffinity enrichment with a high-quality anti-K-ε-GG antibody (PTM Biolabs) and high-resolution mass spectrometry for the quantification of protein ubiquitination in A549 cells upon SAHA treatment. Altogether, we identified 1067 ubiquitination sites on 613 proteins from A549 cells, and 1012 sites from 586 proteins were quantifiable, among which 614 were changed by more than 1.3-fold (340 up-regulated and 274 down-regulated) and 426 were changed by more than 1.5-fold (234 up-regulated and 192 down-regulated). All these data are listed in Supplementary Information Table S2.
For clustering analysis, all the quantified ubiquitination sites were also divided into four quantiles (Q1-Q4) according to L/H ratios, in the same way as described above. Then, the enrichment-based clustering analyses (Gene Ontology, protein domain, KEGG pathway and protein complex) were performed (Figure 2A-D and Figure S1). In the cellular component analysis (Figure 2A), we found that many membrane-associated proteins, in categories such as membrane part, plasma membrane, endosome membrane, vacuolar membrane, lysosomal membrane and organelle membrane, were highly enriched in Q1, which contains down-regulated Kub sites. This result indicates that ubiquitination may play important roles at cell membranes. By contrast, the up-regulated Kub proteins were focused on the nucleosome, DNA bending complex, nucleus and intracellular categories. The biological processes of ubiquitinated proteins were also analysed, as shown in Figure 2B. Proteins with down-regulated Kub sites were highly enriched in phosphorylation processes, including positive regulation of protein phosphorylation, phosphate metabolic process and phosphorus metabolic process, which may be attributed to the interaction between ubiquitination and phosphorylation 34 and indicates that SAHA treatment could be used to suppress protein phosphorylation. Moreover, proteins with down-regulated Kub sites were also focused on ion transport processes such as anion transport, cation transport and organic anion transport, which may be related to the functions of ubiquitination at cell membranes. Proteins with up-regulated Kub sites were enriched in the ubiquitin-dependent protein catabolic process. In addition, proteins with up-regulated Kub sites were also enriched in chromatin assembly, protein-DNA complex assembly and negative regulation of transcription, which suggests that ubiquitination is also related to the cell cycle and transcription and that an increased level of ubiquitination could slow down the cell cycle and transcription. The molecular function analysis is presented in Figure 2C. Proteins with ATPase activity, hydrolase activity and transporter activity were enriched in Q1 and Q2, which is consistent with the ion transport results described above. By contrast, proteins with up-regulated Kub sites in Q3 and Q4 were enriched in nucleic acid binding, DNA binding and histone binding.
In the protein domain analysis, we observed that protein domains involved in aldehyde dehydrogenase, ABC transporter and von Willebrand factor were enriched in proteins with down-regulated Kub sites, whereas zinc finger, ubiquitin-like, histone core and histone-fold domains were enriched in the up-regulated quantiles (Figure S2A).
The KEGG pathway analysis of the quantitatively changed proteins undergoing ubiquitination highlighted a number of vital pathways. The ABC transporter, NF-kappa B signaling and Ras signaling pathways were enriched in Q1 and Q2, with down-regulated Kub sites, while proteins with up-regulated Kub sites were enriched in hepatitis B, systemic lupus erythematosus and other pathways (Figure 2D). These results show that ubiquitination is highly associated with cell signaling and diseases 32,35.
Finally, the protein complex analysis was performed, as shown in Figure S2B. Proteins with up-regulated Kub sites were highly enriched in ubiquitin E3 ligase complexes, as also reported in our previous study 13. More interestingly, ubiquitination levels of proteins associated with the ribosome, 60S ribosomal subunit and Nop56p-associated pre-rRNA complex were also significantly up-regulated in response to SAHA treatment, which is the opposite of the global proteome alteration in these complexes. This phenomenon suggests that the decrease of the global proteome is negatively regulated by ubiquitination.
A protein-protein interaction network of the ubiquitylated proteins was also established using the Cytoscape software (Figure S3 and Figure 2E-F). The global network among Kub proteins is shown in Figure S3. Kub proteins were found to be highly enriched in the proteasome and the ribosome (Figure 2E-F). The proteasome is essential for the degradation of ubiquitylated proteins, and we found that HSPA8 (heat shock 70 kDa protein), a key protein in protein degradation, was ubiquitylated at multiple sites whose ubiquitination levels were all strongly increased upon SAHA treatment (Figure 2E).
Crosstalk between global proteome and ubiquitylome. Ubiquitination is well known for targeting proteins for degradation by the proteasome 25 , so the expression levels of proteins in cells may also be regulated by ubiquitination. In this work, the quantitative proteome and ubiquitylome of A549 cells upon SAHA treatment were both obtained; therefore, the interplay between the proteome and ubiquitination could be studied.
Based on the quantitative results obtained in this study, the crosstalk between the whole proteome and the ubiquitylome in A549 cells was analyzed. In our data, 343 quantified proteins also undergo ubiquitination, with a total of 663 quantified Kub sites. The quantitative ratios of the proteome and ubiquitylome upon SAHA treatment were compared as shown in Figure 3A and Supplementary Table S4; for accuracy, normalized ratios were used (shown in red). The Pearson's correlation coefficient and the Spearman's rank correlation coefficient were calculated as −0.53 and −0.46, respectively. The global proteome and ubiquitylome were therefore weakly negatively correlated, implying that, to some extent, the changing pattern of the proteome upon SAHA treatment was opposite to that of the ubiquitylome. This result demonstrates that protein expression levels are negatively regulated by ubiquitination, which is consistent with the protein-degradation function of ubiquitination.
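The correlation analysis described above can be sketched as follows. The arrays are invented placeholder values standing in for the paired log2 ratios from Supplementary Table S4; only the use of the Pearson and Spearman statistics reflects the paper.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Placeholder paired values: log2 protein-expression ratio vs. log2 ratio of a
# Kub site on the same protein (one pair per quantified Kub site).
proteome_log2 = np.array([-0.4, 0.1, 0.8, -1.2, 0.3, -0.6, 1.1, 0.0])
ubiquitylome_log2 = np.array([0.6, -0.2, -0.9, 1.4, -0.1, 0.5, -1.3, 0.2])

r, p_r = pearsonr(proteome_log2, ubiquitylome_log2)
rho, p_rho = spearmanr(proteome_log2, ubiquitylome_log2)

# Negative coefficients (about -0.5 in this study) mean that higher
# ubiquitination tends to accompany lower protein abundance.
print(f"Pearson r = {r:.2f} (p = {p_r:.3g})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
```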
SAHA treatment changes the acetylome profile in A549 cells. SAHA is a well-known HDACi and its therapeutic functions in different cancers have been extensively studied 1,2,4-6 . We previously showed that SAHA significantly increases the acetylation level at most histone Kac sites and, more interestingly, that it also strongly decreases the acetylation level of some important "histone markers" 13 . In this work, the acetylation of non-histone proteins in response to SAHA treatment was investigated by combining SILAC, lysine acetylation antibody enrichment and LC-MS/MS analysis. Our study identified 1124 acetylation sites on 551 proteins from A549 cells, among which 1099 acetylation sites from 542 proteins were quantifiable (Supplementary Information Table S3). To the best of our knowledge, this is the most comprehensive lysine acetylation dataset obtained upon SAHA treatment in A549 cells.
In this work, 91 Kac sites were identified from histones, including various histone isoforms, among which 87 sites were quantified. The acetylation levels of most histone Kac sites (66 out of 87) were increased (>1.5-fold) and 61 Kac sites were up-regulated by more than 2-fold. Only three Kac sites were down-regulated. Compared with previous results, all of the "histone markers" except one were covered by this work 13 . Moreover, H3K23ac and H4K12ac, which were previously reported to be significantly decreased upon SAHA induction, were again quantified as down-regulated, with fold changes of 0.33 and 0.61, respectively.
Apart from histone acetylation, we demonstrate in this study that the acetylation level of non-histone proteins is also increased by SAHA treatment. Among the 1012 quantified Kac sites from non-histone proteins, 296 and 258 sites were up-regulated using ratio cut-offs of 1.3- and 1.5-fold, respectively. However, 178 and 99 sites were down-regulated, with ratios below 0.77 and 0.67, respectively, upon SAHA treatment.
Enrichment-based clustering analyses were carried out to compare the functions of the proteins with up- and down-regulated acetylation levels (Figure 4A-D and Figure S4). The acetylation level was considerably up-regulated for proteins involved in the histone acetyltransferase complex, as shown by the GO analyses of cellular component and molecular function (Figure 4A and 4C). However, the acetylation level of some N-acetyltransferases was decreased (Figure 4C), which may explain the existence of down-regulated acetylation sites. Proteins associated with transcription, including the transcription factor complex (cellular component), DNA-dependent transcription and positive regulation of transcription (biological process), and transcription regulatory region DNA binding (molecular function), were all found among the down-regulated acetylation quantiles (Figure 4A-C). Moreover, proteins related to DNA replication, chromatin remodeling and gene expression were also quantified with down-regulated acetylation sites (Figure 4B). The KEGG pathway analysis showed that proteins with differentially changed acetylation levels were enriched in several important diseases such as Parkinson's disease, Huntington's disease and prostate cancer (Figure 4C). In addition, clustering analyses of protein domains and protein complexes for lysine acetylation were performed, as shown in Figure S3A and S3B.
The protein-protein interaction network of the acetylome in A549 cells was established using the Cytoscape software (Figure S5 and Figure 4E-G). The global network among Kac proteins was first obtained (Figure S5), and the Kac proteins were then clustered into multiple biological processes (Figure 4E-G). Kac proteins were found to participate in the ribosome and spliceosome, the TCA cycle and oxidative phosphorylation. Some important acetylated proteins were identified, such as NCBP1 (nuclear cap-binding protein subunit 1) and RNPS1 (RNA-binding protein with serine-rich domain 1); these two proteins act as connectors between the ribosome and the spliceosome and both showed up-regulated acetylation levels upon SAHA treatment (Figure 4E). Moreover, for the TCA cycle, 14 enzymes were quantified as acetylated, of which 9 were up-regulated and 5 were down-regulated in acetylation level (Figure 4F). In oxidative phosphorylation, 18 proteins were identified as acetylated and all of them were quantified with up-regulated acetylation levels upon SAHA treatment (Figure 4G).
Crosstalk between quantitative ubiquitylome and acetylome.
Crosstalk between ubiquitination and acetylation has been reported 26 , but direct experimental evidence has been lacking. In this work, we compared the acetylome and ubiquitylome data obtained from SAHA-treated A549 cells to study the crosstalk between acetylation and ubiquitination.
By comparing the Kac and Kub data, we identified 55 sites modified by both Kac and Kub, among which 52 sites were quantified. Moreover, there are also 110 proteins that undergo both acetylation and ubiquitination, but at different sites (Figure 5A-B). For the sites that are both acetylated and ubiquitylated, the changing patterns are the same in most cases (43 out of 52 sites), as shown in Figure 3B, which suggests that acetylation and ubiquitination at these sites are positively related. In a few proteins, however, the changing patterns of Kac and Kub are opposite, for example Histone H3 K122, Nascent polypeptide-associated complex K142 and Acyl-CoA-binding protein K77. Representative spectra of lysine sites undergoing both acetylation and ubiquitination are presented in Figure 6 and Figure S6.
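A simple way to perform this site-level comparison is sketched below; the protein names, positions and ratios are placeholders, and "concordant" simply means that the log2 ratios of Kac and Kub at the same site have the same sign.

```python
# Placeholder quantification tables: {(protein, position): log2 SILAC ratio}.
kac = {("HSPA8", 561): 0.9, ("H3", 122): 1.2, ("ACBP", 77): 0.8, ("RPL23", 4): 0.5}
kub = {("HSPA8", 561): 0.7, ("H3", 122): -0.6, ("ACBP", 77): -0.4, ("TPI1", 193): 0.3}

# Sites carrying both modifications.
shared_sites = sorted(set(kac) & set(kub))

# Proteins carrying both modifications, regardless of position.
shared_proteins = {p for p, _ in kac} & {p for p, _ in kub}

# A shared site is concordant if acetylation and ubiquitination change in the
# same direction upon SAHA treatment (both up or both down).
concordant = [s for s in shared_sites if kac[s] * kub[s] > 0]

print(f"{len(shared_sites)} shared sites, {len(concordant)} change in the same direction")
print(f"{len(shared_proteins)} proteins carry both modifications")
```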
Other researchers have also studied the crosstalk among various PTMs and drawn different conclusions. Pan et al. and Trinidad et al. reported that PTM crosstalk does not appear to be under significant natural selection 36,37 , while Gray et al. suggested that PTM crosstalk is significantly shaped by natural selection 38 . These discrepant conclusions may be related to the different PTM types studied; in addition, species and even tissue specificity may affect PTM crosstalk, as different species and tissues were used in those studies. In our study, although the number of sites modified by both Kac and Kub was limited, the changing patterns of acetylation and ubiquitination at these sites were positively related (Figure 3B). The crosstalk between the acetylome and ubiquitylome in A549 cells under SAHA treatment therefore tends to be positively correlated. However, the order of events at these sites, that is, which modification appears first and how the subsequent modification follows, remains unclear and needs further investigation.
To further explore the crosstalk between the acetylome and ubiquitylome, a protein-protein interaction network based on the Kac and Kub proteins was established, as shown in Figure 5C-F. The global overview of the network among Kac and Kub proteins was first obtained using the Cytoscape software (Figure 5C), and the Kac and Kub proteins were then clustered into multiple biological processes (Figure 5D and 5F). Kac and Kub proteins both participate in the ribosome, the proteasome and the glycolysis pathway. Some important proteins involved in these networks are both acetylated and ubiquitylated, for example RPL23 (60S ribosomal protein L23), HSPA8 (heat shock 70 kDa protein) and TPI1 (triosephosphate isomerase). These proteins may be of key importance in the SAHA treatment of A549 cells and could be selected for further biological investigation.
Discussion
The expression levels of the whole proteome were previously reported to be changed, both up and down, by SAHA treatment 13 ; however, the mechanism regulating this proteome alteration remained unexplored. Because ubiquitylated proteins are degraded by the proteasome, we previously proposed that a SAHA-mediated ubiquitination pathway is probably an unrevealed mechanism regulating the proteome in A549 cells. In this work, the quantitative proteome and ubiquitylome upon SAHA treatment in A549 cells were both obtained. We found that SAHA treatment can substantially change protein ubiquitination levels, both up and down. Finally, by comparing the quantitative results of the proteome and ubiquitylome, we revealed that the expression levels of proteins in the global proteome are negatively related to the ubiquitination levels of the same proteins (Figure 3A): when the ubiquitination level of a specific protein increases, the level of this protein correspondingly decreases. This is completely consistent with the protein-degradation function of ubiquitination. We therefore conclude that the alteration of proteome expression levels upon SAHA treatment is regulated, or at least partially regulated, by protein ubiquitination.
In this work, the quantitative acetylome of A549 cells before and after SAHA treatment was also obtained, with more than 1000 Kac sites quantified. It is the most comprehensive Kac profiling upon SAHA treatment in A549 cells to date. Moreover, 91 Kac sites were identified in histones, which is also the largest dataset of histone acetylation upon SAHA treatment. Upon SAHA treatment, the acetylation levels of almost all histone sites were greatly increased, with the largest quantitative ratio of 44.26 for histone H2B K5. However, three histone Kac sites were quantified with down-regulated acetylation levels, namely H3 K23, H4 K12 and H4 K79, two of which were already reported previously 13 . Further study should be carried out to reveal the mechanism underlying this interesting phenomenon. For non-histone proteins, acetylation was also largely increased after SAHA treatment, but not as extensively as in histones, and quite a number of proteins showed down-regulated acetylation levels. As SAHA is an inhibitor of histone deacetylases, its effect on non-histone proteins may be weaker. We found that the acetylation levels of N-acetyltransferases such as N-acetyltransferase 10, N-alpha-acetyltransferase 30 and CREB-binding protein are down-regulated by SAHA treatment, which could explain the decreased acetylation of some non-histone proteins. For example, CREB-binding protein acetylates both histone and non-histone proteins, and the decreased acetylation of histone H3 K23, H4 K12 and H4 K79 may be regulated by this enzyme.
Lastly, the crosstalk between the acetylome and ubiquitylome was analyzed. According to our results, for the sites that are both acetylated and ubiquitylated, Kac was positively related to Kub at most sites (43 out of 52), as shown in Figure 3B. As SAHA treatment largely increases the acetylation level in cells, the ubiquitination level will also increase as a result of this crosstalk. The expression of the ubiquitylated proteins is then decreased by protein degradation. As a result, the up-regulation of acetylation by SAHA treatment eventually induces the down-regulation of global proteome expression, which has also been reported previously 26 . We therefore conclude that the acetylome, ubiquitylome and global proteome interact, with positive regulation between the acetylome and ubiquitylome and negative regulation between the ubiquitylome and the global proteome.
Based on the data of this study, the relationships among the global proteome, ubiquitylome and acetylome are summarized in Figure 7. Firstly, SAHA treatment directly changes both lysine acetylation and ubiquitination levels in A549 cells; in addition, the acetylation and ubiquitination levels regulate each other through their crosstalk. Secondly, changes in protein lysine acetylation can alter the global proteome in two ways. Lysine acetylation can be divided into histone and non-histone acetylation: histone acetylation can change proteome levels through epigenetic mechanisms, as previously reported 13 , while non-histone acetylation of transcription factors can regulate proteome levels through the transcriptome. Thirdly, protein lysine ubiquitination can also change the global proteome level: proteins can be ubiquitylated by E3 ligases and then degraded by the proteasome, and transcription factors can also undergo ubiquitination and thereby influence the global proteome through the transcriptome. This is only a proposed mechanism; further experimental evidence is required for its confirmation.
In conclusion, we comprehensively investigated the effects of SAHA on A549 cells. Taking advantage of SILAC labeling, antibody-based affinity enrichment and high-resolution LC-MS/MS, we quantitatively compared the proteome, ubiquitylome and acetylome before and after SAHA treatment. SAHA was found to broadly change the proteome, ubiquitylome and acetylome of A549 cells. With the help of bioinformatic analysis, important biological processes and functions related to SAHA were revealed. More importantly, the crosstalk among the global proteome, ubiquitylome and acetylome was also studied, which may considerably expand our current understanding of SAHA-dependent NSCLC therapy.
Methods
Stable isotope labeling and SAHA treatment in A549 cells. Non-small cell lung cancer (NSCLC) A549 cells (American Type Culture Collection, ATCC; Catalog CCL-185; Manassas, VA) were maintained in DMEM SILAC medium (Invitrogen, Carlsbad, CA) supplemented with 10% FBS (Life Technologies, Grand Island, NY) at 37°C in a humidified atmosphere with 5% CO2.
Stable isotope labeling was performed as described previously 13 . In brief, A549 cells were labeled using the SILAC Protein Quantitation Kit (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. Cells were maintained in DMEM culture medium containing 10% fetal bovine serum (FBS) and supplemented with either the "heavy (H)" form of [U-13C6]-L-lysine or the "light (L)" [U-12C6]-L-lysine for over six generations. The heavy labeling efficiency was evaluated by mass spectrometric analysis and confirmed to be >97%. The cells were then further expanded in SILAC media to the desired cell number (~5 × 10⁸) in twenty 150 cm² flasks. The "light" labeled cells were treated with SAHA at a final concentration of 3 μM for 18 hours, and the "heavy" labeled cells were treated with the same volume of DMSO for 18 hours; the SAHA concentration and treatment duration were chosen according to our previous report 13 . After treatment, the cells were harvested and washed twice with ice-cold PBS supplemented with 2 μM Trichostatin A and 30 mM Nicotinamide. After snap freezing in liquid nitrogen, cell pellets were stored at −80°C. Proteins were extracted following the method described previously 13 .
In-solution digestion and HPLC fractionation. For protein reduction, dithiothreitol (DTT) was added to a final concentration of 10 mM, followed by incubation at 56°C for 60 min. Iodoacetamide (IAA) was then added to a final concentration of 15 mM to alkylate the proteins, followed by incubation at room temperature in the dark for 40 min. The alkylation reaction was quenched with cysteine (30 mM final concentration) at room temperature for another 30 min. Trypsin was then added at a trypsin-to-protein ratio of 1:25 (w/w) for overnight digestion at 37°C.
The protein digest was then fractionated by high-pH reverse-phase HPLC using an Agilent 300Extend C18 column (5 μm particles, 4.6 mm ID, 250 mm length). Briefly, peptides were first separated into 80 fractions with a gradient of 2% to 60% acetonitrile in 10 mM ammonium bicarbonate (pH 8) over 80 min. The peptides were then combined into 18 fractions for the global proteome analysis, as previously reported 39 . For the ubiquitination and acetylation analyses, no HPLC fractionation was performed.
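The pooling of 80 high-pH fractions into 18 is typically done by round-robin concatenation so that each pool mixes early-, middle- and late-eluting peptides; the exact scheme used here is not stated, so the sketch below simply assumes that common strategy.

```python
from collections import defaultdict

N_FRACTIONS = 80  # fractions collected from the high-pH reverse-phase gradient
N_POOLS = 18      # fractions actually analysed for the global proteome

# Round-robin (concatenation) pooling: fraction i goes into pool ((i-1) mod 18) + 1.
pools = defaultdict(list)
for fraction in range(1, N_FRACTIONS + 1):
    pools[(fraction - 1) % N_POOLS + 1].append(fraction)

for pool_id in sorted(pools):
    print(f"pool {pool_id:2d}: fractions {pools[pool_id]}")
```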
Affinity enrichment of lysine acetylated and ubiquitylated peptides. Prior to affinity enrichment, anti-lysine acetylation (Kac) and anti-lysine ubiquitination (Kub) antibody beads (PTM Biolabs, Inc., Hangzhou) were washed twice with ice-cold PBS. To enrich Kac and Kub peptides, 5 mg of tryptic peptides were dissolved in NETN buffer (100 mM NaCl, 1 mM EDTA, 50 mM Tris-HCl, 0.5% NP-40, pH 8.0) and incubated separately with the pre-washed antibody beads (catalog no. PTM-104 for Kac and catalog no. PTM-1104 for Kub; PTM Biolabs, Inc., Hangzhou) at a ratio of 15 μL of beads per mg of protein at 4°C overnight with gentle shaking. The beads were washed four times with NETN buffer and twice with ddH2O. The bound peptides were eluted from the beads with 0.1% TFA, collected and vacuum-dried, and then subjected to LC-MS/MS analysis.
LC-MS/MS analysis.
Peptides were re-dissolved in solvent A (0.1% FA in 2% ACN) and directly loaded onto a reversed-phase pre-column (Acclaim PepMap 100, Thermo Scientific). Peptide separation was performed using a reversed-phase analytical column (Acclaim PepMap RSLC, Thermo Scientific) with a linear gradient of 5-35% solvent B (0.1% FA in 98% ACN) for 30 min and 35-80% solvent B for 10 min at a constant flow rate of 300 nL/min on an EASY-nLC 1000 UPLC system. The resulting peptides were analyzed on a Q Exactive Plus hybrid quadrupole-Orbitrap mass spectrometer (Thermo Fisher Scientific).
The peptides were subjected to an NSI source followed by tandem mass spectrometry (MS/MS) in the Q Exactive Plus (Thermo) coupled online to the UPLC. Intact peptides were detected in the Orbitrap at a resolution of 70,000. Peptides were selected for MS/MS using 28% NCE, and ion fragments were detected in the Orbitrap at a resolution of 17,500. A data-dependent procedure alternating between one MS scan and 20 MS/MS scans was applied to the top 20 precursor ions above a threshold ion count of 2E4 in the MS survey scan, with 15.0 s dynamic exclusion. The applied electrospray voltage was 2.0 kV. Automatic gain control (AGC) was used to prevent overfilling of the ion trap; 5E4 ions were accumulated for the generation of MS/MS spectra. For MS scans, the m/z scan range was 350 to 1600. The resulting MS/MS data were searched using MaxQuant against a protein sequence database concatenated with a reverse decoy database and protein sequences of common contaminants. Trypsin/P was specified as the cleavage enzyme, allowing up to 3 missed cleavages, 4 modifications per peptide and 5 charges. Mass error was set to 6 ppm for precursor ions and 0.02 Da for fragment ions. Carbamidomethylation on Cys was specified as a fixed modification, and oxidation on Met, acetylation and ubiquitination on lysine, and acetylation on the protein N-terminus were specified as variable modifications. False discovery rate (FDR) thresholds for protein, peptide and modification site were specified at 0.01. The minimum peptide length was set to 7. All other parameters in MaxQuant were set to default values. Kac and Kub site identifications with a localization probability of less than 0.75, or matching reverse or contaminant protein sequences, were removed.
For quantification of the SILAC data (heavy/light ratio calculation), the built-in SILAC 2-plex quantification method was used (Proteome Discoverer 1.3, Thermo Fisher) with Lys-0 and Lys-6 labels, based on the ion intensities of monoisotopic peaks observed in the LC-MS spectra. To minimize the systematic errors introduced by the Bradford assay and sample mixing, normalization was performed using a multiple-point normalization strategy according to a previous report 40 . Briefly, the distributions of protein ratios were plotted with SPSS statistical software (version 12.0, IBM Company, Chicago, IL, USA) and the 5% trimmed mean values were calculated. All protein ratios were normalized against the 5% trimmed mean so that most protein ratios fell within the 1.00 ± 0.10 range. Data for SAHA-treated and untreated cells labeled with 'light' or 'heavy' amino acids were combined to identify significant changes in the levels of the proteome, acetylome and ubiquitylome. Comparisons between variables were tested by paired t-test, and p values < 0.05 were considered statistically significant.
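The 5% trimmed-mean normalisation described above amounts to dividing every ratio by a mean computed after discarding the extreme 5% of values at each tail; a minimal sketch (with invented ratios) is shown below.

```python
import numpy as np
from scipy.stats import trim_mean

def normalize_ratios(ratios, proportion_to_cut=0.05):
    """Divide SILAC protein ratios by their 5% trimmed mean.

    This centres the bulk of the ratio distribution around 1.0 (most values
    then fall within roughly 1.00 +/- 0.10), compensating for small errors in
    the Bradford assay and in sample mixing.
    """
    ratios = np.asarray(ratios, dtype=float)
    centre = trim_mean(ratios, proportion_to_cut)  # drop 5% from each tail
    return ratios / centre

# Invented raw heavy/light protein ratios before normalisation.
raw = [0.85, 0.92, 1.10, 1.05, 0.98, 3.50, 0.15, 1.02]
print(normalize_ratios(raw).round(2))
```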
Bioinformatic analysis. Gene Ontology (GO) term association and enrichment analyses were performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID). The Kyoto Encyclopedia of Genes and Genomes (KEGG) database was used to identify enriched pathways with the Functional Annotation Tool of DAVID against the background of Homo sapiens. The InterPro database was searched using the Functional Annotation Tool of DAVID against the same background. The manually curated CORUM protein complex database for human was used for the protein complex analysis. To construct protein-protein interaction networks, the STRING database was used, and functional protein interaction networks were visualized with Cytoscape. For all bioinformatic analyses, a corrected p-value < 0.05 was considered significant. Detailed descriptions of the bioinformatic analyses are provided in the Supplementary Information.
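The enrichment tests behind the GO, KEGG, InterPro and CORUM analyses all follow the same over-representation logic. The sketch below uses a plain one-sided Fisher's exact test with invented counts; DAVID's own statistic (a modified Fisher's test) and the multiple-testing correction applied to obtain the corrected p-values are not reproduced here.

```python
from scipy.stats import fisher_exact

def term_enrichment(hits_with_term, hits_total, bg_with_term, bg_total):
    """One-sided Fisher's exact test for over-representation of an annotation
    term (GO category, KEGG pathway, InterPro domain or CORUM complex) in a
    quantile, against the background of all identified proteins."""
    table = [
        [hits_with_term, hits_total - hits_with_term],
        [bg_with_term - hits_with_term,
         (bg_total - bg_with_term) - (hits_total - hits_with_term)],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Invented counts: 12 of 80 Q1 proteins annotated "plasma membrane",
# versus 60 of 1200 background proteins.
print(term_enrichment(12, 80, 60, 1200))
```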
Semantic and Syntactic Interoperability Issues in the Context of SDI
Interoperability is one of the core concepts of Spatial Data Infrastructure, because the exchange of and access to spatial data is the foremost aim of any SDI. These issues are closely related to the concept of the application schema, which plays a significant role in interchanging spatial data and information across an SDI. It is the basis of a successful data interchange between two systems, as it defines the possible content and structure of the data; it therefore covers both semantic and syntactic interoperability. These matters also appear in several issues concerning SDI, including, among others, the model-driven approach and data specifications.
Spatial data exchange through SDI involves UML and GML application schemas, which address semantic and syntactic interoperability respectively. However, working out accurate and correct application schemas may be a challenge, and faulty or overly complex schemas may undermine valid data interchange.
The principal subject of this paper is to present the concept of interoperability in SDI, especially semantic and syntactic interoperability, and to discuss the role of UML and GML application schemas during the interoperable exchange of spatial data over SDI. The discussion focuses on the European SDI and the National SDI in Poland.
Introduction
"Spatial Data Infrastructure (SDI) is a general term for the computerised environment for handling data (spatial data) that relates to a position on or near the surface of the earth. It may be defined in a range of ways, in different circumstances, from the local up to the global level" [1].
According to Longley et al. [2], over 150 SDI initiatives are described in the literature. INSPIRE is the main SDI initiative in the European Union (EU), established at the supranational level to support environmental policies and policies or activities that may have a direct or indirect impact on the environment. The INSPIRE Directive [3] set up an infrastructure for spatial information in Europe that includes metadata, spatial data sets and spatial data services; network services and technologies; agreements on sharing, access and use; and coordination and monitoring mechanisms, processes and procedures, established, operated or made available in accordance with the Directive, which means in an interoperable manner.
Interoperability, particularly semantic and syntactic interoperability, is one of the core concepts of SDI, because the exchange of and access to spatial data is the main aim of any SDI. A significant role is played here by the application schema, which is the basis of a successful data interchange between two systems: it defines the possible content and structure of the data [4].
Two types of application schema take part in the process of spatial data interchange through SDI: the first expressed in the UML (Unified Modelling Language) and the second in the GML (Geography Markup Language). In general, the UML application schema addresses semantic interoperability and the GML application schema covers syntactic interoperability. However, working out accurate and correct application schemas may be a challenging task. Many issues have to be considered, such as the regulations appropriate to a given problem or topic, and production opportunities and limitations. Besides, what if these structures are too complex? Does this affect the ability to exchange data validly? Examining the complexity and quality of these application schemas therefore seems to be an extremely important issue in the context of semantic and syntactic interoperability in SDI.
The principal subject of this paper is to present the concept of interoperability in SDI, using the examples of the European SDI and the National SDI in Poland, and to discuss the role of the UML and GML application schemas that are commonly used during the interoperable exchange of spatial data within these SDIs.
This article briefly sets out the context of further research aiming to elaborate a general methodology for examining and evaluating the quality of UML and GML application schemas and, at a later stage, quality and complexity measures for data structures expressed in UML and GML. The results of this research will primarily form the basis for guidelines and recommendations enabling the optimisation of the UML and GML application schemas currently in force in Poland.
Interoperability in the Context of SDI
The concept of interoperability has been widely defined in diverse contexts related to information technology (e.g. [5][6][7]). The most fundamental and well known definition comes from the Institute of Electrical and Electronics Engineers (IEEE) and reads as follows: the ability of two or more systems or components to exchange information and to use the information that has been exchanged [8]. According to the ISO/IEC 2382-1 [9] interoperability means the capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units.
In the geographic information (geoinformation/geomatics) domain, special attention should be given to the interoperability frameworks defined in the ISO 19100 series of International Standards and by the European Commission, which are the foundation of the functioning of the European SDI.
Interoperability according to the ISO 19100 Series
The ISO 19100 series of geographic information standards establishes the conceptual framework for interoperability of geographic information and compares it "to an interpersonal communication process by which independent systems manipulate, exchange, and integrate information that are received from others automatically" [10] (Fig. 1).
Fig. 1. Levels of interoperability
Source: own elaboration on the basis of [10], adapted from [11]
The reference model defined in the ISO 19101 [10] describes the interoperability of geographic information at the system, syntactic, structural and semantic levels. According to it, interoperability in geographic information is broken down into six layers [11]: network protocols, file systems, remote procedure calls, search and access databases, geographic information systems, and semantic interoperability.
Network protocols form the lowest layer but also the most significant one, because each layer in this decomposition depends on the layers beneath it. This level of interoperability describes basic communication between systems within a network of computers consisting of hardware and software. Network protocols, which belong to the software part, govern communication between applications and the transmission of signals across the network [10].
The next layer concerns file systems. At this level, interoperability means above all the ability to open and display files from another system in their native format [10].
Remote procedure calls, by contrast, enable users to execute programs on a remote system, regardless of its operating system [10].
Afterwards, the search and access databases layer is responsible for "the ability to query and manipulate data in a common database that is distributed over different platforms". This means "seamlessly access databases despite the locations, the data structures, and the query languages of the database management systems" [10].
Interoperability between geographic information systems (GIS) provides transparent access to spatial and temporal data, the sharing of spatial databases, and other services independently of the platform. To achieve this kind of "interoperability, real world phenomena need to be abstracted and represented using a common mechanism, services shall follow a common specification model, and institutional issues solved in an information communities model" [10].
The highest level of interoperability in geographic information is semantic interoperability that concerns the proper exchange, interpretation, understanding and use of geographic information between systems [10].
Interoperability according to INSPIRE
In line with the INSPIRE Directive [3], interoperability means the possibility for spatial data sets to be combined, and for services to interact, without repetitive manual intervention, in such a way that the result is coherent and the added value of the data sets and services is enhanced. This means that "users of the infrastructure are able to integrate spatial data from diverse sources and these retrieved datasets follow a common structure and shared semantics" [12]. Interoperability is one of the core concepts of the European SDI because it is "built on the existing standards, information systems and infrastructures, professional and cultural practices of the 27 Member States of the EU in all the 23 official and possibly also the minority languages of the EU" [12]. Therefore, "in the context of SDI interoperability is the ability to exchange and manipulate geographic information across distributed systems without having to consider the heterogeneity of the information source, e.g. format and semantics" [10]. Furthermore, SDIs "are becoming more and more linked to and integrated with systems developed in the context of e-Government" [12], which "consists of governance, information and communication technology (ICT), business process re-engineering, and citizens at all levels of government, e.g. city, state/province, national and international" [10]. A crucial element of e-government data is geographic information and its relation, in an interoperable manner, with other types of information.
The European Interoperability Framework (EIF) "defines a set of recommendations and guidelines for e-government services so that public administrations, enterprises and citizens can interact across borders, in a pan-European context" [13]. For the purpose of the EIF, interoperability is the "ability of information and communication technology (ICT) systems and of the business processes they support to exchange data and to enable the sharing of information and knowledge" [14]. Moreover, the EIF defines an interoperability model (Fig. 2) that includes [15]: a background layer (interoperability governance), four principal layers of interoperability (legal, organisational, semantic and technical), and a cross-cutting component of the four layers (integrated public service governance). Interoperability governance "refers to decisions on interoperability frameworks, institutional arrangements, organisational structures, roles and responsibilities, policies, agreements and other aspects of ensuring and monitoring interoperability at national and EU levels" [15]. One of the significant parts of interoperability governance at the EU level is the EIF, and the INSPIRE Directive in turn "is an important domain-specific illustration of an interoperability framework including legal interoperability, coordination structures and technical interoperability arrangements" [15]. An example of interoperability governance at the national level is the National Interoperability Framework [16] established in Poland, which lays down the minimum requirements for public registers, electronic information exchange, and information and communication systems.
Legal interoperability "ensures that organisations operating under different legal frameworks, policies and strategies are able to work together" [15].
Organisational interoperability is about "documenting and integrating or aligning business processes and relevant information exchanged" [15]. This layer also refers to service identification, availability and access.
Semantic interoperability ensures "that the precise format and meaning of exchanged data and information is preserved and understood throughout exchanges between parties. In the EIF, semantic interoperability covers both semantic and syntactic aspects" [15]. The first aspect "refers to the meaning of data elements and the relationship between them. It includes developing vocabularies and schemas to describe data exchanges, and ensures that data elements are understood in the same way by all communicating parties" [15]. The syntactic aspect, in turn, concerns "describing the exact format of the information to be exchanged in terms of grammar and format" [15].
Technical interoperability includes "the applications and infrastructures linking systems and services, by extension, interface specifications, interconnection services, data integration services, data presentation and exchange, and secure communication protocols" [15].
Integrated public service governance covers legal, organisational, semantic and technical layers. This jointing layer refers to "ensuring interoperability during preparation of legal instruments, organisation business processes, information exchange, services and components that support public services" [15].
Semantic and Syntactic Interoperability
A common element of the interoperability models introduced above is semantic interoperability. This layer also appears in other interoperability frameworks linked to information systems in various domains, such as healthcare, emergency management or military and defence, that are widely discussed in the literature. The majority of these models, however, are based on the Levels of Conceptual Interoperability Model (LCIM) [17]. In the current version of the LCIM, Turnitsa [18] distinguishes seven levels, including no interoperability (level 0), technical, syntactic, semantic, pragmatic, dynamic and conceptual interoperability (level 6).
Semantic interoperability is defined as the ability of two or more computer systems to automatically interpret the exchanged information meaningfully and accurately in order to produce useful results as defined by the end users of both systems [19]. A necessary precondition for achieving semantic interoperability, and any further interoperability, is syntactic interoperability. According to Krishnamurthy and St. Louis [19], two or more computer systems exhibit syntactic interoperability if they are capable of communicating and exchanging data. Specified data formats are crucial in this case; for example, the XML (eXtensible Markup Language) standard provides this kind of interoperability and it is used by the GML. Thus, in general terms and in the context of spatial data, semantics refers to the content and the meaning of information (spatial objects and their attributes), while syntax refers to the structuring or ordering of data [20].
Both semantic and syntactic issues should be considered in order to reach interoperability in SDI [22]. They are closely related to the concept of the application schema and play a key role in interchanging spatial data and information across SDI. These matters also appear in several issues concerning SDI, among others the model-driven approach, spatial data interchange and data specifications.
Data-Centric View on SDI
The CEN/TR 15449 series of Technical Reports [1,[21][22][23][24] developed by the European Committee for Standardization (CEN) offers two different approaches to SDI: data-centric view and service-centric view. The data-centric view on SDI addresses among others the concept of semantic interoperability [22] and it is related to the data that are at the heart of SDI. This perspective includes application schemas and metadata [1].
One of the general considerations for achieving interoperability according to the reference model for SDIs defined in the CEN/TR 15449-1 [1] is the use of the model-driven approach. This solution for SDI development is also promoted by the ISO 19100 series of International Standards, and it follows the concepts formulated in the model-driven architecture (MDA) defined by the OMG [25], which enables cross-platform interoperability.
In the model-driven approach, the starting point is the universe of discourse (a view of the real or hypothetical world that includes everything of interest [10]) expressed in the form of a conceptual model, which can be formally represented in one or more conceptual schemas. Such a schema, using a conceptual schema language, defines how the universe of discourse is described as data [22]. The conceptual schema language is a formal language containing the linguistic constructs required to describe the conceptual model in the conceptual schema [10]. Moreover, a conceptual schema can be used by one or more applications, in which case it is called an application schema. The application schema provides not only "a description of the semantic structure of the spatial dataset but also identifies the spatial object types and reference systems required to provide a complete description of geographic (spatial) information in the dataset" [22].
The set of principles for such a model-driven approach is supplied by the ISO 19100 suite of geographic information standards (Fig. 3). In general outline, the information is described by an application schema (a formal, implementation-independent description of semantics and logical data structures). Specifications and implementations for different techniques (e.g. a relational database, or an XML schema for data transfer) and various implementation environments (e.g. J2EE, .Net) can be obtained from the schema in a more or less automatic way [22].
Spatial Data Interchange
Access to and exchange of spatial data are the main goals of any SDI. In this context, semantic and syntactic issues become very important, specifically when spatial data are interchanged between different systems [22]. Applications (software) and users (people) should interpret data and information in the same manner, to ensure they are understood as intended by the creator of the data. In line with the ISO 19100 series of standards that support this level of interoperability, two fundamental issues need to be addressed to achieve interoperability between heterogeneous systems [4]. The first is to define the semantics of the content and the logical structures of spatial data, which should be done in the application schema. The second is to define a system- and platform-independent data structure that can represent data according to the application schema [4].
An overview of an interoperable data exchange is illustrated in Figure 4. System A wants to send a dataset to system B; consequently, system B has to be able to use the data from system A. To ensure a successful data transfer, both systems must agree on a common application schema I, on the encoding rule R to apply, and on the transfer protocol to use [4]. The application schema defines the possible content and structure of the interchanged spatial data, and thus underpins the interoperable data exchange. The encoding rule, by contrast, "defines the conversion rules for how to code the data into a system independent data structure" [4].
For the purpose of the data transfer, data are structured in accordance with the common application schema I and encoded/decoded in compliance with the principles defined in the ISO 19118 standard [4]. Data mappings (MAI and MIB) specify how the existing schema A can be converted to the application schema I and how data conforming to the application schema I can be transformed to the existing schema B [22]. If the data structures of system A or B differ from those of I, such mappings may be difficult to accomplish; if the semantics of system A or B differ from those of I, such mappings may even be impossible to achieve [22]. Hence, semantics is a very important issue.
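To make the role of the mappings concrete, the sketch below shows what a mapping from an existing schema A to the agreed application schema I might look like for a single record; all field names, values and the unit conversion are invented for illustration and do not come from any particular specification.

```python
# A record structured according to a hypothetical existing schema A.
record_a = {
    "ID_DZIALKI": "146512_3.0001.123/4",
    "POW_HA": 0.4531,
    "GEOMETRIA_WKT": "POLYGON((21.0 52.2, 21.1 52.2, 21.1 52.3, 21.0 52.2))",
}

def map_a_to_i(rec):
    """Mapping MAI: restructure a schema-A record into the common application
    schema I agreed by the sender and the receiver.

    A real mapping is derived from the UML application schema and may also
    involve unit, code-list or coordinate reference system conversions.
    """
    return {
        "inspireId": rec["ID_DZIALKI"],
        "areaValue": round(rec["POW_HA"] * 10_000, 1),  # hectares to square metres
        "geometry": rec["GEOMETRIA_WKT"],
    }

print(map_a_to_i(record_a))
```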
Application Schemas
An application schema is a conceptual schema for applications with similar data requirements [4]. As mentioned above, the application schema is the basis of a successful data transfer as it defines the possible content and structure of the exchanged spatial data. Therefore, it covers both semantic and syntactic interoperability.
Additionally, beyond providing the description of the features in the data set, the application schema also identifies the spatial object types and reference systems, as well as data quality elements [10].
Moreover, to ensure a successful result, the application schema should be accessible to both the sender and the receiver of spatial data. The International Standards in the domain of geographic information recommend that it be transferred before the data interchange proceeds. Both ends of the transaction can then prepare their systems by implementing the appropriate mappings and data structures corresponding to the application schema [4].
During the interoperable spatial data exchange, two types of application schema commonly take part: the first expressed in the UML and the second in the GML.
In line with the ISO 19100 suite of standards, the application schema used in the spatial data interchange process should be expressed in the UML conceptual schema language, in compliance with the ISO 19103 [26] and the ISO 19109 [27]. These International Standards provide a set of rules for writing the application schema properly, including the use of standardized schemas to define feature types. The UML allows data models to be presented graphically (as UML diagrams), in a form that is easily understood, especially by people. In addition, this representation is also machine-readable in the XMI (XML Metadata Interchange) format, which supports the transition to the encoding schemas.
The GML is an XML encoding based on principles specified in the ISO 19118 [4]. It was developed to provide a common XML encoding for spatial data, as well as "an open, vendor-neutral framework for the description of geospatial application schemas for the transport and storage of geographic information in the XML" [28].
The GML application schema is an application schema written in the XML Schema language in accordance with the rules specified in the ISO 19136 [28]. It also has to import the GML schema, which comprises XML encodings of a number of the conceptual classes defined in the ISO 19100 series of International Standards.
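On the receiving side, syntactic interoperability can be checked mechanically by validating a GML dataset against the GML application schema. A minimal sketch using the lxml library is shown below; the file names are placeholders, and the application schema must be able to resolve its import of the GML schemas (for example via a local copy or an XML catalog).

```python
from lxml import etree

APP_SCHEMA_XSD = "BuildingApplicationSchema.xsd"  # placeholder file names
GML_INSTANCE = "buildings.gml"

schema = etree.XMLSchema(etree.parse(APP_SCHEMA_XSD))
document = etree.parse(GML_INSTANCE)

if schema.validate(document):
    print("Instance document conforms to the GML application schema.")
else:
    # Schema violations point to syntactic interoperability problems:
    # wrong element order, missing mandatory properties, bad data types, etc.
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```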
In general conclusion, the UML application schema addresses semantic interoperability and is mainly dedicated to humans, whereas the GML application schema covers syntactic interoperability and is intended for machines and software.
Both UML and GML application schemas are widely used in the European SDI as well as at the national level in Poland. They are an integral part of spatial data specifications and relevant regulations in the form of data models.
Data Specifications
Spatial data are at the centre of SDI; they represent the real world (the universe of discourse) in an abstracted form that can be structured in data models. The ISO 19100 series of geographic information standards provides a well-defined methodology, based on conceptual modelling, for elaborating such models. A spatial data model is a mathematical construct used to formalise the perception of space [12]. A conceptual model includes semantics (concepts) that place spatial objects within the scope of the description, while an application schema adds logical structure to this semantics.
Data models are encapsulated in data specifications which, beyond these models, also contain other relevant requirements concerning the data, such as rules for data capture, encoding and delivery, as well as provisions on data quality and consistency, metadata, etc.
In the broader sense, data specification can refer to both the data product specification and the interoperability target specification in SDI. The first one is a detailed description of a dataset or dataset series used for creating a specific data product [29]. The second one is used for transforming existing data so that they share common characteristics [12].
In the case of INSPIRE, such data specifications have been developed for the 34 themes of the 3 annexes of the Directive in order to achieve interoperability in the European SDI. These guidelines can be used by the Member States of the EU to create new datasets, or to transform existing datasets according to the specifications by mapping the existing model to the model described in the data specification documents. Semantic interoperability can be attained in this manner: various datasets can be used together and be understood by different SDI users in the same way [22].
A similar approach was used in Poland to implement the INSPIRE Directive and establish the National SDI. Passing the law on the infrastructure for spatial information in Poland, which is a transposition of the INSPIRE Directive (i.e. an adjustment of the INSPIRE regulations to national law), involved the introduction of many acts and changes to related laws, among others the law on geodesy and cartography. The Head Office of Geodesy and Cartography (HOGC), the main coordinator of the creation and functioning of SDI in Poland, decided to replace the existing, often obsolete, instructions and guidelines with regulations of the Cabinet or the relevant minister, which on the one hand became annexes to the law on geodesy and cartography and on the other hand put some recommendations of the INSPIRE Directive into action.
An integral part of these regulations are the UML and GML application schemas that define the information structures of the spatial databases corresponding to each regulation. In terms of the ISO 19131 [29], these instructions are data product specifications. The aforementioned schemas were prepared according to the ISO 19100 series of International Standards in the geographic information domain, which should ensure the interoperability of spatial data sets and GIS applications in Poland. The regulations cover all legal and technical issues in the geodesy domain. This was a very ambitious challenge, particularly because the conceptual modelling methodology and the UML and GML notations in conceptual schemas, which describe the information content of databases, were applied for the first time in Poland.
Interoperability Challenges in SDI
Two approaches are possible to reach interoperability in SDI: transformation and harmonisation of spatial data. Transformation uses information and communication technologies and does not affect the original data structures, while data harmonisation is the process of modifying and fine-tuning semantics and data structures to enable compatibility with agreements (specifications, standards, or legal acts) across borders and/or user communities [12]. When technical arrangements are not sufficient to bridge the interoperability gap between the communicating systems in SDI, harmonisation is needed. In the opinion of Tóth et al. [12], the combination of these two approaches provides the best solution in SDI.
Therefore, countries participating in the creation of the European SDI "should transform or harmonise their existing datasets to match them with specifications as described and required by the INSPIRE Directive and its implementing rules" [22]. In practice, "these specifications can also be used to create new datasets or datasets series that match those requirements" [22].
In the case of Poland, the process of harmonisation required either working out new data structures or adjusting existing data structures of spatial databases to INSPIRE guidelines and recommendations.
As stated above, data structures are described with the use of UML and GML application schemas. Nevertheless, working out accurate and correct application schemas is not an easy task. Many issues need to be considered, for instance the recommendations of the ISO 19100 series of geographic information standards, the regulations appropriate to the given problem or topic, and production opportunities and limitations (i.e. software and tools).
In addition, the GML application schema is strictly connected with the UML application schema, in other words, it should be its translation. Following the ISO 19136 [28], the GML application schema can be constructed in two different and alternative ways. The first is by adhering to the rules specified in the ISO 19109 for application schemas in the UML, and conforming to both the constraints on such schemas and the rules for mapping them to the GML application schemas (according to the ISO 19136). The second is by adhering to the rules for GML application schemas (specified in the ISO 19136) for creating a GML application schema directly in the XML Schema. The first approach is commonly used in practice.
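The general flavour of the first approach, rule-based mapping from a UML feature type to a GML application schema, can be illustrated with a deliberately simplified generator. The feature type, its attributes and the namespace prefix below are invented, and the real ISO 19136 encoding rules cover far more cases (associations, code lists, geometry types, multiplicities and constraints) than this sketch does.

```python
# A simplified feature type taken from a hypothetical UML application schema.
feature_type = {
    "name": "ProtectedSite",
    "attributes": [
        ("siteName", "xs:string", False),        # (name, XSD type, nillable)
        ("legalFoundationDate", "xs:date", True),
        ("siteDesignation", "xs:string", False),
    ],
}

def to_gml_snippet(ft, prefix="ps"):
    """Render one UML feature type as an XSD fragment following the general
    pattern of the UML-to-GML encoding rules: a global element substitutable
    for gml:AbstractFeature plus a complex type extending gml:AbstractFeatureType."""
    props = "\n".join(
        f'        <xs:element name="{name}" type="{xsd_type}" '
        f'nillable="{str(nillable).lower()}"/>'
        for name, xsd_type, nillable in ft["attributes"]
    )
    return (
        f'<xs:element name="{ft["name"]}" type="{prefix}:{ft["name"]}Type"\n'
        f'            substitutionGroup="gml:AbstractFeature"/>\n'
        f'<xs:complexType name="{ft["name"]}Type">\n'
        f'  <xs:complexContent>\n'
        f'    <xs:extension base="gml:AbstractFeatureType">\n'
        f'      <xs:sequence>\n{props}\n      </xs:sequence>\n'
        f'    </xs:extension>\n'
        f'  </xs:complexContent>\n'
        f'</xs:complexType>'
    )

print(to_gml_snippet(feature_type))
```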
However, not everything that can be expressed in the UML can be represented straightforwardly in the GML, and this can have a significant influence on the interoperability of spatial data sets and GIS, and thereby on the ability to exchange data validly. Moreover, what should one do in the case of overly complex or even faulty application schemas, which are the basis of successful spatial data interchange and also determine the final structure of a database? Incorrect or overly complex data structures directly affect the ability to generate GML data sets with concrete data (objects) and can thereby cause various problems and anomalies at the data production stage. Such problems appeared in Poland not only during the adjustment of existing spatial data structures to INSPIRE guidelines and recommendations, particularly the INSPIRE data specifications, but mainly after the publication of the regulations that define data structures for the relevant spatial databases.
Some technical problems concerning the UML-to-GML transformation already appeared during the creation of these application schemas. After the regulations were published, some contractors also reported remarks about the application schemas, among others faults, mistakes or anomalies in their notation. One of the reasons may be the ambiguity of the UML-to-GML transformation [30], while another may be the excessive complexity of the elaborated application schemas. Unfortunately, these problems can affect the ability to generate GML files with spatial data, as well as the ability of GIS software to process these files.
At the HOGC, work is currently underway to detect the most problematic issues related to the existing UML and GML application schemas and to propose some improvements to optimise these schemas. In turn, at the European level, INSPIRE data specifications are revised regularly and some corrigenda or new versions of these documents are published on the INSPIRE website.
Conclusions and Summary
The deployment of SDIs facilitates the interoperability of geographic information. An SDI is meant as a networked environment (e.g. the Internet) that supports easy and coordinated access to geographic information and geographic information services [31]. It is an essential resource required for specific activities; for example, it can enhance crisis and disaster management or environmental monitoring. Data play the key role in SDI, because the exchange of and access to spatial data is the principal objective of any SDI. Standards and specifications are among the fundamentals of interoperability. In the case of the European SDI, examples of such standards are the UML and GML, which are also included in spatial data specifications in the form of application schemas. Data specifications refer to the interoperability target specification in the context of SDI.
Establishing the Infrastructure for Spatial Information in Europe requires, among other things, harmonising different spatial data sets and thereby ensuring their semantic and syntactic coherence. This process involves adjusting existing spatial data structures to INSPIRE guidelines and recommendations, especially the INSPIRE data specifications. In these documents, data structures are described with the use of UML and GML application schemas. Incorrect or overly complex data structures have a direct influence on the ability to generate GML data sets with concrete data (objects) and can thereby cause various problems and anomalies at the data production stage.
According to the CEN/TR 15449-1 [1], one of the general considerations for achieving interoperability is keeping things simple and checking the quality. For these reasons, the capability to examine and assess the quality of UML and GML application schemas, including their complexity, seems to be a worthwhile and important issue in the context of semantic and syntactic interoperability in SDIs.
As part of further research, it is proposed to develop a methodology for examining the quality of UML and GML application schemas, focusing on a number of selected application schemas prepared at the HOGC in Poland within the INSPIRE Directive implementation works, as well as application schemas from the INSPIRE data specifications. The results of this work will, first of all, provide the foundations for the elaboration of guidelines and recommendations for the optimisation of the existing UML and GML application schemas included in Polish regulations.
Employing Two “Sandwich Delay” Mechanisms to Enhance Predictability of Embedded Systems Which Use Time-Triggered Co-operative Architectures
In many real-time resource-constrained embedded systems, highly predictable system behavior is a key design requirement. The “time-triggered co-operative” (TTC) scheduling algorithm provides a good match for a wide range of low-cost embedded applications. As a consequence of resource, timing and power constraints, the implementation of this algorithm is often far from trivial, and a basic implementation of the TTC algorithm can result in excessive levels of task jitter, which may jeopardize the predictability of many time-critical applications that use it. This paper discusses the main sources of jitter in earlier TTC implementations and develops two alternative implementations, based on the employment of “sandwich delay” (SD) mechanisms, to reduce task jitter in TTC systems significantly. In addition to jitter levels at task release times, we also assess the CPU, memory and power requirements involved in practical implementations of the proposed schedulers. The paper concludes that the TTC scheduler implementation using the “multiple timer interrupt” (MTI) technique achieves better performance, in terms of timing behavior and resource utilization, than the implementation based on a simple SD mechanism. The use of the MTI technique is also found to provide a simple solution to the “task overrun” problem, which may degrade the performance of many TTC systems.
Introduction
Embedded systems are often implemented as a collection of communicating tasks [1]. The various possible system architectures can then be characterized according to these tasks. For example, if the tasks are invoked in response to aperiodic events, the system architecture is described as "event-triggered" [2,3]. Alternatively, if the tasks are invoked periodically under the control of a timer, the system architecture is described as "time-triggered" [3,4]. Since highly-predictable system behavior is an important design requirement for many embedded systems, time-triggered software architectures have become the subject of considerable attention (e.g. see [4]). In particular, it has been widely accepted that time-triggered architectures are a good match for many safety-critical applications, since they help to improve the overall safety and reliability [5][6][7][8][9][10]. In contrast with event-triggered systems, time-triggered systems are easy to validate, verify, test, and certify because the times related to tasks are deterministic [11,12].
Moreover, embedded systems can also be characterized according to the nature of their tasks. For example, if the tasks - once invoked - can pre-empt (interrupt) other tasks, then the system is described as "pre-emptive". If, instead, tasks cannot be interrupted, the system is described as "non pre-emptive" or "co-operative". Many researchers have demonstrated that, compared with pre-emptive schedulers, co-operative schedulers have numerous desirable features, particularly for use in safety-related systems [2,5,7,13,14].
The cyclic executive is a form of co-operative scheduler with a time-triggered architecture. In such "time-triggered co-operative" (TTC) architectures, tasks execute in a sequential order defined prior to system activation; the number of tasks is fixed; each task is allocated an execution slot (called a minor cycle or a frame) during which the task executes; a task - once started by the scheduler - executes to completion without interruption from other tasks; all tasks are periodic and the deadline of each task is equal to its period; the worst-case execution time of all tasks is known; there is no context switching between tasks; and tasks are scheduled in a repetitive cycle called the major cycle [15,16].
Provided that an appropriate implementation is used, TTC schedulers can be a good match for a broad range of embedded applications, even those which have hard real-time requirements [15][16][17][18][19][20][21]. Overall, a TTC scheduler can be easily constructed using only a few hundred lines of highly portable code in a high-level programming language (such as "C"), while the resulting system is highly-predictable [14]. Since all tasks in a TTC scheduler are executed regularly according to their predefined order, such schedulers demonstrate very low levels of task jitter [16,22,23] and can maintain their low-jitter characteristics even when complex techniques, such as "dynamic voltage scaling" (DVS), are employed to reduce system power consumption [20].
Despite these advantages, a careless implementation of the TTC algorithm can result in high levels of task jitter, especially at the release times of low-priority tasks. The presence of jitter can have a detrimental impact on the performance of many embedded applications. For example, [24] shows that - during data acquisition tasks - jitter rates of 10% or more can introduce errors which are so significant that any subsequent interpretation of the sampled signal may be rendered meaningless. Similarly, [25] discusses the serious impact of jitter on applications such as spectrum analysis and filtering. In embedded control systems, jitter can greatly degrade the performance by varying the sampling period [26,27]. Moreover, in applications - like distributed multimedia communications - the presence of even low amounts of jitter may result in a severe degradation in perceptual video quality [28].
The present study is concerned with implementing highly-predictable embedded systems. Predictability is one of the most important objectives of real-time embedded systems [20,[29][30][31]. Ideally, predictability means that it is possible to determine, in advance, exactly what the system will do at every moment while it is running, and hence whether it is capable of meeting all its timing constraints. One way in which predictable behavior manifests itself is in low levels of task jitter.
The main aim of this paper is to address the problem of task jitter in order to enhance the predictability of embedded applications employing TTC architectures. In particular, the paper discusses the main sources of jitter in the original TTC systems and proposes two new TTC scheduler implementations which have the potential to reduce task jitter by means of employing "sandwich delay" (SD) mechanisms [32]. These implementations will be referred to as the TTC-SD and TTC-MTI schedulers.
The remaining parts of the paper are organized as follows. Section 2 reviews basic TTC scheduler implementations and highlights their main drawbacks with regard to jitter behavior. In Section 3, we describe the TTC-SD and TTC-MTI schedulers. Section 4 outlines the experimental methodology used to evaluate the described schedulers and provides the results in terms of task jitter and implementation costs (i.e. resource requirements). We finally draw the overall paper conclusions in Section 5.
Basic Implementations of TTC Scheduler
This section describes the implementation of the "original TTC-Dispatch" scheduler [14] and discusses its main limitations.
Overview
The original TTC-Dispatch scheduler is driven by periodic interrupts generated from an on-chip timer. When an interrupt occurs, the processor executes an Interrupt Service Routine (ISR) Update function. In the Update function, the scheduler checks the status of all tasks to see which tasks are due to run and sets appropriate flags. After these checks are complete, a Dispatch function will be called, and the identified tasks (if any) will be executed in sequence. The Dispatch function is called from an "endless" loop placed in the Main code, and when not executing the Update and Dispatch functions, the system will usually enter a low-power "idle" mode. This process is illustrated schematically in Figure 1. Note that such a scheduler has previously been referred to as the TTC-Dispatch scheduler [33].
Although TTC schedulers provide a simple, low-cost and highly-predictable software platform for many embedded applications, such a basic implementation of the TTC scheduler can introduce high levels of jitter at task release times [34]. This point is discussed further below.
Task Jitter
In periodic tasks, variations in the interval between the release times are termed jitter. As previously noted, the presence of jitter can - in many systems - result in less predictable operation and have a detrimental impact on the system performance. Since our focus in this paper is on TTC schedulers, we identify the following three possible sources of task jitter in such systems.
1) Scheduling overhead variation
The overhead of a conventional scheduler arises mainly from context switching. In some systems, such as those employing DVS [20], the scheduling overhead is comparatively large and may have a highly-variable duration. Figure 2 illustrates how a TTC system can suffer release jitter as a result of variations in the scheduler overhead.
In [34], we observed that the underlying cause of this variation in the original TTC-Dispatch scheduler is the interrupt behavior. For example, when an interrupt occurs, the processor takes a fixed time to leave the "idle" mode and begin to execute the ISR Update. However, in the Update, and before calling Dispatch, the scheduler goes through the task list and identifies which tasks are due to run. The duration of these checks is not fixed if there is more than one scheduled task to run. In order to deal with this problem, a "modified TTC-Dispatch" scheduler has been developed [34]. The proposed scheduler controls the jitter in the first task (which is implicitly the "top priority" task with the hardest timing constraints) by re-arranging the activities performed in the Update and Dispatch functions. Specifically, the Update function is very short and has a fixed duration: it simply keeps track of the number of Ticks. The dispatch activities are then carried out in the Dispatch function. By doing so, we make sure that the first task in the system is always free of jitter. Note that the function call tree for the modified TTC-Dispatch scheduler is the same as that of the original TTC-Dispatch scheduler (Figure 1).
2) Task placement
Even if we can avoid variations in the scheduler overhead, we may still have problems with jitter in a TTC scheduler as a result of the task placement.
To illustrate this, consider Figure 3. In this schedule, Task C runs sometimes after A, sometimes after A and B, and sometimes alone. Therefore, the period between every two successive runs of Task C is highly variable. Such a variation can be called "schedule-induced" jitter. Moreover, if Task A and Task B have variable execution durations, then the jitter levels of Task C will be even larger. This type of jitter is called "task-induced" jitter. The original and modified TTC-Dispatch schedulers are not capable of dealing with jitter caused by the task placement.
3) Tick drift
For completeness, we also consider tick drift as a source of task jitter. In the TTC designs considered in this paper, a clock tick is generated by a hardware timer that is linked to an ISR. This mechanism relies on the presence of a timer that runs at a fixed frequency: in these circumstances, any jitter will arise from variations at the hardware level (e.g. through the use of a low-cost frequency source, such as a ceramic resonator, to drive the on-chip oscillator: see [14]).
In the scheduler implementations considered in this paper, the software developer has no control over the clock source.However, in some circumstances, those implementing a scheduler must take such factors into account.
For example, in situations where DVS is employed (to reduce CPU power consumption), it may take a variable amount of time for the processor's Phase-Locked Loop (PLL) to stabilize after the clock frequency is changed. As discussed elsewhere, it is possible to compensate for such changes in software and thereby reduce jitter (see [20]). Such techniques are not considered further in this paper.
Modified Implementations of TTC Scheduler
Our concern in this paper is with jitter caused mainly by task placement. To reduce this type of jitter, we introduce two techniques which can be incorporated into the basic TTC scheduler framework. These techniques are described below.
Adding "Sandwich Delays"
One way to reduce the variation in the starting times of "low-priority" tasks in a TTC system is to place a "Sandwich Delay" (SD) [32] around tasks which execute prior to other tasks in the same tick interval. Such a modified TTC scheduler implementation will be referred to as the TTC-SD scheduler.
In the TTC-SD scheduler, sandwich delays are used to provide execution "slots" of fixed sizes in situations where there is more than one task in a tick interval. To clarify this, consider the set of tasks shown in Figure 4. In the figure, the required SD prior to Task C - for low-jitter behavior - is equal to the estimated "worst-case execution time" (WCET) of Task A plus that of Task B. This implies that in the second tick (for example), the scheduler runs Task A and then waits for a period equal to the WCET of Task B before running Task C. The figure shows that when SDs are used, the periods between any successive runs of Task C become equal and hence jitter in the release time of this task is significantly reduced.
Note that - with this implementation - the estimated WCET of each task is input to the scheduler through a function placed in the Main code. After the task parameters have been entered, the scheduler calculates the major cycle and the required release time for each task. Note that the required release time of a task is the time between the start of the tick interval and the start of the predefined task "slot", plus a small safety margin.
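To illustrate this calculation, the sketch below derives the release offsets of the tasks sharing one tick interval from their estimated WCETs. It is written in Python purely to show the arithmetic; the real scheduler performs the equivalent calculation in its C code, and the time units and margin value here are assumptions.

```python
def release_times(task_wcets, margin):
    """Release offset of each task from the start of its tick interval.

    task_wcets: estimated WCETs of the tasks that share one tick, in execution order.
    margin: small safety margin added to each slot (the exact value is an assumption).
    """
    offsets = []
    elapsed = 0
    for wcet in task_wcets:
        offsets.append(elapsed)      # a slot starts once all earlier slots have elapsed
        elapsed += wcet + margin     # slot length is the task's WCET plus the margin
    return offsets

# Hypothetical WCETs (in microseconds) of Tasks A, B and C sharing one tick:
print(release_times([300, 200, 250], margin=10))   # -> [0, 310, 520]
```

Because each slot is sized by the WCET rather than by the actual execution time, the release offset of Task C no longer depends on how long Tasks A and B happened to run in a particular tick.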
Working with "Multiple Timer Interrupts"
Although the use of SDs can help to reduce jitter in low-priority tasks significantly, this approach does not give precise control over timing and can significantly increase the level of CPU power consumption. This is because the processor is forced to run in normal operating mode while the SD is executing. To address both problems, a modified sandwich delay mechanism that uses "Multiple Timer Interrupts" (MTI) is developed. The TTC scheduler incorporating the MTI technique will be referred to as the TTC-MTI scheduler.
In the TTC-MTI scheduler, several timer interrupts are used to generate the predefined execution "slots" for tasks. This allows more precise control of timing in situations where more than one task executes in a given tick interval. The use of interrupts also allows the processor to enter an "idle" mode after completion of each task, resulting in power saving.
To implement this technique, two interrupts are required: a tick interrupt, to generate the scheduler's periodic tick, and a task interrupt, to trigger the execution of tasks within tick intervals. The complete process is illustrated in Figure 5. In this figure, to achieve zero jitter, the required release time prior to Task C (for example) is equal to the WCET of Task A plus the WCET of Task B plus the scheduler overhead (i.e. the ISR Update function). This implies that in the second tick (for example), after running the ISR, the scheduler waits - in the "idle" mode - for a period of time equal to the WCETs of Task A and Task B before running Task C. Figure 5 shows that with the MTI technique, the periods between the successive runs of Task C (the "lowest priority" task) are always equal. This means that the task jitter in such an implementation is independent of the task placement and the duration(s) of the preceding task(s).
In fact, the method described here requires no more than two timers or one timer -with multiple channels -in total.The hardware used in this study to implement this scheduler (Section 4.1) supports multiple channels per timer, allowing efficient use of the available resources.
In the TTC-MTI scheduler, the estimated WCET of each task is also input to the scheduler through the Main code. The scheduler then calculates the major cycle and the required release time for each task. Moreover, no Dispatch function is called in the Main code: instead, "interrupt request wrappers" - which contain Assembly code - are used to manage the sequence of operations in the whole scheduler. The function call tree for the TTC-MTI scheduler is shown in Figure 6.
Unlike the normal Dispatch schedulers, the TTC-MTI implementation relies on two interrupt Update functions: Tick Update and Task Update. The Tick Update - which is called every tick interval (as normal) - identifies which tasks are ready to execute within the current tick interval. Before placing the processor in the "idle" mode, the Tick Update function sets the match register of the task timer according to the release time of the first task due to run in the current interval. Calculating the release time of the first task in the system takes into account the WCET of the Tick Update code.
When the task interrupt occurs, the Task Update sets the return address to the task that will be executed straight after this update function, and sets the match register of the task timer for the next task (if any). The scheduled task then executes as normal. Once the task completes execution, the processor enters "idle" mode and waits for the next task interrupt or tick interrupt (depending on the task schedule). Note that the Task Update code has a fixed execution duration to avoid jitter at the starting times of tasks.
Furthermore, it is worth noting that the TTC-MTI scheduler also provides a simple solution to the "task overrun" problem in TTC systems, which may - in many cases - have serious impacts on system behavior [35]. More specifically, the integrated MTI technique helps the TTC scheduler to shut down any task exceeding its estimated "worst-case execution time" (WCET) [36]. In the implementation considered, if the overrunning task is followed by another task in the same tick, then the task interrupt - which triggers the execution of the latter task - will immediately terminate the overrun. Otherwise, the task is allowed to overrun until the next tick interrupt, when a new tick will be launched. Please note that this issue will not be discussed further in this paper.
Evaluating the TTC-SD and TTC-MTI Schedulers
This section first outlines the experimental methodology used to evaluate the TTC-SD and TTC-MTI schedulers. It then presents the output results in terms of task jitter and implementation costs. Note that the results obtained from the new schedulers are compared with those obtained from the "modified TTC-Dispatch" scheduler [34] to highlight the impact of the proposed schedulers on the low-priority task jitter.
Experimental Methodology
We first outline the experimental methodology used to obtain the results presented in this section.
1) Hardware platform
The empirical studies reported in this paper were conducted using an Ashling LPC2000 evaluation board supporting the Philips LPC2106 processor [37]. The LPC2106 is a modern 32-bit microcontroller with an ARM7 core which can run - under control of an on-chip PLL - at frequencies from 12 MHz to 60 MHz [38]. The oscillator frequency used was 12 MHz, and the CPU frequency was 60 MHz. The compiler used was GCC ARM 4.1.1, operating in Windows by means of Cygwin (a Linux emulation layer for Windows). The IDE and simulator used was the Keil ARM development kit (v3.12).
2) Jitter test
For a meaningful comparison of jitter results, the following task set was used (Figure 7). To explore the impact of schedule-induced jitter, Task A was scheduled to run every two ticks. Moreover, all tasks were set to have variable execution durations to explore the impact of task-induced jitter. Note that the duration of Task A is double the duration of Task B and Task C. Also, Task A has the highest priority and Task C has the lowest priority.
Jitter was measured at the release time of each task. To measure jitter experimentally, we set a pin high at the beginning of the task (for a short time) and then measured the periods between every two successive rising edges. We recorded 5000 samples in each experiment. The periods were measured using a National Instruments data acquisition card "NI PCI-6035E" [39], used in conjunction with LabVIEW 7.1 software [40].
To assess the jitter levels, we report two values: "average jitter" and "difference jitter". The difference jitter is obtained by subtracting the minimum period from the maximum period obtained from the measurements in the sample set. This jitter is sometimes referred to as "absolute jitter" [23]. The average jitter is represented by the standard deviation of the measured periods. Note that there are many other measures that can be used to represent the levels of task jitter, but these measures were felt to be appropriate for this study.
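Both jitter measures can be computed directly from the recorded periods; a minimal Python sketch of this post-processing is shown below (the sample values and units are assumptions, not measurements from the study).

```python
import statistics

def jitter_metrics(periods):
    """Compute the two jitter measures used in this study from measured release periods.

    periods: list of periods (e.g. in microseconds) between successive task releases.
    Returns (difference_jitter, average_jitter).
    """
    difference_jitter = max(periods) - min(periods)   # "absolute" jitter
    average_jitter = statistics.stdev(periods)        # standard deviation of the periods
    return difference_jitter, average_jitter

# Hypothetical periods around a 1000 us tick:
print(jitter_metrics([1000, 1004, 998, 1001]))        # -> (6, 2.5)
```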
3) CPU test
To obtain CPU overhead measurements for each scheduler, we ran the scheduler for 25 seconds and then, using the performance analyzer supported by the Keil simulator, measured the total time used by the scheduler code. The percentage of the measured CPU time out of the total running time was also reported.
4) Memory test
In this test, the CODE and DATA memory values required to implement each scheduler were recorded. Memory values were obtained using the "map" file created when the source code was compiled. The STACK usage was also measured (as part of the DATA memory overhead) by initially filling the data memory with "DEAD CODE" and then reporting the number of memory bytes that had been overwritten after running the scheduler for a sufficient period.
5) Power test
To obtain representative values of power consumption, the input current and voltage to the LPC2106 CPU core were measured while executing the scheduler. Again, the measurements were obtained using the National Instruments data acquisition card "NI PCI-6035E" in conjunction with LabVIEW 7.1 software. A sampling rate of 10 kHz was used over a period equal to 5000 major cycles. The current and voltage values were then multiplied and averaged to give the power consumption results.
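The power figure is therefore a point-by-point product of the sampled voltage and current traces, averaged over the capture. A minimal sketch of this calculation follows, assuming the two traces are sampled simultaneously and aligned.

```python
import numpy as np

def average_power(voltage_samples, current_samples):
    """Mean CPU core power from simultaneously sampled voltage (V) and current (A)."""
    v = np.asarray(voltage_samples, dtype=float)
    i = np.asarray(current_samples, dtype=float)
    return float(np.mean(v * i))   # instantaneous p = v * i, averaged over the capture
```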
Jitter Results
It can clearly be noted from Table 1 that the use of the SD mechanism in TTC schedulers caused the low-priority tasks to execute at fixed intervals. However, the jitter in the release times of Task B and Task C was not eliminated completely. This residual jitter was caused by variation in the time taken to leave the software loop - used in the SD mechanism to check whether the required release time for the task in question had been reached - and begin to execute the task.
The results also show that the TTC-MTI scheduler helped to remove jitter in the release times of all tasks; this in turn significantly enhances the overall system predictability.
CPU, Memory and Power Requirements
Table 2 shows that the overall processing time required for the TTC-SD scheduler is equal to 74% of the total run-time. This overhead figure is very large compared with that obtained from the other schedulers considered in this paper (which was approximately 40%). The observed increase in processing time is expected when such an SD approach is used, since the CPU is forced to run in normal operating mode while waiting for tasks to start their execution.
The results in Table 3 show that the Code memory required by the TTC-MTI scheduler was slightly smaller than that used to implement the other schedulers, while the Data memory requirements were larger. Remember that - compared with the other schedulers - the overall architecture of TTC-MTI was rather different (see Section 3.2). Note from Table 4 that in the TTC-SD scheduler, the CPU power consumption was significantly increased. This was, again, due to the processor running in normal operating mode whilst executing the SD function.
Conclusions
Time-triggered co-operative architectures provide simple, low-cost software platforms for a wide range of embedded applications in which highly-predictable system behavior is a key design requirement. Simple TTC implementations based on periodic timer interrupts can provide highly-predictable behavior for the first task in every tick interval. However, if more than one task is executed in a tick interval, the release times of later tasks will depend (in many cases) on the execution times of earlier tasks. As demonstrated in this paper, use of "sandwich delay" mechanisms within the TTC scheduler framework can significantly reduce jitter levels in later tasks.
The results presented in the paper show that, although the TTC-SD scheduler helped to reduce jitter in the task release times significantly, such jitter could not be removed completely, and the CPU overhead (and, hence, system power consumption) was increased. Therefore, the TTC-MTI scheduler was developed to provide a better solution, in which all tasks became free of jitter while the system maintained low CPU overhead and power requirements. The TTC-MTI scheduler achieved this performance by using multiple timers to adjust the timing of the tick and the tasks, and by utilizing the "idle" mode when the processor is not executing tasks or ISR functions. Moreover, the TTC-MTI scheduler has the potential to overcome the problem of task overrun, thereby increasing the overall system predictability.
Finally, it is important for embedded software developers who decide to employ any of the described techniques or adapt them for use in their existing designs to take into account the implementation costs (in terms of CPU, memory and power resources) in addition to the maximum levels of jitter that each task in the system can tolerate.
Figure 1. Function call tree for the original TTC scheduler.
Figure 4. Using Sandwich Delays to reduce release jitter in TTC schedulers.
Table 1. Task jitter from the modified TTC-Dispatch, TTC-SD and TTC-MTI schedulers.
| 5,611.2 | 2011-07-29T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
An Intelligent Waste-Sorting and Recycling Device Based on Improved EfficientNet
The main source of urban waste is the daily life activities of residents, and sorting this household waste is important for promoting economic recycling, reducing labor costs, and protecting the environment. However, most residents are unable to make accurate judgments about the categories of household waste, which severely limits the efficiency of waste sorting. We have designed an intelligent waste bin that enables automatic waste sorting and recycling, avoiding the extensive knowledge required for waste sorting. To ensure that the waste-classification model achieves high accuracy and works in real time, GECM-EfficientNet is proposed based on EfficientNet by streamlining the mobile inverted bottleneck convolution (MBConv) module, introducing the efficient channel attention (ECA) module and coordinate attention (CA) module, and applying transfer learning. The accuracy of GECM-EfficientNet reaches 94.54% and 94.23% on the self-built household waste dataset and the TrashNet dataset, with only 1.23 M parameters. A single recognition on the intelligent waste bin takes only 146 ms, which satisfies the real-time classification requirement. Our method improves the computational efficiency of the waste-classification model and simplifies the hardware requirements, which contributes to residents' waste classification based on intelligent devices.
Introduction
In recent years, as urbanization and living standards have increased, the variety and quantity of waste have increased dramatically [1], putting enormous pressure on resource use, environmental safety, and economical recycling. Urban residents are the main producers of household waste, and they participate in household waste sorting. Recycling is an effective way to utilize waste resources, reduce the quantity of waste, and contribute to sustainable development [2]. However, because there are many different categories of waste, extensive sorting knowledge is required, and it is difficult to translate residents' willingness for waste sorting into actual action. As a result, many countries have started researching intelligent waste-sorting and recycling devices, which have been applied in engineering practice [3,4]. Intelligent recognition of waste categories is a prerequisite for sorting and recycling. Computer vision technology and deep learning technology can automatically detect and classify waste categories [5,6], providing technical support for waste sorting and recycling.
The convolutional neural network (CNN) is one of the main branches of deep learning, and it is the mainstream image-recognition method nowadays. With the rapid development of deep learning technology, CNN has made significant achievements in image classification [7]. Numerous researchers have used CNNs to solve waste image classification tasks [8,9] and have achieved a series of results. Ref. [10] improved ResNet18 with a self-monitoring module to enhance the feature map representation, achieving 95.87% accuracy. The main contributions of this paper are as follows: (1) An intelligent waste bin has been designed, which can automatically collect the waste put in, improve the efficiency of waste sorting by residents, and reduce the separation work of collection facilities. (2) We propose an improved EfficientNet, named GECM-EfficientNet, which accurately classifies different categories of waste with fewer parameters. (3) We use transfer learning [17] to initialize the model parameters during training, optimizing the performance of the model without adding extra computation. (4) Our waste-classification model balances speed and accuracy with good real-time performance on edge devices, which can reduce hardware costs.
The paper is structured as follows: Section 2 reviews the work related to the large model and lightweight model. Section 3 describes the working process of the intelligent waste bin and the detailed design of the proposed model. Section 4 describes the experimental results and analysis of different datasets. Finally, Section 5 concludes the work of this paper.
Related Work
Waste-sorting and recycling devices require frequent forward inference of models, which can be computationally expensive. Unfortunately, it is impractical to equip each device with a high-performance graphics processing unit (GPU), which results in high hardware costs. Cloud deployment [18,19] can free models from reliance on local computation, but this is heavily dependent on the Internet. When there is no network or poor network connectivity, cloud deployment is not possible. Compared with large models, lightweight models tend to be slightly less accurate, but smaller and faster. Therefore, for waste sorting and recycling devices, local deployment of lightweight models would be an effective measure.
In 2012, AlexNet [20] won the ImageNet image classification competition, triggering a boom in CNNs for image classification and giving rise to a series of models with superior performance, such as VGG [21], GoogleNet [22], and ResNet [23]. Ref. [24] implements three VGGs in tandem for electrocardiogram classification, achieving a high accuracy of 97.23% on the PTB-XL dataset. Ref. [25] used transfer learning to improve the VGG, achieving 98.4% and 95.71% accuracy on the self-built grape and tomato pest datasets. Refs. [26,27] achieve classification of plants and botrytis by ResNet, with an accuracy of over 98%. Ref. [28] initialized GoogleNet through transfer learning, which achieved up to 99.94% accuracy on the self-built northern maize leaf blight dataset. These papers implemented image classification through large models and excelled in accuracy, all exceeding 95%. However, large models require large memory and hardware resources, which hinders their usefulness on resource-limited embedded devices. For this reason, lightweight models would be a viable solution.
In 2016, ref. [29] first applied lightweight design ideas and proposed SqueezeNet, with a model size of 0.5 M. After that, numerous developers continued to explore lightweight models, proposing MobileNet [30,31], ShuffleNet [32,33], and EfficientNet [34] (EfficientNetB0-EfficientNetB7). Among the mainstream models, EfficientNet achieves the best ImageNet accuracy and executes highly efficiently; thus, it is widely used in image classification. Ref. [35] improves EfficientNetB0 by adjusting the number of MBConv modules and using the residual structure and LeakyReLU, with only 1.03 M parameters, achieving 99.69% accuracy on the self-built human behavioral point cloud dataset. Ref. [36] initialized the weights of EfficientNetB4 by transfer learning, achieving plant nutrient deficiency diagnosis with an accuracy of 98.52% on the DND-SB dataset. Ref. [37] embeds the spatial attention module into EfficientNetB4, which improves the accuracy by about 1% on the RFMID dataset (a fundus disease dataset). Ref. [38] improves EfficientNetB0 based on the convolutional block attention module (CBAM) and coordinate attention (CA) module, improving the classification accuracy by 3.5% on the self-built cervical cancer dataset. Ref. [39] implements a printed circuit board (PCB) classification recovery model based on EfficientNetB3, improved by transfer learning, and achieves an accuracy of 94.37% on the PCB DSLR dataset.
Inspired by the success of EfficientNet in many areas, we chose EfficientNet as the baseline model. Among its several versions, we prioritized real-time performance and chose EfficientNetB0, which has the fewest parameters. Based on this, we focused on exploring the application of EfficientNetB0 to waste-sorting tasks and making improvements. In this research, we first used SolidWorks to build the mechanical structure of the intelligent waste bin. Then, the actual intelligent waste bin was built from hardware devices; Figure 1 shows its mechanical modelling, simulation modelling, and physical construction. The intelligent waste bin consists of three parts: waste recognition, the actuator device, and the collection device. In the waste-recognition process, the camera captures image frames, which are then passed into the classification model to identify the waste category. The actuator device consists of two servos and the attached paddle plate and baffle plate, which sort the waste into the bins by rotating at different angles. The collection device is four fan-shaped waste bins, which are set up for recyclable waste, hazardous waste, kitchen waste, and other waste according to the standards in the literature [40]. Note that the camera is an HF867 with the following parameters: maximum resolution 1280 × 720; frame rate 30 frames/s; sensitivity 39 dB.
Materials and Methodology
The intelligent waste bin works as follows. First, an infrared sensor senses the waste input. Then, the camera takes the image of the waste, which is passed into the wasteclassification model for recognition. Finally, based on the recognition result, the servo controls the paddle plate and baffle plate to sort the waste into the corresponding bins. After the sorting is complete, the paddle plate and baffle plate return to their original position.
Control Circuit
The Raspberry Pi 4B [41,42] is widely used in intelligent systems, and we use it as the main control device in this research. Its processor is a Broadcom BCM2711 with a 4-core Cortex-A72 running at 1.5 GHz, with 8 GB of memory and 40 expandable pins. This paper uses the Raspberry Pi 4B to deploy the waste-classification model and to control the infrared sensor, camera, and servos. The control circuit is shown in Figure 2.
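A minimal sketch of the resulting control loop is given below, using Python with RPi.GPIO for the infrared sensor and servo and OpenCV for the camera. The pin numbers, paddle angles, and the classify() wrapper around GECM-EfficientNet are hypothetical placeholders; the actual wiring and servo calibration are device-specific.

```python
import time
import cv2                      # camera capture
import RPi.GPIO as GPIO         # infrared sensor and servo control

IR_PIN, SERVO_PIN = 17, 18                    # hypothetical BCM pin numbers
BIN_ANGLES = {0: 0, 1: 45, 2: 90, 3: 135}     # hypothetical paddle angles per waste category

def classify(frame):
    """Hypothetical wrapper: preprocess the frame, run GECM-EfficientNet, return 0-3."""
    raise NotImplementedError

def angle_to_duty(angle):
    return 2.5 + angle / 18.0   # common 0-180 degree mapping; calibration is device-specific

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)               # 50 Hz PWM for a standard hobby servo
servo.start(0)
camera = cv2.VideoCapture(0)

try:
    while True:
        if GPIO.input(IR_PIN):                        # waste detected by the infrared sensor
            ok, frame = camera.read()
            if ok:
                category = classify(frame)
                servo.ChangeDutyCycle(angle_to_duty(BIN_ANGLES[category]))
                time.sleep(1.0)                       # let the paddle finish moving
                servo.ChangeDutyCycle(angle_to_duty(0))  # return to the rest position
        time.sleep(0.05)
finally:
    servo.stop()
    GPIO.cleanup()
```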
Self-Built Dataset
For the task of waste classification, there is no large dedicated dataset yet. According to the literature [40] on waste classification standards, this paper establishes a scene-rich household waste dataset, and Table 1 shows the details of the dataset. Our dataset contains 18 categories of household waste, with a total of 7361 images, which are classified as recyclable waste, hazardous waste, kitchen waste, and other waste. The dataset is collected from the Internet and photography, and it is our baseline dataset. We will construct the waste-sorting device based on the dataset.
TrashNet Dataset
This paper selects the TrashNet dataset [43] to validate the model performance. The TrashNet dataset is a small public dataset with only 2527 images, widely used in waste image classification tasks. It has six categories of waste images, namely cardboard, glass, metal, paper, plastic, and trash.
The Basics of EfficientNet
Typically, a CNN is developed with a fixed resource budget. If resources are increased, the network can be extended to improve performance. The common method is to scale the model depth, width, and resolution. For example, by increasing the number of layers, ResNet can construct a model with 200 layers. MobileNetv2 sets a scaling factor and a resolution factor, which adjust the number of channels and the input size. To improve model performance, Google therefore proposed a compound scaling method that simultaneously scales the depth, width, and input image resolution. Google designed the baseline model EfficientNetB0 and then scaled it to obtain EfficientNetB1-EfficientNetB7. On the ImageNet dataset, EfficientNet achieved state-of-the-art accuracy and speed at the time.
EfficientNet adopts the mobile inverted bottleneck convolution (MBConv [31]) as its basic module, and uses the squeeze-and-excitation (SE) module [44] to calibrate the feature map by the importance of the channels. Figure 3 shows the structure of MBConv. C i and C o are the input and output channel counts, respectively. H and W are the height and width of the feature map, respectively. DWConv is a depthwise convolution, the kernel size of which is K. BN denotes batch normalization. Swish and Sigmoid are the activation functions. In the SE module, C represents the number of channels of the feature map, and r is the parameter used for dimensionality reduction, which is set to 4. First, the input channels are expanded by a 1 × 1 convolution (pointwise convolution, PW); then feature extraction is performed by a 3 × 3 or 5 × 5 depthwise convolution. In the SE module, the global features are extracted through global average pooling, and channel weights are then obtained with two fully connected layers and a sigmoid. The channel weights and the feature map are multiplied channel by channel, which implements the channel weighting operation. Finally, the channels of the feature map are adjusted by PW. Shortcut and dropout are used only when the input channel and output channel are equal and the stride is 1.
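For reference, a minimal PyTorch sketch of the SE block described above is shown below; the reduction parameter r is set to 4 as stated in the text, while the layer names are our own and not taken from any particular implementation.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Squeeze-and-excitation block: GAP -> squeeze FC -> Swish -> excite FC -> sigmoid."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),      # squeeze (dimensionality reduction by r)
            nn.SiLU(),                               # Swish activation
            nn.Linear(channels // r, channels),      # excite back to C channels
            nn.Sigmoid(),                            # channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # channel-wise reweighting of the feature map
```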
Improved EfficientNet
The intelligent waste bin is used daily, requiring a waste-classification model that is both accurate and real-time. EfficientNetB0 is a simple and elegant model, which combines the advantages of MobileNetv2 but with more efficient feature-extraction capabilities. This paper designs a lightweight and efficient waste image classification model based on EfficientNetB0, named GECM-EfficientNet. The network structure is shown in Figure 4. First, the number of MBConv modules is adjusted, which reduces the model parameters. Secondly, we use the efficient channel attention (ECA) module [45] to replace the SE module, which overcomes the shortcomings of the dimensionality reduction operation. Subsequently, the coordinate attention (CA) module [46] is connected in parallel with the ECA module, enabling a spatial weighting operation. Finally, transfer learning is used to initialize the model parameters during training.
Optimising the Network Structure
EfficientNetB0 repeatedly stacks the MBConv module to obtain excellent feature-extraction capabilities. As the network deepens, more channels in the convolutional layers are used to generate more detailed filters, which results in more parameters. With the structure of the MBConv modules unchanged, the number of MBConv modules was reduced in the deeper network layers, which better balanced the accuracy and parameters of the model. The adapted model was named G-EfficientNet. Table 2 compares the network structures of EfficientNet and G-EfficientNet.
Efficient Channel Attention
Based on SENet, ECANet proposes the ECA module. ECANet shows that the dimensionality reduction operation in the SE module has side effects. The ECA module implements a local cross-channel interaction strategy without dimensionality reduction by means of a one-dimensional convolution, effectively improving performance with fewer parameters. In this paper, the SE module is replaced by an ECA module, which avoids the side effects of the dimensionality reduction operation and uses fewer parameters. Figure 5 shows the working process of the ECA module. First, to obtain the global features without dimensionality reduction, global average pooling (GAP) is performed on the input feature map. Then, channel weights are generated by a one-dimensional convolution and a sigmoid function. Finally, the channel weights and the input feature map are multiplied channel by channel, which implements the channel weighting operation. In the GAP operation, the input feature map is compressed by global average pooling, giving global features of dimension 1 × 1 × C. Equation (1) shows the calculation process: $z_c = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} x_c(i,j)$ (1). In Equation (1), $z_c$ denotes the output feature of each channel after global average pooling, H and W denote the height and width of the input feature map, and i and j denote the coordinates of the feature values in the input feature map.
After the GAP step, the cross-channel interaction strategy is implemented by a one-dimensional convolution of size k. The parameter k is generated through an adaptive function and represents the coverage of the local cross-channel interaction. Equation (2) demonstrates the calculation principle. In Equation (2), C is the total number of channels, and $|x|_{odd}$ denotes the nearest odd number to x. Finally, channel weights are generated by the sigmoid function and multiplied with the input feature map channel by channel.
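A minimal PyTorch sketch of the ECA module described above follows. The adaptive kernel-size heuristic with gamma = 2 and b = 1 is taken from the ECA-Net paper [45] and is an assumption here, since the exact expression for Equation (2) does not survive in the extracted text.

```python
import math
import torch
import torch.nn as nn

def eca_kernel_size(channels: int, gamma: int = 2, b: int = 1) -> int:
    """Adaptive 1-D kernel size, k = |log2(C)/gamma + b/gamma|_odd (ECA-Net heuristic)."""
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 == 1 else t + 1

class ECA(nn.Module):
    """Efficient channel attention: GAP -> 1-D conv across channels -> sigmoid -> reweight."""
    def __init__(self, channels: int):
        super().__init__()
        k = eca_kernel_size(channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        z = self.pool(x).view(b, 1, c)                 # (B, 1, C): no dimensionality reduction
        w = self.sigmoid(self.conv(z)).view(b, c, 1, 1)
        return x * w                                   # channel weighting operation
```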
Coordinate Attention
The ECA module only considers channel weight assignment, which ignores the feature map's spatial information. The CA module embeds location information into the channel weighting operation, allowing the feature map to be weighted by spatial as well as channel information. Figure 6 shows the structure of the CA module. In this paper, we connect the CA module and the ECA module in parallel, not only to calibrate the feature maps based on channel information but also to introduce spatial information. First, the CA module embeds location information for the input feature map X, whose dimension is H × W × C. Average pooling is applied to each channel along the horizontal and vertical directions, with pooling kernels of size (H, 1) and (1, W). For the cth channel, Equations (3) and (4) give the outputs of the hth row and the wth column: $z_c^h(h) = \frac{1}{W}\sum_{0 \le i < W} x_c(h,i)$ (3) and $z_c^w(w) = \frac{1}{H}\sum_{0 \le j < H} x_c(j,w)$ (4). The components of the input feature map are $x_c(h,i)$ and $x_c(j,w)$, with coordinates (h,i) and (j,w) in channel c. $z^h$ and $z^w$ denote the average-pooled outputs along the horizontal and vertical directions, and $z_c^h(h)$ and $z_c^w(w)$ denote the output components of the cth channel in row h and column w.
Equation (5) shows the next steps. First, the feature maps obtained by the pooling operations are concatenated. Next, the channels are compressed through a 1 × 1 standard convolution $F_1$. Finally, the intermediate output m is obtained through a nonlinear activation layer δ, choosing h-swish as the activation function: $m = \delta\left(F_1\left(\left[z^h, z^w\right]\right)\right)$ (5). Then, the intermediate output m is sliced into two feature maps along the spatial dimension. The feature maps are denoted $m^h$ and $m^w$, and two 1 × 1 standard convolutions $F_h$ and $F_w$ transform them to the same number of channels as the input feature map X. The activation is then performed by the sigmoid function (σ). The calculation is shown in Equations (6) and (7): $g^h = \sigma\left(F_h\left(m^h\right)\right)$ (6) and $g^w = \sigma\left(F_w\left(m^w\right)\right)$ (7). Here, $g^h$ and $g^w$ represent the coordinate attention weights along the horizontal and vertical directions. The final output of the CA module is shown in Equation (8), where $x_c(i,j)$ and $y_c(i,j)$ are the values of the input and output feature maps at coordinate (i,j) in channel c: $y_c(i,j) = x_c(i,j) \times g_c^h(i) \times g_c^w(j)$ (8).
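The corresponding coordinate-attention computation can be sketched in PyTorch as follows; the channel-reduction ratio and the minimum width of the intermediate layer are assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: directional pooling -> shared 1x1 conv -> split -> two gates."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)              # intermediate width (assumed values)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))    # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))    # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1)         # F1: shared 1x1 compression
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()                        # h-swish nonlinearity (delta)
        self.conv_h = nn.Conv2d(mid, channels, 1)        # F_h
        self.conv_w = nn.Conv2d(mid, channels, 1)        # F_w

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        zh = self.pool_h(x)                               # (B, C, H, 1)
        zw = self.pool_w(x).permute(0, 1, 3, 2)           # (B, C, W, 1)
        m = self.act(self.bn(self.conv1(torch.cat([zh, zw], dim=2))))   # Eq. (5)
        mh, mw = torch.split(m, [h, w], dim=2)
        gh = torch.sigmoid(self.conv_h(mh))                              # Eq. (6): (B, C, H, 1)
        gw = torch.sigmoid(self.conv_w(mw.permute(0, 1, 3, 2)))          # Eq. (7): (B, C, 1, W)
        return x * gh * gw                                               # Eq. (8)
```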
Transfer Learning
Transfer learning [47,48] allows knowledge learned in different domains or tasks to be transferred, which can reduce training time and improve performance. In transfer learning, domain D is the subject of learning. The domain is divided into the source domain D s and the target domain D t . They consist of the data X and the probability distribution P(X) that generates X, which can be expressed as D = {X, P(X)}. Task T is the goal of learning, divided into the source task T s and the target task T t . The task consists of the label space Y and the prediction function f (·), which can be expressed as T = {Y, f (·)}.
Given the source domain D s and the source task T s , and the target domain D t and the target task T t , with D s ≠ D t or T s ≠ T t , transfer learning solves the target task T t in the target domain D t through knowledge learned in the source domain D s and the source task T s . This paper implements transfer learning using the weights of EfficientNetB0 trained on the ImageNet dataset.
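In practice, this kind of weight transfer can be done by copying every pretrained tensor whose name and shape still match the modified network. A hedged sketch using the torchvision EfficientNetB0 ImageNet weights is shown below; the "weights" enum assumes torchvision 0.13 or newer, and the stand-in model here is not the real GECM-EfficientNet.

```python
from torchvision import models

# Pretrained ImageNet weights for EfficientNetB0 (torchvision >= 0.13 "weights" API assumed).
pretrained = models.efficientnet_b0(
    weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1).state_dict()

# Stand-in for the modified network: an EfficientNetB0 with an 18-class head;
# the real GECM-EfficientNet would be built from the modules sketched above.
model = models.efficientnet_b0(num_classes=18)

# Transfer every tensor whose name and shape still match; altered layers
# (streamlined stages, ECA/CA modules, the new classifier) keep their random init.
target = model.state_dict()
target.update({k: v for k, v in pretrained.items()
               if k in target and v.shape == target[k].shape})
model.load_state_dict(target)
```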
Experimental Settings
During training, the process is accelerated by a Tesla P100 GPU. The dataset is divided into a training set and a test set in an 8:2 ratio. During model training, the generalization capability of the model is enhanced by data augmentation, using measures such as random-size cropping, flipping, and luminance transformations. Adam was chosen as the optimizer. The learning rate followed a cosine annealing schedule [49] to prevent overfitting; the initial learning rate was 0.001 and the cosine annealing parameter was 0.01. Cross-entropy was chosen as the loss function. The model was trained for 200 epochs with a batch size of 16 images. (A minimal training-configuration sketch is given after the experiment list below.) This paper sets up the following experiments for analysis and discussion.
(1) Ablation experiments of the improved model, verifying each improvement's contribution to the model performance. (2) Comparison experiments between the improved model and the mainstream model.
All models were trained and tested on both the self-built dataset and the TrashNet dataset, verifying the level of advancement of the improved models.
(3) Model classification accuracy and inference time test. The model was deployed on a Raspberry Pi 4B for testing, verifying the accuracy and real-time performance of the model.
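As referenced above, the following sketch shows one way the training configuration described in this section could be set up in PyTorch. The dataset path, the exact augmentation transforms, and the mapping of the paper's "cosine annealing parameter" of 0.01 onto eta_min are assumptions; the model here is a placeholder rather than the actual GECM-EfficientNet.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Data augmentation roughly matching the description: cropping, flipping, luminance changes.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("household_waste/train", transform=train_tf)  # path is a placeholder
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.efficientnet_b0(num_classes=18)        # stand-in for GECM-EfficientNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # initial learning rate 0.001
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=200, eta_min=0.01 * 1e-3)        # "cosine annealing parameter" as eta_min is an assumption
criterion = nn.CrossEntropyLoss()

for epoch in range(200):                              # 200 epochs, batch size 16
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```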
Ablation Experiments
To validate the contribution of each improvement, we selected the top accuracy and the number of parameters as metrics, and experiments were conducted on the self-built dataset. To demonstrate that the network structure can be optimized by streamlining the number of MBConv modules, we compare G-EfficientNet with EfficientNetB0. To prove that the ECA module is lighter and more efficient, the SE module was replaced with an ECA module for the experiments. To prove that the CA module can introduce spatial information, the SE and CA modules are connected in parallel. To prove that transfer learning can optimize the model parameters, we initialized the model parameters with EfficientNetB0 weights trained on the ImageNet dataset. Figure 7 shows the training loss and test accuracy curves of the model. L is the training loss, which is calculated on the training dataset. A is the test accuracy, which is calculated on the test dataset. E is the training epoch. Evidently, GECM-EfficientNet achieves the best test accuracy and converges quickly. Table 3 shows the performance parameters of the above models. A is the top-1 accuracy of the model, and P represents the number of parameters of the model. As can be seen, the parameters of G-EfficientNet are reduced by 72.2% compared with EfficientNetB0, but the accuracy is only reduced by 1.27%. Replacing the SE module with the ECA module reduces the parameters by 0.1 M and improves accuracy by 1.48%. Connecting the CA module in parallel with the SE module adds only 0.2 M parameters but increases accuracy by 1.80%. With the model parameters optimized by transfer learning, the parameter count remains the same while accuracy improves by 4.53%.
First, the number of MBConv modules in EfficientNetB0 is adjusted, which yields the lighter model G-EfficientNet. Next, we improve the MBConv module: the SE module is replaced with the ECA module, and the CA module is connected in parallel. During model training, the model parameters are initialized by transfer learning. Ultimately, GECM-EfficientNet was designed. Compared with EfficientNetB0, the model accuracy was improved by 5.7% on the self-built dataset, with 69.73% fewer parameters.
Comparison and Analysis of Models
To verify the level of advancement of the improved model, GECM-EfficientNet was compared with the mainstream models, with experiments completed on both the self-built dataset and the TrashNet dataset. Finally, the models were deployed on the Raspberry Pi 4B for classification and inference-time tests. We selected lightweight models such as EfficientNetB0, MobileNetv2, MobileNetv3 [50], and ShuffleNetv2, and large models such as GoogleNet, DenseNet121, ResNet50, Inceptionv3, and VGG16. Figure 8 shows the training loss and test accuracy curves on the self-built dataset. As can be seen, GECM-EfficientNet is in the lead, achieving an accuracy of approximately 90% in only 20 epochs. The test accuracy, parameters, and single inference time of the models are shown in Table 4, where T represents the single inference time of the model on the Raspberry Pi 4B. As can be seen, the lightweight models achieve similar or higher accuracy than the large networks, with fewer parameters and better real-time performance. Among the mainstream models, EfficientNetB0 achieved the highest accuracy (88.81%) with few parameters (4.03 M), and a single inference took 0.2 s, second only to MobileNetv3, ShuffleNetv2 1×, and MobileNetv2. Our proposed GECM-EfficientNet, with only 1.23 M parameters, achieves 94.54% accuracy with a single inference time of 146 ms. We attribute this to the following: (1) We adapt the number of MBConv modules to obtain G-EfficientNet, which is a lightweight and excellent model. (2) The ECA module is lighter than the SE module, eliminating the side effects of the dimensionality reduction operation. (3) Location information is embedded through the CA attention module, which allows spatially weighted operations. (4) The model parameters are optimized through EfficientNetB0 weights trained on the ImageNet dataset, which speeds up convergence and improves accuracy. To further validate the advances of the improved model, comparison experiments were set up on the TrashNet dataset. Figure 9 shows the training loss and test accuracy curves of the models. It is evident that GECM-EfficientNet achieves the highest accuracy and converges quickly. The experimental results are shown in Table 5. Among the mainstream models, the proposed GECM-EfficientNet is in the leading position: it achieved the highest accuracy (94.23%) with the lowest parameters (1.23 M). This verifies that GECM-EfficientNet is a lightweight and excellent model. To further validate the superiority of the improved model, GECM-EfficientNet was compared with other related studies. Among them, the literature [4] proposes RecycleNet with an accuracy of 81%. Ref. [51] implements a dual fusion approach by PSO and GA, which achieves 94.11% and 94.58% accuracy. Among these related studies, GECM-EfficientNet is also in the lead, with an accuracy similar to those of Refs. [10,11] and the GA approach of Ahmad et al. [51]. However, those works focus on improving the accuracy of large models, which have large parameter counts and poor real-time performance. GECM-EfficientNet has few parameters but achieves high accuracy. On resource-constrained edge devices, GECM-EfficientNet therefore offers good prospects for application. To verify the real-time performance of GECM-EfficientNet, the above models were deployed to the Raspberry Pi 4B for testing. Figure 10 shows the single inference time, where N is the number of inferences and T/s represents the single inference time in seconds.
Among the mainstream models, GECM-EfficientNet has a significant advantage in real-time performance. The average single inference time of GECM-EfficientNet is 146 ms, meeting the real-time performance requirements of waste classification.
Classification Test
The confusion matrix is plotted through the test dataset of the self-built dataset, which shows the prediction results for the different categories. Figure 11 shows the results. The rows and columns of the matrix indicate the true and predicted values for waste categories. The values on the diagonal in Figure 11a indicate the number of correctly sorted waste items, whereas the values outside the diagonal indicate the number of incorrectly sorted waste items. Figure 11a is normalized to give Figure 11b, the diagonal values indicate the accuracy of the classification. As can be seen, single category accuracy remains mostly above 90% and up to 99%. Category 8 (waste dry batteries) is the least accurate (85%) because the waste dry battery's small, cylindrical shape resembles a cigarette butt. The confusion matrix shows that the GECM-EfficientNet can accurately distinguish between different categories of waste.
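The normalised confusion matrix and per-category accuracies of Figure 11 can be reproduced from the test-set predictions with scikit-learn; a short sketch follows, in which y_true and y_pred are small placeholder lists standing in for the true and predicted category indices.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 2, 3, 3]    # placeholder ground-truth category indices
y_pred = [0, 1, 1, 2, 3, 3]    # placeholder model predictions on the test split

cm = confusion_matrix(y_true, y_pred)                 # raw counts, as in Figure 11a
cm_norm = cm / cm.sum(axis=1, keepdims=True)          # row-normalised, as in Figure 11b
per_class_acc = np.diag(cm_norm)                      # diagonal values = per-category accuracy
print(per_class_acc)
```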
Waste images were selected from the test dataset for testing. Figure 12 shows the category and accuracy of the waste tested. The four types of waste are framed in different colors. The waste type is one of other waste, kitchen waste, recyclable waste, and hazardous waste; the name indicates the category of waste, and the accuracy indicates the classification accuracy, expressed as a percentage. As can be seen, the model achieves accurate classification of different waste. This demonstrates that GECM-EfficientNet has a high classification accuracy, which satisfies the requirements of waste-sorting devices.
Discussion of Intelligent Waste Bin
Most existing waste-recycling devices simply provide different bins and rely on residents to sort and deposit waste manually. However, residents often do not have enough sorting knowledge, and it is difficult to translate their willingness into action. For this reason, this paper designs an intelligent waste bin that automatically sorts and recycles household waste. The intelligent waste bin costs just 2500 CNY. The waste-classification model is trained with 18 categories of waste images; however, for other categories of waste, if the model is trained with enough images of that waste, it can also correctly identify the category. The intelligent waste bin recycles waste into different bins through the combined movement of the paddle plate and baffle plate. This mechanical structure can effectively recycle solid waste, but it may not be suitable for liquid waste, which may be partially left behind. Therefore, the recycling of liquid waste will be one focus of future work.
Nowadays, some academics are also researching intelligent waste-sorting devices. Ref. [9] designed a smart waste bin based on ResNet34, which can classify waste into two categories with a single inference time of 950 ms. Ref. [52] constructed a smart bin based on Inceptionv3, which can recycle waste into two bins. These smart devices use large models for waste classification, which is significant for automatic waste sorting and recycling. However, although they achieve high accuracy, they perform poorly in real time, which can degrade the user experience. Ref. [15] builds an intelligent waste-sorting system with a lightweight model (MobileNetv3), in which the waste-classification model is deployed in the cloud. Cloud deployment avoids the expense of local computing resources, but it is heavily dependent on the Internet. The highlight of this paper is that a lightweight waste-classification model is proposed and deployed directly on the embedded device, avoiding Internet dependencies. The model is accurate and runs in real time, with a single inference time of 146 ms. At the same time, this paper proposes an intelligent waste bin that sorts waste more finely, recycling it into four bins. It can be placed in airports, schools, and shopping malls, contributing to environmental protection and resource recycling.
Conclusions
With the increasing focus on environmental safety and resource recycling, society is calling on residents to sort their waste. This requires residents to be knowledgeable about the different categories of waste, which makes it very difficult to sort waste. For this, intelligent waste-sorting devices would be an effective solution. This paper introduces computer vision technology to waste classification, proposing a lightweight and efficient waste-classification model (GECM-EfficientNet), and an intelligent waste bin is designed based on GECM-EfficientNet. On the self-built household waste dataset, GECM-EfficientNet achieved the accuracy of 94.54%, with a single inference time of 146 ms. The intelligent waste bin enables the automatic sorting and recycling of waste, improving the efficiency of waste sorting. It is relevant for environmental protection and resource recycling, but also beneficial for the country's sustainable development. The main work and contributions in this paper are as follows.
(1) We chose the lightweight EfficientNetB0 as the baseline model. The MBConv module is first streamlined, optimizing the model structure and reducing complexity. Then, the ECA module and CA module are connected in parallel, replacing the SE module in the MBConv module, which implements the feature map's spatial and channel weighting operations. (2) In the training strategy, the model parameters are initialized by transfer learning, which improves the model performance and convergence speed. (3) We verify the superiority of the GECM-EfficientNet performance with the self-built dataset and the TrashNet dataset. Among the many mainstream models and related research, GECM-EfficientNet is in the lead, with outstanding performance in accuracy and real-time performance.
(4) We design an intelligent waste bin and implement waste classification through GECM-EfficientNet. The model first identifies the input waste, which is then sorted and recycled into the corresponding bin by the actuating mechanism. This provides a new solution for alleviating the environmental crisis and achieving a circular economy.
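The parallel ECA + CA replacement for the SE block can be made concrete with a short sketch. The PyTorch code below is only a minimal illustration of the idea, assuming standard formulations of Efficient Channel Attention and Coordinate Attention; the module names, kernel size, reduction ratio and the averaging used to fuse the two branches are assumptions of this sketch, not the exact GECM-EfficientNet implementation.

```python
# Minimal sketch of a parallel ECA + CA attention block (assumed fusion by averaging).
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: channel weights from a 1-D conv over pooled channels."""
    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)                       # (B, 1, C)
        y = self.sigmoid(self.conv(y)).view(b, c, 1, 1)      # per-channel weights
        return x * y

class CoordAtt(nn.Module):
    """Coordinate Attention: separate pooling along H and W keeps positional information."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))        # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))        # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1, bias=False)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                                 # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)             # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                            # attention along H
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))        # attention along W
        return x * a_h * a_w

class ParallelECACA(nn.Module):
    """ECA and CA applied in parallel to the same input; outputs averaged (assumed fusion)."""
    def __init__(self, channels: int):
        super().__init__()
        self.eca = ECA(channels)
        self.ca = CoordAtt(channels)

    def forward(self, x):
        return 0.5 * (self.eca(x) + self.ca(x))

if __name__ == "__main__":
    x = torch.randn(2, 40, 56, 56)       # a typical MBConv feature-map size
    print(ParallelECACA(40)(x).shape)    # torch.Size([2, 40, 56, 56])
```

In an MBConv block, such a module would sit where the SE block normally does, i.e. between the depthwise convolution and the projection convolution.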
In waste sorting and recycling, this paper has achieved some research results, but the following limitations remain. (1) The waste-classification model can only identify 18 categories of household waste; in reality there are many more categories, and the dataset needs to be expanded in later work. (2) Semi-supervised learning could be considered, making use of the vast amount of unlabelled image data to further improve the performance of the classification model. (3) The current mechanical structure is unable to recycle mixed waste. In future work, it may be effective to design a loading device that can separate mixed waste into single items; in addition, image segmentation and object-detection techniques can identify the different components of mixed waste.
"Computer Science"
] |
Measurement of the Higgs boson mass in the $H\rightarrow ZZ^* \rightarrow 4\ell$ and $H \rightarrow \gamma\gamma$ channels with $\sqrt{s}=13$ TeV $pp$ collisions using the ATLAS detector
The mass of the Higgs boson is measured in the $H\rightarrow ZZ^* \rightarrow 4\ell$ and in the $H\rightarrow \gamma\gamma$ decay channels with 36.1 fb$^{-1}$ of proton-proton collision data from the Large Hadron Collider at a centre-of-mass energy of 13 TeV recorded by the ATLAS detector in 2015 and 2016. The measured value in the $H\rightarrow ZZ^* \rightarrow 4\ell$ channel is $m_{H}^{ZZ^{*}} = 124.79 \pm 0.37$ GeV, while the measured value in the $H\rightarrow \gamma\gamma$ channel is $m_{H}^{\gamma \gamma} = 124.93 \pm 0.40$ GeV. Combining these results with the ATLAS measurement based on 7 TeV and 8 TeV proton-proton collision data yields a Higgs boson mass of $m_H=124.97 \pm 0.24$ GeV.
Introduction
The observation of a Higgs boson, H, by the ATLAS and CMS experiments [1,2] with the Large Hadron Collider (LHC) Run 1 proton-proton (pp) collision data at centre-of-mass energies of √s = 7 TeV and 8 TeV was a major step towards understanding the mechanism of electroweak (EW) symmetry breaking [3][4][5]. The mass of the Higgs boson was measured to be 125.09 ± 0.24 GeV [6] through a combination of the individual ATLAS [7] and CMS [8] mass measurements with Run 1 data. Recently, the CMS Collaboration measured the Higgs boson mass in the H → ZZ* → 4ℓ channel using 35.9 fb−1 of 13 TeV pp collision data [9]. The measured value of the mass is 125.26 GeV, with an observed (expected) uncertainty of 0.21 (0.26) GeV. This Letter presents a measurement of the Higgs boson mass, m H, with 36.1 fb−1 of √s = 13 TeV pp collision data recorded with the ATLAS detector. The measurement is derived from a combined fit to the four-lepton and diphoton invariant mass spectra in the decay channels H → ZZ* → 4ℓ (ℓ = e, µ) and H → γγ, based on the current understanding of the reconstruction, identification, and calibration of muons, electrons, and photons in the ATLAS detector. A combination with the ATLAS Run 1 data is also presented.
ATLAS detector
The ATLAS experiment [10] at the LHC is a multi-purpose particle detector with nearly 4π coverage in solid angle.1It consists of an inner tracking detector (ID) surrounded by a 2 T superconducting solenoid, electromagnetic (EM) and hadronic calorimeters, and a muon spectrometer (MS) incorporating three large superconducting toroidal magnets.The ID provides tracking for charged particles for |η| < 2.5.The calorimeter system covers the pseudorapidity range |η| < 4.9.Its electromagnetic part is segmented into three shower-depth layers for |η| < 2.5 and includes a presampler for |η| < 1.8.The MS includes high-precision tracking chambers (|η| < 2.7) and fast trigger chambers (|η| < 2.4).Online event selection is performed by a first-level trigger with a maximum rate of 100 kHz, implemented in custom electronics, followed by a software-based high-level trigger with a maximum rate of 1 kHz.
Data and simulated samples
This measurement uses data from pp collisions at a centre-of-mass energy of 13 TeV collected during 2015 and 2016 using single-lepton, dilepton, trilepton and diphoton triggers, with looser identification, isolation and transverse momentum (p T) requirements than those applied offline. The combined efficiency of the lepton triggers is about 98% for the H → ZZ* → 4ℓ events (assuming m H = 125 GeV) passing the offline selection. The diphoton trigger efficiency is higher than 99% for selected H → γγ events (assuming m H = 125 GeV). After trigger and data-quality requirements, the integrated luminosity of the data sample is 36.1 fb−1. The mean number of proton-proton interactions per bunch crossing is 14 in the 2015 data set and 25 in the 2016 data set.
Monte Carlo (MC) simulation is used in the analysis to model the detector response for signal and background processes. For the H → ZZ* → 4ℓ measurement, a detailed list and description of the MC-simulated samples used can be found in Ref. [11] and only a few differences specific to the mass analysis are mentioned here. For the gluon-gluon fusion (ggF) signal, the NNLOPS sample generated at next-to-next-to-leading order (NNLO) in QCD [12] with m H = 125 GeV and the PDF4LHC NLO parton distribution function (PDF) set [13] was used. Additional samples generated at different m H values (120, 122, 124, 125, 126, 128, 130 GeV) at next-to-leading order (NLO) were also used. The NLO ggF simulation was performed with Powheg-Box v2 [14] interfaced to Pythia 8 [15] for parton showering and hadronisation, and to EvtGen [16] for the simulation of b-hadron decays. The CT10NLO [17] PDF set was used for the hard process and the CTEQ6L1 [18] set for the parton shower. The non-perturbative effects were modelled using the AZNLO set of tuned parameters [19].
The ZZ* continuum background from quark-antiquark annihilation was modelled at NLO in QCD using Powheg-Box v2 and interfaced to Pythia 8 for parton showering and hadronisation, and to EvtGen for b-hadron decays. The PDF set used is the same as for the NLO ggF signal. NNLO QCD [20,21] and NLO EW corrections [22,23] were applied as a function of the invariant mass of the ZZ* system (m ZZ*).
1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ∆R ≡ √((∆η)² + (∆φ)²).
For the H → γγ measurement, the same H → γγ signal (generated for m H = 125 GeV) and background simulated events used for the measurements of the Higgs boson couplings and fiducial cross-sections in the diphoton final state [24] were used. In addition, signal samples with alternative m H values (110, 122, 123, 124, 126, 127, 130, 140 GeV) were produced, with the same generators and settings as the m H = 125 GeV samples, but only for the four Higgs boson production modes with the largest cross-sections: gluon-gluon fusion, vector-boson fusion (VBF), and associated production with a vector boson V = W, Z (VH), for qq̄ → VH and gg → ZH. For rarer processes, such as associated production of the Higgs boson with a top-quark pair (tt̄H) or a single top quark (tH), contributing to less than 2% of the total cross-section, only samples at m H = 125 GeV were used.
Except for the γγ background sample, whose modelling requires a large MC sample obtained through a fast parametric simulation of the calorimeter response [25], the generated events for all processes were passed through a Geant4 [26] simulation of the response of the ATLAS detector [25]. For both detector emulation methods, events were reconstructed with the same algorithms as the data. Additional proton-proton interactions (pile-up) were included in both the parametric and the Geant4 simulations, matching the average number of interactions per LHC bunch crossing to the spectrum observed in the data.
The Standard Model (SM) expectations for the Higgs boson production cross-section times branching ratio, in the various production modes and final states under study and at each value of m H , were taken from Refs.[27][28][29][30] and used to normalise the simulated samples, as described in Refs.[11,24].
Muon reconstruction, identification and calibration
Muon track reconstruction is first performed independently in the ID and the MS.Hit information from the individual subdetectors is then used in a combined muon reconstruction, which includes information from the calorimeters.
Corrections to the reconstructed momentum are applied in order to match the simulation to data precisely.These corrections to the simulated momentum resolution and momentum scale are parameterised as a power expansion in the muon p T , with each coefficient measured separately for the ID and MS, as a function of η and φ, from large data samples of J/ψ → µ + µ − and Z → µ + µ − decays.The scale corrections range from 0.1% to 0.5% for the p T of muons originating from J/ψ → µ + µ − and Z → µ + µ − decays and account for inaccurate measurement of the energy lost in the traversed material, local magnetic field inaccuracies and geometrical distortions.The corrections to the muon momentum resolution for muons from J/ψ → µ + µ − and Z → µ + µ − are at the percent level.After detector alignment, there are residual local misalignments that bias the muon track sagitta, leaving the track χ 2 invariant [31,32], and introduce a small charge-dependent resolution degradation.The bias in the measured momentum of each muon is corrected by an iterative procedure derived from Z → µ + µ − decays and checked against the E/p ratio measured in Z → e + e − decays.The residual effect after correction is reduced to the per mille level at the scale of the Z boson mass.This correction improves the resolution of the dimuon invariant mass in Z boson decays by 1% to 5%, depending on η and φ of the muon.The systematic uncertainty associated with this correction is estimated for each muon using simulation and is found to be about 0.4 × 10 −3 for the average momentum of muons from Z → µ + µ − decays.
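As a rough illustration of the sagitta-bias idea described above, the sketch below applies the commonly used parameterisation in which a residual misalignment shifts the measured curvature q/pT by a charge-dependent amount. The function name, the sign convention and the numerical value in the example are assumptions for illustration; they are not the paper's iterative correction procedure or its measured bias values.

```python
def sagitta_corrected_pt(pt_gev: float, charge: int, delta_s: float) -> float:
    """Correct a charge-antisymmetric sagitta bias.

    A residual misalignment adds charge * delta_s (in 1/GeV) to the measured
    curvature q/pT; inverting that relation gives the corrected pT.
    """
    return pt_gev / (1.0 - charge * delta_s * pt_gev)

# e.g. a 40 GeV positive muon with an assumed bias of 0.1 / TeV:
print(sagitta_corrected_pt(40.0, +1, 0.1e-3))   # ~40.16 GeV
```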
The momentum scale is determined to a precision of 0.05% for muons with |η| < 2, and about 0.2% for muons with |η| ≥ 2. The resolution is known with a precision ranging from 1% to 2% for muons with |η| < 2 and around 10% for muons with |η| ≥ 2 [33].Both the momentum scale and momentum resolution uncertainties in the corrections to simulation are taken as fully correlated between the Run 1 and Run 2 measurements.
In total, five independent sources are used to describe the uncertainties affecting the muon momentum scale and resolution corrections in data and simulation.
Photon and electron reconstruction, identification and calibration
Photon and electron candidates are reconstructed from clusters of electromagnetic calorimeter cells [34].Clusters without a matching track or reconstructed conversion vertex in the inner detector are classified as unconverted photons.Those with a matching reconstructed conversion vertex or a matching track, consistent with originating from a photon conversion, are classified as converted photons [35].Clusters matched to a track consistent with originating from an electron produced in the beam interaction region are considered electron candidates.
The energy measurement for reconstructed electrons and photons is performed by summing the energies measured in the EM calorimeter cells belonging to the candidate cluster. The energy is measured from a cluster size of ∆η × ∆φ = 0.075 × 0.175 in the barrel region of the calorimeter and ∆η × ∆φ = 0.125 × 0.125 in the calorimeter endcaps. The procedure for the energy measurement of electrons and photons closely follows that used in Run 1 [36], with updates to reflect the 2015 and 2016 data-taking conditions:
• The different layers of the electromagnetic calorimeter are intercalibrated by applying methods similar to those described in Ref. [36]. The first and second calorimeter layers are intercalibrated using the energy deposited by muons from Z → µ + µ − decays, with a typical uncertainty of 0.7% to 1.5% (1.5% to 2.5%) as a function of η in the barrel (endcap) calorimeter, for |η| < 2.4. This uncertainty is added in quadrature to the uncertainty in the modelling of the muon ionisation in the simulation (1% to 1.5% depending on η). The energy scale of the presampler is estimated using electrons from Z boson decays, after correcting the simulation on the basis of the correlations between the amount of detector material and the ratio of the energies deposited in the first and second layers of the calorimeter. The uncertainty in the presampler energy scale varies between 1.5% and 3% depending on η.
• The cluster energy is corrected for energy loss in the inactive materials in front of the calorimeter, the fraction of energy deposited outside the area of the cluster in the η-φ plane, the amount of energy lost behind the electromagnetic calorimeter, and to account for the variation of the energy response as a function of the impact point in the calorimeter.The calibration coefficients used to apply these corrections are obtained from a detailed simulation of the detector response to electrons and photons, and are optimised with a boosted decision tree (BDT) [37].The response is calibrated separately for electron candidates, converted photon candidates and unconverted photon candidates.In data, small corrections are applied for the φ-dependent energy loss in the gaps between the barrel calorimeter modules (corrections up to 2%, in about 5% of the calorimeter acceptance) and for inhomogeneities due to sectors operated at non-nominal high voltage (corrections between 1% and 7%, in about 2% of the calorimeter acceptance).
• The global calorimeter energy scale is determined in situ with a large sample of Z → e + e − events. The energy response in data and simulation is equalised by applying η-dependent correction factors to match the invariant mass distributions of Z → e + e − events. The uncertainty in these energy scale correction factors ranges from 0.02% to 0.1% as a function of η, except for the barrel-endcap transition region (1.37 < |η| < 1.52), where it reaches a few per mille. In this procedure, the simulated width of the reconstructed Z boson mass distribution is matched to the width observed in data by adding in the simulation a contribution to the constant term c of the electron energy resolution, $\sigma_E/E = a/\sqrt{E} \oplus b/E \oplus c$ (a small numerical illustration of this quadrature sum is given below). This constant term varies between 0.7% and 2% for |η| < 2.4 with an uncertainty of 0.03%-0.3%, except for the barrel-endcap transition region, where the constant term is slightly higher (2.5%-2.9%) with an uncertainty reaching 0.6%.
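The ⊕ in the resolution formula denotes addition in quadrature. A minimal numerical sketch follows; the sampling, noise and constant terms used here are illustrative assumptions, not ATLAS calibration constants.

```python
import numpy as np

def em_resolution(E_gev, a, b, c):
    """sigma_E / E = a/sqrt(E) (+) b/E (+) c, with (+) meaning addition in quadrature."""
    return np.sqrt((a / np.sqrt(E_gev)) ** 2 + (b / E_gev) ** 2 + c ** 2)

# Illustrative values only: a 10%*sqrt(GeV) sampling term, a 0.3 GeV noise term and
# a 1% constant term give a ~1.9% relative resolution for a 45 GeV electron.
print(em_resolution(45.0, a=0.10, b=0.3, c=0.010))
```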
The main sources of systematic uncertainties in the calibration procedure discussed in Ref. [36] have been revisited.These sources include uncertainties in the method used to extract the energy scale correction factors, as well as uncertainties due to the extrapolation of the energy scale from Z → e + e − events to photons, and also to electrons with energies different from those produced in Z → e + e − decays.The latter arise from the uncertainties in the linearity of the response due to the relative calibration of the different gains used in the calorimeter readout, in the knowledge of the material in front of the calorimeter (inside and outside of the ID, referred to as ID and non-ID material in the following), in the intercalibration of the different calorimeter layers, in the modelling of the lateral shower shapes and in the reconstruction of photon conversions.The total calibration uncertainty for photons with transverse energy (E T ) around 60 GeV is 0.2%-0.3% in the barrel and 0.45%-0.8% in the endcap.These uncertainties are close to those quoted in Ref. [36], but typically about 10% larger.The small increase in the uncertainty arises mostly from a larger uncertainty in the relative calibration of the first and second calorimeter layers with muons because of a worse ratio of signal to pile-up noise in Run 2 data.In the case of electrons with E T around 40 GeV, the total uncertainty ranges between 0.03% and 0.2% in most of the detector acceptance.For electrons with E T around 10 GeV the uncertainty ranges between 0.3% and 0.8%.A total of 64 independent uncertainty sources are used to describe the overall uncertainty affecting the energy calibration of electron and photons.
The accuracy of the energy calibration for low-energy electrons (5-20 GeV) is checked by computing residual energy calibration corrections (after applying the corrections extracted from the Z → e + e − sample) for an independent sample of J/ψ → e + e − events.These residual correction factors are found to be compatible with one within uncertainties.A similar check is performed by computing residual corrections for photons in a sample of radiative Z boson decays.They are found to be compatible with one within uncertainties which are given by the combination of the statistical uncertainty of the radiative Z boson decays sample and of the systematic uncertainty from the extrapolation of the energy scale from electrons to photons.Systematic uncertainties in the calorimeter energy resolution arise from uncertainties in the modelling of the sampling term a/ √ E and in the measurement of the constant term in Z boson decays, in the amount of material in front of the calorimeter, which affects electrons and photons differently, and in the modelling of the contribution to the resolution from fluctuations in the pile-up from additional protonproton interactions in the same or neighbouring bunch crossings.The uncertainty of the energy resolution for electrons and photons with transverse energy between 30 and 60 GeV varies between 5% and 10%.
The identification of photons and the rejection of background from hadrons is based primarily on shower shapes in the calorimeter.The two levels of selection, loose and tight, are described in Ref. [35].To further reduce the background from jets, two complementary isolation selection criteria are used, based on topological clusters of energy deposits in the calorimeter and on reconstructed tracks in a direction close to that of the photon candidate, as described in Ref. [24].
Electrons are identified using a likelihood-based method combining information from the electromagnetic calorimeter and the ID.As in the case of photons, electrons are required to be isolated using both the calorimeter-based and track-based isolation variables as described in Ref. [38].
Statistical methods
The mass measurement is based on the maximisation of the profile likelihood ratio [39,40]
$\Lambda(m_H) = L\big(m_H, \hat{\hat{\theta}}(m_H)\big) \big/ L\big(\hat{m}_H, \hat{\theta}\big)$,
where $\hat{m}_H$ and $\hat{\theta}$ denote the unconditional maximum-likelihood estimates of the parameters of the likelihood function L, while $\hat{\hat{\theta}}(m_H)$ is the conditional maximum-likelihood estimate of the nuisance parameters θ for a fixed value of the parameter m H. Systematic uncertainties and their correlations are modelled by introducing nuisance parameters θ described by likelihood functions associated with the estimate of the corresponding effect [6].
The statistical uncertainty of m H is estimated by fixing all nuisance parameters to their best-fit values; all remaining parameters are thus left unconstrained. This approach yields a lower bound on the statistical uncertainty when the combination of the different event categories discussed in the next sections is performed neglecting the different impact of the systematic uncertainties in each category. The upper bound on the total systematic uncertainty is estimated by subtracting in quadrature the statistical uncertainty from the total uncertainty.
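The following toy illustrates this recipe in Python: profile the nuisance parameters for the total uncertainty, fix them at their best-fit values for the statistical component, and subtract in quadrature for the systematic component. Everything in it (the single Gaussian "energy scale" nuisance parameter, the generated pseudo-data, the numerical values) is an illustrative assumption and not the analysis likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
resolution, sigma_scale = 1.6, 0.05            # GeV: toy mass resolution and scale uncertainty
data = rng.normal(125.0, resolution, size=60)  # toy per-event mass measurements

def nll(m_H, delta):
    """Unbinned negative log-likelihood plus a Gaussian penalty for the scale nuisance parameter."""
    return (-np.sum(norm.logpdf(data, loc=m_H + delta, scale=resolution))
            + 0.5 * (delta / sigma_scale) ** 2)

def profiled_nll(m_H, fix_delta=None):
    if fix_delta is not None:
        return nll(m_H, fix_delta)
    return minimize(lambda d: nll(m_H, d[0]), x0=[0.0]).fun

best = minimize(lambda p: nll(p[0], p[1]), x0=[125.0, 0.0])
m_hat, delta_hat = best.x

def half_width(fix_delta=None):
    """Half-width of the interval where -2 ln(Lambda) stays below 1."""
    n0 = profiled_nll(m_hat, fix_delta)
    scan = np.linspace(m_hat - 1.0, m_hat + 1.0, 201)
    inside = scan[np.array([2 * (profiled_nll(m, fix_delta) - n0) for m in scan]) < 1.0]
    return 0.5 * (inside.max() - inside.min())

total = half_width()
stat = half_width(fix_delta=delta_hat)           # nuisance parameter fixed at its best-fit value
syst = np.sqrt(max(total**2 - stat**2, 0.0))     # quadrature subtraction, as in the text
print(f"total = {total:.3f} GeV, stat = {stat:.3f} GeV, syst = {syst:.3f} GeV")
```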
Alternatively, the decomposition of the uncertainty into statistical and systematic components is performed using the BLUE method [41][42][43].The two approaches may lead to different results from the decomposition of the uncertainty for a combination of measurements with significant and uncorrelated systematic uncertainties.
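For reference, the BLUE combination is a variance-minimising linear combination of the inputs; a minimal sketch follows. Feeding it the two Run 2 channel results and assuming, purely for illustration, that they are uncorrelated gives about 124.85 ± 0.27 GeV, close to the combined Run 2 value quoted later; the paper's actual combinations use the full likelihood with correlated nuisance parameters.

```python
import numpy as np

def blue(values, cov):
    """Best Linear Unbiased Estimate: weights w = C^-1 1 / (1^T C^-1 1)."""
    values, cov = np.asarray(values, float), np.asarray(cov, float)
    ones = np.ones(len(values))
    w = np.linalg.solve(cov, ones)
    w /= ones @ w
    return w @ values, np.sqrt(w @ cov @ w), w

# Two measurements, treated as uncorrelated for illustration only:
m, sigma, weights = blue([124.79, 124.93], np.diag([0.37**2, 0.40**2]))
print(f"{m:.2f} +- {sigma:.2f} GeV")   # ~124.85 +- 0.27 GeV
```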
7 Mass measurement in the H → ZZ* → 4ℓ channel
Event selection
Events are required to contain at least four isolated leptons (ℓ = e, µ) that emerge from a common vertex and form two pairs of oppositely charged same-flavour leptons. Electrons are required to be within the full pseudorapidity range of the inner tracking detector (|η| < 2.47) and to have transverse energy E T > 7 GeV, while muons are required to be within the pseudorapidity range of the muon spectrometer (|η| < 2.7) and to have transverse momentum p T > 5 GeV. A detailed description of the event selection can be found in Refs. [11,44].
The lepton pair with an invariant mass closest to the Z boson mass in each quadruplet is referred to as the leading dilepton pair, while the remaining pair is referred to as the subleading dilepton pair. The selected events are split according to the flavour of the leading and subleading pairs; ordered according to the expected selection efficiency, they are 4µ, 2e2µ, 2µ2e and 4e. Reconstructed photon candidates passing final-state radiation selections are searched for in all the events [45]. Such photons are found in 4% of the events and their energy is included in the mass computation. In addition, a kinematic fit is performed to constrain the invariant mass of the leading lepton pair to the Z boson mass, improving the m 4ℓ resolution by about 15% [7]. The improvement brought by the correction of the local tracker misalignments, as discussed in Section 4, is at the percent level for the m 4ℓ resolution of signal events. After event selection, the m 4ℓ resolution for the signal (at m H = 125 GeV), estimated with a Gaussian fit around the peak, is expected to be about 1.6 GeV, 2.4 GeV, 2.2 GeV and 1.8 GeV for the 4µ, 4e, 2µ2e and 2e2µ channels, respectively. In the fit range 110 < m 4ℓ < 135 GeV, 123 candidate events are observed. The yield is in agreement with an expectation of 107 ± 6 events, 53% of which are expected to be from the signal, assuming m H = 125 GeV.
The dominant contribution to the background is non-resonant ZZ* production (about 84% of the total background yield). Events with hadrons, or hadron decay products, misidentified as prompt leptons also contribute (about 15%). Events originating from tt̄+Z, ZZZ, WZZ, and WWZ production are estimated to contribute less than 1% of the total background. The residual combinatorial background, originating from events with additional prompt leptons, was found to be negligibly small [44].
The precision of the mass measurement is further improved by categorising events with a multivariate discriminant which distinguishes the signal from the ZZ* background. The BDT described in Ref. [7], based on the same input variables, is trained on simulated signal events with different mass values simultaneously (124, 125 and 126 GeV) and ZZ* background events that pass the event selection. For each final state, four equal-size exclusive bins in the BDT response are used. This improves the precision of the m H measurement in the 4ℓ decay channel by about 6%.
Signal and background model
The invariant mass in each category is described by the sum of a signal and a background distribution.
Non-resonant ZZ* production is estimated using simulation normalised to the most accurate predictions and validated in the sidebands of the selected 4ℓ mass range. Smaller contributions to the background from tt̄+Z, ZZZ, WZZ and WWZ production are also estimated using simulation, while the contributions from Z+jets, WZ, and tt̄ production where one or more hadrons, or hadron decay products, are misidentified as a prompt lepton are estimated from data using minimal input from simulation, following the methodology described in Ref. [11]. For each contribution to the background, the probability density function (pdf) is estimated with the kernel density estimation method.
For the determination of the signal distribution, an approach based on the event-by-event response of the detector is employed. The measured m 4ℓ signal distribution is modelled as the convolution of a relativistic Breit-Wigner distribution, with a width of 4.1 MeV [27][28][29][30] and a pole at m H, with a four-lepton invariant mass response distribution which is derived event by event from the expected response distributions of the individual leptons. The lepton energy response distributions are derived from simulation as a function of the lepton energy and detector region. The lepton energy response is modelled as a weighted sum of three Gaussian distributions. For an observed event, the m 4ℓ pdf is derived from the convolution of the response distributions of the four measured leptons. The direct convolution of the four lepton distributions, which would lead to 3⁴ = 81 Gaussian distributions, is simplified to a weighted sum of four Gaussian pdfs following an iterative merging procedure, as performed with the Gaussian-sum filter procedure [46,47]. An additional correction is applied to remove the residual differences which arise from the correlation between the lepton energy measurements introduced by the kinematically constrained fit on the leading dilepton pair and the BDT categorisation of events.
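As a rough sketch of the Gaussian-mixture idea behind this per-event model: because the 4.1 MeV natural width is negligible compared with the GeV-level resolution, the per-event signal pdf is in practice close to the weighted sum of Gaussians itself, centred on the hypothesised m_H. The function below, with made-up weights, offsets and widths, illustrates that structure only; it does not reproduce the paper's per-lepton response distributions, the 81-to-4 Gaussian merging, or the Breit-Wigner convolution.

```python
import numpy as np

def per_event_signal_pdf(m4l, m_H, weights, offsets, sigmas):
    """Weighted sum of Gaussians describing one event's four-lepton mass response,
    centred on the hypothesised Higgs boson mass m_H (all quantities in GeV)."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    mu = m_H + np.asarray(offsets, float)   # response offsets relative to m_H
    s = np.asarray(sigmas, float)
    gauss = np.exp(-0.5 * ((m4l - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return float(w @ gauss)

# One event with an illustrative three-Gaussian response:
print(per_event_signal_pdf(124.5, 125.0, [0.6, 0.3, 0.1], [0.0, -0.4, 0.8], [1.2, 2.0, 3.0]))
```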
Finally, the mass of the Higgs boson m H is determined by a simultaneous unbinned fit of signal-plus-background distributions to data over the sixteen categories.2 The per-event component of the signal pdf is added to the background distribution, which is integrated over all kinematic configurations of the four final-state leptons. In each of the four BDT categories, the signal yield is scaled by an independent floating normalisation modifier. The measured Higgs boson mass depends on the lepton energy resolution and the lepton energy scale. Uncertainties in these quantities are accounted for in the fit by Gaussian-distributed penalty terms whose widths are obtained from auxiliary data or simulation control samples. The expected uncertainty, with m H = 125 GeV and production rates predicted by the SM, for a data sample of the size of the experimental data set, evaluated using simulation-based pseudo-experiments, is ±0.35 GeV.
A validation with data is performed with Z → 4ℓ events to test the performance of the method on a known resonance with a similar topology. In this test, the peak and width of the relativistic Breit-Wigner function are set to those of the Z boson. The measured Z boson mass was found to be 91.62 ± 0.35 GeV including statistical and systematic uncertainty. The observed uncertainty is in agreement with the expectation of ±0.34 GeV, as evaluated from simulation. The measured value is in agreement with the world average of 91.1876 ± 0.0021 GeV [48]. As an independent check, the template method [7] is also used to measure m H. The expected statistical uncertainty of m H obtained with the per-event method from a sample equal in size to the experimental data set is, on average, 3% smaller than the statistical uncertainty obtained with the template method. Both methods are found to be unbiased within the statistical uncertainty of the simulated samples used, of about 8 MeV on m H.
Results
The estimate of m H for the per-event and template methods is extracted with a simultaneous profile likelihood fit to the sixteen categories.The free parameters of the fit are m H , the normalisation modifiers of each BDT category, and the 51 nuisance parameters associated with systematic uncertainties.
The measured value of m H from the per-event method is found to be $m_H^{ZZ^*}$ = 124.79 ± 0.36 (stat) ± 0.05 (syst) GeV = 124.79 ± 0.37 GeV.
The total uncertainty is in agreement with the expectation and is dominated by the statistical component.
The root-mean-square of the expected uncertainty due to statistical fluctuations in the event yields of each category was estimated to be 40 MeV.The p-value of the uncertainty being as high or higher than the observed value, estimated with pseudo-experiments, is found to be 0.47.The total systematic uncertainty is 50 MeV, the leading sources being the muon momentum scale (40 MeV) and the electron energy scale (26 MeV), with other sources (background modelling and simulation statistics) being smaller than 10 MeV.
For the template method, the total uncertainty is found to be +0.41 −0.39 GeV, larger by 35 MeV than for the per-event method. The observed difference between the m H estimates of the two methods is found to be 0.16 GeV, which is compatible with the expected variance estimated with pseudo-experiments and corresponds to a one-sided p-value of 0.19. Figure 1(a) shows the m 4ℓ distribution of the data together with the result of the fit to the H → ZZ* → 4ℓ candidates when using the per-event method. The fit is also performed independently for each decay channel, fitting all BDT categories simultaneously; the resulting likelihood profile is compared with the combined fit in Figure 1(b). The combined measured value of m H is found to be compatible with the value measured independently for each channel, with the largest deviation being 1.4σ for the 2µ2e channel and the others being within 1σ.
The Higgs boson mass in the four-lepton channel is also measured by using a profile likelihood ratio to combine the information from the Run 1 analysis [6], where m H = 124.51 ± 0.52 GeV, and the Run 2 analysis, keeping each individual signal normalisation parameter independent. The systematic uncertainties taken to be correlated between the two runs are the muon momentum and electron energy scales, while all other systematic uncertainties are considered uncorrelated. The combined Run 1 and Run 2 result is $m_H^{ZZ^*}$ = 124.71 ± 0.30 (stat) ± 0.05 (syst) GeV = 124.71 ± 0.30 GeV. The difference between the measured values of m H in the four-lepton channel in the two runs is $\Delta m_H^{ZZ^*}$ = 0.28 ± 0.63 GeV, with the two results being compatible, with a p-value of 0.84.
Mass measurement in the H → γγ channel
In the diphoton channel, the Higgs boson mass is measured from the position of the narrow resonant peak in the m γγ distribution due to the Higgs boson decay to two photons.Such a peak is observed over a large, monotonically decreasing, m γγ distribution from continuum background events.The diphoton invariant mass is computed from the measured photon energies and from their directions relative to the diphoton production vertex, chosen among all reconstructed primary vertex candidates using a neural-network algorithm based on track and primary vertex information, as well as the directions of the two photons measured in the calorimeter and inner detector [49].
Events are selected and divided into categories with different mass resolutions and signal-to-background ratios, optimised for the measurement of simplified template cross-sections [30,50] and of production mode signal strengths of the Higgs boson in the diphoton decay channel.The event selection and classification are described in Ref. [24].A potential reduction of the total expected uncertainty by 4% could have been obtained using the same event categories chosen for the mass measurement with the Run 1 data [7].Given the small expected improvement, a choice was made to use the same categorisation for the measurement of the mass and of the production mode signal strengths.
Event selection and categorisation
After an initial preselection, described in Ref. [24], requiring the presence of at least two loosely identified photon candidates with |η| < 1.37 or 1.52 < |η| < 2.37, events are selected if the leading and the subleading photon candidates have E T /m γγ > 0.35 and 0.25 respectively, and satisfy the tight identification criteria and isolation criteria based on calorimeter and tracking information.Only events with invariant mass of the leading and subleading photon in the range 105 GeV < m γγ < 160 GeV are kept.
The events passing the previous selection are then classified, according to the properties of the two selected photons and of jets, electrons, muons and missing transverse momentum, into 31 mutually exclusive categories [24].The most populated class, targeting gluon-gluon fusion production without reconstructed jets, is split into two categories of events with very different energy resolution: the first ("ggH 0J Cen") requires both photons to have |η| < 0.95, while the second ("ggH 0J Fwd") retains the remaining events.
Signal and background models
For each category, the shape of the diphoton invariant mass distribution of the signal is modelled with a double-sided Crystal Ball function [51], i.e. a Gaussian function in the peak region with power-law functions in both tails.The dependence of the parameters on the Higgs boson mass m H is described by first-order polynomials, whose parameters are fixed by fitting simultaneously all the simulated signal samples generated for different values of m H .The quantity σ 68 , defined as half of the smallest range containing 68% of the expected signal events, is an estimate of the signal m γγ resolution and for m H = 125 GeV it ranges between 1.41 GeV and 2.10 GeV depending on the category, while for the inclusive case its value is 1.84 GeV. Figure 2(a) shows an example of the signal model for a category with one of the best invariant mass resolutions and for a category with one of the worst resolutions.
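A minimal implementation of the double-sided Crystal Ball shape (Gaussian core with power-law tails on both sides) is sketched below for reference; the parameter values in the example are arbitrary, and the function is left unnormalised, unlike the pdf used in the fits.

```python
import numpy as np

def double_sided_crystal_ball(x, mu, sigma, alpha_lo, n_lo, alpha_hi, n_hi):
    """Unnormalised double-sided Crystal Ball: Gaussian core with power-law tails."""
    t = (np.asarray(x, float) - mu) / sigma
    out = np.empty_like(t)
    core = (t >= -alpha_lo) & (t <= alpha_hi)
    out[core] = np.exp(-0.5 * t[core] ** 2)
    lo = t < -alpha_lo                       # low-mass tail
    a_lo = np.exp(-0.5 * alpha_lo ** 2)
    out[lo] = a_lo * ((alpha_lo / n_lo) * (n_lo / alpha_lo - alpha_lo - t[lo])) ** (-n_lo)
    hi = t > alpha_hi                        # high-mass tail
    a_hi = np.exp(-0.5 * alpha_hi ** 2)
    out[hi] = a_hi * ((alpha_hi / n_hi) * (n_hi / alpha_hi - alpha_hi + t[hi])) ** (-n_hi)
    return out

m = np.linspace(110.0, 140.0, 7)
print(double_sided_crystal_ball(m, mu=125.0, sigma=1.7, alpha_lo=1.5, n_lo=5.0, alpha_hi=1.8, n_hi=8.0))
```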
The expected signal yield is expressed as the product of integrated luminosity, production cross-section, diphoton branching ratio, acceptance and efficiency.The cross-section is parameterised as a function of m H separately for each production mode.Similarly, the branching ratio is parameterised as a function of m H .The product of acceptance and efficiency is evaluated separately for each production mode using only the samples with m H = 125 GeV.Its dependence on the mass is weak (relative variation below 1% when varying the Higgs boson mass by ±1 GeV) and is thus neglected.The cross-sections are fixed to the SM values multiplied by a signal modifier for each production mode: µ ggF , µ VBF , µ V H and µ t t H .The expected yield for m H = 125 GeV varies between about one event in categories sensitive to rare production modes (t tH, tH) to almost 500 events in the most populated event category ("ggH 0J Fwd").The background invariant mass distribution of each category is parameterised with an empirical continuous function of the diphoton system invariant mass value.The parameters of these functions are fitted directly to data.The functional form used to describe the background in each category is chosen among several alternatives according to the three criteria described in Ref. [24]: (i) the fitted signal yield in a test sample representative of the data background, built by combining simulation and control regions in data, must be minimised; (ii) the χ 2 probability for the fit of this background control sample must be larger than a certain threshold; (iii) the quality of the fit to data sidebands must not improve significantly when adding an extra degree of freedom to the model.The models selected by this procedure are exponential or power-law functions with one degree of freedom for the categories with few events, while exponential functions of a second-order polynomial are used for the others.
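To make the "exponential of a second-order polynomial" background shape concrete, here is a small sketch that fits such a function to toy binned data with SciPy; the rescaling of the mass variable, the parameter values and the toy counts are illustrative assumptions and have nothing to do with the actual ATLAS sidebands or the spurious-signal test described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_poly2(m, n0, a1, a2):
    """Exponential of a second-order polynomial in the (rescaled) diphoton mass."""
    x = (m - 100.0) / 100.0            # crude rescaling for numerical stability (assumption)
    return n0 * np.exp(a1 * x + a2 * x ** 2)

# Toy "sideband" counts generated from the same shape plus noise -- not ATLAS data.
m_centres = np.linspace(105.0, 160.0, 56)
rng = np.random.default_rng(0)
counts = exp_poly2(m_centres, 400.0, -2.5, 0.3) + rng.normal(0.0, 5.0, m_centres.size)
popt, pcov = curve_fit(exp_poly2, m_centres, counts, p0=[400.0, -2.0, 0.0])
print(popt)   # fitted (n0, a1, a2), close to the generating values
```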
From the extrapolation of a background-only fit to the sidebands of the m γγ distribution in data, excluding events with 121 GeV < m γγ < 129 GeV, the expected signal-to-background ratio in a m γγ window containing 90% of the signal distribution for m H = 125 GeV varies between 2% in the "ggH 0J Fwd" category and 100% in a high-purity, low-yield (about 12 events) category targeting H+2jet, VBF-like events with low transverse momentum of the H+2jet system.
Systematic uncertainties
The main sources of systematic uncertainty in the measured Higgs boson mass in the diphoton channel are the uncertainties in the photon energy scale (PES), the uncertainty arising from the background model, and the uncertainty in the selection of the diphoton production vertex.They are described in detail in Ref. [24].
For each source of uncertainty in the PES described in Section 5, the diphoton invariant mass distribution for each category is recomputed after varying the photon energy by its uncertainty and is then compared with the nominal distribution.The sum in quadrature of the positive or negative shifts of the m γγ peak position due to such variations ranges from ±260 MeV in the "ggH 0J Cen" category to ±470 MeV in the "jet BSM" category, which requires at least one jet with p T > 200 GeV.All the PES effects are considered as fully correlated across categories.
The uncertainty due to the background modelling is evaluated following the procedure described in Ref. [7].The expected signal contribution as predicted by the signal model is added to the background control sample.The bias in the estimated Higgs boson mass from a signal-plus-background fit to the test sample relative to the injected mass is considered as a systematic uncertainty due to the background modelling.Its value is around ±60 MeV for the most relevant categories for the mass measurement.In the other categories it can assume larger values, which are compatible with statistical fluctuations of the background control sample.For this reason this systematic uncertainty is ignored in the poorly populated t tH categories, which give a negligible contribution to the mass measurement.This systematic uncertainty is assumed to be uncorrelated between different categories.
The systematic uncertainty related to the selection of the diphoton production vertex is evaluated using Z → ee events, as described in Ref. [7].An expected uncertainty of ±40 MeV in m H is used for all the categories and assumed to be fully correlated across different categories.
Systematic uncertainties in the diphoton mass resolution due to uncertainties in the photon energy resolution vary between ±6% (for the "ggH 0J Cen" category) and 11% (for the "jet BSM" category), and are expected to have a negligible impact on the mass measurement.
Systematic uncertainties in the yield and in the migration of events between categories described in Ref. [24] have a negligible impact on the mass measurement.
The uncertainty due to the signal modelling is evaluated similarly to that due to the background modelling.
A sample is built using the expected background distribution and the simulated signal events at m H = 125 GeV.The bias in the fitted Higgs boson mass is considered as a systematic uncertainty and is assumed to be correlated between different categories.The relative bias is below 10 −4 in most of the categories, and at most a few times 10 −4 in the other categories.
Results
The Higgs boson mass in the diphoton channel is estimated with a simultaneous binned maximumlikelihood fit to the m γγ distributions of the selected event categories.In each category, the distribution is modelled with a sum of the background and signal models.The free parameters of the fit are m H , the four signal strengths, the number of background events (31 sources) and the parameters (37 sources) describing the shape of the background invariant mass distribution in each category, and all the nuisance parameters (206) associated with systematic uncertainties.Figure 2(b) shows the distribution of the data overlaid with the result of the simultaneous fit.All event categories are included.For illustration purposes, events in each category are weighted by a factor ln(1 + S/B), where S and B are the fitted signal and background yields in a m γγ interval containing 90% of the signal.
The measured mass of the Higgs boson in the diphoton channel is $m_H^{\gamma\gamma}$ = 124.93 ± 0.21 (stat) ± 0.34 (syst) GeV = 124.93 ± 0.40 GeV, where the first uncertainty is statistical and the second is the total systematic uncertainty, dominated by the photon energy scale uncertainty.
Assuming signal strengths as in the SM and the signal model determined from the simulation, the expected statistical uncertainty is 0.25 GeV and the expected total uncertainty is 0.41 GeV, with a root-mean-square, estimated from pseudo-experiments, of about 40 MeV.Compared to the expectation, the slightly larger systematic uncertainty and smaller statistical uncertainty observed in data are due to a lower than expected signal yield in some categories with large expected yield and small photon energy scale uncertainty, and to the fitted resolution in data being a few percent better than in the simulation (but still agreeing with it within one standard deviation).
To check if the measurement is sensitive to the assumption about the splitting of the production modes, the measurement is repeated using one common signal strength for all the processes.A small shift of the measured m H by 20 MeV is observed.
Other checks targeting possible miscalibration due to detector effects for some specific category of photons are performed by partitioning the entire data sample into detector-oriented categories, different from those used for the nominal result, and determining the probability that m H measured in one of these categories is compatible with the average m H from the other categories.A first categorisation is based on whether the photons are reconstructed as converted or not, a second is based on the photons' impact points in the calorimeter (either in the barrel region, |η| < 1.37, or in the endcap region, |η| > 1.52), and a third is based on the number of interactions per bunch crossing.For each of these categories a new background model, a new signal model and new systematic uncertainty values are computed.For each category the compatibility of its m H value with the combined m H value is tested by considering as an additional likelihood parameter the quantity ∆ i equal to the difference between that category's m H value and the combined value.No value of ∆ i significantly different from zero is found.A similar test is performed to assess the global compatibility of all the different categories with a common value of m H .In the three categorisations considered the smallest global p-value is 12%.The same procedure is applied to the categories used in the analysis: the smallest p-value computed on single categories is 7% while the global p-value is 94%.
A combination of the Higgs boson mass measured in the diphoton channel by ATLAS in Run 1, 126.02 ± 0.51 GeV [6], and in Run 2 is performed using a profile likelihood ratio. The signal strengths are treated as independent parameters. The systematic uncertainties considered correlated between the two LHC run periods are most of the photon energy scale and resolution uncertainties and those in the pile-up modelling, while all the other systematic uncertainties are considered uncorrelated. The photon energy calibration uncertainties that are treated as uncorrelated between the two LHC data-taking periods are a few uncertainties included only in the Run 2 measurement, the uncertainty in the photon energy leakage outside the reconstructed cluster, whose measurement is limited by the statistical precision of the Z → ℓℓγ sample, and the uncertainty in the electromagnetic calorimeter response non-linearity, which is estimated with different procedures in the two LHC run periods. The result is m
Combined mass measurement
The Higgs boson mass is measured by combining information from both the H → ZZ* → 4ℓ and H → γγ channels. The correlations between the systematic uncertainties in the two channels are accounted for in the profile likelihood function. The main sources of correlated systematic uncertainty include the calibrations of electrons and photons, the pile-up modelling, and the luminosity. Signal yield normalisations are treated as independent free parameters in the fit to minimise model-dependent assumptions in the measurement of the Higgs boson mass.
The combined value of the mass measured using Run 2 data is m H = 124.86 ± 0.27 GeV. Assuming statistical uncertainties only, the uncertainty in the combined value is ±0.18 GeV. The corresponding profile likelihood, for the two channels and for their combination, is shown in Figure 3(a). This result is in good agreement with the ATLAS+CMS Run 1 measurement [6], m H = 125.09 ± 0.24 GeV.
The combined mass measurement from the ATLAS Run 1 (m H = 125.36 ± 0.41 GeV) and Run 2 results is m H = 124.97 ± 0.24 GeV. Assuming statistical uncertainties only, the measurement uncertainty amounts to 0.16 GeV. Figure 3(b) shows the value of −2 ln Λ as a function of m H for the two channels combined, separately for the ATLAS Run 1 and Run 2 data sets, as well as for their combination. The contributions of the main sources of systematic uncertainty to the combined mass measurement, using both ATLAS Run 1 and Run 2 data, are summarised in Table 1. The impact of each source of systematic uncertainty is evaluated starting from the contribution of each individual nuisance parameter to the total uncertainty. This contribution is defined as the mass shift δm H observed when re-evaluating the profile likelihood ratio after fixing the nuisance parameter in question to its best-fit value increased or decreased by one standard deviation, while all remaining nuisance parameters are left free to float. The sum in quadrature of groups of nuisance parameter variations gives the impact of each category of systematic uncertainties. The nuisance parameter values from the unconditional maximum-likelihood fit are consistent with the pre-fit values within one standard deviation. The probability that the m H results from the four measurements (in the 4ℓ and γγ final states, using Run 1 or Run 2 ATLAS data) are compatible is 12.3%. The results from each of the four individual measurements, as well as various combinations, along with the LHC Run 1 result, are summarised in Figure 4.
The combination of the four ATLAS measurements using the BLUE approach as an alternative method, assuming two uncorrelated channels,3 is found to be m H = 124.97 ± 0.23 GeV = 124.97 ± 0.19 (stat) ± 0.13 (syst) GeV. The splitting of the errors takes into account the relative weight of the two channels in the combined measurement.
Conclusion
The mass of the Higgs boson has been measured from a combined fit to the invariant mass spectra of the decay channels H → ZZ* → 4ℓ and H → γγ. The results are obtained from a Run 2 pp collision data sample recorded by the ATLAS experiment at the CERN Large Hadron Collider at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 36.1 fb−1. The measurements are based on the latest calibrations of muons, electrons, and photons, and on improvements to the analysis techniques used to obtain the previous results from ATLAS Run 1 data. The measured values of the Higgs boson mass for the H → ZZ* → 4ℓ and H → γγ channels are m H = 124.79 ± 0.37 GeV and m H = 124.93 ± 0.40 GeV, respectively. From the combination of these two channels, the mass is measured to be m H = 124.86 ± 0.27 GeV. This result is in good agreement with the average of the ATLAS and CMS Run 1 measurements. The combination of the ATLAS Run 1 and Run 2 measurements yields m H = 124.97 ± 0.24 GeV.
Figure 1: (a) Invariant mass distribution for the data (points with error bars) shown together with the simultaneous fit result to H → ZZ* → 4ℓ candidates (continuous line). The background component of the fit is also shown (filled area). The signal probability density function is evaluated per event and averaged over the observed data. (b) Value of −2 ln Λ as a function of m H for the combined fit to all H → ZZ* → 4ℓ categories. The intersections of the −2 ln Λ curve with the horizontal lines labelled 1σ and 2σ provide the 68.3% and 95.5% confidence intervals.
Figure 2: (a) Invariant mass distributions (circles) of simulated H → γγ events reconstructed in two categories with one of the best ("ggH 0J Cen": open circles) and one of the worst ("ggH 0J Fwd": solid circles) experimental resolutions. The signal model derived from a fit of the simulated events is superimposed (solid lines). (b) Diphoton invariant mass distribution of all selected data events, overlaid with the result of the fit (solid red line). Both for data and for the fit, each category is weighted by a factor ln(1 + S/B), where S and B are the fitted signal and background yields in a m γγ interval containing 90% of the expected signal. The dotted line describes the background component of the model. The bottom inset shows the difference between the sum of weights and the background component of the fitted model (dots), compared with the signal model (black line).
Figure 3: The value of −2 ln Λ as a function of m H for (a) the H → γγ and H → ZZ* → 4ℓ channels and their combination (red, blue and black, respectively) using Run 2 data only and for (b) Run 1, Run 2 and their combination (red, blue and black, respectively). The dashed lines show the mass measurement uncertainties assuming statistical uncertainties only.
Figure 4: Summary of the Higgs boson mass measurements from the individual and combined analyses performed here, compared with the combined Run 1 measurement by ATLAS and CMS [6]. The statistical-only (horizontal yellow-shaded bands) and total (black error bars) uncertainties are indicated. The (red) vertical line and corresponding (grey) shaded column indicate the central value and the total uncertainty of the combined ATLAS Run 1 + 2 measurement, respectively.
Table 1: Main sources of systematic uncertainty in the Higgs boson mass m H measured with the 4ℓ and γγ final states using Run 1 and Run 2 data.
References
[4] P. W. Higgs, Broken symmetries and the masses of gauge bosons, Phys. Rev. Lett. 13 (1964) 508.
[5] G. Guralnik, C. Hagen and T. Kibble, Global conservation laws and massless particles, Phys. Rev. Lett. 13 (1964) 585.
[6] ATLAS and CMS Collaborations, Combined Measurement of the Higgs Boson Mass in pp Collisions at √s = 7 and 8 TeV with the ATLAS and CMS Experiments, Phys. Rev. Lett. 114 (2015) 191803, arXiv: 1503.07589 [hep-ex].
[7] ATLAS Collaboration, Measurement of the Higgs boson mass from the H → γγ and H → ZZ* → 4ℓ channels in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector, Phys. Rev. D 90 (2014) 052004, arXiv: 1406.3827 [hep-ex].
[8] CMS Collaboration, Precise determination of the mass of the Higgs boson and tests of compatibility of its couplings with the standard model predictions using proton collisions at 7 and 8 TeV, Eur. Phys. J. C 75 (2015) 212, arXiv: 1412.8662 [hep-ex].
[9] CMS Collaboration, Measurements of properties of the Higgs boson decaying into the four-lepton final state in pp collisions at √s = 13 TeV, JHEP 11 (2017) 047, arXiv: 1706.09936 [hep-ex].
[10] ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3 (2008) S08003.
[11] ATLAS Collaboration, Measurement of inclusive and differential cross sections in the H → ZZ* → 4ℓ decay channel in pp collisions at √s = 13 TeV with the ATLAS detector, JHEP 10 (2017) 132, arXiv: 1708.02810 [hep-ex].
Effect of Ge Addition on Magnetic Properties and Crystallization Mechanism of FeSiBPNbCu Nanocrystalline Alloy with High Fe Content
: In this work, new Ge-containing Fe-based nanocrystalline alloys with the composition of Fe 80.2 Si 3 B 12- x P 2 Nb 2 Cu 0.8 Ge x ( x = 0, 1, 2 at.%) were developed, and the effects of Ge content on the magnetic and crystallization processes of the alloys were investigated. The addition of Ge extends the annealing window of the present Fe-based alloys, which reaches 173.6 K for the alloy of x = 2. The nanocrystalline alloy of x = 2, composed of dense and uniformly distributed α -Fe grains with an average grain size of 15.7 nm precipitated in the amorphous matrix, was obtained by conventional annealing treatment at a temperature of 843 K for 10 min, and this nanocrystalline alloy exhibited excellent magnetic properties with the H c of 3 A/m and B s of 1.65 T, which has great potential for industrial application. Non-isothermal crystallization kinetics studies show that the nucleation activation energy of the alloys gradually decreases with the increase in Ge content. The primary crystallization process is dominated by the direct growth of pre-existing nuclei in the as-spun alloy ribbons, and these pre-existing nuclei provide numerous heterogeneous nucleation sites to form dense and uniform α -Fe nanocrystals with a fine grain size, which leads to the excellent magnetic properties of the present Ge-containing Fe-based nanocrystalline alloys.
Introduction
Fe-based amorphous/nanocrystalline soft magnetic alloys are widely used in wireless charging, transformers, sensors, etc. Technological progress requires devices to become smaller and more efficient [1], so the shortcomings in the magnetic properties of existing alloys must be overcome to improve their overall performance. Since the first discovery of the FeSiBNbCu nanocrystalline alloys in 1988 [2], Fe-based nanocrystalline alloys have attracted widespread attention due to their excellent performance. The Finemet alloy is widely used because of its ultra-high magnetic permeability (µ), extremely low coercivity (H c) and excellent frequency characteristics, but its relatively low saturation flux density (B s) struggles to meet the demands of high efficiency and device miniaturization [3,4]. Improving B s requires the content of ferromagnetic elements in the alloy to be as high as possible. Among the ferromagnetic elements Fe, Co and Ni, additions of Co and Ni increase the cost and are likely to deteriorate the magnetic properties of the alloy [5][6][7]; thus, increasing the Fe content is the most economical and effective way to improve B s. However, for alloy systems with a high Fe content, it is very difficult to balance a high B s with sufficient amorphous-forming ability (AFA), because too high an Fe content usually causes the alloy's AFA to decrease [8][9][10][11]. It is therefore desirable to design a reasonable alloy composition that ensures both good soft magnetic properties and sufficient AFA of the alloy, so that industrial production can be achieved [12][13][14].
Many studies have shown that adding a P element can not only increase the AFA of the alloy, but also promote nucleation and refine the grains of the alloy [8,15,16]. Compared with P-free Fe-based alloys, P-containing Fe-based alloys have a higher activation energy, which helps to form a uniform and dense microstructure, and thus they exhibit an excellent performance [17][18][19]. Additionally, the research of V. Cremaschi et al. shows that the addition of Ge can optimize the magnetic properties and avoid a decrease in the B s of Fe-based nanocrystalline alloys [20][21][22][23]. Based on the above considerations, in this work, new Fe 80.2 Si 3 B 12-x P 2 Nb 2 Cu 0.8 Ge x (x = 0, 1, 2 at.%) alloys were designed by adding P and Ge to a FINEMET-like nanocrystalline alloy with a high Fe content, the corresponding nanocrystalline alloys were prepared by conventional heat treatment, and the effects of Ge content on the magnetic properties and crystallization process of the present nanocrystalline alloys were investigated. The problems of rapid grain coarsening and the sudden decrease in the magnetic properties of Fe-based nanocrystalline soft magnetic alloys with a high Fe content during the annealing process were successfully solved by optimizing the composition of the alloys and the heat treatment process; therefore, the prepared nanocrystalline alloys exhibit excellent soft magnetic properties and can thus be applied and industrialized.
Materials and Methods
The master alloy ingots of Fe80.2Si3B12−xP2Nb2Cu0.8Gex (x = 0, 1, 2 at.%) were prepared by induction melting of a mixture of high-purity raw materials of Fe (99.99%), Si (99.99 wt.%), B (99.95%), FeP (99.99%), Cu (99.99%) and Nb (99.99%) in a high-purity argon atmosphere. The broken alloy ingots were made into alloy ribbons with a thickness of 22 µm and a width of 1.2 mm using the melt-spinning technique. The thermal behavior of the samples was measured by a differential scanning calorimeter (DSC, 404C, Netzsch, Shanghai, China) under a pure argon gas flow and at a constant heating rate. The structural features of the alloy ribbon samples were examined by X-ray diffraction (XRD, D8 ADVANCE, Bruker, Billerica, MA, USA) with Cu-Kα radiation and by transmission electron microscopy (Tecnai F20, FEI, Hillsboro, OR, USA). A tubular annealing furnace was used to anneal the melt-spun ribbons in a vacuum atmosphere to produce nanocrystalline alloy ribbons. The coercivity (Hc) of the annealed alloy ribbons was measured by a DC-BH loop tracer (EXPH-100, Riken Deshi, Saitama, Japan) under a magnetic field of 800 A/m. The saturation magnetic flux density (Bs) was measured by a vibrating sample magnetometer (VSM, 7410, Lake Shore, Westerville, OH, USA) at room temperature with a maximum applied field of 800 kA/m.
Results and Discussion
The melt-spun alloy ribbons of Fe80.2Si3B12−xP2Nb2Cu0.8Gex (x = 0, 1 and 2 at.%, denoted as Ge0, Ge1 and Ge2, respectively) have outstanding surface quality and bending ductility. Figure 1a shows the XRD patterns taken from the free surface of the as-spun Ge0, Ge1 and Ge2 alloy ribbons, in which there are only diffuse peaks at around 2θ = 45° and no sharp diffraction peaks corresponding to crystalline phases, indicating the formation of a fully amorphous phase. Figure 1b shows the DSC curves of the as-spun Ge0, Ge1, and Ge2 alloy ribbons at a heating rate of 40 K/min. All of the DSC curves show two clear exothermic peaks.

To determine the crystallization reactions corresponding to the two exothermic peaks, the Ge0, Ge1, and Ge2 glassy alloy ribbons were continuously heated in the DSC at a heating rate of 40 K/min to temperatures just beyond their Tx1 (onset temperature of the first peak) and Tx2 (onset temperature of the second peak), respectively, and then cooled to room temperature as fast as possible. The XRD patterns of the resulting samples are shown in Figure 2. They indicate that the first and second exothermic peaks in the DSC curves correspond to the precipitation of the α-Fe phase and of the Fe2(B,P) phases, respectively. Additionally, as the Ge content increases, Tx1 and Tx2 shift toward lower and higher temperatures, respectively, causing ΔT (= Tx2 − Tx1) to increase gradually from 149.9 K for Ge0 to 173.6 K for Ge2. A larger ΔT promotes the precipitation of α-Fe while inhibiting the precipitation of borides and phosphides, which is more conducive to the preparation of dense and uniform nanocrystals.
Heat treatment is one of the most commonly used methods to prepare nanocrystalline soft magnetic alloys from amorphous precursors. The Ge0, Ge1, and Ge2 melt-spun ribbons were subjected to a conventional annealing treatment at different annealing temperatures (TA) for 10 min. Figure 3a shows the variation in Hc of the annealed alloy ribbons with TA.
It can be seen that, as TA rises from 743 K to 863 K, the Hc of the alloys first gradually decreases, reaches its lowest value at 843 K, and then rapidly increases. This indicates that 843 K is the best annealing temperature for nano-crystallization of the alloys, which may be due to the precipitation of a large number of dense and uniform nanocrystals at this temperature. Annealing below the optimal temperature precipitates only a small number of crystal grains, which may also coarsen, causing an increase in the Hc of the alloys. Annealing above the optimal temperature may precipitate hard magnetic phases, such as borides and phosphides, which leads to a rapid increase in Hc. It can also be clearly seen from Figure 3a that the magnetic properties improve with increasing Ge content. This may be because Ge is preferentially enriched in the remaining amorphous phase, which increases the exchange stiffness of the intergranular region, reduces the magnetocrystalline anisotropy effect of the alloy and thereby improves its soft magnetic properties.
Figure 3b shows the Bs of the alloys annealed at 843 K for 10 min. It can be seen that the Bs of the annealed Ge0, Ge1, and Ge2 alloys is essentially the same. The saturation magnetic flux density of the Fe-based nanocrystalline alloy is the sum of the magnetic moments of the amorphous phase and the nanocrystalline phase. The relative Fe content of the Ge0, Ge1, and Ge2 alloys is very close, and thus their Bs shows no significant difference. The partial enlargement in the inset of Figure 3b shows that the Hc of the nanocrystalline alloys gradually decreases with increasing Ge content. Among the three alloys annealed at 843 K for 10 min, the Ge2 alloy exhibits the best magnetic properties, with an Hc of 3 A/m and a Bs of 1.65 T, which is sufficient to meet market requirements.

In order to explore the cause of the excellent soft magnetic properties of the Ge2 alloy annealed at 843 K for 10 min, the microscopic morphology of this alloy was observed by TEM, as shown in Figure 4. It can be clearly seen that nano-scale grains with an average size of 15.7 nm are uniformly precipitated in the amorphous matrix; furthermore, these precipitated grains can be identified as the α-Fe phase from the selected area electron diffraction pattern. It is known that Hc and grain size are positively correlated in nanocrystalline alloys. Therefore, the small, dense and uniform nanocrystalline structure of the annealed Ge2 alloy may account for its excellent magnetic properties.
The effective activation energy (E) of the alloys can be used to describe the difficulty of the crystallization process [24]. The effective activation energy can be determined by various methods [25][26][27]; here, it is calculated by the Kissinger equation as follows [26,27], where R is the ideal gas constant, T is the characteristic temperature at a certain heating rate β, and E is the effective activation energy of the transformation process related to the corresponding characteristic temperature. Based on Equation (1), a straight line, i.e., the Kissinger plot, can be obtained by plotting ln(T²/β) against 1000/T; the slope of the line is equal to E/R and can be obtained by linear fitting, from which the effective activation energy E can be determined. When the characteristic temperature T is taken as the onset crystallization temperature (Tx) or the crystallization peak temperature (Tp), the corresponding effective activation energy E is referred to as the nucleation activation energy (Ex) or the growth activation energy (Ep) of the crystallization reaction, respectively. The Tx1 and Tp1 of the primary crystallization reaction can be determined from the DSC curves of the alloys, and the corresponding Kissinger plots for the two characteristic temperatures are shown in Figure 5. From the Kissinger plots, the Ex1 and Ep1 of the primary crystallization reaction can be calculated. The Curie temperature (Tc), Tx1, Tp1, Ex1 and Ep1 of the alloys are summarized in Table 1. It can be seen that both Ex1 and Ep1 of the alloys gradually decrease with increasing Ge content. In other words, the Ge2 alloy has the lowest nucleation and growth activation energies of the primary crystallization reaction among the three alloys. This is because the increase in Ge content reduces the non-metallic element content in the alloys, which may lead to a decrease in the alloys' AFA. For alloys with a high Fe content, the reduction in AFA results in a large number of pre-existing nuclei in the as-spun alloy ribbon, which serve as heterogeneous nucleation sites to promote nucleation and thus reduce the nucleation activation energy. The decrease in the growth activation energy occurs because, as crystallization progresses, the B and Nb elements insoluble in the α-Fe phase are continuously enriched in the remaining amorphous phase, which stabilizes the amorphous phase and makes the precipitation of the hard magnetic phase more difficult.
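The Kissinger relation itself (Equation (1)) does not survive in the extracted text. Its standard form, consistent with the plotting procedure described above, is

$$\ln\!\left(\frac{T^{2}}{\beta}\right)=\frac{E}{RT}+\text{const.} \qquad (1)$$

so that a plot of ln(T²/β) against 1/T (or 1000/T) is a straight line whose slope yields the effective activation energy E.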
In order to extend the qualitative analysis to a basic quantitative understanding of the crystallization mechanism, the non-isothermal crystallization kinetics of the primary crystallization process at a heating rate of 20 K/min is studied for the alloys by the Blázquez method [28]. For a crystallization process, the crystallization volume fraction α(T) can be expressed by Equation (2), where T∞ is the end crystallization temperature; dH/dT is the heat capacity at atmospheric pressure; and AT and A are the areas of the crystallization peak in the isochronal DSC trace from T0 to T and from T0 to T∞, respectively. The crystallization volume fraction α(T) at the heating rate of 20 K/min for the primary crystallization reaction of the alloys is plotted in Figure 6. It can be seen that the α(T) plots of all the alloys show a typical S-shaped response. The slope of the α(T) plot corresponds to the crystallization rate in the crystallization process. The S-shaped curve indicates that the crystallization of the alloys occurs through a nucleation and growth process, which can be roughly divided into three stages.
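Equation (2), defining the crystallization volume fraction, is likewise missing from the extraction. Under the usual assumption that the crystallized fraction is proportional to the partial area of the DSC exotherm, it can be written as

$$\alpha(T)=\frac{A_{T}}{A}=\frac{\int_{T_{0}}^{T}\left(\mathrm{d}H/\mathrm{d}T\right)\mathrm{d}T}{\int_{T_{0}}^{T_{\infty}}\left(\mathrm{d}H/\mathrm{d}T\right)\mathrm{d}T} \qquad (2)$$

where T0 and T∞ are the start and end temperatures of the crystallization peak.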
The initial stage, with 0 < α < 0.03, in which the slope of the α(T) plot is small, corresponds to the nucleation process. It can be seen that the duration of this first stage gradually decreases as the Ge content increases for the present alloys. This may be because a higher Ge content results in more pre-existing nuclei in the alloy ribbons, which reduces the nucleation activation energy and thus shortens the nucleation time. The middle stage, with 0.03 < α < 0.9, in which the α(T) plot is very steep, corresponds to the grain growth process. In this stage the crystal nuclei grow rapidly after nucleation, resulting in a sharp increase in the crystallization rate of the alloy. In the last stage, with 0.9 < α < 1, the increase in the crystallization rate slows down significantly, mainly because the amorphous matrix is exhausted, limiting the space available for grain growth.

Next, we analyzed the nucleation and growth mechanism of the first crystallization reaction of the alloys by calculating the local Avrami index n as a function of α based on the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model. The crystallization kinetics of the alloys under isothermal conditions can be expressed by the JMAK equation (Equation (3)) [29], where n is the Avrami index, t0 is the induction time, and k(T) is the kinetic coefficient as a function of temperature.
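The JMAK equation referred to as Equation (3) is not reproduced in the extracted text; its standard isothermal form, using the symbols defined above, is

$$\alpha(t)=1-\exp\left\{-\left[k(T)\,(t-t_{0})\right]^{n}\right\} \qquad (3)$$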
Equation (3) can be written as Equation (4) [30]. Under the approximation of isokinetic behavior proposed by Nakamura et al., the relationship between α, the temperature T and the time t is given by Equation (5). Further, the JMAK equation can be extended to non-isothermal kinetics under the isokinetic approximation [31]. For an isochronal transformation, dT/dt is equal to the heating rate β, so Equation (5) can be written as Equation (6), where Tx is the onset temperature of the crystallization reaction. Blázquez et al. make the hypothesis ∫_{Tx}^{T} k(T) dT = k′(T − Tx), so Equation (6) can be rewritten as Equation (7) [28], where k′ is a new frequency factor. Supposing an Arrhenius dependence of k′ on T, i.e., k′(T) = k′0 exp(−E/RT), where k′0 is a constant and E is the activation energy, the following equation, Equation (8), can be obtained from Equation (7) [28,32,33].

Figure 6. Curves of crystallization volume fraction versus temperature T for the first crystallization peak of the alloy ribbons at different heating rates.
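Equations (4)–(8), referenced in the derivation above, are also missing from the extracted text. A sketch of the standard chain, under the isokinetic approximation and the Blázquez hypothesis as described, would read roughly as follows (the exact forms in the original may differ in detail):

$$\ln[-\ln(1-\alpha)]=n\ln k(T)+n\ln(t-t_{0}) \qquad (4)$$

$$\alpha(T,t)=1-\exp\left\{-\left[\int_{0}^{t}k\big(T(t')\big)\,\mathrm{d}t'\right]^{n}\right\} \qquad (5)$$

$$\alpha(T)=1-\exp\left\{-\left[\frac{1}{\beta}\int_{T_{x}}^{T}k(T')\,\mathrm{d}T'\right]^{n}\right\} \qquad (6)$$

$$\alpha(T)=1-\exp\left\{-\left[\frac{k'(T-T_{x})}{\beta}\right]^{n}\right\} \qquad (7)$$

$$\ln[-\ln(1-\alpha)]=n\ln\!\left(\frac{T-T_{x}}{\beta}\right)+n\ln k'_{0}-\frac{nE}{RT} \qquad (8)$$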
The curve of ln[−ln(1 − α)] versus ln[(T − T0)/β] is shown in Figure 7a, and from this and Equation (8), the variation of the local Avrami index n with the crystallized volume fraction α can be plotted as shown in Figure 7b. The value of n carries information about the nucleation and growth mechanism of the crystallization process and is expressed by Equation (9), where a, b, and p are the nucleation index, growth dimension index, and growth index, respectively. Different values of a, b, and p reflect different nucleation and growth mechanisms of the crystallization process [30,32]. It can be seen in Figure 7b that the value of n for the first crystallization reaction of the Ge0, Ge1 and Ge2 alloys lies essentially within the range n < 1. The growth index p can be taken as 0.5 for Fe-based amorphous alloys [30]. Due to the small thickness of the alloy ribbon, the growth of nanocrystals along the thickness direction of the ribbon may be restricted during the crystallization process, and thus the growth dimension index b should be less than 3. Based on Equation (9), the nucleation index a for the present alloy ribbons can only be equal to 0, indicating a crystallization process governed by the direct growth of pre-existing nuclei. Therefore, it can be suggested that there are a large number of pre-existing nuclei of the α-Fe phase and precipitated Cu clusters in the as-spun alloy ribbons, which provide numerous heterogeneous nucleation sites to form dense and uniform α-Fe nanocrystals with a fine grain size.
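Equation (9), relating the local Avrami index to the nucleation and growth indices, is not shown in the extracted text; its standard form is

$$n=a+b\,p \qquad (9)$$

With p = 0.5 and the growth dimension b limited by the ribbon thickness, a value of n < 1 over the whole transformation is only compatible with a nucleation index a ≈ 0, which is the basis of the argument above.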
Conclusions
In this work, Fe80.2Si3B12−xP2Nb2Cu0.8Gex (x = 0, 1, 2 at.%) nanocrystalline alloys with excellent soft magnetic properties were prepared through the conventional heat treatment of the corresponding amorphous alloy ribbons, and the effects of Ge on the magnetic properties and crystallization process of the alloys were thoroughly investigated. The results obtained are as follows:
•
As the Ge content increases, the onset temperature of the first crystallization reaction (Tx1) and that of the second crystallization reaction (Tx2) of the as-spun Fe80.2Si3B12−xP2Nb2Cu0.8Gex (x = 0, 1, 2 at.%) amorphous alloy ribbons shift toward lower and higher temperatures, respectively, resulting in an increase in the heat treatment window ΔT (= Tx2 − Tx1) from 149.9 K for x = 0 (Ge0) to 173.6 K for x = 2 (Ge2). A larger ΔT promotes the precipitation of α-Fe while inhibiting the precipitation of borides and phosphides, which is more conducive to the preparation of dense and uniform nanocrystals.
• The Fe80.2Si3B10P2Nb2Cu0.8Ge2 nanocrystalline alloy with a small grain size of 15.7 nm was obtained by annealing the corresponding amorphous alloy ribbons at 843 K for 10 min. This nanocrystalline alloy exhibits excellent magnetic properties, with a high Bs of 1.65 T and a small Hc of 3 A/m, which are considered to derive from its uniform and dense nanocrystalline structure. • Both the nucleation activation energy (Ex1) and the growth activation energy (Ep1) of the primary crystallization reaction of the as-spun Fe-based alloy ribbons decrease with increasing Ge content.
•
The non-isothermal crystallization kinetics study shows that the value of the local Avrami exponent, n, for the crystallization is less than 1 in the whole crystallization process, reflecting the crystallization mechanism of the direct growth of preexisting nuclei. | 8,120 | 2022-04-08T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Structural Behavior of Short Thin-Walled Steel Columns Filled with High Strength Concrete Made of Recycled Aggregate
This paper presents an experimental study of the behavior of 12 recycled aggregate concrete-filled steel tubular (RACFST) short columns with a high-strength concrete grade subjected to axial compression loading. The columns were formed from six different cross-sections: three special shapes (triangular, elliptical, and hexagonal) and three traditional shapes used for control purposes (square, rectangular, and circular). All RACFST columns were made of mild steel plates and were divided into two groups. The steel tube thickness was the only parameter varied, so that its effect could be studied properly. In addition, the study searched for the most effective section with respect to stability and confinement, so the columns were designed so that the cross-sectional areas of the steel tubes were approximately equal. The composite action and level of ductility of these columns were also studied. The ultimate failure axial load, the reduction in column length, the failure pattern of each cross-section shape, and the lateral displacement were recorded during the tests. For the two groups of columns with 1 mm and 2 mm thick steel plates, the RACFST columns with elliptical and circular cross-sections, respectively, showed better stability and confinement of the concrete and the ability to withstand a higher ultimate failure stress. Moreover, the columns with polygonal sections showed that increasing the number of ribs in the steel plates of the model, or using angles of 90° or more between adjacent sides, gives the cross-section greater stability and confinement, respectively. In general, increasing the thickness of the steel tubes increases the ultimate load capacity. Finally, for all columns, the higher values of the concrete contribution ratio (C.C.R.) were accompanied by higher strength index (S.I.) values and higher ultimate failure stresses.
strength of circular CFST columns increased as the concrete strength increased. In addition, when the diameter-to-thickness ratio (D/t) of the steel tubes decreased, the confinement increased, but the ductility decreased [5]. Crushing of the concrete and local buckling was the typical failure mode of the examined specimens. Besides, circular CFST columns exhibited higher final capacities than rectangular columns with an equal steel area [6]. The CFST columns with high-strength concrete exhibited the highest loads. Also, the values of CCR demonstrated high performance when the columns were filled with HSC, mainly for thin-walled circular steel tubes [7]. Concrete mixes with low strength showed better confinement and ductility than concrete mixes with high strength [8]. The values of final load capacity increased with increasing steel tube thickness [9]. The ductility of the columns decreased when the concrete strength increased [10].
It is worth noting that one of the most important properties of recycled coarse aggregate is its high ability to absorb water when it has not been pre-wetted, compared with natural coarse aggregate; it therefore reduces the water-cement ratio (w/c) of the concrete and thus increases the concrete strength [11]. This feature was confirmed by Chen et al. [12]. At the same time, they showed in their study that the use of recycled aggregate concrete (RAC) in concrete-filled steel tubes as a structural material is possible and safe. Some authors, through different experimental programs, Safiuddin [13], Yuyin Wanga [14], and Wan-Qing Lyu [15], have shown that recycled coarse aggregate can be used completely instead of natural coarse aggregate to obtain concrete with a strength ranging from 80 to 90% of the strength of natural coarse aggregate concrete. Azevedo [16] explained in their study that the resistance of a composite column depends not only on the compressive strength of the concrete but also on the ratio between the compressive strengths of steel and concrete, i.e., (fy/fc); in addition, they stated that the confinement effect is a significant factor in the strength of the RACFST column.
The researchers You-Fu Yang [17], You-Fu Yang and Lin-Hai Han [18], Zong-ping Chen [19], Wengui Li [20], Niu and Cao [21], and Dongdong Yang [22] presented experimental studies on the behavior of RACFST columns filled with recycled aggregate concrete; they reported that all specimens failed by buckling. They also illustrated that the failure patterns of those columns were similar to their counterparts among CFST columns, and that all tested specimens behaved in a ductile manner. In addition, there is an accepted consensus that the structural behavior of RACFST columns is slightly inferior to that of CFST columns filled with natural aggregate concrete. It should be noted here that the importance of this research lies in benefiting from the mechanical properties of both recycled concrete and thin-walled hollow steel tubes, since the combined action of the two materials improves these properties. Through composite columns, the structural properties of the recycled concrete are improved, increasing the effectiveness of these structural elements, especially when high-strength concrete is used, as this type of concrete can increase the maximum load-bearing capacity of RACFST columns. This allows the size of the columns to be reduced by using cross-sections with smaller areas, thus reducing the dead load of buildings while providing larger service spaces. In addition, short, thin-walled steel columns were studied here due to their frequent use in various buildings and their low cost compared to other columns. This study also examined some special section shapes for thin-walled steel columns and tried to find the optimal section in terms of stability and confinement while comparing them with commonly used sections. Moreover, the special section shapes satisfy architectural requirements from an aesthetic point of view while providing the required information about the possibility of using them in various civil engineering sectors.
Finally, recycled coarse aggregates were used as fillers for these columns due to their great importance in promoting environmental conservation, reducing the depletion of natural resources, preserving the environmental diversity of living organisms, and reducing noise and the environmental pollution of land and air. Recycled aggregate is also undoubtedly of great importance in several respects, including reducing costs compared with the continued use of natural aggregate and helping to dispose of the rubble of buildings destroyed or abandoned due to natural disasters or wars. Based on the aforementioned, using recycled concrete, which includes recycled coarse aggregate, in various civil engineering facilities can contribute effectively to promoting and developing sustainability, which is regarded as one of the basic requirements of modern construction, thus achieving overall quality of use for the welfare of people without continuing to deplete the natural resources that should be preserved for future generations.
Despite the availability of previous studies, it was concluded that there is a lack of experimental investigations on RACFST columns filled with HSC in terms of the effect of thin-walled steel cross-sections of different shapes and thicknesses on the final bearing capacities of those columns, and hence on the understanding of the structural behavior of this type of composite member. The effect of the cross-section shape was studied in terms of the number of ribs forming the shape of the model and the angles between these ribs on the final bearing capacity of these columns. Also, the effect of changing the thickness of the steel tube on the final failure stress was studied for all the examined samples. The main goals of this study for recycled aggregate concrete-filled steel tubular (RACFST) thin-walled short columns filled with high-strength concrete and subjected to axial compression loading are: 1) to present practical experiments on the structural behavior of RACFST columns, because experimental and numerical studies related to the compressive behavior of these columns are still insufficient and no design methods are available for their local stability; 2) to investigate the most effective cross-sectional shape for the RACFST short columns, regarding confinement, stability, and the maximum failure stress, using unusual cross-section shapes such as elliptical, triangular, and hexagonal, and then to compare these sections with traditional cross-sectional shapes such as circular, square, and rectangular; 3) to investigate the effect of steel tube thickness on the final failure stress of all RACFST columns.
Materials and Methodology
After a detailed study of the previous literature, the materials required for both the mild steel plates and the recycled concrete needed in the experimental work were identified and provided. Then, a total of 12 samples were prepared. The shapes of the cross-sections of these samples and their details are shown in Figure 1 and Table 1, respectively.
Description of Hollow Steel Tube Specimens
The column specimens used in this study are twelve RACFST columns with six different cross-sections, three of which are commonly used shapes included for comparison purposes: circular, square, and rectangular. The remaining three are special cross-sectional shapes: elliptical, hexagonal, and triangular, as shown in Figure 1. The symbols "CC", "CH", "CE", "CS", "CR", and "CT" are used to identify the RACFST specimens: the first letter denotes a composite column specimen, and the second letter denotes circular, hexagonal, elliptical, square, rectangular, or triangular. All specimens were divided into two groups manufactured with two thicknesses of 1 and 2 mm: the first group with a thickness of t = 1 mm, and the second group with a thickness of t = 2 mm. The sections of all RACFST columns were designed with approximately equal external perimeter (P); as a result, the cross-sectional area of the steel tube (As) was approximately equal within each group. On the other hand, this led to a difference in the cross-sectional area of the concrete (Ac). The length (L) of each specimen was 300 mm, while the external perimeter of the cross-section of each column was 400 mm, with a deviation ranging from +2 to −3% of this length. The information for these specimens is recorded in Table 1. (23). Fine aggregate (Al-Ukhaidir sand), with a maximum grain size of 4.75 mm, was used for the concrete mixtures in this investigation. This sand's chemical and physical properties comply with the requirements of the Iraqi standard specification (IQS) No. 5/1984 (23). The waste concrete resulting from demolished concrete buildings was the source of the recycled coarse aggregate (R.C.A.), with a gradation of 5-19 mm. The aggregate was separated by sieve analysis.
The results of the gradation test comply with Iraqi Standard No. 45/1984 (24). These aggregates were soaked in water for 24 hours to reach a saturated surface condition before mixing. Recycled coarse aggregate concrete (RCAC) with high-strength concrete (H.S.C.) was used as the filling material in all hollow steel tubes after completing all these tests. Three trial mixes were carried out to find the best proportions, and the chosen mix proportions of cement : sand : recycled aggregate for 1 m³ were 400, 500 and 750 kg/m³, i.e., 1 : 1.25 : 1.875, respectively, with a water-cement ratio of W/C = 0.3 and superplasticizer at a ratio of 2% by weight of cement. With these proportions, the cylinder compressive strength and splitting tensile strength of the green concrete mixture were 40 and 3.94 MPa, respectively.
To determine the mechanical properties of the steel plates used to manufacture the hollow steel tubes, standard coupon tensile tests were carried out in accordance with the American Society for Testing and Materials specification (ASTM A370-17) (25). The failure strength, yield strength, and modulus of elasticity of the steel plates with the two thicknesses of 1 and 2 mm were 455 MPa, 258 MPa, and 217 GPa, respectively. Figure 2 illustrates the steel samples for the tensile tests and the testing of some mechanical properties of the hardened concrete after 28 days.
Manufacturing of Specimens
Using the AutoCAD program, the sections of the columns were drawn according to the required measurements. These cross-sections were then cut from thin steel plates, and based on these sections, the columns were manufactured in the required shapes. Each sample was formed from two symmetrical parts, so its longitudinal welding is also symmetrical. A steel plate with a thickness of 2 mm and dimensions of (20*20) mm was welded as a base for these specimens. This base provides three-sided confining conditions, prevents the leakage of fine materials of the concrete mixture from the specimens, and increases their stability, as shown in Figure 3(a). A green concrete mix (G.C.) was used, with recycled coarse aggregates (RCA) replacing normal coarse aggregates (NCA). The green concrete mixture was designed with a cylinder compressive strength of 40 MPa, and all the RACFST column specimens were then cast, as illustrated in Figure 3(b,c). Finally, these samples were placed in water basins to complete the curing process for 28 days.
Test Setup
Before starting the tests, it was verified that the testing machine and its connected devices were working properly. Next, it was verified that the load could be applied to the samples without eccentricity. This was achieved by checking the horizontality of the base of the machine and making sure that the center of its load cell coincided with the center of the sample to be examined. The test program included a study of the failure patterns, the ultimate load of each sample, the axial load-vertical deformation and load-transverse deformation responses, the tensile strain in the steel for all samples, the effect of the steel tube thickness on the maximum load, and the effect of the cross-section shape on the ultimate failure load.
Three linear variable differential transformer (LVDT) devices were used to measure the deflections of the sample due to the applied vertical load: one to measure the vertical deflection, the second to measure the horizontal movement on the long side of the model, and the third to measure the horizontal movement on the other side. All of them were installed at mid-height of the column. Also, 6 cm long strain gauges were attached at mid-height of the columns to measure the longitudinal strain in the steel, as shown in Figure 4(a). The axial load was applied to all samples in the same way, using a hydraulic compression machine pressing the sample vertically from the top through the load cell, as shown in Figure 4(b) below; the load was increased regularly in increments of 10 kN until failure or until a sudden collapse of the specimen occurred. The axial load was applied by a load cell with a maximum capacity of 2000 kN linked to a computer.
Experimental Results
Data on the ultimate failure axial load (Nu) were recorded during the experimental tests, whereas other information, including the ultimate failure stress (δu), strength index (SI), ductility index (DI), and concrete contribution ratio (C.C.R.), was calculated from these measurements and is summarized in Table 2.
Observation of Tested Specimens and Failure Patterns
Ductile failure was observed for all tested columns. All RACFST columns failed due to capillary cracks followed by cracking and expansion of the recycled concrete core. As loading continued, external local buckling occurred near the middle of the RACFST column. During the final steps of the applied loading, more external local buckling occurred near the top and bottom edges of the tested columns. Figures 5 and 6 show the failure patterns of all tested column samples for the 1 and 2 mm steel plates, respectively. In general, the section shape had a distinct effect on the failure patterns of the tested samples, as the failure patterns of polygonal cross-sections such as hexagons, squares, and rectangles were slightly different from those of circular and elliptical sections. The RACFST column with a triangular cross-section behaved differently from the remaining shapes, as its failure took the form of local buckling along the circumference of the bottom base of the column. In contrast, the failure of the square cross-section column was multiple circumferential local buckling starting from the upper middle of the column to its end. However, the limit of plastic deformation of the samples was almost equal for the first and second groups, except for the hexagonal cross-section column, which gave a higher value in the group with a thickness of 2 mm; this is related to the type of failure that occurred in this section.
Load-Deformation Curves
Figures 7 and 8 show the experimental axial load (N) versus axial displacement (D) curves for each of the tested RACFST columns, which include both an elastic phase and an elastic-plastic phase up to the failure load, and continue after the onset of outward local buckling of the RACFST column until the ultimate strength of the column is reached and beyond. The maximum axial load capacity of the tested RACFST circular column and the slope of its axial load-displacement curve were higher than those of all other RACFST columns. However, the final failure load cannot be used here as a basis for comparison between the samples, because the tested columns differ from each other in the concrete cross-sectional area, since an equal cross-sectional area was maintained for all hollow steel tubes during design despite the different shapes of these sections. Therefore, failure stresses were used to compare all RACFST columns, and these are discussed later in detail. The relationship between the axially applied load (Nu) and the lateral displacement (D) at the mid-height of the tested RACFST columns is plotted in Figure 9. Because the load was applied axially without eccentricity and the columns were short, the deflection values were mostly small in the region near the middle of the column. Shortly after the applied load reached its maximum value, the external local buckling increased almost equally around the columns of similar cross-sections. After that, the lateral displacement developed significantly beyond the post-peak stage.
The results of group (G1) showed that the lateral deformation values of the polygonal cross-section RACFST columns (C.H) and (C.R) were smaller than those of the rest of the columns. For group (G2), the lateral deformation values of the (C.H) and (C.E) samples were smaller than those of the rest of the columns. The reason for this was that the location of the external local buckling was far from where the linear variable differential transformer (LVDT) was installed. The results also showed that the transverse deformation values for the samples with a thickness of 1 mm were higher than those of their counterparts with a thickness of 2 mm. From the above, it can be seen that the cross-sectional shape of the RACFST columns did not affect the relationship between the load and the lateral displacement of the column.
Ultimate Axial Load of RACFST Columns
The laboratory test results for all RACFST columns showed that the highest load was obtained for the circular cross-section column, while the lowest load was for the triangular cross-section column. All maximum axial loads are represented graphically in Figure 10(a,b).
Effect of Section Shape on Final Failure Stress
As stated previously, the sections of all RACFST columns were designed with approximately equal external perimeter (P); as a result, the cross-sectional area of the steel tube was approximately equal for all columns. This led to a difference in the cross-sectional area of the concrete. It is therefore more suitable to evaluate the strength of the RACFST columns using the final failure stress in each column, obtained by converting the composite section into an equivalent steel section, rather than using the final failure load. The final failure stress was calculated through the equation below [26].
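The stress expression referred to above is not reproduced in the extracted text. Reconstructed from the definitions given in the next sentence (a transformed-section approach), it would read

$$\delta_{u}=\frac{N}{A_{t}},\qquad A_{t}=A_{s}+\frac{A_{c}}{n},\qquad n=\frac{E_{s}}{E_{c}}$$

With the reported modular ratio n = 7.777 and the measured Es = 217 GPa for the steel plates, this implies Ec ≈ 27.9 GPa for the recycled aggregate concrete.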
Here At = As + Ac/n and n = Es/Ec, where N is the final failure axial load of the RACFST column, i.e., the highest value obtained in the experiment; (At) represents the steel area equivalent to the cross-sectional area of each composite column; and (n) represents the modular ratio, which equals 7.777 and depends on the properties of the materials used in this investigation. The final failure stress (δu) for each RACFST column is illustrated in Figure 11(a,b). In the first group (G1), with a thickness of 1 mm for the steel tubes, the elliptical RACFST column showed better concrete confinement and better bond stress between the steel and the concrete. This behavior increased the effective composite action in the member and thus its ability to bear a greater ultimate failure stress than the columns with the other sections. Conversely, the column with a rectangular section showed the lowest ability to bear stress compared with all the other sections. For the second group, with a thickness of 2 mm, the results showed that the column with a circular cross-section was able to bear a higher ultimate failure stress than all the columns with other sections, while the column with a rectangular cross-section was the least able to bear stress, as shown in Figure 11(a,b).
The ultimate failure stress results of the RACFST circular column showed better concrete confinement and better bond stress between the steel and the concrete. This behavior increased the effective composite action in the member and thus increased its ability to bear the maximum pressure from the concrete core. The main reason for this is that this column was made from two pieces of steel plate, each with dimensions of (204 x 300) mm; the 204 mm dimension was rolled into a half-circle and the 300 mm dimension formed the height, producing half of a circular column. The same was done for the second piece, so that the circular column was produced after welding the two halves symmetrically along the column on both sides. This rolling process acted like a pre-stressing of the steel and thus gave the circular shape a greater ability to bear stress. In addition, the circular shape generates better confinement of the concrete section compared with the other column cross-sections. The same reasoning applies to the column with an elliptical cross-section.
In the first group (G1), with a thickness of 1 mm for the steel tube, Figure 11(a,b) shows a hierarchical order of the ultimate stress of the RACFST columns with polygonal sections: the hexagon (C.H.) has the highest ultimate stress, followed by the square (C.S.), the triangle (C.T.), and then the rectangle (C.R.). Here, the column with a triangular cross-section ranked ahead of the column with a rectangular cross-section because of the early weld failure of the latter. In the second group (G2), with a thickness of 2 mm for the steel tube, the same hierarchical order of the ultimate stress of the RACFST columns with polygonal sections was maintained, except that the column with a square cross-section moved ahead of the column with a hexagonal cross-section, because the former exhibited good confinement when using high-strength concrete.
Thus, a distinctive pattern was observed here: as the number of corners of the steel plates forming the model increases, that is, the greater the number of formed sides and the larger the angles between the sides (90° or more), the more stable and the better confined the section becomes, respectively. For example, in the hexagonal RACFST column, which showed the highest ultimate stress, the section was made with a circumference of 402 mm distributed equally over six sides, using two identical halves welded symmetrically along the shape. This ribbing process acted like pre-stressing of the steel forming the model, allowing it to bear more significant stress. Also, the angles between adjacent sides were 120°, which gave the shape a greater ability to bear compression. This design allowed the concrete components to interlock well with the steel mould and reduced the possibility of segregation or gaps between the concrete components and the steel model during the casting of the columns. The size of the recycled aggregate used in the concrete mix was graded between 5 and 19 mm, and therefore these angles gave the column a greater ability to withstand the applied compression. As for the square-section column, it had a circumference of 400 mm distributed over four sides, with each side 100 mm wide; the model was made from two L-shaped halves welded longitudinally and symmetrically. The same interpretation applies to all RACFST columns with polygonal sections.
Strength Index
The ratio obtained by dividing the ultimate stress of any examined column by the ultimate stress of the circular column is called the strength index (SI) and is used to investigate the applied compression. It can be computed from Equation (2) as follows [27], where (σu) represents the final failure stress of a given RACFST column, whereas (σr) represents the final stress of the circular RACFST column. The SI values of all tested RACFST columns are illustrated in Figure 12(a,b) and listed in Table 2. The columns with a thickness of 1 mm and with elliptical, hexagonal, and square cross-section shapes showed higher confinement values for the concrete core, respectively, than all the other columns of both groups. The results showed that as the (SI) values increased, the ultimate failure stress values for all (RACFST) columns also increased.
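Equation (2) itself does not survive extraction; from the definition just given, it is simply

$$SI=\frac{\sigma_{u}}{\sigma_{r}} \qquad (2)$$

where σu is the ultimate failure stress of the column under consideration and σr is that of the circular RACFST column of the same group.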
Ductility Index
Ductility is a mechanical property of a material that indicates the degree of plastic deformation and is considered an important property of the material. The ductility index was defined as the ratio of the total axial shortening of a RACFST column at the ultimate failure load during the plastic loading phase to the axial shortening at 80% of the failure load of each column, as given below [28]. The ductility index values for all tested columns are listed in Table 2, while Figure 13(a,b) shows a graph of these values. Regarding the first group, with a thickness of 1 mm, the figure shows that the ductility index (DI) values for all columns were close, except for the column with a square cross-section, which showed a much higher value than all the other shapes. This is most likely related to the good confinement provided by the steel tube to the concrete core; during the advanced loading stages, this behavior led to multiple local buckling of the steel tube and thus increased the vertical axial deformation. However, the RACFST column with a triangular cross-section showed the smallest value of DI. This is related to the failure pattern of the model, which resulted from a reduced ability to withstand the failure stress; the small cross-sectional area of the concrete led to early failure of the concrete and thus to a lack of development of the axial vertical deformation of the model.
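The ductility-index expression cited above is missing from the extracted text. A form consistent with the verbal definition, with the symbols Δu and Δ0.8 introduced here only for illustration, would be

$$DI=\frac{\Delta_{u}}{\Delta_{0.8}}$$

where Δu is the axial shortening at the ultimate failure load and Δ0.8 is the axial shortening when the load reaches 80% of the ultimate failure load.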
For the second group, with a thickness of 2 mm, the ductility index values of all columns were also close, except for the column with a hexagonal cross-section, which showed a much higher value than the others. This resulted from increasing the steel thickness from 1 to 2 mm, which reduced the contribution of the concrete (CCR) to resisting the failure load. Continued loading then led to crushing failure of the concrete, which generated a large pressure on the circumference of the steel tube; the resulting weld failure near the bottom base of the model increased the axial deformation. Conversely, the RACFST column with a square cross-section exhibited the smallest DI value because of the symmetry of its internal ribs and angles. This symmetry increased its ability to withstand the ultimate failure stress, which was reflected in the failure pattern: during the last stages of loading, this column showed less local buckling around its base than the other columns, and hence a smaller vertical axial deformation.
Effect of the Thickness of the Steel Tube
Ultimate Axial Failure Load
The experimental failure load was obtained for each column; the final values are summarized in Table 2. These ultimate axial loads of the recycled-aggregate concrete-filled steel tube (RACFST) columns, cast with recycled coarse aggregate, were compared for the two groups with thicknesses of 1 and 2 mm, as illustrated in Figure 14. As expected, the column capacity increased with the thickness of the steel tube for all columns. The experimental failure stress values obtained for all columns are also listed in Table 2 and compared for the two thickness groups in Figure 15. In general, increasing the steel tube thickness from 1 to 2 mm raised the capacity of all RACFST columns to withstand the applied stresses, except for the columns with elliptical and hexagonal sections, which showed lower capacity because of weld cracking failure during the late stages of loading; this failure was due to the weak ductility of the weld. In addition, the column with a circular cross-section showed a clear superiority in withstanding the applied stresses compared with the other columns, owing to the good confinement of the concrete in this section.
Contribution Ratio of Concrete
The contribution of the concrete infill in all specimens was analysed using the concrete contribution ratio (CCR), which can be calculated from Equation (4) as CCR = Pu / (As,eff × fy), where Pu is the experimental ultimate failure load, As,eff is the effective cross-sectional area of the steel tube according to the Eurocode 3 model, and fy is the yield strength of the steel tube [28]. The CCR values calculated for each column are recorded in Table 2 for both groups. The results support the observations made for the failure load. As expected, the CCR of the 2 mm steel tubes decreases because of the larger cross-sectional area of the steel. Table 2 shows that the triangular RACFST columns with thin-walled steel tubes of 1 mm thickness had the lowest CCR values. In general, for the two groups with thicknesses of 1 and 2 mm, higher CCR values corresponded to higher SI values, and this increase was accompanied by higher ultimate failure stresses for all specimens. However, comparing columns of the same section shows that increasing the steel tube thickness reduced the CCR because of the increased cross-sectional area of the steel tube.
CONCLUSIONS
This manuscript presented an experimental study of the behaviour of twelve recycled-aggregate concrete-filled steel tubular (RACFST) short columns under concentric axial loads. Two groups of thin-walled steel tubes, 1 and 2 mm thick, with different cross-sectional shapes were studied. High-strength concrete (H.S.C.) was used as the infill material in all hollow steel tubes. Based on the analysis of the data obtained from the experimental work of this study, the following conclusions were drawn.
• The failure behaviour of the tested RACFST short columns was relatively ductile: capillary cracks formed first, followed by crushing and lateral expansion of the recycled concrete core. With continued loading, outward local buckling occurred near the mid-height of the RACFST column, and at the last stages of loading further outward local buckling appeared near the top and bottom edges of the tested columns.
• The cross-sectional shape of the RACFST column clearly influenced the failure patterns of the tested samples. The failure patterns of some columns with polygonal cross-sections, such as hexagonal and rectangular, differed slightly from those of columns with circular or elliptical sections, while the failure patterns of the square and triangular columns were not similar to the remaining shapes.
• The cross-sectional shape of the RACFST columns did not distinctly affect the load-lateral displacement relationship of the column. On the other hand, the transverse deformation values of the 1 mm thick samples were higher than those of their 2 mm counterparts.
• For groups G1 and G2, with steel plate thicknesses of 1 and 2 mm respectively, the elliptical and circular RACFST columns showed the best stability and confinement of the concrete and the highest ultimate failure stress. These shapes therefore confine the concrete section better than the other column sections.
• Setting aside a few individual results, all tested RACFST columns with polygonal sections showed that when the number of ribs of the steel plates increases, or the angle between two adjacent sides is 90° or more, the column cross-section achieves greater stability and confinement.
• In general, increasing the steel tube thickness from 1 to 2 mm raised the capacity of all RACFST columns to withstand the applied stresses, except for the columns with elliptical and hexagonal sections, which showed lower capacity because of weld cracking failure during the late stages of loading; this failure was due to the weak ductility of the weld.
• Increasing the steel tube thickness reduced the CCR values because of the larger cross-sectional area of the steel tube.
• In general, for all RACFST columns of 1 and 2 mm thickness, higher CCR values corresponded to higher SI values, and this increase was accompanied by higher ultimate failure stresses for all specimens.
• The SI values of the 1 mm thick RACFST columns with elliptical, hexagonal, and square cross-sections indicated a high confinement effect, in that order, giving these sections a higher ability to withstand the applied ultimate failure stresses, exceeding the values of the circular column.
Figure 2 :
Figure 2: Steel coupons of tensile strength test and testing some mechanical properties of hardened concrete.a: Steel Coupons of Tensile Strength Test.b: Splitting Tensile strength.c: Compressive Strength.
Figure 3 :
Figure 3: Preparation of the pouring specimens process of RACFST columns.
Figure 5 :
Figure 5: Failure patterns for all tested RACFST columns of steel tube thickness (t) =1 mm.
Figure 6 :
Figure 6: Failure patterns for all tested RACFST columns of steel tube thickness (t) =2 mm.
Figure 14 :
Figure 14: Comparison of the ultimate axial load of RACFST columns, tube thickness (t ) = 1 and 2 mm.
Figure 15 :
Figure 15: Comparison of the ultimate failure stress of RACFST columns, tube thickness (t)=1 and 2 mm.
Table 1 :
Data obtained from the proposed design of the RACFST columns models.
Experimental Method and Mechanical Properties of the Materials.
Sulfate-Resistant Portland Cement (SRC) called locally (Kar) in this experimental work was used.The test results showed that the cement conformed to Iraqi Standard No.5/1984
Table 2 :
ratio (CCR), which are discussed later in detail, were obtained by some mathematical approaches. The information for these specimens is recorded in Table 2 below. Data obtained from the specimens test of RACFST columns. | 7,897.4 | 2023-01-01T00:00:00.000 | [
"Engineering"
] |
Machine and Deep Learning Applied to Predict Metabolic Syndrome Without a Blood Screening
: The exponential increase of metabolic syndrome and its association with the risk of morbidity and mortality has motivated the development of tools to diagnose this syndrome early. This work presents a model based on prognostic variables to classify Mexicans with metabolic syndrome, without blood screening, applying machine and deep learning. The data used in this study contain health parameters related to anthropometric measurements, dietary information, smoking habit, alcohol consumption, quality of sleep, and physical activity from 2289 participants of the Mexico City Tlalpan 2020 cohort. We use accuracy, balanced accuracy, positive predictive value, and negative predictive value criteria to evaluate the performance of, and validate, the different models. The models were separated by gender because of the shared features and different habits. The highest-performing model in women found that the most relevant features were: waist circumference, age, body mass index, waist-to-height ratio, height, a sleepy manner associated with snoring, dietary habits related to coffee, cola soda, whole milk, and Oaxaca cheese, and diastolic and systolic blood pressure. Men's features were similar to women's; the variations were in dietary habits, especially the consumption of coffee, cola soda, flavored sweetened water, and corn tortilla. The positive predictive value obtained was 84.7% for women and 92.29% for men. With these models, we offer a tool that supports Mexicans in preventing metabolic syndrome, by gender; it also lays the foundation for monitoring patients and recommending habit changes.
Introduction
Nowadays, chronic degenerative diseases, such as ischemic heart disease, type 2 diabetes mellitus, and cerebrovascular stroke, are the leading causes of morbidity and mortality worldwide; these diseases share one or more metabolic components (glucose intolerance, insulin resistance, central obesity, dyslipidemia, and hypertension) that might co-exist in one individual. The term Metabolic Syndrome (MetS) was coined to express this constellation of metabolic abnormalities [1].
The prevalence of MetS in Mexico is 41% [2] higher than in developed countries, like the United States (34.2%) [3], due to the epidemic proportion that overweight and obesity have taken in our country, affecting not only the adult population, but also young individuals and even children, with obesity being a central, key component of MetS.
The evaluation criteria for MetS have been proposed by the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) [4], the International Diabetes Federation (IDF) [5,6], and the World Health Organization (WHO) [7]. To establish a clinical diagnosis of MetS, at least three of the following metabolic abnormalities must co-occur in the same individual: elevated blood sugar levels, insulin resistance, abdominal obesity, high blood pressure, and an abnormal lipid profile (high blood levels of triglycerides and low blood levels of high-density lipoprotein cholesterol).
These evaluation criteria have been useful from a clinical point of view. However, from the perspective of the general population, their utility is limited, since laboratory screening is needed [8], which can be costly or cumbersome to implement at certain population levels. This is compounded by the fact that most people are unaware of their health condition and only approach health services once the sickness has caused health limitations [9][10][11]. For this reason, it is essential to develop tools that help people identify pre-clinical conditions, so that preventive measures can be taken at a population level.
The application of machine learning and deep learning (DL) algorithms has facilitated the construction of effective models to predict disease diagnoses and the corresponding treatments [12,13]. However, the appropriate selection of variables (potential risk factors) is decisive in improving the models [14][15][16].
In the case of MetS, the use of machine learning and deep learning algorithms, as well as feature selection methods, has achieved significant results in the development of models to support the prediction of this syndrome [17,18]. Related studies that have omitted blood screening for the diagnosis of MetS have had promising results. An investigation conducted by Barrios et al. [19] proposed a data mining methodology to diagnose MetS without blood screening by applying Artificial Neural Networks (ANN).
In that work, hip circumference, dichotomous waist circumference, dichotomous blood pressure, and sex were included; the specificity reported was 82.59%, a Positive Predictive Value (PPV) of 90.54%, and an Area under the Receiver Operating Characteristic (ROC) Curve of 87.36%. Romero et al. [20] constructed a model applying ANN and considering variables, such as Body Mass Index (BMI), Waist Circumference (WC), weight, height, and sex. The authors used PPV as an evaluation metric, obtaining a value of 38.8%.
Ivanović et al. [21] presented a model using ANN for the prediction of MetS, excluding blood screening. The features selected in that study were gender, age, BMI, Waist-to-height Ratio (WHtR), and systolic and diastolic blood pressures, with a PPV of 85.79% and a Negative Predictive Value (NPV) of 83.19%.
Another study relates the dietary habits with MetS in the Swedish region. The authors used chi-square analysis [22]. They found four patterns that were defined by clusters that demonstrated a strong association between food and components of MetS. For example, hyperglycemia in men was associated with cheese, cake, and alcoholic beverages consumption; a higher risk of hyperinsulinemia and dyslipidemia in women was associated with white bread consumption. That work was relevant, since different patterns were used for each gender.
The purpose of this article is to identify the most relevant features to propose a risk predictor for the early detection of people with MetS, through machine learning algorithms, such as Random Forest (RF), Ripper (C4.5), and deep neural networks, when considering traditional risk factors for this syndrome as well as dietary information [23,24] and habits, like the consumption of alcoholic beverages [25], smoking [26], physical activity [27], and quality of sleep [28].
This paper is structured, as follows: in Section 2, we introduce the materials and methods. In Section 3, we present the experiments performed and the results. Section 4 shows the discussion, and, finally, the conclusions.
Data
The data set used in this research was collected from the Tlalpan 2020 cohort, a study conducted by the Instituto Nacional de Cardiología Ignacio Chávez in Mexico City [29]. The Tlalpan 2020 cohort was approved by the Institutional Bioethics Committee of the Instituto Nacional de Cardiología-Ignacio Chávez (INC-ICh) under code 13-802. This work considers 2289 subjects between 20 and 50 years old (1369 women and 920 men); it is important to mention that informed consent was obtained from all the participants. The prevalence of MetS, according to the NCEP ATP III criteria, was 24.4%, and it was higher in women (54% of the cases) than in men (46%). This data set includes health parameters and habits related to alcohol consumption, smoking, physical activity, diet, and quality of sleep.
Clinical and Anthropometric Parameters
Systolic and diastolic blood pressure were measured according to The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7) standard procedure [30]. WC, height, and weight were measured according to The International Society for the Advancement of Kinanthropometry (ISAK) [31]; BMI was calculated as weight/height², and WHtR was calculated as the ratio of waist to height (waist/height, both in cm).
Biochemical Evaluation
The blood samples were taken after 12 h of overnight fasting, and the following laboratory measurements were obtained: fasting plasma glucose (FPG), triglycerides (TGs), HDL cholesterol (HDL-C), LDL cholesterol (LDL-C), and total cholesterol (T-C).
Dietary Information
Dietary information was collected using the nutrient software program known as Sistema de Evaluación de Hábitos Nutricionales y Consumo de Nutrimientos (SNUT) (Evaluation of Nutritional Habits and Nutrient Consumption System) [32], as developed by the Instituto Nacional de Salud Pública de México (National Institute of Public Health of Mexico). The SNUT includes data from various types of nutrients classified into different categories, such as dairy products, fruits, meats, vegetables, legumes, cereals, sweets and candies, beverages, fats, cravings, and others. All of the variables of the SNUT were included, and their nominal numerical value was considered, which is, the frequency of food consumption during the day, in the last year.
Habits
The following lifestyle information was collected; furthermore, they are inputs for the machine and deep learning algorithms: (1) the smoking habit that was summed up as never smoked, former, or current smoker, and the three features are dichotomous. (2) Alcohol consumption that was classified as a current drinker (dichotomous variable), frequency alcohol consumption, and cups or beers consumed on average when drinking alcohol; these last two are considered to be numeric variables according to the weekly ingesting. (3) Physical activity was obtained by the extended version of the International Physical Activity Questionnaire, IPAQ, [33] that measures the level of physical activity as low, moderate, and high, through questions regarding four domains: work, home, transportation, and leisure time; for this paper, we used the nominal numerical value of variables that are related with leisure time domain, like the days and duration of the type of physical activity (walking, moderate, or vigorous) per week in the last seven days. Lastly (4), quality of sleep was obtained employing the Medical Outcomes Study-Sleep Scale (MOSS) [34], we consider the nominal numerical value of these variables, where the values are described in [35].
Methods
For the study, we applied three machine learning algorithms, RF, C4.5, and deep learning. We conducted experiments for each sex. We choose these machine learning algorithms because of their excellent results in many applications and because each one uses a different approach in a classification task [18,19,36]. We also performed feature selection to search for simpler and better models. The experiments were conducted using R programming language [37]. We aimed to predict the MetS without using biochemical variables.
Initially, the parameters of the biochemical evaluation, together with the anthropometric factors (waist circumference, systolic and diastolic blood pressure, glucose, high-density lipoprotein cholesterol (HDL), and triglycerides), were used to identify and classify participants with MetS according to the definition established by the NCEP ATP III. Once the participants had been classified as positive or negative for MetS, the input dataset for the learning algorithms was generated; it comprised categories related to clinical and anthropometric parameters and habits such as alcohol consumption, smoking, physical activity, diet, and quality of sleep.
The performance of the models was evaluated according to accuracy, balanced accuracy, PPV, and NPV. Figure 1 shows a block diagram of the prediction model and describes the general methodology applied. As a first step, the instances of the data set were classified using the biochemical evaluation data and the clinical and anthropometric parameters, according to the NCEP ATP III criteria. Subsequently, we separated men and women to obtain the most important variables for each sex. We applied RF to identify and evaluate the variable importance by category, as well as Pearson Correlation Coefficient (PCC) and chi-square, to obtain the most important variables when considering all of the categories together.
We created and evaluated predictive models using the ML algorithms for different subgroups that formed based on the variables obtained. From these models, we selected the most relevant features for men and women.
In all experiments, we performed 30 independent runs by algorithm, which is a typical number that is used in the literature for fair comparisons among experiments [38,39]. Subsequently, we calculated the mean and standard deviation for each metric.
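As a minimal illustrative sketch of this evaluation protocol (the study itself used R; the dataset and classifier below are placeholders, not the cohort data), the loop repeats training with different random seeds and reports the mean and standard deviation of each metric:

```python
# Hedged sketch: 30 independent runs with synthetic placeholder data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, balanced_accuracy_score

X, y = make_classification(n_samples=500, n_features=13, random_state=0)

acc, bacc = [], []
for seed in range(30):                      # 30 independent runs, as in the text
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    clf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    acc.append(accuracy_score(y_te, y_pred))
    bacc.append(balanced_accuracy_score(y_te, y_pred))

print(f"ACC   = {np.mean(acc):.3f} +/- {np.std(acc):.3f}")
print(f"B.ACC = {np.mean(bacc):.3f} +/- {np.std(bacc):.3f}")
```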
Random Forest
RF, as introduced by Breiman [40], is a machine learning algorithm based on a combination of tree estimators that operate as an ensemble for classification and regression. In this work, the method was used both for classification and for identifying the importance of the variables, based on the mean decrease in impurity [41], i.e., the total reduction of the Gini index attributable to splits on variable x_j, averaged over the ntree trees of the forest.
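For illustration only (the study used R), scikit-learn's `feature_importances_` attribute implements this Gini-based mean decrease in impurity; the feature names and synthetic data below are hypothetical stand-ins for the cohort variables:

```python
# Hedged sketch of Gini-based variable importance with a random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "waist": rng.normal(90, 12, 500),       # hypothetical anthropometric features
    "bmi": rng.normal(27, 4, 500),
    "sbp": rng.normal(120, 12, 500),
    "cola_soda": rng.integers(0, 4, 500),   # made-up consumption frequency
})
y = (X["waist"] + rng.normal(0, 5, 500) > 95).astype(int)   # synthetic MetS label

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Mean decrease in impurity, averaged over the trees of the forest.
importance = pd.Series(forest.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False))
```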
C4.5 builds a decision tree from training data using recursive partitioning. In each iteration, C4.5 selects the attribute with the highest gain ratio as the attribute on which the tree is branched, which results in a more compact tree [42,43]. To obtain the gain ratio, the following quantities are needed. Let S be a set of s data samples with m distinct classes, and let p_i be the probability that an arbitrary sample belongs to class C_i; the expected information (entropy) of S is Info(S) = −Σ_{i=1..m} p_i log2(p_i). Let attribute A have v distinct values, which partition S into subsets S_1, …, S_v. The entropy, or expected information, based on the partitioning into subsets by A is given by Info_A(S) = Σ_{j=1..v} (|S_j|/|S|) Info(S_j), and the information gain is Gain(A) = Info(S) − Info_A(S). SplitInfo represents the information generated by splitting the training data set S into v partitions, corresponding to the v outcomes of a test on attribute A: SplitInfo_A(S) = −Σ_{j=1..v} (|S_j|/|S|) log2(|S_j|/|S|).
Finally, the gain ratio is defined as GainRatio(A) = Gain(A) / SplitInfo_A(S).
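A minimal sketch of these quantities for a discrete attribute is shown below; it is an illustration of the gain-ratio computation, not the C4.5 implementation used in the study, and the toy "snores"/MetS data are made up:

```python
# Hedged sketch: entropy, information gain, split information, and gain ratio.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute_values, labels):
    n = len(labels)
    info_s = entropy(labels)
    info_a, split_info = 0.0, 0.0
    for v in set(attribute_values):
        subset = [lab for a, lab in zip(attribute_values, labels) if a == v]
        weight = len(subset) / n
        info_a += weight * entropy(subset)          # expected info after partitioning by A
        split_info -= weight * math.log2(weight)    # information generated by the split
    gain = info_s - info_a
    return gain / split_info if split_info > 0 else 0.0

# Toy example: does "snores" help separate a binary MetS label?
snores = ["yes", "yes", "no", "no", "yes", "no"]
mets   = [1, 1, 0, 0, 1, 0]
print(gain_ratio(snores, mets))   # 1.0 for this perfectly separating toy attribute
```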
Chi-Squared
Chi-squared is a statistical test used in machine learning to identify the most relevant features in a dataset for a classification task [44]; it produces a feature ranking as a result. Taking a feature f and the class c (with f̄ and c̄ as their complements), the chi-squared statistic is computed as χ²(f, c) = N · [P(f, c) P(f̄, c̄) − P(f, c̄) P(f̄, c)]² / [P(f) P(f̄) P(c) P(c̄)], where N is the number of records in the dataset, P(x, y) is the joint probability of x and y, and P(x) is the marginal probability of x.
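As an illustrative sketch (scikit-learn rather than the R implementation used in the study), `chi2` scores non-negative features against the class and can be used to rank or select them; the data here are synthetic placeholders:

```python
# Hedged sketch of chi-squared feature ranking; features must be non-negative.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, SelectKBest
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=400, n_features=6, random_state=1)
X = MinMaxScaler().fit_transform(X)          # chi2 requires non-negative inputs

scores, p_values = chi2(X, y)
ranking = pd.DataFrame({"feature": [f"f{i}" for i in range(X.shape[1])],
                        "chi2": scores, "p": p_values}).sort_values("chi2", ascending=False)
print(ranking)

# Keep the k best-ranked features, e.g. the top 3.
X_selected = SelectKBest(chi2, k=3).fit_transform(X, y)
```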
Pearson Correlation
For the statistical analysis, the PCC was used to filter features [45]; those with a correlation coefficient above 0.5 were used to train the machine or deep learning algorithms. This relation is defined by Equation (7) [46]: PCC(u, u′) = Σ_i (r_{u,i} − r̄_u)(r_{u′,i} − r̄_{u′}) / √(Σ_i (r_{u,i} − r̄_u)² · Σ_i (r_{u′,i} − r̄_{u′})²), where r_{u,i} and r_{u′,i} are the rating scores and r̄_u and r̄_{u′} are the corresponding average ratings. In this work, we used the PCC to find the highest correlations between the health parameters and, thus, select the most relevant features, as mentioned before, with a threshold of 0.5.
Additionally, it is essential to highlight that the output corresponds to the classification in the dataset labeled according to the NCEP ATP III criteria. With this procedure, we aimed to improve the performance of the predictive models.
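A minimal pandas sketch of this correlation filter is given below; the data frame and its columns are hypothetical stand-ins, the last column plays the role of the NCEP ATP III label, and the 0.5 threshold follows the text:

```python
# Hedged sketch: keep features whose absolute Pearson correlation with the label exceeds 0.5.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "waist": rng.normal(90, 12, 300),
    "age": rng.normal(35, 8, 300),
    "c_coffe": rng.integers(0, 3, 300),      # hypothetical consumption frequency
})
df["mets"] = (df["waist"] > 95).astype(int)  # synthetic label for illustration

corr_with_label = df.corr(method="pearson")["mets"].drop("mets")
selected = corr_with_label[corr_with_label.abs() > 0.5].index.tolist()
print(corr_with_label)
print("selected features:", selected)
```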
Deep Neural Networks
The basis for the development of deep learning is the ANN, which, through the connection of many hidden layers, can learn features to train the model [47]. In this work, Keras was used [48]; the model is based on the Sequential class, meaning that the network is created layer by layer. Only the dimension of the first layer corresponds to the number of features; the subsequent hidden layers are densely connected. The first layer is a convolutional layer with an input shape of seven (the selected features); its output is then flattened so that the output shape has only one dimension. The third layer is a dense layer with a dimension of 8, another dense layer (dimension = 8) is added, and the last layer is a dense output layer; all of them use a sigmoid activation function. Figure 2 shows the general configuration of the implemented deep neural network. The training parameters were a learning rate of 0.0001 and 2500 epochs with Adam optimization. These parameters were selected based on trials that showed good performance.
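A hedged Keras sketch approximating the described architecture is given below; the paper does not report code, so the filter count and kernel size of the convolutional layer, as well as the placeholder training data, are assumptions for illustration only:

```python
# Illustrative sketch of the described Sequential model (not the authors' code).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_features=7):
    model = keras.Sequential([
        # Convolutional first layer over the 7 selected features (1 channel).
        layers.Conv1D(filters=8, kernel_size=3, activation="sigmoid",
                      input_shape=(n_features, 1)),   # filters/kernel_size are assumptions
        layers.Flatten(),                             # collapse to one dimension
        layers.Dense(8, activation="sigmoid"),        # hidden dense layer, dimension 8
        layers.Dense(8, activation="sigmoid"),        # second hidden dense layer
        layers.Dense(1, activation="sigmoid"),        # binary MetS output
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Example usage with random placeholder data shaped like the selected features.
X = np.random.rand(100, 7, 1)
y = np.random.randint(0, 2, size=(100, 1))
model = build_model()
model.fit(X, y, epochs=10, verbose=0)  # the paper reports 2500 epochs
```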
Metrics
For the evaluation of the model performance, we used the PPV, NPV, accuracy (ACC), and balanced accuracy (B.ACC).
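A minimal sketch of how these four metrics follow from the confusion-matrix counts (TP, TN, FP, FN) is shown below; the counts used in the example are arbitrary:

```python
# Hedged sketch: PPV, NPV, accuracy, and balanced accuracy from confusion-matrix counts.
def evaluation_metrics(tp, tn, fp, fn):
    ppv = tp / (tp + fp)                     # positive predictive value (precision)
    npv = tn / (tn + fn)                     # negative predictive value
    acc = (tp + tn) / (tp + tn + fp + fn)    # accuracy
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    b_acc = (sensitivity + specificity) / 2  # balanced accuracy
    return {"PPV": ppv, "NPV": npv, "ACC": acc, "B.ACC": b_acc}

# Toy counts for illustration only.
print(evaluation_metrics(tp=60, tn=250, fp=15, fn=40))
```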
Variable Importance by Category
The first step was to separate the data for men and women; the most important variables by gender and category were then obtained (Tables 1A and 2A).
Variable Importance of the Complete Dataset
As a second step, the most important variables for women and men were obtained considering the whole dataset. For this process, the VIM of RF, chi-square, and PCC were applied, and the resulting variables were compared with those obtained for each category to identify which ones were repeated in each case.
Tables 1B and 2B show the most important variables (in descending order) for women and men while considering the whole dataset.
Most Important Variables in Women
We found that the five most important variables for women in the feeding habits category were: medium cola soda (cola_soda), soda (flav_soda), Oaxaca cheese (oax_cheese), a cup of coffee (c_coffe), and flavored water (flav_water). In the quality of sleep, the five most important variables were: snoring during sleep (snore), restless sleep (restless), little naps (lit_sleep), not getting enough sleep (nt_sleep), and feeling drowsy (drowsy). The five most important variables for the clinical and anthropometric parameters were: WC, DBP, WHtR, BMI, and SBP. In the category of habits, the most important variable was the cups or beers consumed on average when drinking alcohol (EtOH.avg), followed by smoke (smk.smke), and, finally, the free physical activity (exrcs). Table 1A presents these results. Now, when considering the whole dataset, we applied VIM of RF, PCC, and chi.square to obtain the most important variables (see Table 1B). In the case of VIM of RF, it identified twelve variables, which were: age, waist, BMI, WHtR, weight, height, SBP, DBP, cola_soda, snore, oax_cheese, and c_coffe. Moreover, the PCC method determined ten variables, which were: age, weight, BMI, waist, WHtR, SBP, DBP, snore, cola_soda, and pork rind (pork_r). Finally, chi.square obtained nine variables: age, BMI, waist, WHtR, weight, SBP, DBP, snore, and cola_soda. Table 1B presents these results.
Most Important Variables in Men
In relation to feeding habits, the five most important variables were: cola_soda, flav_water, hot sauce or chili (hot_chili), c_coffe, and corn tortilla (tortilla). In the case of clinical and anthropometric parameters were: waist, WHtR, BMI SBP, and DBP. In the category of habits, the essential variable was smoke smk.smke, followed by EtOH.avg, and exrcs, finally in the case of quality of sleep, the five most important variables were: snore, nt_sleep, drowsy, restless, and tired during the day (tired). Table 2A shows these results.
Analysis and Comparison
Once the most important variables for men and women were identified, we proceeded to analyse and compare them. Table 3A shows the resulting variables by category, in descending order, for men and women, and Table 3B shows the resulting variables for the whole dataset.
Performance Evaluation of Classifiers
This subsection describes the features obtained after selection, that is, the variables that contribute to the classifier model; the Materials and Methods section details the characteristics of each one. These variables can take a dichotomous or nominal numerical value, depending on the parameter.
Based on the resulting variables by category and those that were obtained from the whole dataset, we developed several models in order to test their performance evaluation and identify the relevant features that support the prediction of MetS without blood screening.
To evaluate the variables selected from the whole dataset by RF, PCC, and chi-square, we applied RF, a deep neural network, and C4.5; for the variables selected by category, we applied RF and a deep neural network.
In case of RF, the value of ntree varied between 100 to 1000 (ntree = 100, 200, 300, 500, 800, and 1000); likewise, the value of the mtry (size of the random subsets of variables considered for splitting) also varied between 1 to 10; in both cases, the grid search method, as proposed by Hsu et al. [49], was used. For each case where RF and C4.5 were applied, we used 10-fold cross-validation with ten repeats to train the model and ensure the variation of the data. The deep neural network model was trained with 2500 epochs and Adam optimization. Table 4 shows the results that were obtained by the classifiers with the resulting variables in whole dataset (see Table 1B) and those obtained by category (see Table 1A), using ACC, B.ACC, PPV, and NPV as evaluation metrics, as well as their respective standard deviations of the average performance for the 30 models generated for each case.
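A hedged scikit-learn sketch of this hyperparameter search is shown below (ntree maps to `n_estimators` and mtry to `max_features`), with 10-fold cross-validation repeated 10 times; the original work used R, and the synthetic data here are placeholders:

```python
# Hedged sketch of the RF grid search with repeated 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

X, y = make_classification(n_samples=600, n_features=13, random_state=0)

param_grid = {
    "n_estimators": [100, 200, 300, 500, 800, 1000],  # ntree values from the text
    "max_features": list(range(1, 11)),               # mtry values from 1 to 10
}
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)  # heavy but faithful

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=cv, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```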
The obtained results showed that the highest value of ACC (84.12% with an SD of 0.38) was achieved by RF, using the resulting variables obtained in whole dataset, which are: waist, WHtR, DBP, BMI, SBP, weight, age, height, cola_soda, a glass of whole milk (milk), snore, c_coffe, and oax_cheese.
The deep neural network had the best performance in B.ACC, with a value of 63.26% and an SD of 2.42, also using the resulting variables obtained in whole dataset.
In the case of PPV, the RF obtained the best performance with a value of 85.73% and an SD of 0.32, using the resulting variables obtained by category, which are: waist, WHtR, BMI, SBP, DBP, age, cola_soda, flav_water, c_coffe, snore, nt_sleep, restless, flav_soda, and drowsy.
Regarding NPV, the deep neural network achieved the best performance with a value of 85.76% and an SD of 1.11, again with the whole dataset's relevant variables.
Considering the results obtained in the metrics by the different models, it is possible to identify that the first RF model, with mtry = 1 and ntree = 300, has the best performance using the resulting variables from the whole dataset (Waist, WHtR, DBP, BMI, SBP, Weight, Age, Height, cola_soda, milk, snore, c_coffe, and oax_cheese). Even though the deep neural network with the same variables performs better in B.ACC (63.26%), the difference is minimal (0.33%). Similarly, although the RF model that uses the resulting variables by category has the highest PPV (85.73%), the difference is also minimal (0.95%).
Concerning men, Table 5 shows the results of performance evaluation with the results variables by category and from whole dataset applying RF, deep neural network, and C4.5, using ACC, B.ACC, PPV, and NPV as evaluation metrics, as well as their respective standard deviations of the average performance for the 30 models that are generated for each case. In this case, the results obtained by the classifiers were better than those that were obtained with women data.
The RF obtained the best values in ACC (88.17% and an SD of 0.49), B.ACC (80.73% and an SD of 0.84), and PPV (92.29% and an SD of 0.36) using the variables that were obtained from the whole dataset, such as: Waist, DBP, SBP, BMI, WHtR, Weight, Height, Age, c_coffe, flav_water, tortilla, cola_soda, and snore. With respect to the best value of NPV, it was obtained by the deep neural network (91.26% and an SD of 1.79) using the variables obtained by PCC in the whole dataset, which are: age, weight, BMI, waist, WHtR, SBP, DBP, snore, cola_soda, and tortilla. In this case, the RF model presents a better performance, although its NPV was not a high as other classifiers, such as the deep neural network, and it does obtain the highest value in B.ACC, thus obtaining a better performance in balanced classifications. cola_soda: medium cola soda, c_coffe: a cup of coffee, flav_water: a glass of flavored sweetened water, tortilla: corn tortilla, hot_chili: a tablespoon of hot sauce or chili in food. snore: snore during sleep, nt_sleep: not getting enough sleep, drowsy: feeling drowsy, restless: restless sleep, tired: feel fatigue. Waist: waist circumference, DBP: diastolic blood pressure. SBP: systolic blood pressure, BMI: Body Mass Index, WHtR: Waist-to-Height-Ratio.
The relevant variables that were found by the Chi.Square filter method, both for women (Table 1B) and for men (Table 2B), were used to create predictive models with the C4.5 classifier to compare them with the models that were created with Deep Neural Network and Random Forest. Furthermore, C4.5 has the advantage of creating a model that is interpretable to the naked eye by a person. In Figure 3, we present the best model for women and in Figure 4, the best model for men, both being found over 30 independent runs using the train set. Classification trees have the property of making an embedded variable selection during the predictive model creation process. In the case of women, the variables that were selected for the construction of the tree were WAIST, SBP, DBP, AGE, BMI, and FREC080. For men, the variables selected for the construction of the tree were WAIST, SBP, DBP, WhtR, and BMI.
The model shown in Figure 3 represents the best predictive model created with C4.5 for women across 30 independent runs. For this model, using data from the train set, it was found that, when the WAIST variable is greater than or equal to 89 and the DBP variable is greater than or equal to 85, 92% of the patients suffer from MS, when considering 4% of the cases in the training set. The model presented in Figure 4 represents the best predictive model created with C4.5 for men across 30 independent runs. For this model, using data from the train set, it was found that, when the WAIST variable is greater than or equal to 103 and the DBP variable is greater than or equal to 82, 89% of the patients suffer from MS, while considering 8% of the cases in the training set.
In general, by going through the branches of the classification trees, it is possible to obtain the conditions that determine whether or not a patient will be diagnosed with MS, for both women and men.
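As a hedged illustration, the two branch conditions reported above can be written as simple rules; the full trees (Figures 3 and 4) contain additional branches that are not reproduced here, so the functions below are only partial sketches:

```python
# Hedged sketch of the two reported C4.5 branch rules (partial; the full trees have more branches).
def high_risk_women(waist_cm, dbp_mmhg):
    # Reported branch: WAIST >= 89 and DBP >= 85 -> ~92% of training cases had MetS.
    return waist_cm >= 89 and dbp_mmhg >= 85

def high_risk_men(waist_cm, dbp_mmhg):
    # Reported branch: WAIST >= 103 and DBP >= 82 -> ~89% of training cases had MetS.
    return waist_cm >= 103 and dbp_mmhg >= 82

print(high_risk_women(92, 88), high_risk_men(100, 85))  # True False
```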
The Best Model
According to the results in the metrics obtained by the classifiers, it was possible to identify the best model as well as the most relevant features for women and men. In women, the best model was RF with an mtry of 1 and an ntree of 300.
In men, again, the best model was RF with a mtry of 10 and Ntree of 800, which obtained the best performance using the variables: waist, DBP, SBP, BMI, WHtR, weight, height, age, cola_soda, c_coffe, flav_water, tortilla, and snore.
The relevant features obtained are strongly related with the diagnosis and the risk of developing MetS, such as consumption habits of alcoholic beverages [50], the consumption of cola soft drinks [51,52], sleep disorders, as the action of snoring when the person is sleeping [53], weight, age [54], and the recognized by ATP III, IDF, and WHO (waist, SBP, and DBP).
Best Model for the Risk Calculator
MetS is considered a potential risk factor for cardiovascular disease and diabetes, and its prevalence has grown exponentially in Mexico and other countries. For this reason, the development of risk calculator tools is important, especially from a prevention perspective. Accordingly, researchers have developed models applying machine learning algorithms to support the diagnosis or prediction of MetS, considering diverse definitions such as NCEP ATP III, WHO, and IDF (three of the most used criteria throughout the world [55]), which rely on health parameters that must be determined with blood screening. However, other studies [19,21,56] have shown that MetS can also be diagnosed without blood studies by taking other risk factors into consideration.
In this work, we used RF, C4.5, and DNN with the purpose of identifying risk factors related to anthropometric factors, a sleepy manner associated with snoring, and dietary habits to predict MetS. Accordingly, as noted in [18], comparing RF with different types of ANN is a prominent topic.
Each algorithm was performed by 30 independent runs, the average of ACC, B.ACC, PPV, and NPV are presented in Table 4 for women and Table 5 for men. The results for both genders shows that highest value in the metrics was obtained by the RF with a mtry = 1 and ntree = 300 for women (ACC = 84.12%, B.ACC = 62.93%, PPV = 84.78%, and NPV = 75.92%) and a mtry = 10 and ntree = 800 for men (ACC = 88.17%, B.ACC = 80.73%, PPV = 92.29%, and NPV = 70.72%).
Based on the results shown in Tables 4 and 5, it is possible to observe that RF and the DNN show comparable performances in ACC and B.ACC, with the DNN giving lower PPV but higher NPV; however, the results obtained by RF are consistently close to each other, making it a suitable solution for the prediction of MetS. The optimal RF model was identified by analyzing several models with ntree between 100 and 1000 trees and mtry between 1 and 10.
Additionally, it is essential to highlight that deep learning models do not require a prior feature selection step; they extract the features from the data. Even though this is a property of DL, better performance was obtained here by selecting features with PCC and with RF, both by category and over all characteristics. One test was carried out with all of the features, but this experiment showed poorer performance; the main reason is the quantity of data, since DL is more effective when the data set contains a large amount of information.
Most Relevant Features
According to the literature [54,57], weight, waist, age, diastolic, and systolic blood pressure are risk factors that are considered for the diagnosis of MetS. Likewise, in this work, we found other risk factors for men and women that have also been studied due to their relationship with MetS and obesity.
In the case of women, the best model (RF with a mtry = 1 and ntree = 300) showed that variables related to obesity, such as waist circumference and WHtR, were the ones that obtained the highest value in terms of importance; followed by the DBP and SBP, the age and the height, trouble sleeping associated with snoring, restless sleep, and somnolence. Likewise, regarding the dietary habits, we identify that women with MetS have a high consumption of cola soda, coffee, whole milk, oranges, flavored sweetened water, and flavor soda.
For men, the best model (RF with a mtry = 10 and ntree = 800) showed that the waist circumference and blood pressure (DBP and SBP) were the highest risk factors, followed by the BMI, the WHtR, the weight, and the age. Regarding the dietary habits, it was possible to identify that men with MetS preferably consume cola soda, coffee, flavored sweetened water, and corn tortilla. Additionally, like women, men have sleep problems, with snoring being the risk factor outstanding.
As can be seen in the results section, in the case of men, more specific features were revealed; therefore, variable importance is reflected in those that are related to the blood pressure (SBP and DBP). Moreover, classifiers' performance was high; RF was the best in ACC average of 88.17%, a B.ACC of 80.73%, a PPV of 92.29%, and an NPV of 70.72%. In the case of women, the variable importance denotes a close relationship with obesity, with RF being the best classifier with an ACC of 84.12%, a B.ACC of 62.93%, a PPV of 84.78%, and an NPV 75.92%.
The most relevant features identified in this work as prognostic variables to predict MetS in Mexican women and men were strongly related to this syndrome. When the person is sleeping, the case of snoring has been a potential factor that is strongly related to obesity and the risk of suffering MetS [58]. Recent work also suggests a strong association with MetS, even when the snore eliminates the repeated apnea and hypoxia; simple snoring was still strongly associated with MetS [59]. According to studies [60,61], Mexico is one of the countries with a high consumption of sugary drinks. This study found that cola soda, flavored sweetened water, and flavor soda were common consumption habits in both men and women. However, these drinks contribute positively to the risk of developing obesity and chronic diseases [62]. Likewise, tortilla consumption has been associated with the prevalence of overweight, obesity, and MetS in Mexican adults. Coffee was another consumed beverage by both genders; nevertheless, recent studies [63,64] have reported that coffee consumption was not significantly associated with metabolic syndrome [65].
Based on these results, this work could be used for the prevention of MetS. The population could answer a simple survey as part of healthcare system monitoring; if some risk factors are detected, the person can be directed to a medical review. This might prevent future problems and reduce the cost of laboratory tests and treatments.
Limitations
This research was based on data that were obtained from a cohort of relatively healthy adult residents of Mexico City.
Conclusions
In this study, different machine learning algorithms were applied; nevertheless, RF obtained the highest performance to identify the best features by gender and predict MetS without an invasive study or laboratory tests. It should be noted that RF has been one of the best machine learning algorithms to predict the MetS [18,19].
The prognostic variables found in both genders were positively related to obesity, blood pressure, and MetS; moreover, they can be obtained in a first medical consultation and can be monitored thoroughly. In addition, the separation by gender allowed differences in dietary patterns to be discovered, which can be associated with risk factors for the development of MetS.
Although, in this study, we found a group of risk factors that support the prediction of MetS, we consider it essential to expand the data-set with data from other regions of Mexico, where diet could vary, as well as lifestyles.
Furthermore, this work implements machine learning models and lays the foundation for programming a friendly graphical interface, including a calculator, in order to provide health monitoring tools. Additionally, it is recommended that the implementation be carried out directly with the obtained model, since deriving an explicit equation from the obtained weights and the interactions between trees or layers is not straightforward. | 7,676.2 | 2021-05-11T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
PIV-Measurements of Centrifugal Instabilities in a Rectangular Curved Duct with a Small Aspect Ratio
: In this study, experimental measurements were undertaken using non-intrusive particle image velocimetry (PIV) to investigate fluid flow within a 180° rectangular, curved duct geometry of a height-to-width aspect ratio of 0.167 and a curvature of 0.54. The duct was constructed from Plexiglas to permit optical access to flow pattern observations and flow velocity field measurements. Silicone oil was used as working fluid because it has a similar refractive index to Plexiglas. The measured velocity fields within the Reynolds number ranged from 116 to 203 and were presented at the curved channel section inlet and outlet, as well as at the mid-channel height over the complete duct length. It was observed from spanwise measurements that the transition to unsteady secondary flows generated the creation of wavy structures linked with the formation of Dean vortices close to the outer channel wall. This flow structure became unsteady with increasing Reynolds number. Simultaneously, the presence of Dean vortices in the spanwise direction influenced the velocity distribution in the streamwise direction. Two distinct regions defined by a higher velocity distribution were observed. Fluid particles were accelerated near the inner wall of the channel bend and subsequently downstream near the outer channel wall.
Introduction
The transport of fluid in closed conduits, having a strong curvature, represents key element in several engineering applications such as the design of fuel cells [1], heat exchangers [2], chemical mixers [3], and gas turbine blades [4]. When a steady laminar flow passes through a 180° bend, small disturbances are developed and lead to a three-dimensional unsteady flow. Flow instability is therefore created, and it is characterized by structured vortices that comprise secondary flow. Such patterns can enhance convective heat transfer inside the geometry as the fluid is mixed or conversely decrease the efficiency of the fluid transport.
The first significant work on fluid flow through curved geometry ducts can be credited to Couette [5] and his experimental observations on the flow between rotating cylinders. By measuring the viscosity with the torque effect of rotating cylinders, a discontinuity on viscosity measurements was exposed. Later, Taylor [6] compared experimental observations with specific predictions from the Navier-Stokes equations for instabilities in this geometry. This analysis succeeded at both explaining and predicting the instability appearing in curved geometries. A subsequent theoretical study presented by Dean [7] was directly inspired by Taylor's findings, as the analysis geometry was made of two concentric cylinders. In Dean's work, fluid motion was not induced by the rotation but instead by a pressure gradient for which a new procedure to resolve the Navier-Stokes equations and find stream lines motion was introduced. Dean defined a new parameter to characterize the occurrence of instabilities in curved pipes, which depends on the pipe geometry and flow conditions; this is termed the Dean number: where channel curvature (β) and Reynolds number (Re) and are defined in Equations (3) and (5), respectively. The characteristics of Dean instability in circular curved pipes have been extensively studied via theoretical, numerical, and experimental analyses [8][9][10][11][12][13]. In order to study the effect of geometry on Dean instability, a significant number of studies have been focused on flow in curved rectangular ducts [14][15][16][17][18][19][20]. From such studies, it has been observed that both channel aspect (Γ) and curvature (β) ratios have strong influence on Dean instability [14,15]. Despite a significant number of published studies, Dean instability in a curved channel is still a relevant topic in ongoing research. For example, Boutabaa et al. [21] numerically studied three-dimensional Dean vortices and observed the presence of two steady Dean cells in a square duct. They also observed that when centrifugal forces became significant, the flow pattern was characterized by a four-cell pattern. Similar numerical results in square curved ducts were presented by Wu et al. [22]. They reported the presence of multiple secondary flow patterns in the presence of a negative pressure gradient, i.e., flow separation. Yamamoto et al. [23,24] theoretically and experimentally studied the Taylor-Dean flow through square ducts. They characterized both Dean vortices in fixed walls and Taylor-Dean vortices when the inner wall was moving. The numerical results of Yamamoto et al. [23] showed that secondary flow consists of two, four, eight, or even non-symmetric vortices in a cross section region, depending on the speed and the direction of inner wall rotation. Subsequently, Yamamoto et al. [24] experimentally identified several types of secondary flow patterns and proposed a new flow pattern diagram for mapping (or classifying) all observed flow patterns. Norouzi et al. [25] developed an analytical solution for Dean flow in curved ducts with rectangular cross sections for which a perturbation method was used to solve the applicable governing equations. In their study, channel curvature was considered as the perturbation parameter. Using a perturbation method, they obtained analytically the main flow velocity, stream function of lateral velocities (secondary flows), and pressure drop in curved pipes. Yanase et al. 
[26] numerically investigated the characteristics of laminar flow through a curved rectangular duct using a spectral method. They placed emphasis on the bifurcation of the solution and observed, depending on channel aspect ratio, 2-12 secondary vortex solutions. Facao and Oliveria [27] and Chandratilleke et al. [28] numerically investigated the influence of secondary vortex motion on convective heat transfer processes. Both studies observed that secondary flow could significantly improve the heat transfer due to the interaction between Dean vortices with buoyancy force arising from the heated wall. For more complex curved rectangular ducts geometries with continuously varying curvature, Li et al. [29] conducted an experimental and numerical study to investigate the flow characteristics. The results revealed complex changes in the flow pattern with respect to both the flow and geometric parameters (Reynolds number, curvature, and aspect ratios) in terms of the onset, development, and disappearance of different types of the Dean vortex. In addition, it was found that flow development in ducts with continuously varying curvatures is quite different from that in conventional curved ducts with constant curvatures. Most published studies related to flow through curved rectangular ducts have focused on channel height-to-width aspect ratios (Γ) equal to or greater than one. However, to the authors' knowledge, only a limited number of studies have focused on small channel aspect ratios (Γ < 3) where the centrifugal forces act along the largest dimension [16,20]. Humphrey et al. [16] investigated flow stability for curved channel aspect ratios (1/3 < Γ <3) in 90° bends. Their experimental results confirmed and quantified that the location of maximum velocity moves from the center of the duct to the outer wall. Gauthier et al. [20] experimentally investigated the impact of the smaller channel geometry aspect ratios (1/40 < Γ < 1/3) on the development of secondary flow structures. The core flow was found to be three-dimensional in square or circular curved ducts, and streamwise vortices were observed above the critical Reynolds number. Gauthier et al. [20] identified the critical Reynolds number (Rec = 135.4) to be function of the curved duct dimensions and concluded that the threshold of the instabilities was controlled by the Dean number. Their experimental results were based on the qualitative observations of flow instabilities using digital photography techniques.
In this study, experimental measurements are undertaken to investigate fluid flow in curved geometries using particle image velocimetry (PIV) to provide quantitative data. The selected duct geometry was based on Gauthier's et al. [20] design, namely 180° rectangular, curved duct geometry of a height-to-width aspect ratio Γ = 0.167 and a curvature of β = 0.54. To the authors' knowledge, quantitative measurements of velocity distribution illustrating the Dean instability in rectangular, curved channels with small aspect ratios (Γ < 1) are missing in the literature.
Experimental Conditions
The channel geometry shown in Figure 1 consisted of two plates of transparent Plexiglas (acrylic), where one had a machined U-shape geometry profile. The dimension of the channel was 250 mm long, 60 mm wide, and 10 mm high. The fluid entered a rectangular cross-section divergent manifold duct with six outlet holes of 8 mm in diameter, followed by a flow straightener to ensure that the inlet flow to the channel was both uniform and laminar. After 200 mm of a straight rectangular duct section, the fluid entered the 180° curvature bend. The inner radius was fixed at R1 = 25 mm, the outer radius was fixed at R2 = 85 mm, and the duct width was fixed at 60 mm. The channel aspect (Γ = 0.167) and curvature (β = 0.54) ratios are defined as: The channel outlet section had five holes of 8 mm in diameter that were equally spaced. The use of Plexiglas allowed for the visualization of the flow and permitted the laser light to be transmitted with limited refraction effects. Furthermore, the channel internal and external walls were polished to remove all possible manufacturing defaults such as scratches. Laminar flow was generated by a pump with a maximum volumetric flow rate (Q) of 30 (L/min). The channel flow rate was measured using a digital flow meter (RS 511-3892) and regulated using a gate valve fitted after the pump outlet. The average channel flow velocity is defined by: The working fluid was silicone oil (Dow Corning DC556, DOW, Midland, MI, USA) with a refractive index of 1.46, which was close to the Plexiglas refractive index of 1.48. The fluid was maintained at a constant temperature of 60 °C with a specified kinematic viscosity (ν) of 1.384 × 10 −5 m 2 /s.
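The defining relations referenced in this section (and the Dean number introduced earlier) were not reproduced in this text. A hedged reconstruction, consistent with the stated values Γ = 0.167 and β ≈ 0.54 for h = 10 mm, W = R2 − R1 = 60 mm, R1 = 25 mm, and R2 = 85 mm, is sketched below using commonly used forms; the exact expressions should be checked against the original paper.

```latex
\Gamma = \frac{h}{W}, \qquad
\beta  = \frac{R_2 - R_1}{R_2 + R_1}, \qquad
\bar{U} = \frac{Q}{W\,h}, \qquad
De = Re\,\sqrt{\beta}
```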
Flow Visualization
To obtain quantitative visualizations of the flow field structures, PIV was employed using the experimental setup illustrated in Figure 2. All flow field measurements were taken with a projected laser sheet, and a charge coupled device (CCD) camera installed perpendicular to this plane was used to record fluid motion. To visualize the flow, polyamide particles tracers with a diameter of 10 μm and a density of 1.016 g/cm 3 were seeded in silicone oil. PIV measurements were performed in two directions: (i) along the channel length and (ii) in the cross-section. For each case, the CCD camera and the laser were placed in different setups. The flow was observed with a field of view of 250 × 120 mm 2 . A diode laser was used as the light source for PIV measurements, with a maximum power output of 200 W at a wavelength of 532 nm. A cylindrical lens (focal length of 28 mm) was used to generate a light sheet of approximately 1 mm in thickness. The synchronization between the laser light pulses and the camera was accomplished by transistor-transistor logic pulses from the synchronizer. For each Reynolds number under analysis (116-203), 100 images were recorded using the PIV system. The time between each couple of images was 200 ms. The pulse separation time was adjusted between 1 and 3 ms depending on the Reynolds number. Full-frame images of 1024 × 820 pixels were acquired and transferred to a computer via a frame grabber.
Using the DAVIS FLOWMASTER software provided by the LaVision System (Göttingen, Germany), the 2D PIV image was divided into 16 × 16 pixel-sized sub-regions using a multigrid correlation process with a 50% overlap. The average particle velocities were calculated using 100 images and the cross-correlation method. The spurious vectors, identified using local median filtering, amounted to less than 3%. No data smoothing was used for the velocity calculation in the reattachment zone. Steady flow data were post-processed by the PIV software using the sum-of-correlation function, whereas for unsteady cases, one couple of images was chosen as representative of an instantaneous flow pattern. PIV measurement accuracy can be influenced by a range of error sources that include tracer density, particle displacement, velocity gradients in the interrogation windows, the background noise of the recordings, and the quantization level of the camera. In a study by Raffle et al. [30], a detailed discussion of the influence of such error sources was given. Additionally, near-wall measurements using PIV are often difficult and characterized by large errors and uncertainties due to potential difficulties encountered with light scattering from wall boundaries and a lack of seed particles in the near-wall region. The error analysis was based on comparing the mean velocity measured using PIV with the digital flow meter described in the previous section. Velocity measurement uncertainty was quantified to be less than 7% by comparing the average velocity derived from the PIV-measured velocity profile with the corresponding measured flow rate. The Reynolds number (Equation (5)) is defined as Re = U̅h/ν, where U̅ is the averaged flow velocity, h is the channel height, and ν is the silicone oil kinematic viscosity.
Experimental Validation
According to Gauthier et al. [20], flow instability appears after a critical Reynolds number (Rec), which is dependent upon the channel curvature β: Based on a curvature of 0.54 for the geometry under analysis, the critical Reynolds number value characterizing the onset of flow instabilities was found to be Rec = 135.4. To assess the relationship between different flow systems and this critical Reynolds number, experimental results are presented for four different Reynolds numbers (116, 136, 151, and 203). For each Reynolds number, the velocity profile in the streamwise direction was measured in a location after the entrance length and before the curve entrance (x = 120 mm). The flow velocity, at a given spanwise location in a rectangular duct, was approximated with the following analytical solution (Ebadian and Dong [31]): where y* = y⁄W is the non-dimensional spanwise width position (starting from the center of the duct), W is (R2 − R1), and m is a calculated coefficient: Both the measured and predicted velocity profiles are presented in Figure 3. The overall trend of the measured velocity profiles matched well with corresponding analytical predictions. For below and near the instability threshold (Re = 116, 136, and 151), it was observed that the entrance velocity profile was almost flat, with a small deviation from the corresponding analytical prediction. This discrepancy was within 7% of the error obtained from the PIV measurements and was therefore considered as negligible in this study. However, the measured velocity profile for Re = 203 represented a disturbance around an average value, with a mean constant velocity comparable to the predicted profile.
Two-Dimensional Velocity Distributions
To observe the flow behavior near the instability predicted by Gauthier et al. [20], the measured velocity vectors in the x-y plane at h/2 are presented in Figure 4 for a Reynolds number of Re = 136, which is slightly higher than the critical Reynolds number (Rec = 135.4). The color velocity contour was superimposed on the velocity vector field to provide a visualization of the directions and magnitudes of the particle velocities.
The velocity distribution upstream of the channel bend entrance was uniform, with a constant value of 0.37 m/s. The flow entered the channel bend (θ = 0°) with this uniform upstream velocity profile, but the geometry affected the flow by creating high velocity near the inner wall. A velocity gradient was observed, with its highest value near the inner wall (0.4 m/s) and its lowest value near the outer wall (0.1 m/s). As the flow progressed through the curvature, the region of highest velocity shifted towards the outer wall of the channel. At the outlet of the channel curvature (θ = 180°), the velocity distribution showed the opposite pattern compared to the inlet region (θ = 0°), with the highest value near the outer wall and the lowest value near the inner wall. This trend was amplified as the flow reached the outlet region of the straight channel. This result clearly showed that the instabilities appearing near the outlet channel originated in the curvature region and were due to the presence of the velocity gradient and its propagation along the curvature of the channel. Figure 5 presents the evolution of the averaged two-dimensional flow velocity pattern with increasing Reynolds number on exiting the channel bend (θ > 90°). A steady flow condition at Re = 116 is shown in Figure 5a. The flow regained a smooth pattern containing a region of highest velocity (red color) that was deflected from the inner wall towards the outer wall. At this Reynolds number (Re < Rec), the bend geometry only created a velocity gradient. Figure 5b shows the velocity distribution of the fluid flow near the critical Reynolds number (Re = 136). In this flow regime, the flow arrived at the bend section with a uniform upstream velocity. However, at the bend section, the velocity peaked near the inner wall, representing an approximately 20% increase. The most interesting observation in this flow regime was the velocity pattern in the exit region of the bend, which was characterized by small disturbances and color changes near the highest-velocity region. The velocity pattern was not smooth compared to the previous case (Re = 116), and signs of instability were clearly seen in the two-dimensional velocity distributions, marking the transition between steady and unsteady flows. For the unsteady cases, Re > 136, the same phenomenon can be observed in Figure 5c,d, with the presence of a large velocity gradient close to the outer wall at the outlet of the channel bend.
Spanwise Velocity Distribution
One of the present study's objectives was to visualize the velocity distribution of the Dean vortices using the PIV technique. The vertical cross-section at the outlet of the channel bend (θ = 180°) was chosen for these measurements. For each Reynolds number, the instantaneous two-dimensional velocity distribution and the corresponding color velocity contour are presented in Figure 6. Steady flow conditions (Re = 116), corresponding to a Reynolds number below the critical value (Rec = 135.4), are shown in Figure 6a. Two horizontal, steady recirculating cells filling the entire cross-section of the channel were observed. The velocity was higher in the center region of the cross-section, with a negative sign (inner-to-outer-wall direction). Near the bottom and top walls, the flow was in the opposite (positive) direction with a much smaller magnitude. This was an indication of the Dean vortices, and the velocity distribution was in good agreement with the flow pattern observed by Gauthier et al. [20]. It is also important to note that near the outer wall (0 < y < 10 mm), the velocity magnitude was higher than near the inner wall (50 < y < 60 mm), with the onset of disturbances in the vector directions.
When the Reynolds number reached the critical value (Re = 136), as shown in Figure 6b, the magnitude of the velocity increased and the small disturbances observed in Figure 6a were transformed into 5-mm-diameter recirculating vortices propagating near the walls from the outer to the inner wall. For higher Reynolds numbers (Re = 151 and 203), the fluid flow circulated in a wave motion with an approximate wavelength of 1.3 × h. Vortices of larger size and higher velocity magnitude were created near the outer wall and propagated in the direction opposite to the flow, reaching the inner wall. To allow a better comparison of the measured flow velocity, non-dimensional spanwise velocity profiles along the center axis of the channel in the x-y direction are presented in Figure 8, with y* representing the non-dimensional spanwise width position. For all Reynolds numbers, the velocity profiles had higher magnitudes near the outer wall. For Re = 116, the spanwise profile was almost smooth, with small variations within the range of the measurement uncertainty. Near the critical Reynolds number (Re = 136), the velocity profile showed the first oscillation, indicating the onset of the instability. At Re = 151, the velocity presented a peak close to the outer wall and a low-speed core flow near y* ≈ −0.17. This confirmed the generation of recirculating vortices near the outer wall. At Re = 203, instabilities manifested as oscillations of the velocity along the central duct axis.
Conclusions
The experimental results presented in this study characterized the secondary flow structures that appear in flow through a 180° rectangular curved duct with a small height-to-width aspect ratio of 0.167 and a curvature of 0.54. PIV permitted both velocity profile measurements and flow pattern observations, as well as observation of the effect of Dean vortices on the flow structure for Reynolds numbers of 116-203. The transition to unsteady secondary flows generated wavy structures linked with the formation of vortices close to the outer channel wall, as observed in the spanwise flow measurements. This flow structure became more unsteady with increasing Reynolds number. This could be further investigated through numerical analysis, such as computational fluid dynamics (CFD), to help explain the development and progression of such secondary flow structures. Simultaneously with the generation of the Dean vortices, a separation of the high-velocity zone could be observed along the channel bend; another such zone subsequently appeared downstream near the outer channel wall. The observation of this high-velocity area could provide fundamental insight for the growing research on heat transfer in curved geometries, in terms of enhancing the convective heat transfer rate. Finally, the presented experimental work clearly demonstrated that the use of PIV to characterize the creation of secondary flow structures in curved geometries is appropriate; PIV analysis permitted both visualizations and velocity field measurements of such structures in the streamwise and spanwise directions. | 5,016.2 | 2021-05-13T00:00:00.000 | [
"Physics"
] |
RM 8111: Development of a Prototype Linewidth Standard
Staffs of the Semiconductor Electronics Division, the Information Technology Laboratory, and the Precision Engineering Laboratory at NIST, have developed a new generation of prototype Single-Crystal CD (Critical Dimension) Reference (SCCDRM) Materials with the designation RM 8111. Their intended use is calibrating metrology instruments that are used in semiconductor manufacturing. Each reference material is configured as a 10 mm × 11 mm silicon test-structure chip that is mounted in a 200 mm silicon carrier wafer. The fabrication of both the chip and the carrier wafer uses the type of lattice-plane-selective etching that is commonly employed in the fabrication of micro electro-mechanical systems devices. The certified CDs of the reference features are determined from Atomic Force Microscope (AFM) measurements that are referenced to high-resolution transmission-electron microscopy images that reveal the cross-section counts of lattice planes having a pitch whose value is traceable to the SI meter.
Previous Work
In 2001, NIST made a prior delivery of CD reference materials to SEMATECH Member Companies, configured as test chips each having a single designated reference feature. Their calibrated CD values were in the range 80 nm to 150 nm and had expanded uncertainties of approximately ±15 nm [1].
The current delivery is also test-chip based with each chip having up to six designated reference features with drawn CDs staggered by 30 nm, each with a stated calibrated CD value and an expanded uncertainty value. For the current 2004 delivery, AFM replaces electrical CD (ECD), which was used in the 2001 delivery, as the transfer metrology.
The calibrated designated reference features are incorporated in a uniquely identified HRTEM target on each distribution chip that has been delivered to respective SEMATECH Member Companies, along with a data sheet listing the CDs and expanded uncertainties of those features. An example of one of these data sheets is shown in Appendix A. A formatted summary of the data shown there is also shown here in Table 1.
The current batch of reference materials includes reference features with calibrated CD values as low as 43 nm and having expanded uncertainties as low as ±1.24 nm. An example of the analysis of the contributions to the expanded uncertainty of a typical feature is shown in Table 2.
Terminology
The terminology listed below has developed during the course of this work and is used within this report.
SCCDRM: a chip that has been diced from a SIMOX (Separation by Implantation of Oxygen) wafer having a special orientation of the principal axes of its test structures, which are patterned into its active device layer, with respect to the silicon lattice to assure near-atomic-scale flatness of replicated silicon features. The CDs of one or more of these monocrystalline features may be calibrated.
AFM Chip: an SCCDRM that has undergone AFM measurement of the CDs of any sub-set of its test-structure features. AFM chips either have been selected as candidates for AFM metrology on the basis of SEM inspection, which is routinely performed for this purpose, or have completed AFM metrology.
Distribution Chip: an SCCDRM chip that has been set aside for distribution to SEMATECH Member Companies at the conclusion of this project. It has one or more designated reference features, the CDs of which have been calibrated.
HRTEM Chip: an AFM chip which, on the basis of its AFM measurements, has either been selected as a suitable candidate for CD metrology by HRTEM imaging or has completed HRTEM imaging. In effect, AFM imaging is employed to select features and chips for HRTEM imaging. Note that, after HRTEM imaging, an AFM chip can no longer serve as an SCCDRM distribution chip because HRTEM imaging is destructive.
HRTEM Target: a test structure having a geometry that has been designed specifically to facilitate HRTEM imaging of six reference features with a single FIB (focused ion beam) cut.
Designated Reference Feature: this is a particular individual feature of an HRTEM target on an HRTEM or AFM chip. The designation encodes the chip and locations where the respective feature may be found by the user.
HRTEM CD: the CD value extracted from an HRTEM image of a designated reference feature of an HRTEM chip. It applies to the entire width of the imaged feature, which includes the thicknesses of the native oxide films on sidewalls.
Apparent AFM CD: the CD of a designated reference feature of an AFM or HRTEM chip, as measured by the AFM.
Calibration Curve: a statistical model that relates the apparent AFM CDs of the HRTEM chips to the corresponding HRTEM CDs.
Calibrated AFM CD: the SI-traceable value of the CD of a reference feature on a distribution chip, obtained by correcting the feature's apparent AFM CD with the AFM offset derived from the calibration described in this report.
Reference-Material Implementation as a SIMOX Test Chip
The reference features were configured as a test chip that is replicated in the device layer of a 150 mm (110) SIMOX wafer [2]. The nominal heights of all the reference features are 150 nm. The device layer is electrically isolated from the remaining thickness of the substrate by a 390 nm thick buried oxide created by oxygen implantation.
Fabrication begins with the growth of a 10 nm thick oxide film to serve as a hard-mask material. The test-chip image described in Sec. 4.2 below is then projected into resist so that its principal axes are oriented to a <112> lattice direction. The latter is established by transferring a special-purpose angular fiducial pattern to the hard mask and then to the silicon with a deep, lattice-plane-selective etch. Lattice orientation is subsequently determined from visual inspection of the features of this pattern. The reference-material test-chip pattern is then photo-lithographically transferred to the hard mask at the correct orientation to the lattice, as revealed by the features of the etched angular fiducial pattern. It is then replicated in the p-type silicon surface layer of the substrate by lattice-plane-selective etching. Tetra-methyl-ammonium hydroxide (TMAH) etches (111) silicon-lattice planes 10 to 50 times more slowly than it etches other planes, such as the (110) surface of the wafer, allowing the (111) planes of the reference-feature sidewalls to behave as lateral etch stops. Aligning reference features with <112> lattice vectors in the (110) surface of the wafer results in their having planar, vertical, (111) sidewalls. An actual reference-feature cross section is illustrated in the HRTEM image shown in Fig. 1. Figure 2 shows the layout of the 10 mm × 11 mm NIST45 test chip. The principal axes of the test-structure geometries in the upper and lower sections of the test-chip layout are drawn to be oriented to the <112> and <-112> directions. Figure 3 shows the upper (so-called 1 o'clock) section. It identifies, among other groups of test structures, the HRTEM-target arrays T1 through T4, where HRTEM reference features are located when they are on the 1 o'clock section of the test-chip layout. Note that the corresponding HRTEM-target arrays in the lower section of the test-chip layout, which can be seen from inspection of Fig. 2, are identified as B1 through B4. Of the 10 distribution chips that have been delivered to SEMATECH, some have CD-calibrated designated reference features in the T1-T4 arrays and some have them in the B1-B4 arrays.
Calibrated Reference-Feature Identification Scheme
Individual features are identified with a designation which uniquely identifies each respective feature according to its: <Process Job>, <Chip Number>, <Target-Array Number>, <Target Number>, and its <Feature Number>.
For example, K145-A10-T3-5p3-F4 is a reference feature from the K145 process job, on chip A10, in the T3 HRTEM target array, specifically in the 5p3 target, where it is the fourth feature. The process job number is chosen as a laboratory-notebook page-number reference and is also archived as an electronic document at NIST. The designation component "5p3" corresponds to target "5.3" in Fig. 4; the lettering "5.3" is actually patterned into the substrate but could not be incorporated into a file name. Figure 4 shows one of the HRTEM target arrays, the one labeled T1 in Fig. 3, in more detail. In this figure, HRTEM Target # 30-10-5.3 has been enlarged to show the six-feature architecture that is common to all targets. The reference features are designed so that the drawn linewidths increase progressively from the lower part to the upper part of each target and from the lower left to the upper right of each HRTEM target array. Note that the individual reference features in Fig. 5 are identified as F1 through F6 from left to right. The annotation 30-10, which is shown in Fig. 4, indicates that the target is located in either the T1 or the B1 target array; the other targets have similar but different annotations that correspond to the target arrays in which they are located. Fig. 4. One of the HRTEM target arrays, the one labeled T1 in Fig. 3, in more detail. HRTEM Target # 30-10-5.3 has been enlarged to show the six-feature architecture that is common to all targets. Note that the annotation 30-10 appears on targets located in both the T1 and B1 target arrays.
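The designation scheme lends itself to simple automated parsing when building an image or measurement database. The following sketch splits a designation such as K145-A10-T3-5p3-F4 into its components; the field names are my own, not NIST's:

```python
from dataclasses import dataclass

@dataclass
class FeatureID:
    process_job: str      # e.g. "K145" (laboratory-notebook page reference)
    chip: str             # e.g. "A10"
    target_array: str     # e.g. "T3"  (T1-T4 or B1-B4)
    target: str           # e.g. "5p3" (patterned on the chip as "5.3")
    feature: str          # e.g. "F4"  (F1-F6 within the target)

def parse_designation(designation: str) -> FeatureID:
    """Split a designation of the form
    <Process Job>-<Chip>-<Target Array>-<Target>-<Feature>."""
    parts = designation.split("-")
    if len(parts) != 5:
        raise ValueError(f"Unexpected designation format: {designation!r}")
    return FeatureID(*parts)

print(parse_designation("K145-A10-T3-5p3-F4"))
```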
Screening and Chip Selection for AFM
As indicated in Sec. 2.1, the primary metrology selected to reach the project's goals is the counting of lattice planes that are illuminated by HRTEM phase-contrast imaging [3]. This technique, however, cannot be used for the reference materials that are to be delivered to an end user because it is destructive. It is also very expensive, a fact that becomes clear from the descriptions provided later in Secs. 7.1 and 7.2. Therefore, a benign, non-destructive metrology, in this case AFM, is used as the transfer metrology. However, AFM has relatively low throughput and higher cost than SEM (Scanning Electron Microscopy). Accordingly, SEM pre-screening was implemented for this project to facilitate selection of as-fabricated chips for AFM metrology, in much the same way that AFM metrology was in turn applied to the judicious selection of AFM chips for HRTEM imaging. The necessity for pre-screening resulted from the fact that the special silicon substrate and post-processing employed to fabricate the reference materials are prone to impart local, randomly located structural defects in the reference features. This may be partly because fabrication was not performed in a clean room. In any case, it was possible to compensate for the defects by careful SEM inspection after patterning, and before AFM inspection, and by ranking of candidate SCCDRM chips. In general, the optical and SEM inspections sought to identify HRTEM targets whose features visually appeared to be CD-uniform, unbroken, and free from contamination and other defects, and that had at least one feature with a sub-100 nm CD. Each of the candidate chips had multiple test-structure instances, each with a number of candidate features, as has been shown in Fig. 4 and Fig. 5.
High-resolution top-down SEM imaging at 15 kX magnification and 5 keV to 10 keV was implemented for pre-AFM inspection. Three features of each target were captured in each of two successive images. A montage of a pair of these images, which are typical of those that were acquired, is shown later in Fig. 10. Since several hundred chips were selected for SEM inspection by systematic high-resolution optical inspection, it was necessary to implement a database to archive the large number of images to facilitate selection of AFM chips ostensibly having more preferable reference-feature properties. This database facility further enabled an enhancement to the selection process through highly systematized SEM-image processing that characterized each candidate reference feature, HRTEM target, and SCCDRM chip [4]. These parameterizations then allowed automated interrogation of the database, which resulted in the benefits of rapid identification of the "best" SCCDRM chips for AFM metrology in quasi-real time. A further refinement to the database then allowed archiving the silicon-processing conditions and, at a later date, the AFM and HRTEM measurement data that were extracted from chips that had been selected for these more costly measurements. It is anticipated that this database embeds a wealth of information that could, in principle, be extracted to optimize the overall reference-material fabrication and calibration processing in terms of generating narrower CDs having still lower uncertainties.
One disadvantage of using SEM for pre-inspection was its well-known tendency to deposit hydrocarbon contamination on the features, which challenged the calibration procedures [5,6]. This contamination increases the apparent linewidth measured by the AFM. In addition, some forms of residual contamination, including moisture, may adversely affect the imaging stability of the CD-AFM tip, resulting in so-called "tip skipping" during the scan. Generally, these issues were resolved by use of a cleaning process prior to AFM imaging. This process, which was developed for this project, was not fully optimized but was observed to be quite effective. Basically, for each batch of chips, the cleaning process involved ultrasonic cleaning of two sets of quartz-ware in high-purity, laboratory-grade isopropyl alcohol (IPA). All quartz-ware was subsequently baked in a vacuum oven at 200 °C for 1 h. The SCCDRM chips were then flat-rinsed in running DI (deionized) water, blow-dried in clean nitrogen, and immersed in IPA contained in one of the pre-cleaned sets of quartz-ware. They were then removed from the IPA, again blow-dried, and transferred into the other set of quartz-ware, which was then returned to the vacuum oven for several hours. Note that the quartz-ware and chips should be thoroughly air-dried before placing them in the oven, as IPA/air mixtures can be explosive.
This procedure was generally successful in preventing "skipping" of the CD-AFM boot-tip probe, which was essential for AFM imaging. However, it appears that further development of this cleaning process would be advantageous for totally removing the SEM-induced hydrocarbon contamination and/or any other residues that are sometimes left on the reference features' surfaces after patterning. Plasma cleaning is one possible approach that has been reported [6]. The AFM instrument used in this work is the Veeco Dimension X3D Model 340 (X3D). This tool is installed in the Advanced Technology Development Facility at SEMATECH, and it has been implemented as a Reference Measurement System (RMS) [7].
The unique aspects of CD-AFM operation are that force sensing occurs along two axes (one vertical and one lateral) and that flared or "boot-shaped" tips are used. This allows imaging of near-vertical sidewalls, which is not possible with the conical probes used in a conventional atomic force microscope. Both conventional and CD atomic force microscopes are sensitive primarily to the topography of the surface and exhibit very little dependence on material composition. As such, CD-AFM is an ideal choice as the transfer metrology for the subject reference-material features.
Extraction of Apparent AFM Values From Measurement Data
The markers on the HRTEM targets, which have been shown in Fig. 4, Fig. 5, and later in Fig. 10, were used for navigation. In cases where the reference line of Fig. 10 was not exactly perpendicular to the reference features, each AFM image was referenced to the left of the two markers. The length of each as-captured AFM image spanned all six features of the target, and its width extended for a total of 2 µm, centered approximately on the intersection with the reference line drawn from the left marker pointer. The spacing between adjacent AFM line scans in the images was 25 nm. A typical AFM measurement profile for one feature consisted of approximately 78 line scans. An example of a typical set of line scans extracted from an AFM image is shown in Fig. 6.
The linewidth analysis was performed with the Veeco Nanoscope v6.22r1 software currently supplied with the Dimension X3D. Each of the features in the images was individually windowed and analyzed sequentially. For purposes of this calibration, the width was calculated at the half-height of each feature at a series of locations (i.e., scan line numbers) along the features. An example of these measurements for features F1, F2, and F3 from HRTEM chip K147-D1 is shown in Fig. 7. In this illustration, the x-axis values are centered on the reference line that has been shown in Fig. 5 and extend 0.25 µm in each direction.
Measurements similar to those shown in Fig. 7 were recorded for all six features of one or more designated targets on each of a set of 23 chips. Four of these were selected for HRTEM, and the remainder were set aside as distribution chips. The measurements were then further processed for calibration-curve construction in the case of the HRTEM chips, or for determination of the CD values in the case of the distribution chips. As part of the analysis, the "raw" AFM measurements were smoothed with an equally weighted 7-point moving average to reduce the effects of measurement noise. The results of applying the moving average model to the raw measurements that are shown in Fig. 7 are shown in Fig. 8. The results illustrated in Fig. 7 and Fig. 8 graphically indicate typical levels of CD uniformity of different features in the same target, and the impact of smoothing by taking 7-point moving averages.
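The smoothing step is a plain equally weighted moving average. A minimal NumPy sketch of the 7-point filter applied to one feature's sequence of per-scan-line widths (the array contents here are synthetic placeholders) is:

```python
import numpy as np

def smooth_cd(raw_cd, window=7):
    """Equally weighted moving average of the per-line-scan CD values.
    'valid' mode avoids edge effects, so the smoothed trace is shorter
    than the raw one by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(raw_cd, kernel, mode="valid")

raw_cd = np.random.normal(100.0, 1.0, size=78)   # ~78 line scans per feature (illustrative)
smoothed = smooth_cd(raw_cd)
```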
The raw AFM measurements were processed differently for the HRTEM chips and the distribution chips. In the case of the HRTEM chips, the apparent AFM CD was computed from AFM data centered on the 0.5 µm feature-segment length where the HRTEM measurements were taken; thus, the apparent AFM CD used for each calibration-curve point included data from 21 adjacent line scans. In the case of the distribution chips, the AFM data were centered at the intersection of the reference line, illustrated in Fig. 5, with the respective designated feature; the apparent AFM value specified for each feature that was calibrated for distribution used data from only five adjacent line scans. Further details of the analyses of apparent AFM values from the respective sets of measurement data are provided in Secs. 8.1 and 8.3.
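In other words, the apparent AFM CD is simply the mean of the smoothed widths over a window of scan lines centered on the location of interest: 21 lines (about 0.5 µm at a 25 nm scan pitch) for the HRTEM chips and 5 lines for the distribution chips. A small sketch, assuming the smoothed widths and the index of the center scan line are already available:

```python
import numpy as np

def apparent_afm_cd(smoothed_cd, center_index, n_lines):
    """Average the smoothed CD values over n_lines adjacent line scans
    centered on center_index (21 lines for HRTEM chips, 5 for
    distribution chips)."""
    half = n_lines // 2
    window = smoothed_cd[center_index - half : center_index + half + 1]
    return float(np.mean(window))
```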
Instrument Calibration
The AFM measurements on all the SCCDRM chips, for both HRTEM and for distribution, were performed using the same procedure [8]. Since the X3D has been implemented as an RMS, its performance and uncertainties have been well characterized. It was critical that the instrument scale calibration and offset (i.e., bias of the apparent width relative to the SI meter) be the same for all of the measurements, because this is an assumption of the analysis discussed in Sec. 8.
The same traceable scale calibration, which has a standard uncertainty of ±0.1 %, was used for all of the SCCDRM measurements. While the absolute standard uncertainty of the routine AFM tip-width calibration available at the time of the measurements was ±5 nm, it was possible to measure relative widths much more accurately. For features with vertical sidewalls and good uniformity, it is possible to measure relative widths with an expanded uncertainty of approximately ±1 nm. Since tip wear during measurements directly increases the relative uncertainty, measurements were performed on a "monitor" specimen before and after every measurement on an SCCDRM chip. The same monitor specimen was used for both the HRTEM chips and the distribution chips. In this manner, it was possible to ensure that all the apparent AFM widths, although obtained using different tips at different times, were measured using the same relative calibration of tip width. In other words, all of the measured tip widths, and thus the apparent feature widths, shared a common bias relative to the SI meter to within an expanded uncertainty of ±1 nm.
HRTEM Imaging
HRTEM images of thin cross-section membranes generate phase-contrast fringes that correspond to the (111) lattice planes constituting the linewidths of the designated features on SCCDRM chips. An example of an image of (111) fringes is shown in Fig. 9. The lattice-plane pitch is traceable to the SI meter [9]. The lattice-plane counts revealed by the fringes thus enable tracing the linewidths of the designated features on the HRTEM chips to the SI meter [10].
Platinum Ribbon Deposition
The designated target of the HRTEM chip is prepared for HRTEM imaging with a process that has been optimized to ensure that the surfaces of the reference features are not damaged [11]. The first step is deposition by sputtering or evaporation of a gold-palladium coating to protect the surfaces of the reference feature during the process steps that follow. After coating, the specimen is placed in a focused ion beam/scanning electron microscope (FIB/SEM) tool to mark the location to be cross sectioned with an electron-beam-assisted platinum deposition. The resulting platinum ribbon mark is approximately 0.5 µm by 8 µm, as illustrated in Fig. 10. The platinum ribbon also serves to protect the reference features during the next step, which is deposition of a protective platinum box, 8 µm by 20 µm.
At this point, the specimen is removed from the FIB/SEM and tripod polished to a thickness of 30 µm. The 30 µm thick membrane is then silver mounted on a half grid and returned to the FIB/SEM and thinned. At the beginning of this process, a 30 kV gallium beam is used for rapid thinning; the final thinning uses a 10 kV beam to prevent damage to the reference feature. This thinning process targets the center of the 0.5 µm region defined by the electron-beam-assisted platinum deposition and continues until the reference feature becomes electron transparent, at which point it has a thickness typically between 25 nm and 30 nm.
Extraction of the HRTEM CDs of Designated Features on the HRTEM Chips
In previous work, we reported a task to develop an automated procedure for determining the fringe counts [12]. However, to further reduce the calibrated-AFM-CD uncertainties of the distributed chips' designated reference features, we have now implemented an expanded manual counting procedure. Specifically, each of four operators independently counted the fringes at three heights in each reference feature. Each operator averaged his or her three linewidth measurements for each feature. In a few cases in which larger-than-expected disagreements between operators were observed, the operators were directed to repeat their fringe counts. No interaction took place between the operators during this entire process. If asked to recount the fringes on a particular HRTEM image, an operator was not informed whether his or her prior average measurement was higher or lower than the corresponding ones made by the other operators. In each case in which a recount was requested, a counting error was found and, generally, after each requested recount the agreement between operators was within 1 nm. Since the HRTEM measurement must account for the native silicon dioxide on the sidewalls of the feature, beyond the easily countable fringes produced by the silicon lattice, the operators were asked to use adjacent fringes as a ruler to measure the thickness of the native oxide. Fig. 10. The HRTEM target is placed in a focused ion beam/scanning electron microscope tool to mark the location to be cross-sectioned with an electron-beam-assisted platinum deposition. The length of the resulting platinum ribbon mark is approximately 8 µm.
Calibration
After the HRTEM and the AFM images of the twelve features on the two chips that were selected for HRTEM had been captured, their HRTEM CDs were extracted according to the description in Sec. 7.2 above. The method by which the corresponding AFM CDs were obtained is now described in Sec. 8.1 below. The two sets of measurements are reconciled with the generation of the calibration curves to be described in Sec. 8.2 below.
Extracting Apparent AFM CD Values and Uncertainties for the Calibration Curve
The extraction of AFM CD values from the respective sets of line scans was performed as described in Sec. 6.2 above. Four chips were originally measured by HRTEM, but analysis of the calibration data from the four chips indicated that two of the chips may have been affected by sidewall-surface contamination. These two were subsequently not used in the final calibration analysis. The possible presence of permanent surface contamination on the distribution chips is not a major concern because the X3D AFM measures the total CD of each feature, which includes contributions from both the crystalline silicon core, and the native oxide films on each of the two sidewalls, as well as any residual contamination. The following sections describe in more detail the procedure that was unique to analysis of the AFM measurements that were made exclusively on the HRTEM chips.
Apparent AFM CD Values for HRTEM Chips
In order to determine, for each designated feature on the HRTEM chips, an appropriate AFM CD to associate with its HRTEM CD, the location of the imaged cross-section membrane relative to the markers on the respective targets was established from inspection of the top-down SEM images. A montage of two examples is shown in Fig. 10. Since the position of each AFM line scan with respect to the reference line between the markers is known, the range of possible locations from which the corresponding HRTEM image was extracted was available. Therefore, for each designated feature on the HRTEM chips, a set of CDs was obtained from the 21 adjacent AFM line scans, smoothed with a 7-point moving average as discussed in Sec. 6.2, of a 0.5 µm feature segment matching the observed location of the centerline of the platinum ribbon. However, the HRTEM vendor asserted that the HRTEM images were much more likely to represent locations nearer to the centerline of the platinum ribbon. Therefore, during averaging, a weighting function that weighted each smoothed line-scan value by the inverse of its distance from the centerline was employed to estimate the appropriate AFM CD to associate with the corresponding HRTEM CD.
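A sketch of that weighting scheme is shown below, assuming the smoothed per-scan CDs and each scan line's distance from the platinum-ribbon centerline are known; the small epsilon, my own addition, keeps the weight finite for a scan coinciding with the centerline:

```python
import numpy as np

def weighted_afm_cd(smoothed_cd, distance_from_centerline, eps=1e-3):
    """Weighted mean of the smoothed line-scan CDs, with each value
    weighted by the inverse of its distance (in micrometres) from the
    centerline of the platinum ribbon marking the HRTEM cross-section."""
    w = 1.0 / (np.abs(distance_from_centerline) + eps)
    return float(np.sum(w * smoothed_cd) / np.sum(w))
```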
AFM-Value Uncertainties for HRTEM Chips
The uncertainties attributed to each AFM CD obtained as described above arose from the random variation observed in the AFM measurements, the reproducibility of the relative tip-width calibration, and the combined effect of feature non-uniformity, as observed with the AFM, and possible errors in navigating the atomic force microscope to the membrane location where the HRTEM measurement was made. The uncertainties due to the relative tip-width calibration and the CD non-uniformity/navigation were treated as Type B uncertainties according to ISO-published methods [13,14]. Since it is believed to be more likely that the HRTEM measurements were made nearer to the center of the platinum ribbon than to its edges, and that any navigational errors were small, a triangular probability distribution was used to convert an upper bound on the range of possible CDs in the AFM window to a standard uncertainty by dividing the range by 2√6. The standard uncertainty for the AFM CD was then computed by combining the uncertainties of the weighted mean of the AFM measurements, the relative tip-width calibration, and the CD non-uniformity/navigation using propagation of uncertainty; in this case, that amounts to combining the uncertainties by root-sum-of-squares. The combined standard uncertainty of the AFM CD is assumed to have infinite degrees of freedom for several reasons [13,14].
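A compact sketch of this uncertainty budget, assuming the individual contributions are already expressed as standard uncertainties and the non-uniformity/navigation term enters as a range (the numbers in the example are illustrative only):

```python
import math

def afm_cd_standard_uncertainty(u_repeat, u_tip_cal, nonuniformity_range):
    """Combine the Type A repeatability, the relative tip-width calibration
    reproducibility, and the CD non-uniformity/navigation term by
    root-sum-of-squares. The non-uniformity term enters as an upper bound
    on a range, converted to a standard uncertainty with a triangular
    probability distribution (divide by 2*sqrt(6))."""
    u_nonuniformity = nonuniformity_range / (2.0 * math.sqrt(6.0))
    return math.sqrt(u_repeat**2 + u_tip_cal**2 + u_nonuniformity**2)

# Illustrative values only (nanometres)
print(afm_cd_standard_uncertainty(u_repeat=0.4, u_tip_cal=0.5, nonuniformity_range=1.5))
```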
Calibration-Curve Construction and Statistics
The apparent AFM CD values for the HRTEM chips, obtained as described in Sec. 8.1.1, and their corresponding HRTEM CD values, obtained as described in Sec. 7.2, are shown in Fig. 11. Because of variations in the uniformity of the AFM CDs of the features, the initial straight-line calibration model, with intercept β0 and slope β1, was fitted by weighted least-squares regression with weights inversely proportional to the variances of the respective AFM CD values. The latter were obtained according to the descriptions in Sec. 8.1.2. These weights, although estimated individually, should be reasonably stable since each weight is based primarily on an estimate from a bootstrap re-sampling procedure with approximately 78 data points per feature [15,16]. The calibration curve from the fit of the regression model is also shown in Fig. 11, and the numeric output (the value, standard uncertainty, and degrees of freedom for each coefficient) is given in Table 3. The fact that the slope of the linear calibration, 0.996 ± 0.012, does not differ significantly from unity indicates that the independent traceable scale calibration of the AFM agrees with the HRTEM results, as expected prior to the analysis of the measurements. Because of this, and to take into account the uncertainty in the HRTEM CD measurements directly, a slope of unity was assumed and a simpler "offset-only" model was used to estimate the difference between the apparent AFM CDs and the HRTEM CDs. Physically, this offset corresponds to the bias in the CD-AFM tip-width calibration that was used when the data were acquired.
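A minimal weighted-least-squares fit of such a straight-line model can be reproduced as follows. The data arrays and uncertainty values below are placeholders, and which quantity is treated as the response follows the paper's own analysis rather than this sketch; note that NumPy's polyfit weights are the reciprocals of the standard uncertainties, which corresponds to inverse-variance weighting in the least-squares sum:

```python
import numpy as np

# Placeholder calibration data: apparent AFM CDs (x), HRTEM CDs (y),
# and the standard uncertainty of each AFM CD (u), all in nanometres.
afm_cd   = np.array([45.0, 62.0, 78.0, 95.0, 121.0, 148.0])
hrtem_cd = np.array([44.1, 61.0, 76.9, 94.0, 120.1, 147.0])
u_afm    = np.array([0.5, 0.4, 0.6, 0.5, 0.7, 0.6])

# polyfit minimises sum(w**2 * residual**2), so w = 1/u gives
# inverse-variance weighting of the points.
beta1, beta0 = np.polyfit(afm_cd, hrtem_cd, deg=1, w=1.0 / u_afm)
print(f"slope beta1 = {beta1:.3f}, intercept beta0 = {beta0:.3f} nm")
```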
To estimate the offset of the atomic force microscope, a weighted average of the difference between each apparent AFM CD and the corresponding HRTEM CD was used. The weights were inversely proportional to the square of the combined standard uncertainty for each difference. Using individually estimated weights is usually problematic because one of the key assumptions underlying the use of weighted averages is that the weights are known without error and there are usually not enough data to justify the estimation of an individual weight for each data point. In this case, however, since the minimum number of effective degrees of freedom over all sources of uncertainty for each difference is 66 degrees of freedom (from the pooled estimate of the HRTEM CD uncertainty) the assumption that the weights are known without error is quite reasonable.
Use of the offset-only model and the assumption of individually known weights provided an estimated AFM offset of 1.03 nm, with a combined standard uncertainty of ±0.29 nm. The standard uncertainty of the estimated AFM offset includes the uncertainty of each apparent AFM CD and the standard uncertainty of the associated HRTEM CD. A plot that compares the individually estimated AFM offsets and the weighted-mean offset is shown in Fig. 12; the uncertainties shown in the figure are expanded uncertainties (k = 2). (The estimate of the expanded uncertainty of the slope quoted above, ±0.012, was computed by multiplying the standard uncertainty of the slope (0.0053) by a coverage factor of k = 2.228 obtained from the Student's t-distribution with 10 degrees of freedom. This may be an under-estimate because the uncertainty in the HRTEM CDs may not be fully accounted for in the parameter estimates of the linear regression. Since the 95 % confidence interval for the slope includes the value 1 with this smaller estimate of the uncertainty, however, it would also include 1 when using an unbiased uncertainty estimate.)
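The weighted-mean offset itself is straightforward to reproduce numerically. A minimal sketch, assuming the per-feature differences and their combined standard uncertainties are available as arrays (the standard uncertainty of an inverse-variance weighted mean is 1/√Σw):

```python
import numpy as np

def afm_offset(afm_cd, hrtem_cd, u_diff):
    """Inverse-variance weighted mean of (apparent AFM CD - HRTEM CD)
    and its standard uncertainty, assuming the per-feature combined
    standard uncertainties u_diff are known."""
    d = np.asarray(afm_cd) - np.asarray(hrtem_cd)
    w = 1.0 / np.asarray(u_diff) ** 2
    offset = np.sum(w * d) / np.sum(w)
    u_offset = 1.0 / np.sqrt(np.sum(w))
    return offset, u_offset
```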
Distribution-Chip CD and Uncertainty Distributions
For each designated feature of the distribution chips, the calibrated AFM CD value is determined by subtracting the AFM offset from the apparent AFM CD of that feature. The uncertainty of each calibrated CD is estimated using propagation of uncertainty, which in this case reduces to summing the uncertainties by root-sum-of-squares. The combined uncertainties of the calibrated AFM CDs include the respective uncertainties in the apparent AFM CD, as discussed in Sec. 8.1.2, and the uncertainty of the estimated AFM offset, as described in Sec. 8.2. Figure 13 and Fig. 14 provide an overview of the calibrated AFM CDs of the distribution chips and their expanded uncertainties; the breadth of the distributions results from the fact that the plots depict results extracted from six features from each of multiple targets. The content of the data attachment that would have accompanied the delivery of distribution chip K153-HH is shown as an example in Table 4; the data attachment itself is reproduced in Appendix A. Table 5 shows an example of the uncertainty budget for feature K153-HH-T1-7p3-F1. In addition, the photo-lithography that generates the pocket's features can readily produce any desired pattern of reference marks with sub-micrometer placement accuracy that is unobtainable by any other known means [17]. Figure 16 shows a cross section through the pocket. The (100) starting wafers were first oxidized to provide an in-situ hard-mask material for TMAH lattice-plane-selective etching. Photolithography on one side of the wafer was conducted to replicate the SCCDRM-chip pocket at its desired location. The next step was selective removal of the hard-mask oxide with a 17 % buffered oxide etch solution. The pockets were then generated with extended lattice-plane-selective TMAH etching.
Carrier-Wafer Implementation
Co-planarity of the upper surfaces of the reference-artifact test chip and the carrier wafer was achieved by careful application of an optical flat after adhesive was applied between the lower surface of the SCCDRM chip and the floor of the recessed pocket.
Recommendations for Use of the SCCDRM
To use one of the designated SCCDRM features, the user must determine the CD at the center of the feature as indicated by the built-in reference markers shown in Fig. 4. This could be done by scanning along the line for some distance (say 0.25 µm) around the center of the feature and averaging the results, or by fitting an appropriate model to the linewidth results by scan line and predicting the CD at the location aligned with the marker. The measurements of the SCCDRM features should be made near the middle of the line height, where the AFM calibration data were taken. After the user has determined the CD of the reference feature, the offset for the measurements of new specimens can be obtained by comparing the result obtained from measuring the reference feature with its calibrated value. When a new specimen is measured, that offset can be used to correct the new measurement result to the traceable value. Note that the method used to measure new specimens should be as similar as possible to the method used to measure the reference feature on the SCCDRM; any differences in the measurement procedures may preclude establishing traceability. The overall uncertainty of the measurement of a new specimen depends on uncertainties from several sources, including the user's measurement of the RM 8111 reference feature and the uncertainty in its calibrated value. All such sources must be accounted for in the reported uncertainty for the CD of the new specimen.
If the SCCDRM is being used only to evaluate the AFM tip width calibration offset (or bias), then the AFM scale should be independently calibrated using a traceable pitch standard prior to measuring the SCCDRM. In principle, the SCCDRM could then be used directly for traceable tip width calibration by subtracting the calibrated width value from the raw apparent width (i.e., the width measured using no tip correction). Subsequent width measurements are then traceable. Typically, however, users may find it more practical to use the SCCDRM to determine the bias of their existing tip width calibration specimen. In this case, the user should measure the SCCDRM using his or her current tip width calibration procedure. The difference between this apparent width and the calibrated value gives a measure of the bias in the user's existing tip calibration. This offset should then be used to correct the assumed width value of the user's tip calibration standard to a traceable value. Subsequent tip width calibrations are then traceable. Because there is an additional measurement step, this would generally result in slightly larger uncertainties than direct tip width calibration using the SCCDRM. However, this contribution may not be significant, and the convenience of this approach may outweigh other considerations.
To establish uncertainty of width measurements on their own specimens, users should consider all the relevant sources of uncertainty. These sources should include: (1) the uncertainty in the calibrated value of the reference feature obtained from the data attachment accompanying this report, (2) the statistical (type A) uncertainty in the user's measurement of the reference feature, (3) the statistical uncertainty in the user's measurement of the new specimen, (4) the statistical uncertainty of the user's routine tip calibration (unless the SCCDRM is being used directly for this), and (5) the uncertainty of the scale calibration. In addition, there may be other sources unique to the user's circumstances and application that should be considered.
Evaluation of the AFM tip-calibration offset can be performed using results on only a single SCCDRM feature. However, if more features are measured, the additional information can be used to advantage. The most apparent possibility is to use the results from multiple features to obtain a more accurate estimate of the bias. However, depending upon the accuracy of the user's existing scale calibration, it might also be useful to use results on multiple features as a check on scale calibration and linearity. As a final caveat, please note that the reported values represented the CDs at the time of measurement in April 2004. Although we fully expect the CDs to remain stable over time, measurements of a selection of SCCDRM CDs will continue to be monitored by NIST. If changes are observed, this information will be reported to all known users of these SCCDRMs. The table below shows the measured critical dimension (CD) and expanded uncertainty (k = 2) for each feature listed. The features are designated as F1 thru F6, and this convention is maintained even when not all features have reported CDs. When the chip is oriented with the lettering up, F1 is closest to the bottom of the chip, and F6 is closest to the top. Normally, F1 is the narrowest and F6 is the widest. | 8,943.2 | 2006-05-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Resource-Saving Method of Forming Information Infrastructure of Sorting Stations
The paper proposes a resource-saving method of conceptual design of the information infrastructure of railway objects on the example of a distributed computer control system of a marshalling yard. This method allows determining the economically optimal degree of decentralization of the technical structure of the management system.
Introduction
The current stage of development of industrial and transport systems is characterized by the growth of technologies and automated functions based on the territorial distribution of resources and the wide use of artificial-intelligence methods for solving problems of energy and technical resources, logistics, and environmental safety. The integration of processes and systems leads to a synergistic effect and the formation of large intelligent transport systems [1]. In such systems, an important parameter affecting efficiency is the multi-structure and, in particular, its component the technical structure [2]. For railway objects, the information infrastructure, including servers, computers, controllers, and information channels, is of particular importance. This is due to the economic and environmental problems arising in case of failure of individual elements of the structure; for a hump yard, for example, these could include damage to wagons or to environmentally hazardous goods. These circumstances increase the role of conceptual design of the information infrastructure of railway objects and the use of resource-saving methods of system design in the development process [2].
The paper proposes such a method to determine the rational degree of decomposition of the computer system of the marshalling yard, minimizing production losses due to the unreliability of equipment.
The development of automation systems for marshalling yards in Ukraine and abroad progressed from the first centralized computer systems based on mini-computers (SM-2) to functionally distributed systems based on industrial microcomputers and microcontrollers (micro-DAT, SM-1800/1810, Advantech microcontrollers) [3,4]. With the transition to decentralized (hierarchical) systems offering a variety of distributed structures, the question of justifying the transition to decentralized management remains unresolved. Despite numerous attempts to answer this question [5,6,7,8], it remains relevant in the development of technical means of hump-yard automation. When considering the optimization of the structure of the hierarchical system, we will assume that the hump yard, as the technological object of control of the marshalling yard, has local automation devices. The centralized control system, optimizing the overall work of the hump yard, adjusts the work of the local automation devices, forming set-points for the regulators (for example, retarder positions and switch operating apparatus). The ASC for the Yasinovata station of the Donetsk railway was built on this principle [3].
With the growth in the power of management computers (MCs), there is interest in considering the expediency of transferring the functions of local automation devices to higher-rank microcontrollers that provide local optimization of the subsystems [8].
Earlier, automation systems were limited in their ability to implement complex control algorithms, and MCs did not differ greatly in power; now there are technical possibilities to increase the power of both the microcontrollers and the MC. Therefore, transferring the functions of the local regulators to the MC does not significantly affect its required resources (memory, speed, etc.).
However, such centralization of management significantly increases the responsibility placed on the MC. Obviously, failure of the MC disrupts the management of the whole hump yard, which almost always leads to great economic losses. Therefore, many authors [9,10,11] consider the reliability of the MC to be the determining factor in resolving this issue.
Problem statement and method of its solution
The hump yard consists of n controlled objects M_i, whose status O_i can be characterized by two parameters: y_i, the adjustable parameter, and x_i, the control action (z_j are the controlled and f_g the uncontrolled disturbance parameters; see Fig. 1). The set-point vector specifies the required values of the regulated parameters and in general can differ from their true values, the vector Y. If the system does not have local regulators, then the MC directly forms the vector X and issues it to the executive bodies; in this case we have a centralized system (Fig. 2). When local regulators (LRs) implement the local control functions, the MC's task is simplified to forming only the set-points for the LRs; in this case a hierarchical structure of the control system results (Fig. 3). The implementation of the management algorithm reduces to minimizing (maximizing) a function of the form (1) (for example, minimizing the deviation of the speed at exit from the retarder position from the value set by the control algorithm). Optimization involves searching for the optimal value of the control vector U* such that the management function (1), at the given X, takes its optimal value. The evaluation of the variants of the structures will be carried out using the criterion of the full cost of resources.
To simplify the calculations, we assume that all n control circuits are identical, that the operating costs of the systems are the same, and that the unreliability of the executive bodies can be neglected.
System management is performed in a time-shared mode as follows. In the second situation (failure of the MC), the damage p_1 consists of two parts (Fig. 4). From time t_1 it is proportional to the difference between the optimal and actual values of efficiency; at time t_2 the output parameter leaves its admissible range, local protection disables the object, and the losses p_12 in this interval are determined by the total loss of efficiency. The complete losses for the second situation are the sum of these two parts, where B denotes the efficiency under non-optimal control in the centralized system (manual control of the sorting of cars). In the hierarchical system, when the MC fails, control is not lost completely; only its effectiveness is reduced, taking the value B_2. In this case the loss from losing optimal control is, as a rule, much smaller than the damage incurred when the MC fails in a centralized system. The reduction of damage in a system with a hierarchical structure is achieved at the cost of its higher price due to the presence of local regulators; in addition, the loss-reduction effect is diminished by the additional losses caused by the unreliability of the local regulators. Under these assumptions, the full costs P_1 for the centralized system and P_2 for the hierarchical system can be estimated, where λ_0 is the failure intensity of the MC in the hierarchical and centralized systems, λ_i is the failure intensity of the i-th local regulator, and C_0^cen, C_0^dcen, and c_i are, respectively, the cost of the MC in the centralized system, the cost of the MC in the decentralized (hierarchical) system, and the cost of the microcontroller system of the i-th local regulator.
To simplify further analysis, we assume that the system is homogeneous in its composition, i.e., that all local regulators have the same failure intensity and cost for i = 1, 2, ..., n. For the i-th control circuit, taking into account the possible failure of the regulator, expressions analogous to those for the centralized control system (Fig. 4) can be written. In a first approximation, a coefficient relating the two cost estimates is introduced, and expressions (6) and (7) are combined. We then turn to the integral index of reliability, the readiness (availability) factor, expressed through the mean time to failure T_mtf. Substituting (14) and (15) into (10) and (11), normalizing the full costs, and introducing suitable notation yields the curves shown in Fig. 5 (plotted for assumed parameter values of 0.9 and 1.1). From the graphs it follows that, as the readiness factor increases, the losses in the hierarchical decentralized system decrease in comparison with the centralized one.
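Because the equations in this section did not survive reproduction intact, the comparison can only be illustrated schematically. The sketch below is not the paper's formula; it merely encodes the qualitative reasoning above: in the centralized system an MC failure removes the full efficiency B for the downtime, whereas in the hierarchical system only the difference B − B2 is lost, at the price of additional regulator hardware and small losses from regulator failures. All names and numbers are placeholders:

```python
def centralized_cost(lam_mc, t_down, B, c_mc):
    """Schematic expected loss rate for the centralized system: every MC
    failure removes the full process efficiency B for the downtime t_down.
    c_mc is the (amortized) hardware cost of the management computer."""
    return lam_mc * t_down * B + c_mc

def hierarchical_cost(lam_mc, t_down, B, B2, n, lam_lr, t_down_lr, b_lr, c_mc, c_lr):
    """Schematic expected loss rate for the hierarchical system: an MC
    failure only degrades efficiency from B to B2, while n local
    regulators add their own hardware cost and small failure losses."""
    mc_losses = lam_mc * t_down * (B - B2)
    lr_losses = n * lam_lr * t_down_lr * b_lr
    return mc_losses + lr_losses + c_mc + n * c_lr

# Placeholder numbers, chosen only to illustrate the trade-off
p1 = centralized_cost(lam_mc=0.02, t_down=5.0, B=100.0, c_mc=10.0)
p2 = hierarchical_cost(lam_mc=0.02, t_down=5.0, B=100.0, B2=70.0,
                       n=10, lam_lr=0.005, t_down_lr=2.0, b_lr=10.0,
                       c_mc=12.0, c_lr=1.5)
print(f"centralized P1 = {p1:.1f}, hierarchical P2 = {p2:.1f}")
```

Depending on the assumed failure intensities and hardware costs, either structure can come out cheaper, which is exactly the trade-off the readiness-factor analysis above is meant to resolve.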
Conclusions
It should be borne in mind that the results obtained are valid only for the case of finite and additive losses. For objects in which the complete failure of automatic devices can lead to accidents such as catastrophes, or is associated with the possibility of human casualties, the choice must undoubtedly be resolved in favor of the combined system. | 1,954.4 | 2019-01-01T00:00:00.000 | [
"Computer Science",
"Economics"
] |
Computational wind analysis of an open air-inflated membrane structure
The current research aimed to obtain the mean pressure distribution over an air-inflated membrane structure using Computational Wind Engineering tools. The steady-state analysis applied the Reynolds-Averaged Navier-Stokes equations with the k − ε standard turbulence model. The pressure coefficients were compared with former experimental results to validate the numerical solution. Significant errors were detected close to the critical flow separation points when comparing the numerical results with the wind tunnel tests. However, these errors are local, and the numerical methodology provides accurate results in those areas with minor turbulence motion influence. In general, the numerical solution provided a good approximation of the pressure coefficient fields.
INTRODUCTION
Membrane structures offer lightweight, environmentally friendly, and cost-effective construction solutions to cover large public spaces like concert halls or stadiums [1].Moreover, lightweight houses can be built like the ones described by [2], representing cheap shelters for emergency situations.Membrane structures can be permanent or temporary buildings; some even possess the deployable characteristic that allows the change of their configuration whenever needed e.g., deployable umbrellas [3].
Tensile membrane structures are composed of flexible textile membrane and a tensioning system that might include arches, masts, rings, and cables [4].
Pneumatic membrane structures are supported and stabilized by the enclosed, compressed air; therefore, they are extremely light structures.Pneumatic structures can be divided into subgroups: air-supported and air-inflated [4][5][6].In the case of air-supported structures, the covered air volume is closed and compressed; therefore, special entrances are built to enter or exit the construction.Air-inflated structures are tensioned by the enclosed, pressurized air in their walls; the structure and the covered space can be open, and no special entrances are needed.
The design procedure of membrane structures must consider the strong relationship between geometry and membrane forces.Special numerical methods are applied to determine the equilibrium shape according to the material properties, tensioning system, and external load actions.With their geometry and prestress, the membrane structures must carry downward and upward wind actions or any snow/rain load without wrinkling, ponding, or fluttering problem [1].
The structural design for wind requires knowledge about the pressure distribution on the structure, which can be given by the dimensionless pressure coefficient (C_p). Because of the complex and unusual shapes, those parameters are not given in the Standards for membrane structures. The pressure coefficients can be obtained by experimental studies or numerical simulations. The Wind Tunnel test (WT) is the most reliable technique related to the wind engineering of structures; however, it is time-consuming and costly. Usually, this technique implies measurements on scaled, rigid models wherein material properties and deformations cannot be considered. Wind tunnel investigations of different membrane roofs are presented in [7][8][9][10].
Numerical approaches, also known as Computational Wind Engineering (CWE), apply the governing equations that describe the fluid eddy motions to solve the aerodynamic wind effects around the building [11][12][13].This technique may include steady-state or time-dependent analysis where material properties and deformations can also be analyzed (fluid-structure interaction).It should be mentioned that CWE cannot be considered a completely reliable technique, so its validation with experimental results is strongly recommended.Numerical investigations to analyze the aerodynamic behavior of different tensile membrane structures are described in [13][14][15].The current research validated the CWE analysis of an open air-inflated membrane.
Former wind tunnel tests
The structure selected for the CWE analysis was previously tested in a wind tunnel [16]. The prototype structure consists of six inflated tubes with a diameter of 3 m (Fig. 1); its total length (L) and height (H) are 13 m; meanwhile, the width (W) is 26 m. The experiments were completed in the open-circuit wind tunnel, with a test section of 1 × 1 × 1.5 m, of the Autonomous University of Yucatan, Mexico. The 3D printed model had a scale ratio of 1:72.5. The pressure on the external and internal surface of the model was measured at 102 points for three wind directions: 0°, 45°, and 90°.
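As a small arithmetic sketch, the snippet below converts the prototype dimensions quoted above to the 1:72.5 wind-tunnel model scale; the resulting model sizes are indicative only and are not reported in the excerpt.

```python
# Quick arithmetic check: prototype dimensions (quoted above) scaled down by 1:72.5.
scale = 1 / 72.5
prototype_dims_m = {"height H": 13.0, "length L": 13.0, "width W": 26.0, "tube diameter": 3.0}

for name, value in prototype_dims_m.items():
    print(f"model {name}: {value * scale * 1000:.0f} mm")   # model dimension in millimetres
```

The scaled model therefore stands roughly 0.18 m tall, which is plausible for the 1 × 1 × 1.5 m test section mentioned above.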
Numerical analysis
The 3D steady-state analysis followed the recommendations of [16][17][18]. The numerical simulations were based on the so-called Reynolds-Averaged Navier-Stokes (RANS) equations, which are suitable for analyzing mean pressure distribution, discarding any transient flow effect. The applied turbulence model was k − ε standard [19], which was selected based on a previous grid sensitivity analysis. The preliminary results showed that the k − ε standard turbulence model provides acceptable solutions, which are almost independent of the grid resolution.
Domain size dimensions and mesh resolution
The creation of the flow domain with double inlet/outlet faces followed the suggestions presented in [21][22][23]. The double inlet/outlet allows a more straightforward application in the case of non-orthogonal wind directions. The distance from the structure to the inlet and to the top faces was 5H; to the outlet surfaces it was 15H, based on [17,18].
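As a quick check of the domain-sizing rules quoted above (5H to the inlet and top faces, 15H to the outlet faces), the snippet below derives indicative overall domain extents from the prototype dimensions. How the two inlet/outlet pairs are actually arranged in the model is an assumption here.

```python
# Sketch of the flow-domain extents implied by the 5H / 15H rules quoted in the text.
# Structure dimensions are the prototype values (H = 13 m, W = 26 m, L = 13 m); the
# layout of the double inlet/outlet faces is assumed, not taken from the paper.

H, W, L = 13.0, 26.0, 13.0            # structure height, width, length [m]

upstream   = 5 * H                     # distance from structure to each inlet face
top        = 5 * H                     # distance from structure to the top face
downstream = 15 * H                    # distance from structure to each outlet face

domain_x = upstream + W + downstream   # along the width (relevant for 90-degree flow)
domain_y = upstream + L + downstream   # along the length (relevant for 0-degree flow)
domain_z = H + top

print(f"Indicative domain: {domain_x:.0f} m x {domain_y:.0f} m x {domain_z:.0f} m")
```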
The semi-structured mesh integrated tetrahedral elements close to the structure and hexahedral elements in the rest of the domain (Fig. 2). Around the membrane surface, inflation layers were also considered. The minimum element size was 0.5 m, whereas the maximum was 5.0 m; the total numbers of nodes and elements were 5.0 × 10^6 and 5.5 × 10^6, respectively.
Boundary conditions and convergence criterion
The following boundary conditions were applied: a) Inlet face: the inlet velocity profile followed the power law equation (Eq. 1) to characterize a flat type of terrain [23][24][25]:

U(Z) = U_h (Z / Z_h)^α,     (1)

where U is the wind velocity at a certain point; Z is the point height; Z_h is a reference height; U_h is the wind velocity at the reference height (15 m s^-1); and α is the velocity exponent that depends on the terrain roughness. The near-wall treatment was based on the scalable wall functions approximation; the lower y+ limit for the cells near the walls was 30.
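As a small numerical illustration of the power-law profile in Eq. 1, the sketch below evaluates U(Z) at a few heights. The reference velocity U_h = 15 m/s is the value quoted in the text; the reference height Z_h = 10 m and the exponent α = 0.12 are placeholders, since the excerpt does not give them.

```python
# Power-law inlet velocity profile (Eq. 1): U(Z) = U_h * (Z / Z_h)**alpha.
# U_h = 15 m/s is quoted in the text; z_h and alpha below are assumed placeholder values.
import numpy as np

def inlet_velocity(z, u_h=15.0, z_h=10.0, alpha=0.12):
    """Mean wind speed [m/s] at height z [m] following the power law profile."""
    z = np.asarray(z, dtype=float)
    return u_h * (z / z_h) ** alpha

heights = np.array([1.0, 3.0, 6.5, 13.0, 26.0])   # sample heights in metres
print(np.round(inlet_velocity(heights), 2))
```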
For all wind directions, the membrane surface and the ground surface had the no-slip wall condition, whereas the top domain surface had the symmetry condition. The other above-mentioned boundary conditions depended on the current wind direction. For example, for the 0° wind flow, the inlet boundary was the surface perpendicular to the Y-axis on the windward side of the structure. In contrast, the wall on the positive side of the Y-axis (and perpendicular to it) was the outlet boundary face. Additionally, for this wind direction, both boundary walls perpendicular to the X-axis followed the free-slip boundary condition.
Similarly, for the 90° wind direction, the inlet and outlet boundaries were the domain walls perpendicular to the X-axis. Finally, for oblique wind directions, the X and Y components of the wind velocity vectors were set at the walls on the negative side of the X and Y axes; then, the outlet boundaries were the walls on the positive side. Figure 3 depicts the boundary conditions for orthogonal and oblique wind directions.
The solution method involved the SIMPLEC scheme for the pressure-velocity coupling and second-order spatial discretization. The convergence criterion was a limit of 1 × 10^-5 for all residuals, but the analyses were stopped if they reached 5,000 iterations.
RESULTS AND DISCUSSION
This section introduces the results based on CWE simulations and their validation with experimental results. The pressure distributions on the surfaces are described by dimensionless C_p values:

C_p = (p − p_0) / (0.5 ρ U²),

where p is the pressure at a specific point on the membrane surface; p_0 is the upstream pressure; ρ is the air density; and U is the free-stream velocity. The notations C_pe and C_pi mean the pressure coefficients on the external and internal surfaces of the structure, respectively (Fig. 1). The validation included error measurement methods that compared the WT-based and the CWE-based C_p values. The accuracy assessment included the Mean Absolute Error (MAE) and the Mean Square Error (MSE) calculation (Table 1), where n is the total number of points to compare. Figures 4c and 5c depict C_p in point set B for the 90° wind direction. (The dashed lines represent the C_p of the deformed structure shape; this analysis is introduced in the following section.) On the external surface, CWE gave highly accurate results at the windward side of the model; however, there are significant differences from the WT results at the top of the structure. On the internal surface, there is relatively small wind suction, and the numerical solution underestimated the C_pi values.
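The quantities defined above can be evaluated directly; the sketch below computes C_p from a pressure difference and the standard MAE and MSE measures between WT-based and CWE-based coefficient sets. The air density, free-stream velocity and the sample coefficient arrays are illustrative assumptions, not data from the study (which used n = 102 comparison points).

```python
# Sketch of the quantities described above: Cp = (p - p0) / (0.5 * rho * U**2) and the
# MAE / MSE error measures between wind-tunnel (WT) and CWE pressure coefficients.
# rho, U and the sample arrays below are invented for illustration only.
import numpy as np

def pressure_coefficient(p, p0, rho=1.225, u=15.0):
    return (p - p0) / (0.5 * rho * u ** 2)

def mae(cp_wt, cp_cwe):
    return np.mean(np.abs(cp_wt - cp_cwe))

def mse(cp_wt, cp_cwe):
    return np.mean((cp_wt - cp_cwe) ** 2)

print(round(pressure_coefficient(p=120.0, p0=0.0), 3))   # Cp for a 120 Pa overpressure

cp_wt  = np.array([0.78, -0.45, -1.20, -0.60])   # illustrative WT values
cp_cwe = np.array([0.75, -0.50, -1.05, -0.72])   # illustrative CWE values
print(f"MAE = {mae(cp_wt, cp_cwe):.3f}, MSE = {mse(cp_wt, cp_cwe):.3f}")
```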
Figure 6 and Table 2 give a more general view of the pressure distribution.Figure 6 includes the pressure coefficient fields based on CWE results for all analyzed wind directions.Table 2 summarizes the maximum and minimum C p values for every analyzed wind direction.The following conclusions are drawn.
On the external membrane surface, the most significant suction, C_pe = −1.752, was found at the 30° wind direction, whereas the maximum positive value, C_pe = 0.785, was found at the 90° wind direction. The critical wind direction for the inner membrane surface was 60°, which presented the most significant suction. In the case of the 45° wind direction, the largest suction determined by CWE (C_pi = −1.861) was 56% larger than the C_pi at the same point according to the WT (C_pi = −1.196), which represented the most significant error in the negative pressure coefficients. The most significant error in the positive pressure coefficients was much smaller, approximately 16%.
DEFORMED SHAPE WIND ANALYSIS
The large displacements of membrane structures and their effect on the pressure distribution cannot be considered during the conventional WT tests.Previous research proved that the impact of the displacements could be significant on the pressure distribution and on the according membrane forces as well [26].
In the current research, following the method presented in [26], the deformed shape of the structure under the 90° wind direction was determined by the Dynamic Relaxation Method (DRM). During the DRM analysis of the membrane structure, the considered dynamic pressure was 1.52 kN/m², as recommended by the Mexican Standard [24]. The internal pressure in the inflated arches was p = 25 mbar. The warp direction in the membrane was assumed to be parallel with the centerline of the inflated arches, and the fill direction was perpendicular to the warp direction. A linear elastic, orthotropic material model was taken into account, with the same modulus of elasticity in the warp and fill directions (E_w = E_f = 400 kN/m, G = 10 kN/m). The maximum displacement was approximately 2.7 m, detected at the windward side of the structure. The deformed shape was 3D printed for a second WT test (Fig. 7).
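The Dynamic Relaxation Method mentioned above finds a static equilibrium shape by time-stepping a fictitious damped dynamic system until it settles. The toy sketch below illustrates only that idea on a two-dimensional chain of spring elements under a uniform transverse load; the geometry, stiffness, load, damping and tolerance are all invented for illustration and are not the authors' membrane model or the values quoted above.

```python
# Toy dynamic-relaxation sketch: a damped fictitious dynamic system is time-stepped
# until it settles into static equilibrium. This is a 2D cable of springs under a
# uniform transverse load with made-up parameters, NOT the membrane model of the paper.
import numpy as np

n = 11                                                       # number of nodes
x = np.column_stack([np.linspace(0.0, 10.0, n), np.zeros(n)])  # initial straight geometry
v = np.zeros_like(x)                                         # fictitious velocities
k, l0 = 5.0e3, 10.0 / (n - 1)                                # spring stiffness [N/m], rest length [m]
load = np.zeros_like(x)
load[:, 1] = -50.0                                           # transverse nodal load [N]
mass, damping, dt = 1.0, 10.0, 1.0e-3                        # fictitious mass, damping, time step

for step in range(50000):
    f = load.copy()
    for i in range(n - 1):                                   # internal spring forces
        d = x[i + 1] - x[i]
        length = np.linalg.norm(d)
        t = k * (length - l0) * d / length
        f[i] += t
        f[i + 1] -= t
    f[0] = 0.0                                               # both ends fixed
    f[-1] = 0.0
    v = (v + dt * f / mass) * np.exp(-damping * dt)          # damped explicit update
    x += dt * v
    if np.abs(f[1:-1]).max() < 0.5:                          # loose residual tolerance [N]
        break

print(f"stopped after {step + 1} steps; mid-span sag = {-x[n // 2, 1]:.2f} m")
```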
The pressure distribution over the deformed shape was determined by WT test and CWE analysis following the methodology introduced at the analysis of the undeformed structure.MAE and MSE factors show that the CWE analysis provides a good approximation to the experimental results (Table 1).
Table 3 compares the maximum and minimum C_p values, and Fig. 8 shows the pressure coefficient fields based on the WT test and the CWE approach. The results show that there is a significant difference at the top of the structure between the numerical and the experimental results, but with the exception of that area, the CWE results give a good approximation. There is very good agreement on the windward side of the model, and the results show positive pressure on a significantly larger area on the external surface compared with the undeformed surface (Fig. 5).
CONCLUSION
This paper presented the pressure coefficient maps for an open air-inflated membrane structure based on 3D steady-state numerical simulation. The RANS equations with the k − ε standard turbulence model were used to describe the turbulent flows. The pressure coefficients on the external and internal surfaces were determined for five wind directions. The CWE results were compared with experimental results, and the error measurement, based on the MAE and MSE factors, showed that there is good general agreement. However, significant local discrepancies were found in some areas highly influenced by large eddy motions (close to flow separation regions). The analysis of the deformed shape according to one of the analyzed wind directions proved that the effect of the displacements can have a significant impact on the pressure coefficient maps.
Fig. 1 .
Fig. 1.Dimensions of the prototype structure and the wind directions considered during the CWE analysis
(n = 102, according to the number of measurement points during the WT experiments); C_p,WT and C_p,CWE are the pressure coefficients based on the WT tests and the CWE simulation, respectively. Two sets of measurement points were selected for further comparison. Point set A represents six points on the top and on the bottom of the arches, in the symmetry plane that contains the central axis of the structure. Point set B corresponds to the points on the external and internal surfaces of the two central arches. Figure 4 depicts the pressure coefficients in these sets of points on the external surface, and Fig. 5 on the internal one. Figures 4a and 5a show the C_p in point set A for the 0° wind direction. The most significant differences between the CWE and WT results were detected at the first arch, close to the flow separation area. CWE provided the best results at the last two arches. Figures 4b and 5b represent the C_p values in point set A for the 45° wind direction. At most of the points on the internal surface, the numerical solution is accurate. Meanwhile, the most significant difference on the external surface
Fig. 3 .
Fig. 3. Domain dimensions and boundary conditions for all wind directions
Fig. 7 .
Fig. 7. Wind tunnel model of the deformed shape
Fig. 8 .
Fig. 8. WT-based (left) and CWE-based (right) pressure coefficients on the external (top) and internal (bottom) surfaces of the deformed shape
Table 1 .
Accuracy factors for different wind directions
Table 2 .
Maximum and minimum C p values on the membrane surface based on WT tests and CWE approximations
Table 3 .
Maximum and minimum pressure coefficients on the deformed structure | 3,157 | 2023-07-04T00:00:00.000 | [
"Engineering"
] |
Mobile Professional Voluntarism and International Development ‘Aid’
Chapter 1 sets the research on which this book is based in context. It discusses the relationship that Aid has with concepts of equality and poverty, and distinguishes humanitarian (emergency) relief contexts from those focused on capacity building. It also questions the efficacy of Aid and raises the possibility that Aid itself may have damaging consequences. Moving from Aid to the wider concept of ‘global health’, the chapter discusses the role that forms of highly skilled migration, such as professional voluntarism, can play in capacity building. Finally, it discusses the methodological approach taken in this action-research study.
INTRODUCTION
This book reports on our experiences of managing and researching the deployment of professionals employed in the UK, primarily but not exclusively in the National Health Service (NHS), to public health facilities in Uganda. 1 The authors have been involved in interventions focused on improving maternal and newborn health in Uganda for the past 7 years through the work of a British charity known as the Liverpool-Mulago-Partnership (LMP 2 ). LMP is one of many health partnerships active in Uganda and linked, in recent years, under the umbrella of the Ugandan Maternal and Newborn Hub. 3 In 2012, LMP received funding from the Tropical Health and Education Trust for the 'Sustainable Volunteering Project' (SVP). The 'SVP' was funded in the first instance for 3 years, during which time it deployed around 50 long-term and many more short-term volunteers. The whole project has been subject to intense ongoing evaluation focused both on volunteer learning and the returns to the NHS 4 and on the impact on hosting Ugandan healthcare facilities and health workers. This book focuses on the second dimension capturing the impacts of these kinds of intervention on the receiving/hosting country or the 'development' perspective.
Whilst the work is deeply and necessarily contextualised, the results create important opportunities for knowledge transfer and lesson learning in other fields of health and social policy and in other low-and mediumincome countries.
DEVELOPMENT, AID AND INEQUALITY
Countries, such as Uganda, are often described either as 'developing' (in contrast to the 'developed') or, more recently, as 'lower- and middle-income countries' (LMICs) in contrast to high-income or resource-rich economies. This characterisation suggests binaries: the 'haves' and 'have nots' or at least a continuum from high to low resource. Of course, you may have high-income economies (such as the USA or the emerging economies of India and China) with very high and increasing levels of absolute poverty and inequality. Even 'low-income' economies such as Uganda provide a comfortable and lucrative home to many very rich and highly cosmopolitan people with access to high-quality private health facilities both at home and across the world.
The complexity and relative character of inequality and its spatial dynamics are somewhat lost in this characterisation of 'international development'. The project we are reporting on is focused on the public health system in Uganda and, more specifically, on the delivery of services to those Ugandan people whose only claim to health care is on the basis of their citizenship. Or, put differently, those citizens who lack the income to access a wide range of other options. In Uganda (as in India or in the USA), health status is related directly to ability to pay; the more money you have the higher your opportunities. Only those with no other options will turn to the residualised 'safety net', that is, the public health system. Perhaps the only factor distinguishing a country like Uganda from other countries is the fact that this is the case for a majority of its population and countries such as the USA (or China and India) have the resources, if not the political will, to significantly reduce health inequalities.
According to published data, Uganda has one of the highest levels of maternal mortality in the world. The Ugandan Ministry of Health's Strategic Plan suggests that little, if any, progress has been made in terms of improvements in Maternal Health (Millennium Development Goal 5) and, more specifically, in reducing maternal mortality (MOH 2010: 43). A United Nations report on the MDGs describes Uganda's progress as 'stagnant' (UNDP 2013: iii). Figures on maternal mortality in Uganda vary considerably depending on the source. The World Health Organisation reports maternal mortality ratios (MMRs) of 550 per 100,000 live births (WHO 2010). 5 However, the benchmarking exercise undertaken as part of the Sustainable Volunteering Project (McKay and Ackers 2013: 23) indicated wide variation between facilities in MMRs reported to the Ministry of Health. Perhaps of greater significance, it reiterated the very poor quality of reporting and records management resulting in significant underreporting. The figures for Hoima Regional Referral Hospital likely reflect improvements in records management following the intervention of a UK Health Partnership (the Hoima-Basingstoke Health Partnership) rather than a greater incidence of mortality. Indeed, more detailed audit of case files by an SVP volunteer indicated levels in Mbale regional referral hospital of over 1000 (more than double reported levels) ( Fig. 1.1). 6 These figures are shocking indeed. However, it is important not to gain the impression that all women in Uganda face an equal prospect of dying in childbirth. Data collected from the private ward in Mulago National Referral Hospital paint quite a different picture with only one maternal death recorded between January 2011 and October 2012 compared to 183 deaths on the public ward. Interestingly, the caesarean section rate on the private ward is more than double that on the main public ward (51.6 % compared to 25.4 %) (Ackers 2013: 23). Inter-sectoral inequalities within the country are as alarming as intercountry comparisons. And, in case of Mulago Hospital, the health care staff treating patients on the private ward are exactly the same as those on the public ward. 7 The simple but important point we are trying to make at the outset is that the context within which the Sustainable Volunteering Project is deploying volunteers is best described as one of profound social inequality rather than poverty per se. And, the 'low-resource setting' we refer to in this book is the public healthcare system in Uganda and not Uganda or Ugandan health care, as a whole.
One of the problems with the popular use of the word 'poverty', or even more so, 'the poor', is that they infer the kind of passivity displayed in media fund-raising campaigns with images of human 'victims' needing 'help' splashed across posters and television screens. And, the corollary of this is, of course, the 'helpers' or good-doers who dip into their pockets. This 'donor-recipient' model of development AID continues to taint international relationships. It is convenient and valuable to distinguish at this juncture two forms of intervention or perhaps, to avoid caricature, two contexts. Bolton suggests that, 'broadly speaking, AID can have two aims. It either provides humanitarian relief in response to emergencies, or it tries to stimulate longer-term development' (2007: 75). Humanitarian or emergency AID then seeks to provide an immediate response to catastrophic events such as famine, earthquakes or wars. In such situations, immediate service intervention is easier to justify and concerns around unintended consequences or collateral damage less pressing. This type of activity could, in theory and out of necessity, be achieved by foreign volunteers in the absence of local staff. The deployment of 'Mercy Ships,' for example, is designed to 'fill the gaps in health care systems' through service delivery. 8 And emergency AID may be provided in any context without in-depth analysis of a country's economic status or political decision making.
Bolton calculates that around 95 % of AID falls into the alternative category of 'development aid'a form of investment which is both 'much better value' (in terms of promoting resilience) and 'harder to get right' (2007: 76). This AID comes from a diversity of sources including, as Bolton indicates, charitable donations and philanthropy (of which a sizeable component are linked to religious organisations pursuing their own agendas); National AID provided by governments and International AID provided by organisations such as the World Bank and the United Nations. The boundaries between these forms of AID are fuzzy and the political imperatives (underlying national and international AID and its links with diplomacy and trade) combined with the marketing functions of charitable fund-raisers together result in an opaqueness and lack of honesty about impacts. Bolton argues that the pressure to raise funds results in a tendency to simplify and exaggerate the effectiveness of AID and concludes that, 'the outcome is probably the most unaccountable multibillion dollar industry in the world' (p. 79).
To put AID in perspective, the Ugandan Ministry of Health published figures indicating an annual spend of 1281.14 billion shillings (about £156.5 million). Of this, 68 % (£150 million) is provided by the Ugandan government and 32 % (£106.5 million) by 'donors'. The growth in donor share is quite alarming, almost doubling from 13. 9 Health Partnerships are largely funded as local charities, and whilst the amount of money involved may be quite high, this is dwarfed by the real costs of in-kind contributions through volunteer labour.
Moyo's book, with its stark and 'incendiary' 10 title, Dead Aid (subtitled: Why AID is not working and how there is another way for Africa), had a major impact on the design of the SVP. Moyo argues that the culture of AID derives from 'the liberal sensibility that the rich should help the poor and that the form of this help should be Aid' (p. xix). With reference to its impact on 'systemic poverty' (as opposed to humanitarian crises), Moyo concludes that AID has been and continues to be 'an unmitigated political, economic and humanitarian disaster for most parts of the developing world' (2009: xix). She goes beyond many other writers who express similar concern at the efficacy of AID to contend that AID is not only ineffectual but, of far greater concern, it also generates externality effects that actually cause damage. AID is 'consumed' rather than invested: Were AID simply innocuous, just not doing what it claimed it would do, this book would not have been written. The problem is that AID is not benign, it's malignant. No longer part of the potential solution, it's part of the problem; in fact, AID is the problem. (p. 47) AID has been described as an 'industry' by actors in high-income (donor) settings; it is also seen very much as an industry in low-resource settings. Indeed, poverty is a magnet for AID and the more overtly poor and destitute the case, the greater the prospect of attracting investment.
Sadly, in the Ugandan context, this creates a vested interest for local leaders in the deliberate preservation and presentation of impoverishment and chaos in order to suck in cash and create opportunities for embezzlement. In that sense, poverty is both functional and profitable.
FROM 'AID' TO GLOBAL HEALTH
These kinds of anxieties, about the effectiveness of AID, fuelled by political correctness about the use of the term 'development', have led to new concepts to capture the investment dimension and focus on longer-term systemic change. The Tropical Health and Education Trust is one of a growing number of intermediaries funded by the UK's DFID and focusing on 'capacity building' and 'sustainability'. Locating itself within the 'global health' agenda, THET describes its mission as building long-term resilient health systems to promote improved access to essential health care as a basic human right (THET 2015). At the centre of this strategy is the concept of 'human resources for health' or 'HRH'.
The global health agenda has usefully shifted attention from the haves-have nots and donor-recipient binaries referred to before, talking instead, somewhat hopefully, of partnerships and 'win-win' relationships. Lord (Nigel) Crisp has pushed this agenda forward arguing quite forcefully that the UK's National Health Service has as much to learn from low-resource settings as vice versa. Focusing again on health systems (rather than poor people per se), Crisp suggests that the concept of global health 'embraces everything that we share in health terms globally' (2010: 9). Crisp's approach rest on two ideas. First, that health systems in high-resource settings are facing (growing) challenges in terms of resources and sustainability and, second, that globalisation is itself creating complex mobilities (both human and microbial) and interdependencies that effectively challenge the autonomy and resilience of nation states: we are all increasingly connected, whether we like it or not. The growing mobility of health workers or the spread of Ebola are prime examples. It is interesting also to see how Crisp and THET have started to slip the word 'innovation' alongside development, although they shy away from the language of competition in this fluffy consensual world.
In the context of global health, at least the growing emphasis on human resources has usefully shifted the debate from one about providing 'top-down' cash injections in the form of national or international financial support to (corrupt) governments to supporting forms of knowledge exchange through grounded partnerships.
THET describes its focus on reducing health inequalities in low-and middle-income countries with a particular emphasis on improving access to essential health care (as a basic human right). Achieving this requires significant improvement in health systems and this in turn places the emphasis on human resources: The lack of human resources for health is a critical constraint to sustainable development in many lower-and middle-income countries. (THET 2015: 10) This leads naturally on to what they describe as 'a unique partnership approach that harnesses the skills, knowledge and technical expertise of health professionals to meet the training and education needs identified in low-resource settings'. And 'international volunteering' is one of the key mechanisms it supports to achieve this skills harnessing process. 11 The Health Partnership Scheme (HPS) managed by THET was launched in 2011 to 'build the capacity of healthcare workers and the faculty needed to train them with a focus on 'lasting improvements to healthcare [ . . . ] and service innovation' (THET 2015: 10). The scheme is funded by DFID at a cost of £30 million over 6 years. It is under this scheme and specifically the 'Long-Term Volunteering Programme', that the SVP was funded. THET guidelines set out the following objectives: HPS Volunteering Grants aim to leverage the knowledge and expertise of UK health professionals by funding efficient, high-quality long-term volunteering programmes linked to development projects. HPS Volunteering Grants should [ . . . ] strengthen health systems through building the capacity of human resources for health (THET 2011).
In direct response to these objectives, the SVP set out the following objectives: • To support evidence-based, holistic and sustainable systems change through improved knowledge transfer, translation and impact. • To promote a more effective, sustainable and mutually beneficial approach to international professional volunteering (as the key vector of change).
These goals were then formulated as an action-research question framing the wider intervention: To what extent, and under what circumstances, can mobile professional voluntarism promote the kinds of knowledge exchange and translation capable of improving the effectiveness of public health systems in LMICs? (THET 2011) With these thoughts in mind we designed the SVP evaluation around three potential 'scenarios': Scenario 1: Partial Improvement (Positive Change) Under this scenario, evidence will indicate that the professional volunteering interventions we are engaging in are at least partially effective in promoting systems change. It is important that even this 'partial effect' relates to incremental long-term progress and is not short-lived. Moyo suggests that project evaluations often identify the 'erroneous' impression of AID's success in the shorter term, whilst 'failing to assess long-term sustainability' (2009: 45). Policy Implications: Any positive collateral benefits to individual service recipients (patients), UK volunteers/health systems are to be identified and encouraged.
Scenario 2: Neutral Impact (No Change)
Under this scenario, evidence will indicate that the professional volunteering interventions we are engaging are generally neutral in terms of systems impact. They neither facilitate nor undermine systems change.
Policy Implications: Positive outcomes for individual service recipients (patients), volunteers (and the UK), free of unintended consequences, may be identified and supported.
Scenario 3: Negative Impact (Collateral Damage)
Under this scenario, evidence will indicate that the professional volunteering interventions we are engaging are generally counter-productive /damaging in terms of promoting long-term (sustainable) improvements in public health systems.
Policy Implications: Any positive gains to individuals (including Ugandan patients) or systems in the UK are tainted with unintended consequences and, on that basis, are unethical and should not be supported.
Bolton is similarly critical of volunteers '[ . . . ] who believe all they need to do is turn up and make a difference' (p. 89). Of course, there is some truth behind his concerns about 'foreigners coming from outside' to intervene in people's lives (p. 90). However, his response to his own question, 'is charity capable of providing the help that Africa needs to pull itself out of poverty? Unequivocally, the answer is no' (p. 92) indicates a failure to understand the skills base of many volunteers and the role that volunteers play within organisations (such as health partnerships). Furthermore, it fails to acknowledge the very real monetary value and costs associated with voluntarism and the role that nation states are playing in funding these processes (through intermediaries such as THET). The costs of the HPS scheme (30 million pounds) are dwarfed by the costs to individuals and NHS employers providing cover for released staff. The concept of 'volunteer' tends to detract from the significant economic costs of this form of 'AID'.
These concerns and experiences imposed a huge sense of personal responsibility on us as project managers deploying volunteers. Whilst we could understand the concerns about large volumes of taxpayers' cash being tipped into foreign governments and relate to Moyo's conclusion that this constituted 'Dead Aid', we were less sure about the effects of voluntarism as AID. The immediate association of voluntarism with altruism, religiosity and 'giving' and the less obvious (but perhaps no less real) relationships with diplomacy and trade lead us to question whether volunteering ultimately had the same effects; hence, the subtitle for our book: 'Killing me Softly?' This is represented in Scenario 3.
PROFESSIONAL VOLUNTARISM AS HIGHLY SKILLED MIGRATION
We opened this chapter with a discussion about development and AID. Not because this is where we located ourselves as 'experts', but because it is the dominant discourse within which our work is generally situated-and has been funded. Neither of us (as authors) came to this work with backgrounds in international development or global health. Ackers' background as a geographer and social scientist is in highly skilled migration and the role that internationalisation plays in shaping the mobilities of scientists as individuals and scientific capacity. It is interesting to note that the emphasis in this field is more often on the role that the mobility of the highly skilled plays in promoting scientific competitiveness and innovation. The role of human mobility is increasingly recognised as critical to the formulation of the kinds of knowledge relationships that lie at the heart of economic growth. It is important to point out that the processes of international mobility here are by no means unilateral, as is often inferred, echoing the haves (cosmopolitan northern professionals with extensive mobility capital) and have-nots (internationally isolated and parochial) binary. Our research experience suggests that Ugandan health workers and, especially, but not exclusively, doctors have access to very wide and varied international experience. Indeed, it is possible that the (funded) opportunities available to them exceed those open to their peers in the UK. 12 The Ugandan health workforce is surprisingly cosmopolitan and internationally connected especially but not only at senior levels.
Viewed through these disciplinary lenses, both the SVP volunteers and the many Ugandans who have spent time in the UK are first and foremost highly skilled migrants or, if the language of migration is offputting for some, 13 people exercising forms of professional mobility. The label 'volunteer' (defined simply by the absence of a formal employment relationship or remuneration) does little to capture the motivations of the diverse groups of people involved and has an unfortunate tendency to characterise them, within the donor-recipient model, as 'helpers' (Bolton's Sunday Drivers) or, worse still, in an environment still dominated by religious values, as 'missionaries'. As noted earlier, our research has embraced the motivations, experiences and learning outcomes of volunteers. The findings of this are reported elsewhere (Chatwin et al. 2016). The point here is to consider what added value such volunteers bring to the host society and its public health system. Ackers-Johnson's background, on the other hand, is in financial and human resource management. The emergence of the HRH agenda in global health immediately demands an understanding of complex human resource dynamics in terms of both ensuring a supply of appropriate 'volunteers' and creating the structures and relationships that support optimal knowledge capture. The Human Resource Management perspective encourages us to view both volunteers and the Ugandan health workers they are engaging with from the perspective of employment quality and career decision making and accentuates commonality rather than difference in human ambitions and the barriers to knowledge mobilisation. It will become clear as the story unfolds that maternal mortality in Uganda is as much about human resource management as it is about clinical skills.
For the purpose of the SVP and this book, we have coined the term 'professional volunteer' to overcome some of our concerns about the (value-laden) concept of volunteer and emphasise the fact that first and foremost the people we are referring to are highly skilled (mobile) professionals. Characterising them as professionals who are engaging in Uganda with fellow professionals (many of whom are also involved in various forms of international mobility) helps us to situate the project within the frame of both international knowledge mobilisation and human resource management. This is the frame within which we have previously engaged in international teams as research collaborators and not as donors. The word 'professional' also hints at motivational dynamics and the fact that, for the majority of 'volunteers', motivation is a complex concept combining altruistic, touristic and career progression components (amongst others).
RESEARCHING COMPLEX INTERVENTIONS: THE SVP AS ACTION-RESEARCH
Whilst the Crisp report 14 outlines the important potential strategic role of 'Global Health Partnerships' in the 'massive scaling-up of training, education and employment of health workers in developing countries' (Crisp 2007: 2), it also reflects on the very disappointing historical picture with 'any number of well-intentioned initiatives foundering after a few years ' (2007: 5). This, argues Crisp, 'leads to a counsel of despair that, despite all the effort over the years, nothing has really changed' (p. 5). He concludes that there has been, 'very little systematic application of knowledge and learning from successfuland failedprojects' (p. 9) and calls for more international studies that, 'show what impact they can make and how they should best be used' (p. 14). The Academy of Medical Royal Colleges' Statement on Volunteering (2013) similarly expresses concern at the quality of evidence on the impacts of volunteering: Monitoring and evaluation of volunteering activities does exist but is at present limited. The same is true of research on long-term impacts. There is a pressing need to develop consistent approaches to robust monitoring and evaluation. (p. 2) And Bolton takes the argument further suggesting that AID organisations (or funding bodies) have a vested interest in showing that AID works: Most of the information we get about aid is from charities [who] need to convince us that aid is effective so they can get their hands on our money . . . most charities simplify and exaggerate how much effect aid can have. (2007: 78) This echoes common criticisms of evaluation processes conducted in-house and within projects and, as such, tilted in favour of proving that interventions are both necessary and effective. And, most project evaluations in health partnership work are conducted by people who have little if any research experience in the evaluation of complex social processes. The SVP is perhaps somewhat distinct in this respect to the extent that the co-coordinator is an experienced researcher occupying an established academic post, embedded within an active research team, and as such not personally reliant upon the demonstration of project 'success'. Whilst this distance supported a degree of independence and objectivity, the fact that the authors were simultaneously designing, implementing and evaluating the intervention distinguishes the project from classical research. We were not seeking to 'measure' controlled static phenomenon, and reduce 'contamination' to a minimum (MRC 2008), but rather to institute change processes and capture their impacts longitudinally.
The emphasis on change processes in a program such as the SVP, coupled with the paucity of reliable secondary data, demanded an innovative and iterative multi-method approach. Building on many years' experience of research on highly skilled mobilities and knowledge transfer processes, the evaluation strategy embraced a range of methods complementing and balancing each other through the process of triangulation (Iyer et al. 2013). As researchers, we were acutely aware at the outset of the limitations of facility-generated secondary data. Accurate, reliable data on maternal and newborn health simply do not exist in Uganda. We therefore conducted a major benchmarking exercise across the ten HUB facilities (including health centres and hospitals). This was an interactive process in itself and was as much about improving data collection and record keeping as it was about data capture; indeed, the process included training of record keeping staff. These data should be regarded with caution (as noted earlier). 15 As Gilson et al. note (2011), even in this 'hard data' context there is no single reality, no simple set of undisturbed facts and the data that we do see are essentially socially constructs.
The project has also used simple before-and-after testing schemes using Likert scales to assess learning and skills enhancement during formal training programmes. Capturing the impacts of volunteer engagement on health workers, and more specifically on behaviour and systems change, is far more complex. We have utilised a range of measures including qualitative interviewing of volunteers, a structured monthly reporting schedule for all volunteers and bi-annual workshops. Wherever possible, volunteers have been interviewed at least three times (depending on their length of stay, with interviews prior to, during and post-return). We have over 150 16 verbatim transcripts drawn from all 10 HUB locations. Most of these have been conducted face to face in Uganda or the UK with some taking place via Skype. Where appropriate, email has also been used to discuss issues.
The research has also involved interviews and focus groups with Ugandan health workers, line managers and policy makers (about 50 to date). 17 The authors have also spent many months in Ugandan health facilities and working with Uganda health workers in the UK. The project coordinator and manager each makes regular visits (around four per year) ranging from 2 weeks to 5 months in duration and we have deployed two social scientists as long-term volunteers embedded within the SVP. This intense ethnographic work is recorded in project notes and diaries and is perhaps the most insightful of all of our methods. The qualitative material has been coded into a software package for qualitative analysis (NVIVO10) and subjected to inductive thematic analysis. 18 In addition to this, volunteers have been encouraged, where appropriate, to develop specific audits to support contextualisation and highly focused interventions. This has included audits on, for example, maternal deaths, triage and early warning scoring systems, antibiotic use and C-section rates. These audits are small scale and necessarily inherit the same problems with the accuracy of data and of medical records as the wider study.
We have described the study as an example of action-research. It is necessarily iterative and as such we did not set out to achieve a specific sample size or end point but continue to spend time in Uganda interviewing and observing work in public health facilities and facilitating active workshops to encourage discussion around key issues. 19 Indeed, it is through this iterative process that we have come to identify the challenges that we believe are central to understanding both resistance to change in Ugandan health systems and the efficacy of professional voluntarism.
McCormack concludes his chapter on action-research with the reflection that 'context is a constant tussle between conflicting priorities where everyday practice is challenging, often stressful, sometimes chaotic and largely unpredictable' (2015: 310). Understanding context is a labour-intensive longitudinal process that unfolds to inform and respond to interventions over time. For the action-researcher, there is no convenient chronological start and end point. Somekh (2006) echoes this sentiment suggesting that action-research is cyclical and evolves until the point at which, 'a decision is taken to intervene in this process in order to publish its outcomes to date ' (2006: 7). And, 'it is unlikely to stop when the research is written up'. Both these sentiments capture perfectly our interventions in Uganda. The publication of this book marks a stage in a journey and what we have learnt up to this point.
The remainder of the book guides the reader through our own learning processes as 'action-researchers' reflexively managing and evaluating the Sustainable Volunteering Project.
THE STRUCTURE OF THE BOOK
Chapter 2 discusses the first part of our journey in operationalising the SVP. This contextual learning predated the SVP and framed our initial application for funding. Our experience of deploying long-term volunteers in the context of the Liverpool-Mulago-Partnership made us acutely aware of the damaging effects of labour substitution. Years of missionary or 'helper' style volunteering have shaped a culture within which the dominant expectation in Uganda was that volunteers were there to gap-fill and substitute for local staff, enabling them to take time off work. And many volunteers, influenced by similar discourses, are often quite (naively) happy to respond to these expectations. Clinical volunteers have a much more powerful ethical commitment to the prioritisation of immediate patient needs over systems' needs. The chapter title 'First do no Harm' is taken directly from the Hippocratic Oath, the ethical statement governing the conduct of the medical profession and prioritising patient needs. 20 Chapter 2 reflects on the balancing and persuasive process that this has involved and how the SVP has developed and operationalised the 'co-presence' principle to guard against the systems-damaging effects of labour substitution.
Chapter 3 takes a chronological step forward to the point at which the SVP was actively deploying professional volunteers into roles focused on training and capacity building based on the co-presence principle. Our experience of the project's progress began to raise concerns that the emphasis on and conceptualisation of 'training' imposed by most organisations funding volunteering (and embedded in indicators of success) fostered a kind of fetishism with training. Our research suggested that, in practice, and in the context of Ugandan human resource management systems, training was failing in many respects to translate into active learning and was, in itself, generating worrying externality effects. Rather than generating empowerment and improving health worker behaviour, it was tending to compound the kinds of dependencies and corruptions identified by Moyo (2009). Chapter 3 draws on research evidence to expose the unintended consequences of interventions focused on forms of continuing professional development (CPD) 'training'. It describes the SVP approach favouring on-the-job co-working and mentoring over formal off-site courses. This approach increases opportunities for genuine learning and confidence in deploying new knowledge. More importantly, this reduces the collateral damage caused by traditional CPD interventions. Notwithstanding these 'successes', our research suggests that the effects of even these interventions can be short-lived. It was at this stage in the project journey that we realised that co-presence, whilst essential, was not sufficient to guarantee knowledge translation and sustained impact. Ugandan public health systems are highly and actively resistant to change.
Chapter 4 marks the shift in conceptualisation emerging from both our own evaluation and learning but coinciding with wider policy agenda. The missing piece of the jigsaw it seems was the failure to understand both conceptually and in terms of operational dynamics, the step from training through learning to individual behaviour change. We have learnt that knowledge mobilisation does not automatically derive from learning; knowledge in itself is not empowering and may, indeed, be disempowering. Knowledge mobilisation is highly contextualised and needs to be understood within the frame of wider human resource management systems. In marked contrast to the approaches favoured in health sciences focusing on 'systematic reviews' of published research on similar (identical) interventions, we undertook a much broader horizon-scanning research review process. Our aim here was to identify any knowledge or ideas that could facilitate our understanding of the intervention-failure or systems stasis we were witnessing. Chapter. 4 reviews some of the work we identified and its impact on our learning and volunteer deployment model.
Chapter 5 applies the newly combined knowledge discussed in Chapter. 4 to two illustrative case studies. As we have noted, interventions in a project like the SVP take place and are modified over time. In many ways they represent a simple 'trial and error' approach underpinned by intensive grounded research to facilitate our understanding of change processes or change resistance. Tracking the identification of a 'need' and our experience of designing and monitoring the evaluation of that process, in the light of the new knowledge gained through ongoing research review improves our understanding of the complexity of social processes. Chapter. 5 redefines the objectives of our action-research project from the starting point where we believed we were setting out to capture the ingredients of positive change to one of pro-actively understanding and learning from failure. It attempts, in the context of this potentially debilitating reality, to take stock and identify the characteristics of least harm interventions to chart the next stage of our journey. NOTES 1. Annex 1 provides information on volunteer deployment in the SVP. In practice, SVP volunteers are drawn from a broad family of disciplines/cadres including clinicians, engineers and social scientists. 2. www.liverpoolmulagopartnership.org.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/ licenses/by/4.0/), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, a link is provided to the Creative Commons license, and any changes made are indicated.
The images or other third party material in this book are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material. | 8,033.2 | 2017-01-01T00:00:00.000 | [
"Economics"
] |
INFLUENCE OF SUBGRADE SOIL ON PAVEMENT PERFORMANCE: A CASE STUDY OF AGO-IWOYE - ILISHAN ROAD, SOUTHWESTERN NIGERIA
Ago-Iwoye-Ilishan road is the major road that links Abeokuta, the state capital of Ogun State, to the Ijebu towns. The road has always been experiencing pavement failure, which occurs in the form of cracks and potholes. Being the major road, the effect of the failure has a negative impact on the socio-economic growth of the Ijebu areas. The primary objective of the study was to determine the influence of the geotechnical properties of the sub-grade materials on the pavement performance of Ago-Iwoye-Ilishan Road. Eleven (11) soil samples were collected at eight (8) different locations with the aid of a hand auger and were air-dried before being taken to the laboratory for determination of engineering properties. The liquid limit and the plastic limit ranged from 13.9-46.2% and 8.1-32.7%, with the plasticity index from 10.6-15.9% and the shrinkage limit from 6.2-27.7%, respectively. The soaked CBR values of the subgrade materials are between 67% and 75%, compared with the 30% minimum specified by FMWH (1997). The soils were classified by AASHTO under the A-6 and A-7 categories, which shows that the soils are fair to poor as sub-grade materials, and the USCS classification shows that the soils fall into the SM and SC groups. The comparison of all the results with the Nigerian specification (Federal Ministry of Works and Housing general guidelines) for the sub-grade materials along the Ago-Iwoye-Ilishan road shows that the materials underlying the pavement do satisfy the Nigerian standard. Therefore, the perennial failure frequently experienced along the road route is not significantly influenced by the subgrade materials.
INTRODUCTION
A pavement section may be generally defined as the structural material placed above a subgrade layer (Woods and Adcox, 2006). The characteristics of the soil bed on which the entire pavement system rests represent the subgrade soils (Mcghee, 2010). Pavement failures are common features of Nigerian roads. Despite several rehabilitation attempts, the reason for their occurrence seems not to be well understood. The sub-grade provides a foundation for supporting the pavement structure. As a result, the required pavement thickness and the performance obtained from the pavement during its design life will depend largely upon the strength and uniformity of the sub-grade. Hence, a thorough investigation of the sub-grade should be made so that the design and construction will ensure uniformity of support for the pavement structure and realization of the maximum strength potential from the particular sub-grade soils. Ago-Iwoye-Ilishan road is the major road that links Abeokuta, the state capital of Ogun State, to the Ijebu towns. The road has always been experiencing pavement failure, which occurs in the form of cracks and potholes (see Plates 1 & 2). Being the major road, the effect of the failure has a negative impact on the socio-economic growth of the Ijebu areas. The primary objective of the study was to determine the influence of the geotechnical properties of the sub-grade materials on the pavement performance of Ago-Iwoye-Ilishan Road.
LOCATION OF THE STUDY AREA
The study area, the Ago-Iwoye-Ilishan road, is situated in the south-western part of Nigeria within latitudes 06°53'00" - 06°57'00" and longitudes 3°44'00" - 3°56'00" (Fig. 1.0). The road traverses the Irolu, Ijesha-Ijebu, Oladele and Ajegunle settlements, with other small villages. Akanni (1991) stated that the Ago-Iwoye-Ilishan road can be specifically placed in the humid tropical region. The rainy season ranges from mid-March to early November with double maxima of rainfall, whose peaks occur in June and September, while the dry season lasts from November to early March, with the months of December and January relatively dry. The mean monthly rainfall varies from less than 50 mm in January to over 200 mm in June and July. A relatively lower amount of about 140 mm in August is due to the little dry season, or August break, which is a normal feature of the climatic conditions throughout Nigeria. The vegetation of the area is characterized by the rain forest type, influenced by human activities such as construction of roads and houses, farming, etc. River Ome is the main river that forms the drainage. The river flows in a N-S direction, parallel or sub-parallel to the strike of the rock, with a dendritic drainage pattern.
GEOLOGY OF THE STUDY AREA
Half of the study area falls within the Basement Complex of southwestern Nigeria and it is predominantly underlain by gneisses of various grades and suites. These rocks are porphyroblastic gneisses, biotite gneisses, granite gneisses and banded gneisses. There is an occurrence of a massive intrusive body of quartz schist at the southern part of the area. The other part of the study area falls within the sedimentary terrain, which is the Ise Formation of the Abeokuta Group, as shown in Figure 2.
MATERIALS AND METHODS
Eleven (11) samples were collected at eight (8) different locations with the aid of a hand auger and digging tools for trial pits. The soil samples were air-dried before being subjected to laboratory tests such as the grain size analysis, moisture content, consistency limits, compaction test and the California Bearing Ratio (CBR). The results of the grain size analysis were presented as grain size distribution curves, that is, plots of the percentage of soil passing against the grain size on a semi-log graph. Typical grain size distribution curves are shown in Fig. 3 (a & b). The soils were classified by using the American Association of State Highway and Transportation Officials (AASHTO) and USCS methods. The AASHTO classification identified two major soil types along the road route, which are the A-6 and A-7 categories, rating the soils as fair to poor subgrade materials. The USCS classification method identifies silty sand (SM) and clayey sand (SC) soil types along the road route. The laboratory tests were conducted in accordance with the procedures specified by the American Society for Testing and Materials (ASTM 1289, 1979) and the British Standards Institution (BSI 1377, 1990).
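For illustration of the AASHTO group names used above, the sketch below implements only the silt-clay branch of the classification (groups A-4 to A-7) from the liquid limit (LL) and plasticity index (PI); the granular groups, the group index and the full laboratory procedure are omitted, and the sample values are hypothetical rather than results from this study.

```python
# Simplified sketch of the silt-clay part of the AASHTO soil classification.
# Only LL, PI and the fines content are used; the full procedure (group index,
# granular groups A-1 to A-3) is deliberately left out.

def aashto_fine_group(ll, pi, fines_percent):
    if fines_percent <= 35:
        return "granular soil (A-1 to A-3): outside this simplified sketch"
    if pi <= 10:
        return "A-4" if ll <= 40 else "A-5"
    if ll <= 40:
        return "A-6"
    return "A-7-5" if pi <= ll - 30 else "A-7-6"

# Hypothetical values within the ranges reported for the subgrade soils:
for ll, pi in [(35.0, 12.0), (46.2, 15.9)]:
    print(f"LL = {ll}, PI = {pi} -> {aashto_fine_group(ll, pi, fines_percent=40.0)}")
```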
RESULTS AND DISCUSSION
The results of the laboratory analysis of the soil samples are presented as distribution curves and a summary of the geotechnical properties of the subgrade soils. Typical distribution curves are shown in Figure 3 (a and b).
Figure 3a shows the grain size distribution curve for Sample 1, where the percentage of soil passing is plotted against the grain size (mm). It shows that the percentage of coarse soil is 86%, while that of fine soil is 14%. For Figure 3b, the percentage of coarse soil is 98% and that of fine soil is 2%. Table 1 shows the summary of the geotechnical properties of the subgrade soils. The values represent the average of three replicates of each sample tested. The geotechnical properties of the subgrade soils along the highway route revealed that the liquid limit values range from 13.9% to 40.0% and the plasticity index ranges from 10.6% to 15.9%, against the FMWH (1997) specified values of 40% maximum and 10% minimum for the liquid limit and plasticity index, respectively.
The soaked CBR values of the subgrade materials are between 67% and 75%, compared with the 30% minimum specified by FMWH (1997). The laboratory maximum dry densities are between 1.47 Mg/m3 and 1.68 Mg/m3, while the optimum moisture content ranges from 18.5% to 26.7%. These values are within the acceptable values of FMWH (1997) for good to fair subgrade materials. On average, 75% of the samples satisfy the requirement that the LL and PI of the subgrade should not be more than 35% and 12%, respectively (FMWH, 1997).
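A simple compliance check against the limits quoted above (soaked CBR of at least 30%, LL of at most 35%, PI of at most 12%) can be sketched as follows; the sample values are hypothetical and the thresholds are only those cited in the text, not an independent statement of the FMWH (1997) specification.

sample = {"soaked_CBR_percent": 67.0, "LL_percent": 40.0, "PI_percent": 15.9}  # hypothetical sample
limits = {"soaked_CBR_percent": (30.0, "min"), "LL_percent": (35.0, "max"), "PI_percent": (12.0, "max")}

for prop, (limit, kind) in limits.items():
    value = sample[prop]
    ok = value >= limit if kind == "min" else value <= limit
    print(f"{prop}: {value} ({'meets' if ok else 'fails'} the quoted {kind} of {limit})")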
CONCLUSION
The results of the subgrade soil investigation along the Ago-Iwoye-Ilishan road revealed that the road pavement structures are underlain by A-6 and A-7 categories of soils, which rate the soils as fair to poor subgrade materials. On average, 75% of the samples collected satisfy the requirement that the LL and PI of the subgrade should not be more than 35% and 12%, respectively. The soaked CBR values of the subgrade materials are between 67% and 75%. The CBR values are relatively high, which is an indication that the subgrade soils are good to fair for pavement structures. A comparison of all the results with the Nigerian specification (Federal Ministry of Works and Housing general guidelines) for subgrade materials along the Ago-Iwoye-Ilishan road shows that the materials underlying the pavement satisfy the Nigerian standard. Therefore, the perennial failure frequently experienced along the road route is not significantly influenced by the subgrade materials. Hence, the influence of other factors, such as poor drainage courses, the level of the groundwater table, variation of geologic materials along the road route and poor construction materials, should be thoroughly addressed before embarking on future rehabilitation of the highway.
Plate 1; Figure 1
Figure 1.0: Location map of the study area
Figure 3a. Grain size distribution curve for Sample 1
Figure 3b. Grain size distribution curve for Sample 4 | 1,954.4 | 2015-05-06T00:00:00.000 | [
"Engineering"
] |
The Unpredictable Critical Threshold in COVID-19 Pandemic and Climate Change
In the real world we are confronted with situations where tiny variations in initial conditions can have a major influence on unfolding events within natural systems. We call this "sensitive dependence on initial conditions". When predictions are virtually impossible, we have to be capable of detecting in advance the patterns and qualitative features of the behaviour of natural systems. But the moment of truth, unpredictable, can appear in the form of a drastic change, when a critical threshold (tipping point) is reached. It is by no means clear that carbon dioxide accumulation and the greenhouse effect will follow, as of now, a gradually increasing path. More probably, we will face, at some not distant point in the near future, a moment when a critical threshold is reached and then a dramatic and more dangerous change happens. Another example clearly indicates the same tipping unpredictability: a major Antarctic glacier is at risk of disintegrating irreversibly if it passes a key tipping point. The COVID-19 pandemic is the most recent case in point. Within this framework of ideas and concepts, a different kind of question is needed: does humanity have property rights? In the meantime, the subject of the present day is global coordination and, even more, cultural evolution. Worried about the effects of climate change, we need to remember that every single action within a global system depends for its success on cooperative behaviour.
Introduction
The present paper is focused on the unpredictability of natural phenomena and the occurrence of critical thresholds (tipping points) in the dynamics of natural systems, i.e. dynamic (nonlinear) systems which are sensitive to changes in initial conditions. As a result of such sensitivity, they develop chaotic behaviour.
Unpredictability, namely the lack of absolutely complete information, present in the very essence of nature, assumes limitations, as well, manifested in humans' projects and actions, but not in their thinking.Limits induce uncertainty.Solving uncertainty, a mandatory step in decision-making, needs a vast and complex image of thinking.
However, science is not about certainty.All we have are only provisional pieces of truth.Consequently, one should get accustomed to uncertainty and unpredictability.
As a matter of fact, science advances, always under uncertainty, towards more profound knowledge, yet without fully eliminating uncertainty.Briefly, science appears as the final sum of a large number, of a multitude of concepts and fundamental laws compatible with unpredictability.
Once confronted with uncertainty, thinking does not fly freely anymore, but it makes efforts to escape "astonishment" and to develop a strategy.This is the expression of lucidity and of the need of certainty.
More specifically, the accidental side of things appears to be unpredictability itself.Further on, uncertainty appears from unpredictability, once the latter occurs in the heart of nature, while uncertainty exists through the feeling of people, also perceived at society or community level.
Our destiny is manifesting in the struggle with hostile uncertainty, a struggle in which the solution does exist; however, this lies by no means in ignorance."I will not permit hazard to judge me", stated Seneca; "luck involves no moral judgement".The master of the unknown is the poet of uncertainty.
Sensitive dependence on initial conditions serves not to destroy but to create. Chaos, for instance, does not belong to a particular line of any scientific discipline; chaos seems to be everywhere. Deterministic systems can generate randomness. There is a limit on how much initial information can be gathered. Intriguingly, the chaotic behaviour of simple dynamic systems acts as a creative process.
A natural system displays an average behaviour for a long period of time and then, for no apparent reason it shifts into a very different behaviour.It is a new average but it could be chaotic.
A well-defined scientific discipline aims to the resolution of well-defined problems.The nonlinear nature of natural systems (or economic, biological, chemical or even social) makes the task of coping with unpredictable events much more complex.Assembling a lot of information is the response to the sensitivity to initial conditions and it has to be an interdisciplinary effort.
The purpose of my work in the last two years (I wrote a book "Unpredictability&Decision", published this year, 2021, in Romanian) is to show that the impact of unpredictability on decisional thinking could be, or even should be, analyzed from a variety of scientific perspectives: physics of the natural environment, logic of mathematics, logic of truth, quantum mechanics, economics, neurosciences, psychology and philosophy.Although such a diverse interdisciplinarity is difficult to grasp, not to mention the intent to wield several courses of scientific thinking into the process of economic decision-making, it is necessary to remember that under conditions of uncertainty and disorder we do not have models of quantitative prediction of the outcomes; we are, then, strongly interested in the patterns and qualitative features of the dynamics of the situations we are confronted with.
Unpredictability and Order
In physics, including that of the atmosphere or oceans, random events may arise from deeply complex dynamics. Henri Poincaré [1] left us a famous phrase: "Chance is only the measure of our ignorance", because, he said, "Fortuitous phenomena are by definition those whose laws we do not know". The essential question which derives from this definition is how unpredictable events can be harnessed for the applications with the greatest impact on the real world. In the real world we are confronted with situations where tiny variations in initial conditions can have a major influence on unfolding events within natural systems. We call it "sensitive dependence on initial conditions". It was demonstrated by Edward Lorenz in 1963 [2]. His conclusion, based on a computer simulation of the dynamics of the weather, was that long-term weather prediction is impossible. Before him, in the late 19th century, the Russian mathematician Aleksandr Lyapunov introduced the exponents which describe the sensitivity of a system to its starting point. If a situation can be accurately predicted, its Lyapunov exponent is at most 0. Above that threshold of zero lies unpredictability. The weather is a case in point.
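As a small numerical illustration of the Lyapunov exponent mentioned above, the Python sketch below estimates it for the logistic map, a textbook example rather than anything drawn from weather models; a positive value signals sensitive dependence on initial conditions.

import math

def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100_000):
    # Estimate the largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
    # the long-run average of log|f'(x)| along the orbit.
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n_iter

print(lyapunov_logistic(3.2))   # negative: periodic, predictable regime
print(lyapunov_logistic(4.0))   # about +0.69: chaotic, unpredictable regime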
The well-known cases are those of disruptive phenomena like storms, tornadoes, hurricanes, tsunami, heat waves, torrential rains and even the outbreak of virus epidemics like the Covid-19 these days.
"Predictability requires perfect knowledge of the Universe and exact laws of nature" says James Gleick [3].The causes of random events are physically determined but so numerous and complex that they (the events) are unpredictable."Instead of predictability, there is chaos".It doesn't mean that there is no order whatsoever in the natural system; there is interaction between order and randomness [see 4 & 5], not a straightforward one though.We should add what Kurt Gödel proved in 1931, that there must be truths, that is, statements that can never be proved.[6] Science is not about certainty.Human knowledge itself is not certain.We can have only provisional truths.Therefore, we need to reach an accommodation with uncertainty and unpredictability.The words of Bertrand Russell need to be remembered: "Uncertainty in the presence of vivid hopes is painful, but must be endured...to teach how to live without certainty and yet without being paralysed by hesitation, is perhaps the chief thing" [7].
It means that uncertainty and unpredictability should not be defined in terms of the lack of something positive or better.Thinking does shine when it works in the area that "lies between too much certainty and too much doubt".[8] The chaos theory came some years ago to rescue our natural propensity to predict behaviour of the natural systems.While we cannot attach precise values of their variables at a particular time, we can predict qualitative features of the system's behaviour.The shift from quantity and formulas to quality and pattern is in the meantime a shift to an essential way of thinking with practical consequences to the present human activity.
While humans are able to change natural conditions, and in doing so have a strong impact on climate, it would be impossible to determine what the climate would have done otherwise. Events cannot be predicted under sensitive dependence on initial conditions; nevertheless, they can be explained. This became the conventional wisdom of business as well as political leaders. Therein probably lies the explanation for the exponential growth of public relations.
Policies meant to mitigate the consequences of climate change should be tools to make possible changes in the society, i.e., to set the foundation of global decisions which aim at sustainable development conditions to prevail.
When predictions are virtually impossible, we have to be capable of detecting in advance the patterns and qualitative features of the natural systems behaviour.And such attempts are not easy.Benoit Mandelbrot indicated, in 1967, that for any problem we try to solve using the methods of building models of physics "we noticed everyday more that by adapting these methods to a new context, we ended up with results of extremely different form" [9].For instance, all the predictions of the global economic and financial behaviour failed to signal the dramatic 2008 crisis.
The Critical Threshold in the Natural Phenomena
In January this year (2020), the Davos Economic Forum organizer, Klaus Schwab, told the press: "We do not want to reach the critical tipping point of the irreversibility of climate change."This statement, which seems to me to be at the opposite of the "Davos spirit" which prevailed in many previous years, implies that we know when the tipping point might occur.Well, we don't.It's unpredictable.
On the same line of thought, President Trump said that "fear and doubt are not a good thought process".I think the doubt itself, regarding the geological and anthropogenic evolution of the Earth, should be an obligatory element of the thought process.Our faith in the strengths of facts is the foundation of our rationality, guiding the progress of civilization.
Climate optimists believe that nature's resilience is (almost) unbreakable and, as a consequence, the damages inflicted to the natural environment by human activities are very limited.Their optimism is based on their impression that the damages are gradual and very often are also invisible.But, the moment of truth, unpredictable, can appear in the form of a drastic change, when a critical threshold (tipping point in the American terminology) is reached.
From my own experience as a scientist in the field of hydrology and environmental quality, I can bring a specific example [10].
In 1975, an ample yearly program of sampling and hydrodynamic measurements was initiated in order to determine the quality of Danube water all along the 1075 km of the Romanian sector, from the discharge of the Nera River into the Danube, at the frontier with Serbia, and downstream up until the discharge of the Danube into the Black Sea.I led the hydrological research, and together with my colleagues, chemists and biologists, we took samples of water and simultaneously measured the speed and location of dozens of points of the dozens of sections selected along the 1075 km of the Danube.To our great surprise, we found that in 76% of the samples, the water quality was not just good; the water was drinkable.We could see people on the shore taking water from the Danube and drinking it.We were amazed, but they knew better.Ten years later, in 1985, on the order of the then President of the National Council for Science and Technology, Elena Ceaușescu, the dictator's wife, the whole program of research was cancelled.But in that year, we found that from 76% of drinkable water, the quality dramatically went down to 33%.What happened in those ten years?
The most probable explanation would be that a critical threshold was reached, beyond which the degradation of the water quality went from gradual to drastic. The Danube had resisted "heroically" the assaults of the massive pollution inflicted by the big European cities (Vienna, Bratislava, Budapest, Belgrade) until its capacity for self-purification was exhausted.
By transposing, in a way acceptable (at least) on scientific grounds, the situation we registered then in the Danube River, one of the largest natural water bodies on the planet, we can imagine the same type of dynamics in the present climate change. It is by no means clear that carbon dioxide accumulation and the greenhouse effect will follow, as of now, a gradually increasing path. More probably, we will face, at some not distant point in the near future, a moment when a critical threshold (tipping point) is reached and then a dramatic and more dangerous change happens. Another example clearly indicating the same tipping-point feature was recently published in New Scientist [11], presenting the conclusions of scientific studies on the unpredictability of the melting of crucial glaciers. A major Antarctic glacier is at risk of disintegrating irreversibly if it passes a key tipping point.
A domestic glacier retreat could let water get under the ice and thus collapse the entire ice sheet, leading to more than 3 meters of sea level rise, over a long period of time.We may be closer than we thought to Earth's dangerous tipping points."It's highly likely that things might happen over a quicker period of time" says the study.
Does Humanity Have Property Rights
A recent result [12] of the investigation on the ice core records from the Himalayas aimed at understanding the onset and timing of the human impact of the atmosphere of the "roof of the world".In 1997, at the altitude of 8013 meters, the research team extracted three ice core samples measuring 150 meters depth.The successive layers of snow revealed the information from the past, as if they were tree rings; from the year 1499 to the present day (~500 years).Until 1780, the composition of the samples, determined with the latest techniques of investigation, showed only traces of metals of natural origin.
The results from 1780 onward suggest a strong contamination with toxic metals (augmented factors of 2 to 6) which was the consequence of the combustion of coal, likely from the Western Europe during the first Industrial Revolution in the 19 th century.
In the last 50 years the samples indicate more specifically traces of lead which obviously originates in the combustion of vehicle's engines.
In the meantime, the study detected particles emitted by the massive deforestation of the 19 th century, made not by logging but by burning the forests.
Western Europe lost 19 million hectares of forest, Russia another 33 million, and Romania nearly two million.
On average, in a European country today, the emission of CO 2 per year and per person is about 14 tonnes, while a tree absorbs on average 22 kilograms per year.The balance is achieved if every person would plant approximately 680 trees per year.It certainly exceeds the real possibility of using this method to mitigate the carbon imprint.But by the development of an ample reforestation program (in Romania, for instance, there are more than 1 million hectares of degraded land) we could probably achieve an essential result: delaying significantly the occurrence of the critical threshold in the climate change.
Today, under the huge stress imposed by the COVID-19 pandemic, it became clear, once more, that the exit strategy should be a global effort since no country has a monopoly on science.In this context I want to refer briefly to the ability of scientific expertise to guide governmental policies.It is important to understand what are the real limits of treating the huge array of data which presumably should be correlated in order to offer a valuable response to the many questions underpinning the strategy elaboration.Unweaving the true connection between cause and effect is crucial.The problem is to correctly distinguish correlation from causation.A great deal of scientific practice is based on using statistical tools.Data and correlation are essential to indicate which method, among several different ones, lead to good results.But it doesn't mean that we know why or how to improve the methods.As the scientists say, we need to have a causal understanding.Science without causality doesn't make sense.As a matter of fact, intervening on the cause will change the effect, but not vice versa.We are always seeking to improve decision-making and that necessarily leads to understanding the cause-effect sequence.So, every leader in the world should ask: if I were to do this, how would the world change, not just my country and my people.The COVID-19 pandemic is the most recent case in point while better understanding climate change is increasingly important for the future of humanity.The global science effort today is to get a causal picture of biosphere and atmosphere interactions.These interactions have potentially dramatic consequences for our plans to tackle the global health problems and climate change effects.Urgency demands patience.
In the functioning of democracy there is rational ignorance. Voters do not need to know all the aspects of political, economic and social life to vote. They do have a clear idea of their own expectations. In the functioning of politics today we can observe political decisions which often indicate irrational ignorance, in the sense that political leaders do not want to know more because, they believe, it is useless. And they are wrong, at least because they ignore that we live in a world where both the expectations and the degree of confidence of the people, in many areas of the world, are situated at very low levels compared to not-so-distant times in the past. The cumulative interaction of the state of confidence with people's expectations at any moment can transform small disorders or disturbances of the "rules of the game" into major events with a tendency to break the existing systems. We can see already that the economy and the political systems are increasingly under the influence of climate change, but they do not seem to command strong enough stabilizing forces.
Let's use property rights as an example.To the economists, property rights mean something similar to the "rules of the game"."Property rights are rights to control the way in which particular resources will be used and to assign the resulting costs and benefits" and "Property rights create expectations.Expectations guide actions."[13] And I think that in our present world there is a demand for new definitions of property rights.
Adam Smith was probably the first and certainly the most important economist to consider the fact that ethics play an important role in economics.He expressed it by the fact that supply curves and demand curves depend on convictions and commitments that are fundamentally ethical in nature.
In another kind of approach, Nicholas Georgescu-Roegen stresses that [14,15], "the true economic process is not a material flow of waste, but an immaterial flux: the enjoyment of life" and that: "the complete data of any economic problem must also include cultural propensities (of the people)"; and also "if we deny the people's capacity for empathy, then our exercise doesn't have any meaning".(A detailed economic and mathematical analysis of the same is presented in [16]).
Whenever we attempt to resolve conflicting claims, we try to avoid unexpected decisions or outcomes.And the decisions we take within an economic system depend crucially on the property rights which were established and are accepted by the society.Let's remember that property rights create expectations which, on one hand, are very important in shaping economic decisions but, on the other hand, are confronted with the uncertainty of decision making and unpredictable events.
Within this framework of ideas and concepts a different kind of question is needed: Does humanity have property rights?
It is absolutely clear that the market forces cannot temper the perturbations and negative effects in the dynamics of climate; on the contrary, as a rule, they amplify them.Who then will take care of the property rights of humanity on the path of harmonious conviviality with the planet?
The subject of the present day is global coordination and even more: cultural evolution.Worried about the effects of climate change, we need to remember that every single action within a global system depends for its success on cooperative behaviour.There are still dramatic gaps between the reality of unpredictable climate dynamics and the expectations and state of confidence of the people.Governments should take steps to close the gaps.
Conclusion
When you can't say what a system is going to do next you are confronting a situation of unpredictability which just generates uncertainty and disorder.
A normally functioning system represents the routine, an ordered and hierarchized assembly, settled for its scheduled operation. Uncertainty results from changes in the context or in the data usually employed. This means a new state of affairs, created by a change produced beyond our reach. Modification of the normal condition is imposed by uncertainty, which forces the decision: either change within the system, or change of the system itself. The decisional transformation of the system, not only for the sake of adaptation but, possibly, for qualitative improvement, is either very important, if it means transformation within the system, or decisive, if it transforms the system. A qualitatively new system, transformed from the old one, demonstrates not only what the old one used to demonstrate, but also its own stability. An absolutely new system needs a different consistency.
Transformation is transitional for assuring the system's stability and competitiveness.The change produced by a new uncertainty comes from something having occurred in the past, and yet it is a novelty.Instead, the change induced by decision becomes the -partial or total -future.
Thinking is a free game of the mind, which operates with ideas, without the obligation of proving something. By their very nature, human beings are manifested in a daily attitude which includes beliefs, judgements, opinions and theories about the world's reality and its full significance (as complete as possible, as a function of the available data). A phenomenological attitude involves distancing from this "natural" posture, a categorical refusal of illusions and of "bright" perspectives, along with assuming the concern for providing proofs and for rigour, for a permanent need of observing and accepting stratification, limpidity and concreteness, all these accompanied by the feeling that all we have at hand is a never-failing source of information. In this way, one will never be confined by either interpretation patterns, various prejudices or language. We know that logic operates with language, and also that both philosophers and mathematicians felt an absolute need of fully and definitively formalizing the expression of thinking into language.
People have a natural propensity to want and to expect to live in an intelligible, comprehensible world.And yet, which would be the reason for which, or in what manner could one plainly declare that "I know"?Indeed, we permanently and eternally harbour in us both the strive for an intrinsic cohesion of the world, and the compulsive limits of our knowledge. | 5,191.2 | 2021-07-29T00:00:00.000 | [
"Computer Science"
] |
Epistasis Creates Invariant Sites and Modulates the Rate of Molecular Evolution
Abstract Invariant sites are a common feature of amino acid sequence evolution. The presence of invariant sites is frequently attributed to the need to preserve function through site-specific conservation of amino acid residues. Amino acid substitution models without a provision for invariant sites often fit the data significantly worse than those that allow for an excess of invariant sites beyond those predicted by models that only incorporate rate variation among sites (e.g., a Gamma distribution). An alternative is epistasis between sites to preserve residue interactions that can create invariant sites. Through computer-simulated sequence evolution, we evaluated the relative effects of site-specific preferences and site-site couplings in the generation of invariant sites and the modulation of the rate of molecular evolution. In an analysis of ten major families of protein domains with diverse sequence and functional properties, we find that the negative selection imposed by epistasis creates many more invariant sites than site-specific residue preferences alone. Further, epistasis plays an increasingly larger role in creating invariant sites over longer evolutionary periods. Epistasis also dictates rates of domain evolution over time by exerting significant additional purifying selection to preserve site couplings. These patterns illuminate the mechanistic role of epistasis in the processes underlying observed site invariance and evolutionary rates.
Introduction
Half a century ago, Uzzell and Corbin (1971) showed that the distribution of the number of substitutions over amino acid sites had a much larger dispersion than expected from the same evolutionary rate across all sites. That is, the frequency of sites with different numbers of substitutions did not follow the Poisson distribution expected if the evolutionary rate was the same across sites (single [S] rate model). A negative binomial distribution better describes the observed substitution frequencies spectrum across sites (SFSS), which can arise when the evolutionary rates vary from site to site and are drawn from a Gamma (G) distribution (Uzzell and Corbin 1971). Subsequently, it was reported that the number of completely conserved sites (invariant sites, I-sites) across sequences could significantly exceed those predicted by a Gamma distribution of rates (Gu et al. 1995). A mixture model (I + G) containing a class of I-sites alongside a Gamma (G) distribution of site rates often fits the observed SFSS much better, e.g., Yang (1993) and Kumar (1996). The I + G rate model is frequently used in molecular evolutionary analyses.
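For illustration only, the following Python sketch draws per-site relative rates under an I + G mixture of the kind described above; the proportion of invariant sites and the Gamma shape parameter are arbitrary placeholder values, not estimates from any data set.

import numpy as np

rng = np.random.default_rng(0)

def site_rates_I_plus_G(n_sites, p_invariant=0.2, alpha=0.5):
    # A fraction p_invariant of sites gets rate 0 (the I class); the rest get
    # Gamma-distributed rates with shape alpha and mean 1 (the G class).
    rates = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n_sites)
    invariant = rng.random(n_sites) < p_invariant
    rates[invariant] = 0.0
    return rates

rates = site_rates_I_plus_G(1000)
print("fraction of zero-rate (invariant) sites:", np.mean(rates == 0.0))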
The phenomenon of excess of I-sites can arise due to site-specific amino acid preferences to conserve function (Fitch and Margoliash 1967;Kimura and Ohta 1974;Doud et al. 2015). Two contrasting possibilities are that 1) sites evolve largely independently with site-specific amino acid preferences exerting negative selective pressure (independent evolution model, IE model), and 2) substitutions at a site depend on amino acid residues found at other sites due to intramolecular couplings (coupled evolution model, CE model). The IE and CE models are not mutually exclusive because site-specific preferences for amino acid residues can be a byproduct of CE or occur along with site-wise couplings. The fundamental difference between the two alternatives is that the purifying selection operates directly and independently on individual residues in the IE model, whereas purifying selection operates to preserve epistasis among coupled sites in the CE model.
Potts models have been used for modeling CE (Shekhar et al. 2013;Couce et al. 2017;Gao et al. 2019;de la Paz et al. 2020). These models incorporate the strength of coupling between sites as well as more traditional individual site residue preferences. Direct coupling analysis (DCA) of large sequence alignments for protein domain families has been employed to estimate pairwise coupling constraints among all positions (Weigt et al. 2009). fig. 1 shows the DCA-inferred parameters of a Potts model for a protein domain derived from an alignment of thousands of sequences across the tree of life and genomes. The relative strength of pairwise epistatic coupling between any two positions is reflected in the color intensities of cells in fig. 1a. Each cell in the pairwise couplings matrix further consists of a 21 × 21 matrix (20 amino acid characters and one indel "−" character) whose coefficients reflect the probability of observing each given pair of amino acids among all sites ( fig. 1a inset).
The Potts model also includes additional terms corresponding to equilibrium frequencies of amino acids at each position ( fig. 1b). Importantly, site-specific residue preferences at a given time in the evolution of a domain are contextual in the Potts model, as they depend on specific residues present in other positions (Weigt et al. 2009;see Methods). Due to the formal analogy between Potts model probability and Gibbs-Boltzmann distribution, the log-likelihood is referred to as statistical or Hamiltonian energy and can be interpreted as an indirect measure of evolutionary fitness. In vivo assays of in silico evolved sequences show correlations between the biological fitness and Hamiltonian Energy for various enzymes in Escherichia coli (Russ et al. 2020;Bisardi et al. 2021). These observations support the use of the Potts model to study the impact of intramolecular epistasis on molecular sequence evolutionary patterns in protein domains (Rizzato et al. 2019;de la Paz et al. 2020;Patel and Kumar 2021).
In simulation studies using epistasis during sequence evolution, Rizzato et al. (2019) showed that epistasis creates heterogeneity of observed substitution rates among sites. Soon after, de la Paz et al. (2020) developed an extended simulation framework and showed that the overdispersion of evolutionary rates among evolutionary lineages and sites within domains are emergent properties of epistasis. Using de la Paz et al.'s (2020) framework, Patel and Kumar (2021) found that epistasis creates I-sites more readily than the Gamma-distributed rate variation among sites at biologically realistic evolutionary divergences.
FIG. 1 (caption fragment). Local site-specific amino-acid preferences. In panel a, the two dimensions visualized here represent a pair of sites in a sequence. The other two dimensions (inset) specify the coupling strength for specific residues at each site in the pair. Each cell of the larger matrices shows the Frobenius norm of the couplings among all residues at a given pair of sites. Darker, warmer colors represent stronger overall coupling between sites. The inset shows the coupling between 20 residues for the given pair of sites; more positive values indicate a more preferred combination and more negative values indicate a more undesired combination of residues. Zero values indicate that a specific combination of residues is neither favored nor rejected.
While Patel and Kumar (2021) showed epistasis (CE model) to be one natural source of I-sites, the contribution of site-specific amino acid preferences alone without site couplings (IE model) is yet to be evaluated and contrasted with the CE model. This is important because site-specific preferences induce strong purifying selection at a position independent of pairwise residue constraints (Doud et al. 2015). Therefore, we used de la Paz et al.'s sequence evolution with epistatic constraints (SEEC) simulation framework to simulate coupled and non-coupled substitutions to dissect the net effect of site interactions on the creation of I-sites and the exertion of purifying selection in protein evolution. We report that site couplings are more influential in maintaining I-sites than site-specific amino acid residue preferences. However, together, they provide natural mechanistic explanations for many evolutionary I-sites in protein domains and dictate their evolutionary rate.
Simulating Protein Sequence Evolution
We employed de la Paz et al.'s framework (2020) that uses a Potts model to simulate SEECs. It simulates the evolution of protein sequences in a stepwise manner in which each step is a generation that involves choosing a position at random and then attempting an amino acid substitution. The amino acid to be substituted is randomly selected from a conditional probability distribution calculated using each residue's relative contribution to the overall sequence fitness (statistical Hamiltonian Energy). In this substitution process, pre-existing residues at all other positions provide the context for the new substitution and dictate the probability of substitution at the selected position based on the parameters in the Potts model. A position selected for substitution may not receive a change because the residue selected for substitution is not allowed by the Potts model. This site would be considered invariant for as long as its amino acid found in the first generation is not changed (see Methods).
We used SEEC to simulate protein sequence evolution under a CE model that includes pairwise epistatic constraints and individual site-specific residue preferences, an IE model with no pairwise epistasis but only individual site-specific preferences, and a uniform evolution (UE) model with neither pairwise epistasis nor site-specific preferences. The UE model served as a null model of molecular evolution where all amino acid residues had an equal probability of substitution at all positions, independent of the sequence at all other positions. There were no other sources of negative or positive selection (strictly neutral evolution). Under the UE model, the number of substitutions observed in a sequence is purely a function of the amount of time elapsed and the stochasticity of the evolutionary process. Thus, the numbers of substitutions and I-sites observed in a sequence under the UE model are simply due to the mutational input that creates amino acid substitutions, which is the baseline against which to measure the net contribution of the CE and IE models.
Adjusting CE and IE models for expectations provided by the null UE simulations results in a simplified framework where we can tease apart the contributions of sitespecific and pairwise coupling effects on the patterns of substitutions observed during protein domain evolution. Differences in substitution patterns in CE simulations versus UE expectations at each site are attributed to the combination of pairwise site interactions (e) and local site-specific residue preferences (ℓ), which we term combined effects (ɛ = e + ℓ ⇔ CE − UE simulations). Similarly, differences between IE and UE simulations show substitutions due solely to local site-specific residue preferences (local effects; ℓ ⇔ IE − UE simulations). Thus, the difference between combined (ɛ) and local (ℓ) effects provides the net substitutions and I-sites contributed by pairwise epistasis only (e = ɛ − ℓ), i.e., no local site-specific residue preferences.
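The bookkeeping implied by this decomposition can be written down directly; the short Python sketch below applies ɛ = CE − UE, ℓ = IE − UE and e = ɛ − ℓ to counts of I-sites, with purely illustrative numbers rather than values from the simulations reported here.

def decompose_excess_invariant_sites(i_ce, i_ie, i_ue):
    # i_ce, i_ie, i_ue: numbers (or percentages) of I-sites observed at the same
    # generation under the CE, IE and UE simulations, respectively.
    epsilon = i_ce - i_ue       # combined effect (couplings + local preferences)
    local = i_ie - i_ue         # local site-specific preferences only
    coupling = epsilon - local  # pairwise epistasis only
    return {"epsilon": epsilon, "local": local, "coupling": coupling,
            "coupling_fraction": coupling / epsilon if epsilon else float("nan")}

# Illustrative numbers only (not taken from the paper's simulations)
print(decompose_excess_invariant_sites(i_ce=60, i_ie=45, i_ue=40))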
In all simulations, we tracked unsubstituted sites to count the number of I-sites during SEEC runs. We also counted the number of total substitutions in the sequence to estimate the evolutionary rate and, thus, the degree of negative selection pressure due to ℓ and ɛ. We analyzed ten protein domain models inferred via DCA by de la Paz et al. (2020). They spanned diverse sequences, structures, and biochemical compositions.
Excess of I-sites Created by Site Interactions
We first examined the number of I-sites observed during domain evolution. We compared the number of sites that remained invariant at increasingly later generations for each set of simulation parameters, corresponding to more time elapsed. As more substitutions were attempted in a domain over time, the number of I-sites decreased for CE and IE simulations ( fig. 2a and b, solid line). However, the decay rate of I-sites under the CE model ( fig. 2a) was much slower than that observed in IE simulations ( fig. 2b). These rates were lower than those for UE simulations ( fig. 2c).
In addition to all the I-sites, we tracked the number of sites that remained invariant despite being selected for amino acid substitution(s). That is, the attempted substitutions experienced negative selection. Dashed lines in fig. 2 show these trends (panels a-c), which are also recaptured in panels d, e, and f. The combined effect of pairwise couplings and local site-specific residue preferences, ɛ, for PF00001 created on average 16% more I-sites between 250 and 1000 generations of CE sequence evolution (fig. 2d, solid curve). In contrast, local effects, ℓ, for PF00001 created an average excess of 3.7% I-sites for the same periods of IE simulations ( fig. 2e, solid curve).
Fewer excess I-sites were created due to ɛ and ℓ at earlier and later generations than at intermediate generations (figs. 2d and 2e). This is expected because earlier in an evolutionary trajectory, too few substitutions have occurred to differentiate between UE and CE (or IE) models. Later in an evolutionary course, too many substitutions have occurred for many sites to remain invariant. We find similar trends in the nine other protein domains analyzed (fig. 3a). Therefore, the evolutionary divergence spanned in a comparison is a major determinant of the proportion of I-sites.
We then determined the excess of I-sites due solely to pairwise site-interactions, e ( fig. 2f). While the excess of I-sites created solely due to site-interactions is at a maximum at intermediate generations, we find that the mean proportion of I-sites due to e (e/ɛ) increases consistently from 70.7% to 80.0% over sequence evolution ( fig. 2g). This indicates that local site-specific residue preferences play a small but consistent role in maintaining I-sites over longer evolutionary periods. For the nine other protein domain families analyzed, ɛ values varied, but the general trend also holds ( fig. 3b).
Across analyses of the 10 protein domain families, we see that local site-specific residue preferences (ℓ) can produce an excess of I-sites (fig. 3a) and, in many cases, create the majority of such sites, but, overall, site couplings (e) always add substantially to this collection.
Site Couplings Modulate the Rate of Domain Evolution
Using domain sequences produced using SEEC, we compared the degree of negative selection imposed by 1) pairwise epistatic couplings, e and 2) local site-specific residue preferences, ℓ. Positions with less constraint will accept more substitutions due to less purifying selection, while those with greater constraints will accept fewer substitutions due to more purifying selection.
We analyzed the rate of evolution in simulations for the ten protein domain families. The evolutionary rate was measured as the number of substitutions per generation and adjusted to ensure that the same amount of mutational input was experienced by all ten protein domain families of different lengths; substitutions per generation = (substitutions/site)/(generations/site). Protein sequences undergoing UE do not experience negative selection, as all constraints from epistatic coupling and local residue preference are absent. Thus, sequences for all protein domain families show the same evolutionary rate: 0.952 substitutions per generation. Despite the uniform probability distribution used to select amino acids for substitution in each generation of the simulation, the UE evolutionary rate is not 1.0 substitutions per generation. This is because there is only a 95.2% chance of an amino acid being substituted when a replacement is chosen randomly with equal probability. Thus, on average, once in every 21 generations the same amino acid as that of the previous generation will be selected (see "Simulating protein sequence evolution") simply by chance, resulting in an "unsubstituted" position. Thus, in the absence of purifying selection (UE), simulated protein sequences are expected to allow 20/21 substitutions per generation (21 choices; 20 amino acids + 1 "−" alignment gap character). Negative selection pressures imposed by sequence constraints from pairwise epistasis (e) and local residue preferences (ℓ) will proportionally decrease the evolutionary rate below the null UE rate. The rates of evolution for the ten protein domain families analyzed (fig. 4a, right axis) using IE were as low as 0.66 and as high as 0.86 substitutions per generation, while the evolutionary rates using CE are found to be between 0.52 and 0.71 substitutions per generation. Based on these expected rates of evolution for the protein domain sequences, we determined the extent of purifying selection during sequence evolution due to pairwise epistasis and local site-specific residue preferences. We measured this effect as the ratio of the evolutionary rate of the constrained model (CE or IE) to that of the unconstrained null, UE. We refer to this ratio as the allowed divergence for a given model, showing how much sequence change was allowed relative to that expected in unconstrained evolution. The complement of allowed divergence thus provides an estimate of the amount of sequence change prevented by negative selection purging unacceptable amino acid replacement mutations. For example, evolution of PF00001 under IE was found to have 90.8% allowed divergence (fig. 5a, left axis) (i.e., IE/UE = 0.864/0.952 = 90.8%). Here, purifying selection due to local site preferences (ℓ) prevented 9.2% (100% allowed under UE − 90.8% allowed under IE) of potential amino acid replacements that would otherwise be found as substitutions (purged replacement mutations). PF00001 evolution under CE allowed even less divergence, 71.7%, with purifying selection removing 28.3% of tested replacement mutations. So, the combined effect (ɛ = e + ℓ) of pairwise epistasis and local residue preferences in the PF00001 domain evolution resulted in 19.1% (28.3% − 9.2%) additional purifying selection relative to that caused by local residue preferences (ℓ) under IE alone.
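The allowed-divergence arithmetic for PF00001 can be reproduced in a few lines of Python; the CE rate used below is back-calculated from the quoted 71.7% figure and is therefore approximate.

UE_RATE = 0.952  # null UE rate quoted above (approximately 20/21)

def allowed_divergence(model_rate, ue_rate=UE_RATE):
    # Returns (allowed divergence, fraction of replacement mutations purged).
    allowed = model_rate / ue_rate
    return allowed, 1.0 - allowed

ie_allowed, ie_purged = allowed_divergence(0.864)   # IE rate quoted for PF00001
ce_allowed, ce_purged = allowed_divergence(0.6826)  # implied by the quoted 71.7 % allowed
print(f"IE: {ie_allowed:.1%} allowed, {ie_purged:.1%} purged")
print(f"CE: {ce_allowed:.1%} allowed, {ce_purged:.1%} purged")
print(f"additional purifying selection from pairwise epistasis: {ce_purged - ie_purged:.1%}")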
Examination of the other protein domain families showed similar results ( fig. 4a).
We also calculated the amount of purifying selection due solely to pairwise epistasis (e/ɛ) as the fraction of replacement mutations rejected under the combined effects (ɛ) of the CE model that were not due to local effects (ɛ − ℓ = e). The component of purifying selection imposed solely due to pairwise epistasis ranged from 31.9% to 67.5% (fig. 4b) among the ten protein domains analyzed. So, while IE leads to negative selection during sequence evolution due to site-specific residue preferences, the addition of pairwise epistasis in CE produces a considerable amount of negative selection as well, in many cases generating the majority of negative selection pressures during protein sequence evolution.
FIG. 4 (caption). Allowed evolutionary divergence for constrained sequence evolutions. (a) Rates of evolution for IE (due to ℓ; green) and CE (due to ɛ; purple) are shown for all ten protein domain families analyzed. "Allowed divergence" (left axis) is a ratio of the evolutionary rate for a given model relative to the null, UE evolutionary rate. Absolute rates of evolution are included on the right axis. (b) The fraction of negative selection due to pairwise epistasis relative to that from combined effects (e/ɛ; blue).
Pairwise Epistasis Directs Substitution Patterns Creating I-sites
The evolutionary rate of a protein sequence reflects the amount of negative selection constraining its evolution. Positions that are more constrained are expected to accumulate fewer substitutions (larger fraction of purged mutations) over time than positions that are less constrained. Thus, the number of unsubstituted sites in a protein sequence will vary with evolutionary time elapsed (see "Site couplings modulate the rate of domain evolution" and fig. 4) and with the degree of constraints across the entire sequence.
So, to adjust for the impact of differential negative selection across positions on the number of I-sites due to variation in constraints between CE and IE evolution, we examined the amount of excess I-sites created by epistasis under CE compared to IE having experienced the same amount of sequence divergence (substitutions per site). If pairwise epistasis did not impose additional sequence constraints that differed from constraints due to local site-specific residue preferences only, then we would expect both CE and IE to have the same number of I-sites after having accumulated the same number of substitutions, regardless of how much time is required to do so in each case.
Here, we compared the excess of I-sites observed in CE evolution and IE evolution for each protein domain family, but at a fixed sequence divergence (substitutions per site) instead of evolutionary time (generations per site). We found that sequence evolution under CE still created more I-sites than IE (fig. 5a). In fact, for seven of the ten protein domain families analyzed, more than 50% of the I-sites created were due to pairwise epistasis (fig. 5b). Epistasis in the other three domains still contributed no less than 26.7% of I-sites. These patterns suggest that pairwise epistasis provides unique constraints that substantially change the patterns of substitution, decreasing the substitution rate per site under increased constraints (fig. 6) and creating I-sites due to differential negative selection across sites in a protein domain.
Discussion
Two evolutionary factors contributing to the occurrence of I-sites are lack of mutational input and purging of alleles by differential negative selection. If enough evolutionary time has not elapsed, biological and evolutionary forces will not have had enough opportunity to create and test the effective fitness of possible mutations against natural selection. Both highly constrained and unconstrained positions would appear to be lacking substitutions, but the source of such unsubstituted I-sites would be ambiguous. With sufficient and constant mutational input at longer evolutionary timespans, the I-sites are expected to be unsubstituted due to stronger negative selection at positions with stronger sequence constraints than at positions with weaker (or no) constraints.
Some targets of natural selection based on structural and functional constraints have previously been proposed and developed into models. For example, protein structures are expected to be constrained based on environmental factors like the cellular environment; the protein solvent then chemically restricts the regions of the protein exposed to its surface. Similarly, the "internal" protein environment dictates the residues needed to ensure contacts that stabilize a protein's tertiary structure. Structural constraints also arise from protein flexibility and folding requirements (see review by Echave et al. 2016). Sites in protein sequences can be conserved due to biochemical and functional importance, such as those required for proper protein-substrate interaction and binding. Further, even at the organismal level, sequence constraints can induce natural selection, as seen when tRNA availability limits codon usage.
Non-structural constraints have also affected natural selection during protein sequence evolution. For example, the intensity and tissue-specificity of gene expression have been shown to directly correlate with reduced evolutionary rates of proteins, e.g., (Subramanian and Kumar 2004). Genomic composition of genes (e.g., length of intronic sequence) and length of a protein's coding sequence (CDS) have also been associated with natural selection in sequence evolution, with more compact genes and shorter CDS having higher evolutionary rates, e.g., (Lipman et al. 2002;Liao et al. 2006). Thus, a wide variety of higher order biological features are expected to constrain protein sequence evolution. As the effects of higher-order constraints cascade through to the lower level sequence constraints, their effects cannot be statistically differentiated in pairwise-epistasis and site-specific amino acid preferences with protein sequence MSAs only. In fact, we can expect that even at the single-position level, sitespecific preferences estimated using MSAs are not independent but partly a result of pairwise-epistatic constraints. Indeed, site-specific preferences can be attributed to maintaining sequence and structural properties for proper biological function. However, very few such properties are independent of upstream requirements: single sites dictate secondary and tertiary protein structure, which define protein-protein interactions, etc.
As we previously mentioned, however, DCA-based models have provided a useful snapshot of a mechanism underlying protein sequence evolution that can recreate various statistical properties of sequences observed in empirical datasets, including rate heterogeneity and the occurrence of I-sites. Using Potts statistical models that incorporate epistasis (pairwise positional sequence constraints), we can describe protein sequence evolution without additional explicit parameterization of changes in the natural selection over both sequence position and evolutionary time.
In doing so, we find that pairwise epistatic constraints create variation in evolutionary rates both across positions and over time, more so than local site-specific constraints only. In fact, the importance of pairwise epistasis in affecting evolution increases relative to local amino acid preferences only with more sequence evolution (divergence). Highly constrained sites due to epistasis will remain unsubstituted over a large course of evolutionary time and classified as invariant. The number and positions of the I-sites are not constant, nor is the substitution rate of a given position constant in the presence of epistasis. Because Potts models provide an overarching statistical model for sequences of a protein domain family with a shared biological function, we can see that this change in evolution rates (and thus, site invariance state) over time does not require a change in function, making it compatible with the neutral theory of molecular evolution.
In summary, we examined the role of higher-order constraints due to pairwise amino acid interactions on protein sequence evolution properties and found that such interactions result in patterns of amino acid substitution not captured by lower order, independent site models. While the impact of such effects on phylogenetic inference with current methods may not substantially change outcomes at relatively modest sequence divergences (Magee et al. 2021), we show here that such a model begins to provide mechanistic insight into the processes underlying protein sequence evolution.
Data Collection
The relevant data for analysis of the ten protein domain families examined in our study was downloaded from the Datadryad.org data repository provided by de la Paz et al. (2020): https://doi.org/10.5061/dryad.2ngf1vhj8. Pairwise coupling and local field matrices, as well as the starting sequence used in simulations, for each protein domain family were extracted from the "Parameters_orig" MATLAB files available in the repository. The parameter matrices were for 21 amino acid states (20 amino acids + 1 "−" gap character); single nucleotide position and codon matrices were not inferred.
Potts Hamiltonian Model
Using the SEEC framework created by de la Paz et al. (2020), we simulated protein sequence evolution at the amino acid level under a CE model that includes pairwise epistatic constraints and individual site-specific residue preferences, an IE model without pairwise epistasis but with individual site-specific preferences, and a UE model with neither pairwise epistasis nor site-specific preferences. SEEC uses a Gibbs sampling approach to simulate sequence evolution by iteratively sampling from a sequence space described by a protein domain family's Potts model (see the methods section in de la Paz et al. 2020). Amino acid changes at a single position are sampled by conditioning on the amino acids present in the remaining positions of the protein sequence. The conditional distribution of each of the 21 possible characters (20 amino acids + 1 indel character) is based on their relative fitness in the full sequence, derived from the Potts model: P(a_1, ..., a_N) = (1/Z) exp( Σ_{i<j} J_ij(a_i, a_j) + Σ_{i=1..N} h_i(a_i) ). Here, i and j are sites in the protein sequence, Z is a normalizing constant, J is the pairwise site coupling matrix (fig. 1a), with J_ij(a_i, a_j) representing the coupling value for sites i and j when they have residues a_i and a_j, respectively. h is the local field matrix (fig. 1b), with h_i(a_i) indicating the local field at site i when the current residue is a_i; N is the length of the sequence. In the CE model, the J term is set to the protein domain family-specific coupling constraints, and the h term is similarly set to the family-specific site-specific constraints (local fields). The IE model is nested in the CE model, with the J term set to 0 for all possible pairs of residues a_i and a_j at all pairs of sites i and j, such that pairwise epistasis does not contribute to sequence and residue change probabilities. The UE model is nested in the IE model, with the h term additionally being set to 0 for all possible residues a_i at all sites i.
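A minimal sketch of the Gibbs-sampling substitution step described above is given below in Python; the array shapes (J of shape (N, N, Q, Q), h of shape (N, Q)) and the toy random parameters are assumptions for illustration and do not reproduce the SEEC implementation.

import numpy as np

rng = np.random.default_rng(1)
Q = 21  # 20 amino acids + 1 gap character

def conditional_distribution(seq, i, J, h):
    # Probability of each of the Q states at position i, conditioned on the
    # residues currently fixed at all other positions (seq: integer vector of length N).
    energy = h[i].copy()
    for j in range(len(seq)):
        if j != i:
            energy += J[i, j, :, seq[j]]
    p = np.exp(energy - energy.max())  # subtract max for numerical stability
    return p / p.sum()

def gibbs_step(seq, J, h):
    # One generation: pick a random position and redraw its residue.
    i = rng.integers(len(seq))
    seq[i] = rng.choice(Q, p=conditional_distribution(seq, i, J, h))
    return seq

# Toy parameters only; setting J to zero gives the IE model, and setting h to
# zero as well gives the UE model, mirroring the nesting described above.
N = 8
J = rng.normal(scale=0.1, size=(N, N, Q, Q))
h = rng.normal(scale=0.1, size=(N, Q))
seq = rng.integers(Q, size=N)
seq = gibbs_step(seq, J, h)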
Simulations
For each protein domain family analyzed, we simulated 500 replicates of protein sequence evolution for each model (CE, IE, and UE). We initialized the CE, IE, and UE models to have the same random number generator seed to ensure that the same positions were tested for substitutions in each generation across the three models in a given simulation replicate. The same starting sequence, native to each respective protein domain family, was used in all simulations. These "native" sequences were annotated in the MATLAB data files provided by de la Paz et al. (2020) per protein domain family. Each simulation was run for 30,000 generations, and the first 5,000 generations were discarded as burn-in to ensure a steady state. The sequence at the steady state was our reference sequence in each replicate.
Substitutions were then tracked at each position separately. A site was considered invariant as long as the residue did not change from the first generation after burn-in (I_all). While a site can substitute away and then back to the residue found in the first generation, becoming identical-by-state, we did not consider such sites invariant, as they had accepted a substitution at some point during the tracked evolutionary history. We also tracked an adjusted count of I-sites (I_adj), which imposed the following additional criterion for a site to be considered "invariant": the site must have been randomly selected for a possible substitution sampling at least once. This adjustment accounted for conditions where a site may appear invariant only because it was never randomly selected for substitution testing, and thus retained the starting residue state by virtue of the simulation scheme rather than through rejected substitutions under the Potts Hamiltonian model (a minimal bookkeeping sketch of this tracking follows below). | 6,731.2 | 2022-05-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
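The invariant-site bookkeeping described above can be summarized in a short sketch. This illustrates the counting rules (I_all and I_adj) as stated in the text, not the authors' code; the history and tested-position arrays are assumed outputs of a simulation such as the replicate sketch earlier.

```python
import numpy as np

def count_invariant_sites(history, tested_positions):
    """Count I_all and I_adj from one post-burn-in replicate.

    history          : (G, N) array of sequences, one row per tracked
                       generation (row 0 is the reference sequence)
    tested_positions : iterable of positions proposed for substitution
                       during the tracked generations
    """
    history = np.asarray(history)
    reference = history[0]
    # I_all: the residue never changed from the first post-burn-in generation;
    # a site that substituted away and back is marked variable here.
    never_changed = np.all(history == reference, axis=0)
    i_all = int(never_changed.sum())
    # I_adj: additionally require that the site was proposed at least once,
    # so it stayed fixed because substitutions were rejected, not untested.
    proposed = np.zeros(history.shape[1], dtype=bool)
    proposed[np.asarray(list(tested_positions), dtype=int)] = True
    i_adj = int((never_changed & proposed).sum())
    return i_all, i_adj
```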
Vibration characteristics and stability of a moving membrane with variable speed subjected to follower force
In this paper, the vibration characteristics and stability of a moving membrane with variable speed subjected to a follower force are studied. The vibration differential equation of the moving membrane is derived from the D'Alembert principle. The intermediate variables of the differential equation are discretized by the Differential Quadrature Method, and a state equation with periodic coefficients is obtained. The state equation is solved by the implicit Runge-Kutta method. According to the Floquet theory, the dynamic stability region and instability region of the membrane are obtained, and the influences of the tension ratio, aspect ratio, average velocity, and follower force on the unstable region of the moving membrane are analyzed. The results provide theoretical guidance and a basis for the design, manufacture, and stable operation of the printing press.
Introduction
During the printing process, the surrounding air significantly influences the vibration characteristics of the printing membrane, which in turn affect the overprinting accuracy. In engineering practice, the membrane moves at a variable speed. In recent years, many scholars have studied the stability of moving systems subjected to follower forces. Hasanshahi and Azadi [1] studied the flutter vibration of a beam subjected to a follower force. Robinson [2] studied the dynamic stability of viscoelastic rectangular plates subjected to uniform and triangular tangential forces. Chen [3] analyzed the steady-state periodic transverse responses and stability of axially accelerating viscoelastic strings. Lewandowski [4] investigated the nonlinear vibration of beams excited by harmonic forces. Alidoost [5] proposed an analytical solution for the instability of a composite beam with a single delamination subjected to a concentrated follower force. Ma [6] proved that the follower force effect decreased the natural frequencies in lower modes and increased them in higher modes. Zhou et al. [7] analyzed the vibration stability of a uniformly tangentially loaded orthotropic rectangular plate, a functionally graded material plate, and a rectangular thin plate with intermediate support under thermoelastic coupling. Azadi [8] analyzed and controlled the flutter vibration of a thermoelastic functionally graded material (FGM) beam subjected to a follower force. Figure 1 shows the mechanical model of the moving membrane subjected to a uniformly distributed tangential follower force q_0. Here v is the moving speed of the membrane in the x direction, and the transverse vibration takes place in the z direction; w(x, y, t) is the transverse vibration displacement, T_x and T_y represent the tensions on the boundaries, a and b are the length and width of the membrane, respectively, and ρ is the membrane density. Assuming that the membrane is subjected to an external force F(x, y, t) along the z direction, the dynamic equation of the moving membrane is obtained according to the D'Alembert principle.
Dynamic model and establishment of vibration equation
Assuming that the axial velocity of the moving membrane [9] has a small harmonic fluctuation with respect to the average velocity, it can be written as v(t) = v_0 + v_1 sin(ωt), where v_0 is the axial average velocity, and v_1 and ω are the amplitude and frequency of the axial velocity fluctuation, respectively. Dimensionless quantities are then introduced, and the dimensionless forms of the differential equation and the boundary conditions are obtained.
Application of Differential Quadrature Method to establish complex characteristic equation
The differential quadrature form [10] of the vibration equation of the moving membrane with variable speed is obtained by replacing the spatial derivatives with weighted sums of the nodal values, using the corresponding weight coefficients; the boundary conditions are discretized in the same way. To address the problem, equation (7) is transformed into first-order differential form and the periodic coefficients of the motion equation are derived. Writing the state vector as y = [W, Ẇ]^T, equation (8) can be expressed as ẏ = G(t)y, where the coefficient matrix G(t) is periodic with period T, the period of the sinusoidal velocity.
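The weight coefficients themselves are not reproduced in this excerpt. A common construction for first-order differential-quadrature weights, the Lagrange-interpolation formula (which may differ in detail from the coefficients used in reference [10]), is sketched below in Python.

```python
import numpy as np

def dq_weights_first_order(x):
    """First-order differential-quadrature weight matrix A for nodes x.

    Lagrange-polynomial formula:
        A[i, j] = M(x_i) / ((x_i - x_j) * M(x_j)),  i != j
        A[i, i] = -sum_{j != i} A[i, j]
    where M(x_i) = prod_{k != i} (x_i - x_k).
    """
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)            # avoid division by zero on the diagonal
    M = diff.prod(axis=1)                  # M(x_i)
    A = M[:, None] / (diff * M[None, :])
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(A, -A.sum(axis=1))    # row-sum property for the diagonal
    return A

# Chebyshev-Gauss-Lobatto nodes are a common choice of DQM grid on [0, 1]:
N = 12
grid = 0.5 * (1.0 - np.cos(np.pi * np.arange(N) / (N - 1)))
A = dq_weights_first_order(grid)           # (A @ w) approximates dw/dx at the nodes
# Second-order weights follow as B = A @ A.
```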
Solution of the differential equations and determination of the system stable region
A four-stage, second-order implicit Runge-Kutta method is applied to solve the stiff equation (9).
Based on the previous derivation, the matrices A and B can be obtained. The number of nodes is taken as N_x = N_y = 12, an appropriate step size h is chosen, and y is obtained by a program written in MATLAB.
According to the Floquet theory [11], the dynamic stability region and instability region of the membrane are determined: when |λ_i| < 1 the system is stable, when |λ_i| > 1 the system is unstable, and when |λ_i| = 1 the system is in a critical state, where λ_i is an eigenvalue of the dynamic stability equation.
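As an illustration of this criterion, the sketch below integrates a generic periodic state equation ẏ = G(t)y over one period with an implicit (stiff) Runge-Kutta scheme (here SciPy's Radau, rather than the specific scheme used in the paper), assembles the monodromy matrix column by column from unit initial conditions, and classifies stability from the moduli of its eigenvalues. The coefficient matrix G used here is a toy placeholder, not the membrane's actual matrix from equation (9).

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(G, T, n):
    """Eigenvalues of the monodromy matrix for y' = G(t) y with period T.

    G : callable returning the n x n coefficient matrix at time t
    """
    Phi = np.zeros((n, n))
    for k in range(n):
        y0 = np.zeros(n)
        y0[k] = 1.0                               # unit initial condition e_k
        sol = solve_ivp(lambda t, y: G(t) @ y, (0.0, T), y0,
                        method="Radau", rtol=1e-8, atol=1e-10)
        Phi[:, k] = sol.y[:, -1]                  # state at t = T
    return np.linalg.eigvals(Phi)

def classify(multipliers, tol=1e-6):
    m = np.max(np.abs(multipliers))
    if m < 1.0 - tol:
        return "stable"                           # all |lambda_i| < 1
    if m > 1.0 + tol:
        return "unstable"                         # some |lambda_i| > 1
    return "critical"                             # max |lambda_i| = 1

# Toy example: a damped Mathieu-type system written in first-order form.
omega = 2.0
T = 2.0 * np.pi / omega
def G(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2 * np.sin(omega * t)), -0.05]])

print(classify(floquet_multipliers(G, T, 2)))
```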
Numerical calculation and analysis
As can be seen from figure 2, when the tension ratio is 0.5, the average velocity c_0 = 0.5, and the follower force Q = 1, for aspect ratios r = 0.5 and r = 1 the boundary of the stable region of the system moves toward the upper right of the c_1 plane as the aspect ratio increases; therefore, the stable region increases. As shown in figure 4, when the aspect ratio r = 1, the average velocity c_0 = 0.5, and the follower force Q = 1, for tension ratios of 0.8 and 1 the stable region decreases as the tension ratio increases.
As shown in figure 5, when the aspect ratio r = 1, the tension ratio is 0.5, and the average velocity c_0 = 0.5, for follower forces Q = 0.2 and Q = 0.5 the stable region gradually decreases as the follower force increases.
Conclusions
The vibration characteristics and stability of a moving membrane with variable speed subjected to a follower force are studied. The conclusions are as follows: (1) For a moving membrane with variable speed, the stable region becomes larger as the aspect ratio increases.
(2) For a moving membrane with variable speed, the stable region becomes larger when the tension ratio, the average velocity, and the follower force decrease. | 1,435 | 2020-05-01T00:00:00.000 | [
"Engineering",
"Physics"
] |